Abstract
Background
The COVID-19 pandemic has accelerated the shift toward e-learning and online education in surgical training. With the increasing prevalence of end-stage chronic kidney disease, kidney transplantation is in high demand. Donor safety is crucial in nephrectomy procedures, highlighting the importance of effective training. This study evaluates the quality and effectiveness of YouTube videos focusing on laparoscopic and robotic donor nephrectomy for surgical education.
Methods
On October 24, 2023, searches on YouTube for “laparoscopic live donor nephrectomy” and “robotic live donor nephrectomy” returned 121 videos, with 63 included in the study. Popularity was evaluated using the Video Power Index (VPI), while reliability and quality were assessed using the LAP-VEGaS Video Assessment Tool and Journal of the American Medical Association (JAMA) benchmark criteria. Additionally, a structured descriptive tool called the “Live Donor Nephrectomy Completeness (LDNC)” was created to evaluate the completeness and educational value of procedural technical steps.
Results
Out of 63 videos reviewed, laparoscopic surgical procedures were depicted in 71.4% of them, while robotic approaches were shown in 28.6%. Academic backgrounds were associated with 54% of the videos, and individual physician backgrounds with 46%. Mean scores were LAP-VEGaS 9.79 ± 3.87, VPI 6.32 ± 3.31, and LDNC 9.68 ± 1.97. JAMA scores varied, with 34.9% receiving 1 point, 34.9% receiving 2 points, 17.5% receiving 3 points, and 12.7% receiving 4 points. Academic videos scored significantly higher in LAP-VEGaS and LDNC (all p < .01). While LAP-VEGaS, VPI, and LDNC scores correlated significantly (all p < .05), no correlation was found between JAMA score and other scoring systems. Videos with more clicks and likes showed significantly better scores across all measures (all p < .05).
Conclusion
Amidst the challenges posed by the pandemic to surgical education, YouTube has emerged as a valuable resource for learning about laparoscopic and robotic donor nephrectomy for living kidney donation. However, the quality and reliability of these videos vary greatly, and many lack thorough review, resulting in incomplete information. To enhance their educational value, it is proposed that videos undergo professional evaluation before publication and adhere to standardized, structured, and validated scoring systems, ensuring a logical structure and improved quality.
Background
The landscape of surgical education has undergone a profound transformation since its inception in the early twentieth century. Today, training competent and professional surgeons requires continuous innovation and modernization of educational methods.1 In this context, a crucial catalyst for establishing new educational methods was the COVID-19 pandemic in 2020, which severely affected both the quality and quantity of surgical education.2 With face-to-face teaching and elective surgical activity for training interrupted, the integration of e-learning, online curricula, and online platforms emerged as a solution to overcome the obstacles to surgical education and training during the challenging pandemic situation.3–6
The worldwide prevalence of end-stage chronic kidney disease (CKD) is increasing enormously.7 The persistent gap between the availability of organs from deceased donors and the number of patients waiting for kidney transplantation has therefore led to increased use of kidneys from living donors.8,9 Since the first living donor nephrectomy in the 1950s, the surgical technique has evolved significantly, transforming the field from open donor nephrectomy (ODN) to laparoscopic procedures such as laparoscopic donor nephrectomy (LDN), hand-assisted LDN (HALDN), retroperitoneoscopic LDN (RDN), and robot-assisted donor nephrectomy (RADN).10
Nonetheless, acquiring the proficiency to perform and oversee these laparoscopic procedures poses additional challenges for trainees, demanding a distinct set of psychomotor skills compared with traditional open surgery. These include enhanced hand-eye coordination, adaptation from a three-dimensional to a two-dimensional perspective, and adjustments to tactile and haptic perception; mastering them requires additional training time, experience, and practice.11,12 These skills are especially essential in living donor nephrectomy, where healthy, asymptomatic patients undergo a potentially high-risk surgical procedure for the benefit of someone else. It is therefore crucial to minimize complications as far as possible and to facilitate the patient's recovery.
Consequently, there is growing interest among young surgeons in training approaches such as animal laboratories, simulation sessions, and video tutorials to meet these requirements for operative quality and patient safety.13 The closures imposed during the pandemic have underscored the significance of online video platforms, with YouTube emerging as the most readily available option for residents, fellows, students, and even attending surgeons.14,15 In recent years, the volume of medical content uploaded to YouTube has increased, as reflected in the database PubMed, where over 2000 publications have appeared since 2007 for the search term "YouTube videos" (https://pubmed.ncbi.nlm.nih.gov/?term=youtube+videos). However, these videos are not always peer-reviewed or checked for quality, reliability, and technical surgical aspects. This study aims to assess the utility, reliability, educational value, and technical aspects of minimally invasive donor nephrectomy videos uploaded to the YouTube platform for surgical training and education.
Materials and Methods
Search strategy and screening
On October 24, 2023, the search terms "laparoscopic live donor nephrectomy" and "robotic live donor nephrectomy" were entered on the online platform YouTube (www.youtube.com). Results were sorted using the website's default "relevance" setting and the "view count" sorting option. Advertisements, patient experience reports, videos with non-English content, lectures, and purely theoretical educational videos were excluded (Figure 1). Each video was analyzed and reviewed for content jointly by 2 surgeons experienced in laparoscopic and transplant surgery.

Flowchart of video selection.
We analyzed each video according to the following characteristics: duration, likes and dislikes, clicks, type of surgery (laparoscopic vs robotic), video resolution, voice or written commentary, and number of views. The video provider was classified as academic or private/individual. Channels featuring exclusively a doctor's personal information in both the channel name and description were labeled as individual; conversely, channels associated with medical associations, hospitals, universities, and medical journals were categorized as originating from academic centers.
To evaluate the quality, performance, educational value, and reliability of the videos, the following scoring guidelines and criteria were assessed:
Video Power Index (VPI)
To evaluate video popularity, we employed the Video Power Index (VPI), which considers both view and like ratios. The VPI was calculated as follows: first, the like ratio (likes × 100/[likes + dislikes]) and the view ratio (number of views/day) were computed; the VPI then equals like ratio × view ratio/100.16,17
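For illustration, the VPI calculation can be sketched in Python. The function name and the example counts below are hypothetical, not values taken from the study:

```python
def video_power_index(likes: int, dislikes: int, views: int, days_online: float) -> float:
    """Compute the Video Power Index from raw video statistics.

    like ratio = likes * 100 / (likes + dislikes)
    view ratio = views per day online
    VPI        = like ratio * view ratio / 100
    """
    like_ratio = likes * 100 / (likes + dislikes)
    view_ratio = views / days_online
    return like_ratio * view_ratio / 100

# Hypothetical example: 90 likes, 10 dislikes, 1000 views over 100 days online
print(video_power_index(90, 10, 1000, 100))  # 9.0
```

Note that the like ratio is undefined for videos with no likes or dislikes, so such cases would need special handling in practice.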
Journal of American Medical Association (JAMA) benchmark criteria
This benchmark score was used to evaluate the basic quality and reliability of the videos.16–19 The JAMA benchmark score consists of 4 assessment criteria (authorship, attribution, disclosure, currency), each of which is assigned 1 point.
Laparoscopic Surgery Video Educational Guidelines (LAP-VEGaS)
This score and its accompanying guidelines were developed by Celentano et al in 2018 to provide consensus advice on how to report a laparoscopic surgical video for educational purposes. The LAP-VEGaS score comprises 9 criteria, each rated on a scale of 0 to 2 points, giving a maximum attainable score of 18 points.20
Live donor nephrectomy completeness (LDNC)
Currently, no tool in the literature provides a specific assessment of the technical aspects and quality of organ removal in donor nephrectomy-related surgical videos. For a more detailed evaluation, the authors therefore introduced a new descriptive tool, Live Donor Nephrectomy Completeness (LDNC). This assessment focuses on how thoroughly the critical surgical steps of the procedure are presented in the video, evaluating its overall completeness from a professional perspective on donor nephrectomy. The LDNC criteria consist of 13 key items, each scored with 1 point (Table 1). Two independent reviewers evaluated the videos, and the final score for each video was the average of the two reviewers' scores.
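The scoring procedure described above can be sketched minimally in Python. The item ratings below are illustrative only and do not come from the study:

```python
# Hypothetical LDNC scoring: 13 binary items (1 point each) per reviewer;
# the final video score is the mean of the two reviewers' totals.
reviewer_a = [1, 1, 0, 1, 1, 0, 1, 1, 1, 0, 1, 1, 1]  # total: 10
reviewer_b = [1, 1, 1, 1, 0, 0, 1, 1, 1, 0, 1, 1, 0]  # total: 9

assert len(reviewer_a) == len(reviewer_b) == 13
final_score = (sum(reviewer_a) + sum(reviewer_b)) / 2
print(final_score)  # 9.5
```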
Live donor nephrectomy completeness (LDNC) for YouTube videos.
Statistical analysis
Statistical analyses were performed with SPSS version 29. Absolute and relative frequencies (n) were used to describe categorical variables; means and standard deviations were calculated for ordinal and continuous variables. Spearman's rho, Pearson, Kendall's tau, or chi-square tests were applied, as appropriate, to assess the degree of correlation between performance and quality measures and video parameters.
Further, for comparisons between the study groups, the appropriate statistical significance test was used, including Student's t-test, χ2, analysis of variance (ANOVA), Kruskal-Wallis, and Wilcoxon-Mann-Whitney tests; the Shapiro-Wilk and Kolmogorov-Smirnov tests were used to assess the normality of distributions.
A p-value below .05 was considered significant.
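As a sketch of the correlation step, Spearman's rho can be computed as follows. The study used SPSS; the per-video scores below are hypothetical, and SciPy is used here purely for illustration:

```python
from scipy.stats import spearmanr

# Hypothetical per-video scores (not the study data)
lap_vegas = [12, 8, 15, 6, 10, 14, 7, 11]
ldnc      = [10, 7, 12, 5, 9, 11, 6, 8]

# spearmanr returns the rank correlation coefficient and its p-value
rho, p_value = spearmanr(lap_vegas, ldnc)
print(f"Spearman's rho = {rho:.2f} (p = {p_value:.4f})")
```

With these illustrative data, the two rankings agree almost perfectly, yielding a rho close to 1 and a p-value below .05.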
Results
Of the 142 screened videos, 63 videos met the inclusion criteria and were selected for further evaluation.
Video characteristics
The descriptive characteristics of the analyzed videos are shown in Table 2. The mean video duration was 16 min 7 s ± 16 min 27 s, and the mean view count was 23,453.14 ± 138,773.10. The mean video age was 6.06 ± 0.38 years (range, 0.32-15.57 years) (Figure 2).

Time of uploaded videos.
Characteristics of the videos included in analysis.
Note. JAMA, Journal of the American Medical Association; VPI, Video Power Index; LDNC, live donor nephrectomy completeness; LAP-VEGaS, Laparoscopic Surgery Video Educational Guidelines.
The number of likes ranged from 0 to 1387 (mean: 54.56 ± 210.70) and that of dislikes from 0 to 9 (mean: 0.4 ± 1.6). Forty-two videos (66.6%) were high definition and 21 (33.3%) were not. Most videos (n = 45; 71.4%) included audio commentary, 18 (28.6%) lacked it, and only 15 (23.8%) included written commentary. Of the 63 videos analyzed, 45 (71.4%) depicted laparoscopic procedures and 18 (28.6%) robotic interventions. The majority showed left donor nephrectomies (81%), followed by right donor nephrectomies (12.7%), with a small percentage (6.3%) featuring both sides.
According to the video source, 34 (54%) were uploaded and originated from academic institutions, while 29 (46%) were uploaded by an individual medical professional.
Video quality assessment and evaluation of technical aspects
The mean VPI score was 6.32 ± 3.31, and the mean JAMA score was 2.08 ± 1.02. Regarding the JAMA score, 22 videos (34.9%) received 1 point, 22 (34.9%) 2 points, 11 (17.5%) 3 points, and 8 (12.7%) 4 points. The mean LAP-VEGaS score was 9.79 ± 3.87, and the mean LDNC score was 9.68 ± 1.97. Interrater reliability was 0.91.
The LDNC and LAP-VEGaS scores of academic videos were significantly higher than those of individual/commercial accounts (p < .01). However, no significant differences were observed in terms of LDNC score between the different types of surgical technique (robotic vs laparoscopic procedures, p = .641) as well as the side of the removed kidney (both vs right and left; p = .178) (Table 3).
Comparison of video characteristics and quality aspects among LDNC.
Note. VPI, Video Power Index; LDNC, live donor nephrectomy completeness; JAMA, Journal of the American Medical Association.
When LDNC scores were dichotomized as >9 or <9, no significant differences were observed by surgical technique (robotic vs laparoscopic; p = .575), side of organ removal (both vs left vs right; p = .493), video resolution (low vs high definition; p = .209), or length (p = .336). However, videos from academic institutions (p < .001) and videos with audio commentary (p < .01) achieved significantly higher LDNC scores than videos from individual uploaders or without audio commentary. Furthermore, videos with higher VPI values achieved significantly higher LDNC scores than those with lower VPI values (10.21 ± 34.62 vs 0.89 ± 1.57; p = .04).
Correlation Analysis
Analyzing the relationships among the key variables, we found that the major criteria scores LDNC, VPI, and LAP-VEGaS were significantly correlated with one another (all p < .05).
However, no significant correlation was observed between the JAMA score and the LDNC or VPI scores, whereas JAMA and LAP-VEGaS scores were significantly correlated (p < .05).
Meanwhile, videos with more clicks and likes showed significantly better LDNC, JAMA, VPI, and LAP-VEGaS scores (all p < .05). An earlier upload date was significantly negatively correlated with video length (p < .001) (Table 4).
Correlation between major key variables.
Bold values represent significant results.
Further, commented videos outperformed uncommented ones across all 3 scores (JAMA p < .001, LDNC p = .13, LAP-VEGaS p < .001). Additionally, videos presented in high-definition quality (JAMA, p = .05; LAP-VEGaS, p = .043) and those with an academic background (JAMA and LAP-VEGaS; p < .001) exhibited significantly higher JAMA and LAP-VEGaS scores.
Discussion
Historically, surgical training has followed a "watch first, perform later" approach.21 While the introduction of cadaveric and animal model-based surgical training has partially shifted this paradigm, it remains impractical for all surgical students and young fellows to partake in these forms of training, which are typically reserved for specialized or novel procedures. In today's technologically advanced era, surgical simulation models and online training videos have emerged as significant tools for bridging this gap. These models now afford surgeons the chance to actively engage in simulated operations before observing them, marking a significant evolution in surgical education practices.16 However, the high cost of these technologies limits their accessibility and, in a sense, brings us back to the beginning: the "watch first, perform later" model. In the age of social media, accessing videos of various interventional procedures has become effortless, and video-based learning has proven to be an efficient and beneficial tool for surgical training and education outside the operating room, shortening the learning curve and further increasing patient safety.16,22–25
However, despite their growing use, the quality of these surgical videos for educational purposes varies significantly, necessitating a thorough evaluation of their efficacy as educational tools across all areas of surgery. The general surgery literature has repeatedly questioned the educational usability of videos of various procedures on video platforms.15,26,27 Recently, the LAP-VEGaS guidelines were developed for the reporting of laparoscopic surgery for training, as a modified form of the JAMA, DISCERN, and Global Quality Score (GQS) instruments.20 Following the LAP-VEGaS guidelines and recent studies, it is clear that comprehensive instructional videos that detail every step of a procedure, highlight key anatomical points, and show the technical aspects of the individual surgical procedure in detail are critical for enhancing the knowledge and skill sets of surgical residents and fellows in a digital learning context.20,28,29
This study is pioneering in its examination of the educational caliber of online LDN training videos for kidney transplantation, employing a validated, structured assessment tool for video quality evaluation and surgical education. In addition, our study evaluates the appropriateness of such media as educational videos by including a newly developed technical descriptive tool ("LDNC") for LDN.
In this study, the major scoring systems used to evaluate the quality and reliability of surgical videos (LAP-VEGaS, VPI, and LDNC) were significantly positively correlated with each other, whereas no correlations were found between JAMA and the other scoring systems. However, our findings suggest that online training videos on LDN are of poor educational quality, with an average LAP-VEGaS score of 9.79 ± 3.87, falling far below the recommended score of 15 for suitable educational resources.20 The majority of the videos analyzed met 4 of the 9 LAP-VEGaS criteria, including the author's details and a detailed procedure guide, aligning with past findings on instructional videos for laparoscopic sleeve gastrectomy (LSG), jejunostomy, laparoscopic cholecystectomy (LC), and liver resection.20,29–31 Nonetheless, a considerable number of the reviewed LDN videos lacked key elements, including references to anatomy during the operation, patient positioning, a structured case overview, and information on the duration of the operation, casting doubt on their educational adequacy.
Our assessment, together with other studies, also suggests that online platforms such as YouTube may not be an adequate resource for surgical residents and fellows to supplement their laparoscopic procedural training, particularly in a specialized field such as transplant surgery.30,32–34 Surgical videos on LDN and other minimally invasive procedures available to trainees on YouTube therefore require standardization with LAP-VEGaS or other structured, procedure-related descriptive tools (such as our new "LDNC" tool) to improve their suitability as educational resources.30,32–34
Academic institutions are considered to provide higher-quality, more valuable videos.35–37 This is supported by our results, in which 54% of the 63 included LDN videos were provided by academic sources and 67% were of high-definition quality. Further, we could show that surgical training videos (eg, living donor nephrectomy) from academic sources scored higher on the LAP-VEGaS and LDNC scores than those from individual medical accounts, highlighting their potential as more effective educational tools for general surgery trainees.
This discrepancy may stem from the involvement of a greater number of educators and trainees in creating and evaluating the content of videos from academic institutions. Despite this, many LDN training videos of academic origin still fall short of incorporating crucial LAP-VEGaS criteria and technical standards (as evaluated by the LDNC descriptive tool), such as detailed case presentations, patient positioning, insights on intraoperative observations, use of graphical aids, and reporting of operative duration. These gaps suggest that even academic videos could benefit greatly from further alignment with structured guidelines such as the LAP-VEGaS score.
The distinct goals driving academic versus commercial video production likely contribute to the variance in quality and adherence to the LAP-VEGaS educational framework seen across these categories. This goal misalignment may be a key factor behind the observed low-quality scores and the overall mismatch between current LDN training videos and the educational standards proposed by LAP-VEGaS. Despite these overarching trends, there's notable variability in the educational quality within both academic and commercial categories, as indicated by the large standard deviations in scores.
To assess the popularity of the videos, we applied the VPI score. Our results show that private uploaders achieved higher scores than academic sources. Observing surgeries performed by pioneering surgeons worldwide is an invaluable supplement to traditional educational approaches in laparoscopic surgery. Nonetheless, it is essential to acknowledge that surgeons might upload their laparoscopic videos to YouTube for motives beyond education. The overall average VPI of these videos was notably lower than that of other surgical video types on YouTube, but higher than that of other laparoscopic videos of similar complexity.17,29,34,38 This could be attributed to viewers' preference for simpler procedures such as laparoscopic cholecystectomy or sleeve gastrectomy. The variability in the quality of laparoscopic videos may undermine their educational value. Further, we found significant correlations between the VPI, LDNC, and LAP-VEGaS scores; substantial and reliable video content thus appears to increase a video's popularity, in line with similar findings in the literature.17,39 Since existing validated scoring systems do not encompass the main aspects of minimally invasive living donor nephrectomy, we introduced a new technical, procedure-related descriptive tool ("LDNC") for LDN. The average LDNC score was 9.68 ± 1.97 (of 13 possible points), combining surgical information, detailed procedure description and laparoscopic techniques, surgical outcome, patient positioning, and video quality. Some videos focused only on parts of LDN, such as the technical aspects of vessel preparation or port placement. It is unrealistic to expect all videos to address every aspect of LDN thoroughly; however, some videos, despite being incomplete, may still offer precise and valuable content.
Further studies must check the validation, quality, and reliability of this newly structured “LDNC” descriptive tool in the setting of transplantation and donor removal.
Today, with evolving technology, online video-sharing platforms play an important role in education and training.5,6 The medical field is also affected by these developments, and there is little doubt that YouTube is the largest video site in the world, both for the surgical education and training of (young) surgeons and as an educational platform for patients.5,13,40 However, recent research examining the quality and content of YouTube videos on surgical procedures suggested that YouTube is an inadequate source of public information on such matters, with low educational quality, often difficult-to-understand content, and minimal actionability, a conclusion that aligns with our own findings.37 Furthermore, most academic or physician-made videos are educational, focus on the surgical technique, and are thus aimed at doctors, not patients.32,33,36,37 Yet adequate, high-quality, specific education for patients undergoing surgery is particularly essential in transplant surgery, especially for so altruistic an act as living donation. In this situation, medical professionals need to recognize that patients considering or agreeing to live kidney donation are likely to have watched YouTube videos. These videos can sometimes provide misleading information about the surgical risks involved, potentially influencing the patient's understanding and ability to give informed consent. Besides discussing the risks and benefits with potential live kidney donors, clinicians should also ask whether patients have any questions arising from online media content they have encountered.
This investigation is pioneering in applying, alongside the validated LAP-VEGaS video assessment tool, a new procedure-specific descriptive tool ("LDNC") to donor nephrectomy videos in a renal transplant setting to examine the educational quality and reliability of LDN videos on YouTube, specifically for surgical trainees. The results are notable in that they show a generally low quality of LDN videos as determined by a validated, structured evaluation tool such as the LAP-VEGaS score, particularly in videos created and produced by surgical companies.
The LAP-VEGaS guidelines and the other scores and assessment tools used here, such as JAMA and VPI, aim to elevate the standard and quality of online video resources for laparoscopic surgical training.17,19,20 Although these tools, specifically the LAP-VEGaS score, do not cover aspects such as technical quality criteria, preoperative imaging, and procedure-specific metrics, they outline the fundamental educational components that should be present in surgical training videos. In our study, we further introduced a new checklist for evaluating the technical aspects and consecutive surgical steps of donor organ removal for renal transplantation, which, in our view, are essential for learning these procedures with patient safety in mind. To ensure robust educational value and standardization of steps, all laparoscopic surgical training videos should be meticulously planned and developed with standardized quality assessment tools, such as the LAP-VEGaS guidelines, in focus. By proactively incorporating all necessary elements, surgical educators can create online surgical walkthroughs that are not only more educational but also rich in information. Furthermore, postproduction reviews by independent experts can help ensure that videos meet crucial quality assessment standards and present detailed surgical technical steps, including comprehensive patient presentations, stepwise procedure guides, demonstrations of intraoperative findings that accentuate crucial anatomy and landmarks (like the critical view of safety in LDN), the duration of surgery, and illustrative aids. The initial results of this research support the integration of these measures in the creation and refinement of video-based educational materials, aiming to secure their value and efficacy for surgical trainees moving forward.
However, the current investigation is subject to several limitations. First, the search terms that surgical trainees use to locate laparoscopic donor nephrectomy training videos may differ, although those employed in this study represent the most common ones. Second, this study excluded video tutorials in languages other than English, potentially limiting the generalizability of the findings to non-English-speaking trainees. Although non-English videos constituted only a small portion of the videos screened, suggesting a potential deficiency in educational materials for both English- and non-English-speaking surgical trainees, this study focused solely on English-language resources. It is plausible that the excluded LDN videos showcasing additional procedures varied in quality compared with those included for evaluation. However, as the primary objective of this study was to assess LDN training videos, those featuring additional procedures could be evaluated in future investigations of the educational efficacy of minimally invasive surgery video resources.
Third, the descriptive tool for evaluating the quality, completeness, and technical aspects of living donor nephrectomies ("LDNC") is new and does not yet satisfy all necessary requirements. Nevertheless, with the help of this new descriptive tool, an adequate description was given of how many of the crucial surgical steps of the procedure are presented in the analyzed video and how "complete" the video is from a professional surgical point of view. It is therefore not a score to predict outcome in one way or another; if this tool is to be used predictively, its accuracy and validity must first be reliably tested.
Nevertheless, despite these limitations, the study's robustness was bolstered by the use of a validated assessment tool and by 2 raters demonstrating near-perfect interrater reliability.
Conclusion
Utilizing videos for surgical training is a valuable component of surgical education. While relying solely on video-based resources does not provide adequate training for safe and effective LDN, the current educational standard, content, and quality of online LDN training videos also fall short according to the LAP-VEGaS guidelines. Educational materials produced by universities or medical schools exhibit relatively superior quality but still fall short of incorporating all critical educational components. To enhance video quality and ensure that videos provide valuable information for both potential kidney donors and surgical trainees, careful (peer) review, standardization, and the application of structured descriptive tools such as LAP-VEGaS and LDNC are needed. High-quality LDN training videos can supplement hands-on skills training, enabling a safe, repeatable, and cost-effective approach outside the operating room. Encouraging their use is a strategic way to address the current shortage of quality LDN training resources.
Footnotes
Acknowledgments
We acknowledge support from the Medical University of Graz within the program of Open Access Publishing.
Author Contributions
MEM, NJ, RS, CM, and HMH were responsible for the study concept and design, DJ, TE, AL, MEM, and HMH were responsible for the data acquisition, PS, AG, NJ, HMH, and RS analyzed and interpreted the data, MEM, HMH, and NJ drafted the manuscript, and PS, RS, CM, and HMH critically revised the manuscript. All authors read and approved the final manuscript.
Declaration of Conflicting Interests
The author(s) declare no conflict of interest or competing interests.
Data Availability Statement
The data that support the findings of the study are available on request from the corresponding author.
Informed Consent Statement
This study utilized a descriptive research approach, analyzing videos that are publicly available on the Internet, specifically content hosted on the social media website www.youtube.com. The investigation did not involve human participants or animals, and since it relied exclusively on publicly accessible videos without the use of any patient data or materials, an informed consent statement is not necessary or applicable.
Institutional Review Board Statement
The authors are accountable for all aspects of the work in ensuring that questions related to the accuracy or integrity of any part of the work are appropriately investigated and resolved. This study did not require approval by the local research ethics board as it involved publicly available data only.
Funding
The author(s) received no financial support for the research, authorship, and/or publication of this article.
