Abstract
Objective
Short videos are increasingly being used to disseminate health information. However, the quality of videos on common ophthalmic conditions such as cataract has not been systematically evaluated. This study aimed to systematically assess the quality of cataract-related short videos on TikTok.
Methods
This study employed a cross-sectional design. The TikTok platform was searched using the term “cataract” from 20:00 to 24:00 on 8 November 2024, without any restrictions. The top 100 retrieved videos were included in the study. They were rated using the Journal of the American Medical Association (JAMA) benchmark criteria, the Global Quality Score (GQS) scale, the modified Decision-making Information Support Criteria for Evaluating the Reliability of Non-randomised Studies (mDISCERN) score, and the Patient Education Materials Assessment Tool for Audio Visual Content (PEMAT-A/V). Video quality was compared across groups, and the factors underlying quality were examined.
Results
The top 100 videos had an average of 2009.1 likes, 795.65 comments, 2628.91 shares, and 554.08 favorites. Their JAMA benchmark criteria, GQS, mDISCERN, and PEMAT-A/V ratings differed with account ownership and video content (p < .05). Most videos (88/100) were uploaded by physicians rather than by institutions or nonphysicians. The numbers of likes, comments, favorites, and shares were not correlated with video quality (Spearman correlation; p > .05). Further regression analysis confirmed that account ownership was a significant predictor of video quality.
Conclusion
The quality of cataract-related short videos on TikTok has room for improvement. Users may estimate video quality based on the identity of the content creator.
Introduction
Cataract is a common ophthalmic disease, and its prevalence is increasing as populations age.1 The risk factors for cataract include age, certain medications, trauma, and ultraviolet radiation.2 When the transparent crystalline lens becomes cloudy, normal eye function, including vision and aqueous humor circulation, may be affected.3 Some drugs can slow the progression of cataract, but surgery remains the primary treatment.4 The basic principle of surgery is to improve visual function by removing the cloudy lens and placing an appropriate artificial lens based on the patient's eye condition.5 The cataract-related information that patients encounter during assessment, diagnosis, and treatment, including popular science content, guides them in seeking medical care and choosing the best treatment strategy; its professionalism and accuracy therefore directly affect patients' treatment decisions.
Owing to the uneven distribution of medical resources, part of the population may obtain medical knowledge about cataract from sources other than doctors, including books, newspapers, television, professional medical websites, and social media.6,7 With the continuous improvement of network infrastructure, social media has become an important channel through which the public obtains information, and it is now an important source of information for many patients about their cataract.8 TikTok has a broad user base in China, and many bloggers share cataract-related videos on the platform, where they are surfaced to users through search and recommendation algorithms.9,10 However, nonprofessionals have limited domain knowledge and cannot easily judge the reliability of these videos in guiding cataract diagnosis and treatment, and this reliability has not been carefully evaluated. Unverified video quality hinders the popularization of eye health knowledge across society.
This study aimed to conduct a cross-sectional assessment of the quality of cataract videos on TikTok in China, with the goal of guiding efforts to improve the quality of popular science videos related to cataract and thereby enhancing the reliability of information on cataract diagnosis and treatment.
Materials and methods
Ethical considerations
This study did not involve any human, animal, or histological research. All data are from videos publicly released on TikTok, and any information traceable to individuals appearing in the videos was concealed. The video content analyzed does not contain patients' personal information and was used only for this study, with access restricted to the project researchers. This study was conducted from 1 October 2024, to 15 December 2024, at Third Xiangya Hospital, Central South University.
Data acquisition
Videos related to cataract on TikTok were searched for using the keyword “cataract” from 20:00 to 24:00 on 8 November 2024. To mitigate algorithm-induced recommendation bias, searches were performed without logging in to an account and with all filter fields left blank. Language was not a search criterion. The top 100 retrieved videos were selected because, based on previous literature, top-ranked videos exert greater influence on viewers.11,12 Videos ranked beyond the top 100 were excluded, as they were unlikely to materially affect the results.13 Accordingly, the inclusion criterion was ranking within the top 100, and the exclusion criterion was ranking beyond the top 100.
The data extracted from each included video comprised account ownership, account authentication information, number of fans, title, video release time, video duration, and the numbers of likes, comments, favorites, and shares.
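As a concrete illustration of the extraction sheet, the sketch below (in Python, used for all illustrative code here; the actual analyses used SPSS) shows one way to structure a record per video. All field names are our own illustrative choices, not TikTok interface or API fields.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class VideoRecord:
    """One manually extracted video record; field names are illustrative."""
    account_ownership: str         # "institution", "doctor", or "nondoctor"
    authentication: Optional[str]  # platform verification text, e.g. doctor rank
    fans: int                      # follower count of the posting account
    title: str
    release_date: str              # e.g. "2024-03-15"
    duration_s: float              # 0 for pure graphic-and-text posts
    likes: int
    comments: int
    favorites: int
    shares: int
```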
Data preprocessing
The duration of pure graphic-and-text videos was uniformly recorded as 0 s. Based on content, the included videos were categorized as (a) basic disease information, such as risk factors, pathogenesis, and epidemiology; (b) disease treatment; (c) disease prognosis; (d) news reports; (e) advertising; and (f) others.14 Based on account ownership, accounts were categorized as institutional, doctor, or nondoctor personal accounts. Doctor accounts were further classified, according to authentication information, as belonging to chief physicians, deputy chief physicians, or attending physicians.
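A minimal sketch of the codings described above, assuming simple Python enums whose labels mirror the classification in the text:

```python
from enum import Enum

class ContentCategory(Enum):
    BASIC_INFO = "a"   # risk factors, pathogenesis, epidemiology
    TREATMENT = "b"
    PROGNOSIS = "c"
    NEWS = "d"
    ADVERTISING = "e"
    OTHER = "f"

class DoctorRank(Enum):
    CHIEF = "chief physician"
    DEPUTY_CHIEF = "deputy chief physician"
    ATTENDING = "attending physician"

def normalized_duration(duration_s: float, is_pure_graphic: bool) -> float:
    """Pure graphic-and-text posts are uniformly assigned a duration of 0 s."""
    return 0.0 if is_pure_graphic else duration_s
```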
Video quality assessment
During this study, one physician was responsible for video retrieval and for recording video data. Two physicians rated video quality and reached a consistent score through discussion.14 When the primary evaluators disagreed, a physician with a senior professional title evaluated the video.
The Journal of the American Medical Association (JAMA) benchmark criteria, Global Quality Score (GQS) scale, modified Decision-making Information Support Criteria for Evaluating the Reliability of Non-randomised Studies (mDISCERN) score, and Patient Education Materials Assessment Tool for Audio Visual Content (PEMAT-A/V) were used to rate video quality.14–18 The JAMA benchmark criteria evaluate the credibility of videos across four dimensions: (a) author information; (b) copyright information and reference sources; (c) currency of information; and (d) disclosure of conflicts of interest. Each dimension is worth 1 point, for a total of 0–4. The GQS rates overall video quality on five levels from low to high, with higher levels receiving higher scores. The mDISCERN, adapted from DISCERN, evaluates the reliability of videos across five dimensions: (a) whether the video's purpose is achieved; (b) whether reliable references are used; (c) whether the content is objective; (d) whether additional sources of information are provided; and (e) whether areas of uncertainty are highlighted. Each dimension is worth 1 point, for a total of 0–5. The PEMAT-A/V evaluates videos in terms of comprehensibility and operability using 17 items: 13 for comprehensibility and four for operability. Each sub-score is expressed as the percentage of points obtained out of the possible total.
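The scoring rules above reduce to simple arithmetic. The sketch below computes the three tallied scores; it assumes the common convention that PEMAT items rated not-applicable are excluded from the denominator.

```python
from typing import Optional, Sequence

def jama_score(author: bool, attribution: bool,
               currency: bool, disclosure: bool) -> int:
    """JAMA benchmark: 1 point per satisfied dimension, total 0-4."""
    return sum((author, attribution, currency, disclosure))

def mdiscern_score(dimensions: Sequence[bool]) -> int:
    """mDISCERN: 1 point per satisfied dimension, total 0-5."""
    assert len(dimensions) == 5
    return sum(dimensions)

def pemat_subscore(items: Sequence[Optional[int]]) -> float:
    """PEMAT sub-score: percentage of applicable items scored 1 (agree).
    Items rated not-applicable are passed as None and excluded."""
    applicable = [s for s in items if s is not None]
    return 100.0 * sum(applicable) / len(applicable)

# Example: 13 comprehensibility items, two rated not-applicable
# pemat_subscore([1,1,0,1,None,1,1,0,1,1,None,1,1]) -> ~81.8
```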
Data analysis
The data in this study are summarized as mean ± standard deviation and median (range). Groups were compared using one-way ANOVA or nonparametric tests, depending on whether the data followed a normal distribution, with Tukey's multiple-comparison test used for pairwise comparisons. Spearman correlation analysis was used to assess the correlations between parameters, with the absolute value of the correlation coefficient r interpreted as follows: 0 < r < 0.25, weak; 0.25 ≤ r < 0.5, moderately weak; 0.5 ≤ r < 0.75, moderately strong; and 0.75 ≤ r < 1, strong. Finally, the relationships between the GQS and candidate variables were tested using multiple linear regression, and a predictive equation for the GQS was formulated using stepwise multiple linear regression. Account ownership, account authentication information, number of fans, title, video release time, video duration, and the numbers of likes, comments, favorites, and shares were entered into the regression. Variables were assessed for collinearity, and collinear variables (VIF > 3) were excluded. SPSS 27 was used for the statistical analyses, and GraphPad Prism 10 was used for plotting. p < .05 denoted statistical significance.
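As a sketch of this pipeline (the study itself used SPSS 27; this SciPy/statsmodels version is only illustrative), the steps are a normality-dependent group comparison, Spearman correlation with the strength bands above, and a VIF screen before regression:

```python
import numpy as np
import pandas as pd
from scipy import stats
import statsmodels.api as sm
from statsmodels.stats.outliers_influence import variance_inflation_factor

def compare_groups(groups: list[np.ndarray]) -> float:
    """One-way ANOVA if all groups pass Shapiro-Wilk normality,
    otherwise the nonparametric Kruskal-Wallis test."""
    if all(stats.shapiro(g).pvalue > 0.05 for g in groups):
        return stats.f_oneway(*groups).pvalue
    return stats.kruskal(*groups).pvalue

def spearman_band(x, y) -> tuple[float, str]:
    """Spearman rho plus the strength band used in this study."""
    rho, _ = stats.spearmanr(x, y)
    r = abs(rho)
    band = ("weak" if r < 0.25 else "moderately weak" if r < 0.5
            else "moderately strong" if r < 0.75 else "strong")
    return rho, band

def drop_collinear(X: pd.DataFrame, cutoff: float = 3.0) -> pd.DataFrame:
    """Iteratively drop the predictor with the highest VIF above the cutoff."""
    X = X.copy()
    while X.shape[1] > 1:
        exog = sm.add_constant(X).to_numpy()
        vifs = pd.Series(
            [variance_inflation_factor(exog, i + 1) for i in range(X.shape[1])],
            index=X.columns,
        )
        if vifs.max() <= cutoff:
            break
        X = X.drop(columns=[vifs.idxmax()])
    return X
```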
Results
Analysis of basic features of videos
This study included 100 videos related to cataract. The average video length was 99.31 s, and videos had been posted an average of 376 days before the search date. The average numbers of likes, comments, shares, and favorites were 2009.1, 795.65, 2628.91, and 554.08, respectively. The average JAMA rating, GQS score, and mDISCERN score were 1.12, 2.53, and 2.55, respectively, and the average PEMAT-A/V comprehensibility and operability scores were 72.40% and 39.67%. The average number of fans of the video accounts was 436,392.61 (Table 1).
Video information.
JAMA: Journal of the American Medical Association; GQS: Global Quality Score; mDISCERN: modified Decision-making Information Support Criteria for Evaluating the Reliability of Non-randomised Studies; PEMAT: Patient Education Materials Assessment Tool.
Eighty-eight, six, and six videos were uploaded by accounts owned by doctors, nondoctors, and institutions, respectively (Figure 1). Videos uploaded by institutional accounts were longer and had more likes, comments, and shares than those uploaded by individuals, whether doctors or not. There was no significant difference in the number of favorites among the three types of accounts (Table 2). The number of fans of doctors varied with academic rank, but no significant differences in video length, likes, comments, favorites, or shares were observed (p > .05).

Basic data of the videos.
Comparison of basic data of the videos.
Video quality analysis
Fifty-three percent of videos received a JAMA score of 1, and none scored 4. A GQS rating of 2 was the most common (30%), followed by a rating of 3 (28%). Videos with an mDISCERN score of 3 accounted for 51%. For comprehensibility, 69% of videos scored 67–100%, indicating that most videos were easy to understand. However, 62% of videos scored 0–33% for operability, indicating considerable room for improvement (Table 3).
Analysis of video quality.
JAMA: Journal of the American Medical Association; mDISCERN: modified Decision-making Information Support Criteria for Evaluating the Reliability of Non-randomised Studies; PEMAT: Patient Education Materials Assessment Tool.
In a subgroup analysis, videos were classified by account ownership; the JAMA rating, GQS score, and mDISCERN score differed across ownership categories (p < .05) (Table 4). Videos posted by doctor accounts had higher JAMA ratings, comprehensibility, and operability than those posted by nondoctor personal accounts (p < .05), whereas no significant quality difference was found between videos uploaded by doctors and by institutional accounts (p > .05) (Figure 2). Videos were then classified by the professional titles of the uploading doctors; no significant difference in video quality was observed across academic ranks (Figure 3). Scores also differed by content category: videos focused on disease knowledge had significantly higher comprehensibility scores than those focused on treatment (p < .05) (Figure 4).

Comparison of videos based on account ownership.

Comparison of videos published by doctors based on professional rank.

Comparison of videos based on their content-related categories.
Comparison of video quality.
JAMA: Journal of the American Medical Association; mDISCERN: modified Decision-making Information Support Criteria for Evaluating the Reliability of Non-randomised Studies; PEMAT: Patient Education Materials Assessment Tool.
Video numerical correlation and regression analysis
Spearman correlation analysis revealed that the number of fans of the posting account correlated with the likes, comments, shares, and favorites of the video, but not with video quality. Video duration correlated with the JAMA rating, GQS score, mDISCERN score, and operability, although these correlations were weak or moderately weak. The numbers of likes, comments, favorites, and shares showed no significant correlations with video quality, whereas the quality ratings correlated with one another: strong correlations were observed between the JAMA and GQS scores, the JAMA and mDISCERN scores, and the GQS and mDISCERN scores (Table 5 and Figure 5). To further identify factors affecting video quality, stepwise multiple linear regression was used to analyze the predictors of the GQS score. Two models were obtained. The first included only account ownership, with an R² of 0.08 and a corrected R² of 0.08. The second included account ownership and time of posting, with an R² of 0.12 and a corrected R² of 0.10. The resulting equations are detailed in Tables 6 and 7.
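For readers checking the model summaries, the corrected (adjusted) R² follows from R², the sample size n, and the number of predictors p, assuming the standard definition:

$$\bar{R}^2 = 1 - \left(1 - R^2\right)\frac{n - 1}{n - p - 1}$$

For example, Model 2 with n = 100 videos and p = 2 predictors gives 1 − 0.88 × 99/97 ≈ 0.10, matching the reported value.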

Correlation analysis of video data.
Correlation analysis of video data.
JAMA: Journal of the American Medical Association; mDISCERN: modified Decision-making Information Support Criteria for Evaluating the Reliability of Non-randomised Studies; PEMAT: Patient Education Materials Assessment Tool.
*: p < .05, **: p < .01.
Stepwise regression analysis summary.
Details of stepwise regression analysis.
Discussion
This study investigated the quality of cataract-related videos on TikTok using the JAMA benchmark criteria, GQS, mDISCERN score, and PEMAT-A/V. Analysis of basic video information revealed that accounts posting cataract-related videos were predominantly owned by doctors specializing in cataract. Videos uploaded by doctors received fewer likes and comments but were saved and forwarded more than those uploaded by nondoctors, indicating that they are more likely to be watched and passed on by the audience. Doctors specializing in cataract uploaded videos of higher quality than nondoctors, whereas no significant quality difference was observed between videos uploaded by doctors and by institutions. The basic engagement data of the videos were not correlated with quality, whereas the quality ratings correlated with one another. The relationship between video quality and video data was further clarified through stepwise multiple linear regression, which produced two models; both included account ownership, indicating that account ownership reflects video quality.
With the growing number of users and electronic devices, adverse effects on the eyes cannot be underestimated. These hazards include not only the increased incidence of myopia among teenagers associated with prolonged close-range eye use, but also the wide spread of certain interactive video content that may harm the eyes and increase the risk of eye disease.19 Moreover, the quality of the information conveyed by videos can directly or indirectly affect viewers' decisions. According to one study, more than 5% of videos on TikTok contain incorrect information, which may mislead viewers.20 Therefore, cataract videos on TikTok need to be evaluated to provide a basis for regulation by the platform.
From the perspective of social media, the quality of a video and how easy it is to understand are often inferred from its accumulated likes, comments, favorites, and shares, which reflect viewers' interaction with the video and their acceptance of the viewpoints it shares.21 In this study, however, the average numbers of likes and comments on the higher-quality videos uploaded by doctors were lower than those on videos uploaded by nondoctor individuals and institutional users, possibly because of differences in the account owners' numbers of fans. In addition, the average numbers of saves and shares of videos uploaded by doctors were higher than those of nondoctors. This suggests that, while watching, users can identify which videos are more beneficial to those around them, independent of the specific content of the videos. Furthermore, there were no significant differences in these metrics among doctors of different academic ranks, indicating that users are not highly sensitive to the academic rank of the uploading doctor; doctors with junior ranks can produce videos of good quality.
This study used multiple scales to evaluate video quality from multiple perspectives. The quality of most videos was not satisfactory. Videos published by doctors and institutions were of significantly better quality than those published by nondoctors, which can be attributed to doctors' professional knowledge of cataract. Institutions, moreover, can engage professional doctors to review or present video content: the institution owns the account, while the featured speaker in the video is a doctor. This study further explored the correlations among the data using Spearman correlation analysis and the factors affecting video quality using regression. The results showed a significant association between video quality and the uploader, but not between quality and the numbers of likes, comments, favorites, and shares. This further indicates that users can preferentially choose quality videos based on account ownership. In the regression analysis, Model 1 included account ownership, while Model 2 included account ownership and video upload time. Model 2 had a higher R², but the relationship between upload time and video quality could not be well explained.
The content of a popular science video posted on social media is aimed mainly at the platform's users. Unlike professional medical literature websites, creators of popular science videos on social media must consider that most users have no relevant professional background and may not verify the authenticity of the professional information conveyed. From this perspective, video quality is important because it directly affects user decisions.22 Users who watch these videos as reference information for cataract diagnosis and treatment need strategies for selecting high-quality content. Through previous viewing experience, users have been trained to judge the quality of a video by its likes, comments, favorites, and shares.23 However, this study found otherwise: video quality showed no significant correlation with likes, comments, favorites, or shares, consistent with previous research. High user engagement metrics should therefore not be interpreted as indicators of informational reliability or clinical accuracy for cataract video content. Instead, users should choose videos as references for decision-making based on account ownership, which is convenient on TikTok: users can navigate to the account home page to view the authentication information.
However, China's video platform ecosystem is highly fragmented, dominated by players such as WeChat Video, BiliBili, and Kuaishou. The multifaceted daily needs of patients with cataract preclude exclusive reliance on TikTok for ophthalmic information. Nevertheless, TikTok's status as one of the highest-traffic short-video platforms incentivizes ophthalmologists and science communicators to prioritize it for disseminating cataract-related content, and cross-platform account matrices partially homogenize content quality across these platforms. These videos primarily target laypersons with cataract rather than medical professionals, which inherently limits their clinical depth; their information quality cannot match that of specialized ophthalmic research platforms hosting academic conferences for cataract specialists. Consequently, researchers and clinicians seeking advanced cataract knowledge should not treat TikTok as a reliable primary resource.
This study has limitations. First, the TikTok videos were retrieved within a specific period, consistent with the cross-sectional design; the inherent limitations of cross-sectional research therefore apply, including the inability to infer causal relationships between indicators or to predict future video quality.24 Second, no search conditions were applied during retrieval, and only “cataract” was used as the search term; users entering different search words or phrases may obtain different results. In addition, the TikTok algorithm recommends retrieved videos according to user profiles, so different users may retrieve different cataract-related videos. Third, two physicians independently rated the videos and determined the final quality rating through discussion; this approach inherently entails subjectivity and precluded quantitative verification of interrater reliability.
Conclusion
Cataract-related videos are uploaded to the platform mainly by doctors with relevant specialization. Judged against existing evaluation standards, the quality of these videos has room for improvement. Users should not simply trust the information provided by high-engagement videos on the basis of basic data such as likes and comments; they also need to consider account ownership. Video platforms, in turn, should introduce professional review mechanisms to improve the quality of professional medical videos.
Footnotes
Author contributions
All authors contributed to the study design. The first draft of the manuscript was written by Jiamin Cao and Ziyi Zhu. Data collection and analysis were performed by Feng Zhang and Ziyi Zhu. Wei Xiong revised the manuscript and helped with the project administration and funding acquisition. All authors read and approved the final manuscript.
Funding
The authors disclosed receipt of the following financial support for the research, authorship, and/or publication of this article: This work was supported by the National Natural Science Foundation of China (Grant No. 82371104) and the Natural Science Foundation of Hunan Province (Grant No. 2023JJ30851).
Declaration of conflicting interests
The authors declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.
Data availability statement
The datasets generated during and/or analyzed during the current study are available from the corresponding author on reasonable request.
