bims-librar Biomed News
on Biomedical librarianship
Issue of 2024-06-16
seventeen papers selected by
Thomas Krichel, Open Library Society



  1. Res Synth Methods. 2024 Jun 14.
      Citation indices providing information on backward citation (BWC) and forward citation (FWC) links are essential for literature discovery, bibliographic analysis, and knowledge synthesis, especially when language barriers impede document identification. However, the suitability of citation indices varies. While some have been analyzed, the majority, whether new or established, lack comprehensive evaluation. Therefore, this study evaluates the citation coverage of the citation indices of 59 databases, encompassing the widely used Google Scholar, Scopus, and Web of Science alongside many others never previously analyzed, such as the emerging Lens, Scite, Dimensions, and OpenAlex or the subject-specific PubMed and JSTOR. Through a comprehensive analysis using 259 journal articles from across disciplines, this research aims to guide scholars in selecting indices with broader document coverage and more accurate and comprehensive backward and forward citation links. Key findings highlight Google Scholar, ResearchGate, Semantic Scholar, and Lens as leading options for FWC searching, with Lens providing superior download capabilities. For BWC searching, the Web of Science Core Collection can be recommended over Scopus for accuracy. BWC information from publisher databases such as IEEE Xplore or ScienceDirect was generally found to be the most accurate, yet only available for a limited number of articles. The findings will help scholars conducting systematic reviews, meta-analyses, and bibliometric analyses to select the most suitable databases for citation searching.
    Keywords:  backward citation searching; citation coverage; citation index; forward citation searching; reference searching; snowballing
    DOI:  https://doi.org/10.1002/jrsm.1729
  2. Nutr Clin Pract. 2024 Jun 12.
      From its first printing in 1879 until publication ceased in 2004, the Index Medicus proved invaluable for persons wishing to conduct healthcare-related research. With the loss of this resource and the rapid expansion of alternative, online sources, it is vital that persons understand how to appropriately search for and use this information. The purpose of this review is to outline the information sources available, discuss how to use current search technology to best obtain relevant information while minimizing nonproductive references, and give the author's opinion on the reliability of the various informational sources available. Topics to be discussed include Medical Subject Headings and PICO searches and sources ranging from the National Library of Medicine and Cochrane Reviews to Wikipedia and other sites, such as associations and commercial interest sites.
    Keywords:  Embase; MeSH; NLM; PICO; PICOS; PICOT; PubMed; research
    DOI:  https://doi.org/10.1002/ncp.11173
  3. Sci Data. 2024 Jun 13. 11(1): 622
      The demand for open data and open science is on the rise, fueled by expectations from the scientific community, calls to increase transparency and reproducibility in research findings, and developments such as the Final Data Management and Sharing Policy from the U.S. National Institutes of Health and a memorandum on increasing public access to federally funded research, issued by the U.S. Office of Science and Technology Policy. This paper explores the pivotal role of data repositories in biomedical research and open science, emphasizing their importance in managing, preserving, and sharing research data. Our objective is to familiarize readers with the functions of data repositories, set expectations for their services, and provide an overview of methods to evaluate their capabilities. The paper serves to introduce fundamental concepts and community-based guiding principles and aims to equip researchers, repository operators, funders, and policymakers with the knowledge to select appropriate repositories for their data management and sharing needs and foster a foundation for the open sharing and preservation of research data.
    DOI:  https://doi.org/10.1038/s41597-024-03449-z
  4. JMIR AI. 2024 May 02. 3: e42630
       BACKGROUND: Widespread misinformation in web resources can lead to serious implications for individuals seeking health advice. Despite that, information retrieval models are often focused only on the query-document relevance dimension to rank results.
    OBJECTIVE: We investigate a multidimensional information quality retrieval model based on deep learning to enhance the effectiveness of online health care information search results.
    METHODS: In this study, we simulated online health information search scenarios with a topic set of 32 different health-related inquiries and a corpus containing 1 billion web documents from the April 2019 snapshot of Common Crawl. Using state-of-the-art pretrained language models, we assessed the quality of the retrieved documents according to their usefulness, supportiveness, and credibility dimensions for a given search query on 6030 human-annotated query-document pairs. We evaluated this approach using transfer learning and more specific domain adaptation techniques.
    RESULTS: In the transfer learning setting, the usefulness model provided the largest distinction between help- and harm-compatible documents, with a difference of +5.6%, leading to a majority of helpful documents in the top 10 retrieved. The supportiveness model achieved the best harm compatibility (+2.4%), while the combination of usefulness, supportiveness, and credibility models achieved the largest distinction between help- and harm-compatibility on helpful topics (+16.9%). In the domain adaptation setting, the linear combination of different models showed robust performance, with help-harm compatibility above +4.4% for all dimensions and going as high as +6.8%.
    CONCLUSIONS: These results suggest that integrating automatic ranking models created for specific information quality dimensions can increase the effectiveness of health-related information retrieval. Thus, our approach could be used to enhance searches made by individuals seeking online health information.
    Keywords:  deep learning; health misinformation; infodemic; information retrieval; language model; transfer learning
    DOI:  https://doi.org/10.2196/42630
  5. Eur Urol Focus. 2024 Jun 13. pii: S2405-4569(24)00086-5. [Epub ahead of print]
       BACKGROUND: Defining optimal therapeutic sequencing strategies in prostate cancer (PC) is challenging and may be assisted by artificial intelligence (AI)-based tools for an analysis of the medical literature.
    OBJECTIVE: To demonstrate that INSIDE PC can help clinicians query the literature on therapeutic sequencing in PC and to develop previously unestablished practices for evaluating the outputs of AI-based support platforms.
    DESIGN, SETTING, AND PARTICIPANTS: INSIDE PC was developed by customizing PubMed Bidirectional Encoder Representations from Transformers. Publications were ranked and aggregated for relevance using data visualization and analytics. Publications returned by INSIDE PC and PubMed were given normalized discounted cumulative gain (nDCG) scores by PC experts reflecting ranking and relevance.
    INTERVENTION: INSIDE PC for AI-based semantic literature analysis.
    OUTCOME MEASUREMENTS AND STATISTICAL ANALYSIS: INSIDE PC was evaluated for relevance and accuracy for three test questions on the efficacy of therapeutic sequencing of systemic therapies in PC.
    RESULTS AND LIMITATIONS: In this initial evaluation, INSIDE PC outperformed PubMed for question 1 (novel hormonal therapy [NHT] followed by NHT) for the top five, ten, and 20 publications (nDCG score, +43, +33, and +30 percentage points [pps], respectively). For question 2 (NHT followed by poly [adenosine diphosphate ribose] polymerase inhibitors [PARPi]), INSIDE PC and PubMed performed similarly. For question 3 (NHT or PARPi followed by 177Lu-prostate-specific membrane antigen-617), INSIDE PC outperformed PubMed for the top five, ten, and 20 publications (+16, +4, and +5 pps, respectively).
    CONCLUSIONS: We applied INSIDE PC to develop standards for evaluating the performance of AI-based tools for literature extraction. INSIDE PC performed competitively with PubMed and can assist clinicians with therapeutic sequencing in PC.
    PATIENT SUMMARY: The medical literature is often very difficult for doctors and patients to search. In this report, we describe INSIDE PC, an artificial intelligence (AI) system created to help search articles published in medical journals and determine the best order of treatments for advanced prostate cancer much more quickly. We found that INSIDE PC works as well as another search tool, PubMed, a widely used resource for searching and retrieving articles published in medical journals. Our work with INSIDE PC shows new ways in which AI can be used to search published articles in medical journals and how these systems might be evaluated to support shared decision-making.
    Keywords:  Artificial intelligence; INSIDE PC; Literature extraction; Literature search; Machine learning; Prostate cancer therapy; Semantic analysis; Therapeutic sequencing
    DOI:  https://doi.org/10.1016/j.euf.2024.05.022
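    A minimal sketch of normalized discounted cumulative gain (nDCG), the ranking metric used in the study above, is given below in Python. The relevance grades are hypothetical and the code is illustrative only, not the study's implementation.
      import math

      def dcg(relevances):
          """Discounted cumulative gain for a ranked list of relevance grades."""
          return sum(rel / math.log2(rank + 2) for rank, rel in enumerate(relevances))

      def ndcg(relevances, k):
          """nDCG@k: DCG of the observed ranking divided by DCG of the ideal ranking."""
          ideal = sorted(relevances, reverse=True)
          denom = dcg(ideal[:k])
          return dcg(relevances[:k]) / denom if denom > 0 else 0.0

      # Hypothetical expert relevance grades (0-3) for the top 5 results of two tools.
      tool_a = [3, 2, 3, 0, 1]
      tool_b = [1, 0, 2, 3, 0]
      print(f"nDCG@5, tool A: {ndcg(tool_a, 5):.2f}")  # higher = better ranking
      print(f"nDCG@5, tool B: {ndcg(tool_b, 5):.2f}")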
  6. Lab Anim. 2024 Jun 13. 236772241237608
      The search for 3R-relevant information is a prerequisite for any planned experimental approach considering animal use. Such a literature search includes all methods to replace, reduce and refine (3Rs) animal testing with the aim of improving animal welfare, and requires an intensive screening of literature databases reflecting the current state of knowledge in experimental biomedicine. We developed SMAFIRA, a freely available online tool to facilitate the screening of PubMed/MEDLINE for possible alternatives to animal testing. SMAFIRA employs state-of-the-art language models from the field of deep learning, and provides relevant literature citations in a ranked order, classified according to the experimental model used. By using this classification, the search for alternative methods in the biomedical literature will become much more efficient. The tool is available at https://smafira.bf3r.de.
    Keywords:  3Rs; Alternatives; Replacement; literature search; machine learning
    DOI:  https://doi.org/10.1177/00236772241237608
  7. JPRAS Open. 2024 Sep; 41: 33-36
       Purpose: Ensuring that educational materials geared toward transgender and gender-diverse patients are comprehensible can mitigate barriers to accessing gender-affirming care and understanding postoperative care. This study evaluates the readability of online patient resources related to gender-affirming vaginoplasty.
    Methods: Online searches for vaginoplasty were conducted in January 2023 using two search engines. The readability scores of the top ten websites and their associated hyperlinked webpages were derived using ten validated readability tests.
    Results: A total of 40 pages were assessed from the vaginoplasty searches. The average reading grade level for all the webpages with relevant educational materials was 13.3 (i.e., college level), exceeding the American Medical Association's recommended 6th grade reading level.
    Conclusion: Complex patient resources may impede patients' understanding of gender-affirming vaginoplasty. Online patient education resources should be created that are more accessible to patients with diverse reading comprehension capabilities.
    Keywords:  Health literacy; Health services for transgender persons; Healthcare disparities; Patient educational resources; Readability; Vaginoplasty
    DOI:  https://doi.org/10.1016/j.jpra.2024.04.004
  8. Turk Neurosurg. 2023 Jun 26.
       AIM: Internet usage to obtain health-related information is rapidly increasing. However, there are concerns about the comprehensibility and reliability of internet-accessed health-related information. The aim of this research was to investigate the reliability, quality, and readability of patient education materials (PEMs) about spinal cord stimulation (SCS) on the internet.
    MATERIAL AND METHODS: A total of 114 websites suitable for the study were identified after a search on Google for the term "spinal cord stimulation." Gunning Fog (GFOG), Flesch-Kincaid Grade Level (FKGL), Flesch Reading Ease Score (FRES), and Simple Measure of Gobbledygook (SMOG) were used to determine the readability of sites. The credibility of the websites was assessed using the Journal of the American Medical Association (JAMA) score. Quality was assessed using the global quality score (GQS), the DISCERN score, and the Health on the Net Foundation code of conduct (HONcode).
    RESULTS: Evaluating the text sections, the mean SMOG and FKGL were 10.92 ± 1.61 and 11.62 ± 2.11 years, respectively, and the mean FRES and GFOG were 45.32 ± 10.71 and 14.62 ± 2.24 (both very difficult), respectively. Of all the websites, 10.5% were found to be of high quality, 13.2% were found to be of high reliability, and only 6.1% had a HONcode. A significant difference was found between the typologies of the websites and the reliability and quality scores (p < 0.05).
    CONCLUSION: The internet-based PEMs about SCS were found to have a readability level that exceeded the Grade 6 level recommended by the National Institutes of Health. However, the materials demonstrated low reliability and poor quality. We think that websites related to SCS, which is a specific neuromodulation option among several interventional procedures for the management of chronic pain, should have some level of readability according to specific indexes and reliable content suitable for the public's educational level.
    DOI:  https://doi.org/10.5137/1019-5149.JTN.42973-22.3
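    For reference, the readability indexes used above follow standard published formulas; the sketch below shows them in Python, with the word, sentence, and syllable counts supplied as hypothetical inputs (counting them from raw text is the hard part and is assumed to be done elsewhere).
      import math

      def flesch_reading_ease(words, sentences, syllables):
          # FRES: higher scores = easier text; below about 50 is considered difficult.
          return 206.835 - 1.015 * (words / sentences) - 84.6 * (syllables / words)

      def flesch_kincaid_grade(words, sentences, syllables):
          # FKGL: approximate US school grade level needed to read the text.
          return 0.39 * (words / sentences) + 11.8 * (syllables / words) - 15.59

      def gunning_fog(words, sentences, complex_words):
          # GFOG: complex_words = words with three or more syllables.
          return 0.4 * ((words / sentences) + 100 * (complex_words / words))

      def smog(sentences, polysyllables):
          # SMOG: polysyllables counted over a sample of at least 30 sentences.
          return 1.0430 * math.sqrt(polysyllables * (30 / sentences)) + 3.1291

      # Hypothetical counts for one patient-education webpage.
      w, s, syl, cx = 850, 40, 1450, 160
      print(f"FRES {flesch_reading_ease(w, s, syl):.1f}  FKGL {flesch_kincaid_grade(w, s, syl):.1f}")
      print(f"GFOG {gunning_fog(w, s, cx):.1f}  SMOG {smog(s, cx):.1f}")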
  9. J Spinal Cord Med. 2024 Jun 11. 1-6
       OBJECTIVE: The use of artificial intelligence chatbots to obtain information about patients' diseases is increasing. This study aimed to determine the reliability and usability of ChatGPT for spinal cord injury-related questions.
    METHODS: Three raters simultaneously evaluated a total of 47 questions on a 7-point Likert scale for reliability and usability, based on the three most frequently searched keywords in Google Trends ('general information', 'complications' and 'treatment').
    RESULTS: Inter-rater Cronbach α scores indicated substantial agreement for both reliability and usability (α between 0.558 and 0.839, and between 0.373 and 0.772, respectively). The highest mean reliability score was for 'complications' (mean 5.38), and the lowest was for the 'general information' section (mean 4.20). 'Treatment' had the highest mean usability score (mean 5.87), and the lowest mean value was again recorded in the 'general information' section (mean 4.80).
    CONCLUSION: The answers given by ChatGPT to questions related to spinal cord injury were reliable and useful. Nevertheless, it should be kept in mind that ChatGPT may provide incorrect or incomplete information, especially in the 'general information' section, which may mislead patients and their relatives.
    Keywords:  Artificial intelligence; ChatGPT; Reliability; Spinal cord injury
    DOI:  https://doi.org/10.1080/10790268.2024.2361551
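    The inter-rater statistic reported above, Cronbach's α, treats each rater as an "item" and asks how consistently the raters score the same set of questions. Below is an illustrative Python sketch with hypothetical 7-point Likert scores, not the study's data.
      def cronbach_alpha(scores_by_rater):
          """scores_by_rater: one list of scores per rater, all over the same questions."""
          k = len(scores_by_rater)
          n = len(scores_by_rater[0])

          def var(xs):  # sample variance
              m = sum(xs) / len(xs)
              return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

          item_variance_sum = sum(var(r) for r in scores_by_rater)
          totals = [sum(r[i] for r in scores_by_rater) for i in range(n)]
          return (k / (k - 1)) * (1 - item_variance_sum / var(totals))

      # Three hypothetical raters scoring six questions on a 7-point Likert scale.
      raters = [
          [5, 6, 4, 7, 5, 6],
          [5, 5, 4, 6, 5, 6],
          [6, 6, 5, 7, 4, 6],
      ]
      print(f"Cronbach's alpha: {cronbach_alpha(raters):.3f}")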
  10. Prostate Cancer Prostatic Dis. 2024 Jun 13.
       BACKGROUND: ChatGPT has recently emerged as a novel resource for patients' disease-specific inquiries. There is, however, limited evidence assessing the quality of the information. We evaluated the accuracy and quality of ChatGPT's responses on male lower urinary tract symptoms (LUTS) suggestive of benign prostate enlargement (BPE) when compared to two reference resources.
    METHODS: Using patient information websites from the European Association of Urology and the American Urological Association as reference material, we formulated 88 BPE-centric questions for ChatGPT 4.0+. Independently and in duplicate, we compared ChatGPT's responses with the reference material, calculating accuracy through F1 score, precision, and recall metrics. We used a 5-point Likert scale for quality rating. We evaluated examiner agreement using the interclass correlation coefficient and assessed the difference in the quality scores with the Wilcoxon signed-rank test.
    RESULTS: ChatGPT addressed all (88/88) LUTS/BPE-related questions. For the 88 questions, the recorded F1 score was 0.79 (range: 0-1), precision 0.66 (range: 0-1), recall 0.97 (range: 0-1), and the quality score had a median of 4 (range = 1-5). Examiners had a good level of agreement (ICC = 0.86). We found no statistically significant difference between the scores given by the examiners and the overall quality of the responses (p = 0.72).
    DISCUSSION: ChatGPT demonstrated potential utility in educating patients about BPE/LUTS, their prognosis, and treatment, which can help in the decision-making process. One must exercise prudence when recommending it as the sole information outlet. Additional studies are needed to fully understand the extent of AI's efficacy in delivering patient education in urology.
    DOI:  https://doi.org/10.1038/s41391-024-00847-7
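    The accuracy metrics above relate in a simple way: precision is the share of the chatbot's statements that match the reference, recall is the share of reference statements the chatbot covers, and F1 is their harmonic mean. A short illustrative Python sketch with hypothetical counts follows.
      def precision_recall_f1(true_pos, false_pos, false_neg):
          precision = true_pos / (true_pos + false_pos) if (true_pos + false_pos) else 0.0
          recall = true_pos / (true_pos + false_neg) if (true_pos + false_neg) else 0.0
          f1 = 2 * precision * recall / (precision + recall) if (precision + recall) else 0.0
          return precision, recall, f1

      # Hypothetical answer: 29 reference statements covered, 15 unsupported extras,
      # 1 reference statement missed.
      p, r, f1 = precision_recall_f1(true_pos=29, false_pos=15, false_neg=1)
      print(f"precision={p:.2f} recall={r:.2f} F1={f1:.2f}")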
  11. J Craniofac Surg. 2024 Jun 11.
       OBJECTIVE: This study aimed to evaluate the utility and efficacy of ChatGPT in addressing questions related to thyroid surgery, taking into account accuracy, readability, and relevance.
    METHODS: A simulated physician-patient consultation on thyroidectomy surgery was conducted by posing 21 hypothetical questions to ChatGPT. Responses were evaluated using the DISCERN score by 3 independent ear, nose and throat specialists. Readability measures including the Flesch Reading Ease, Flesch-Kincaid Grade Level, Gunning Fog Index, Simple Measure of Gobbledygook, Coleman-Liau Index, and Automated Readability Index were also applied.
    RESULTS: The majority of ChatGPT responses were rated fair or above using the DISCERN system, with an average score of 45.44 ± 11.24. However, the readability scores were consistently higher than the recommended grade 6 level, indicating the information may not be easily comprehensible to the general public.
    CONCLUSION: While ChatGPT exhibits potential in answering patient queries related to thyroid surgery, its current formulation is not yet optimally tailored for patient comprehension. Further refinements are necessary for its efficient application in the medical domain.
    DOI:  https://doi.org/10.1097/SCS.0000000000010395
  12. Clin Cosmet Investig Dermatol. 2024; 17: 1321-1328
       Background: The available tools for evaluating scientific content target written scientific evidence and referencing without considering surgical, technical, or videographic aspects.
    Objective: This study developed and validated a tool for qualitatively evaluating videos in the field of skin surgery. This will increase the quality of recorded surgical materials published online and ultimately enhance the reliability of streaming platforms as educational resources.
    Methodology: Tool development included several stages: draft generation, expert panel setting, internal reliability testing, and pilot study.
    Results: After two rounds of expert panels evaluating the developed tool, 23 relevant items evaluating the educational value, scientific accuracy, and clarity of the surgical technical steps of the videos were obtained. We applied the tool to the top 25 YouTube videos discussing elliptical excision. Internal consistency, reliability, and substantial agreement between the raters were identified. We identified a strong positive correlation between our tool score and the global rating score (r = 0.55, P = 0.004).
    Conclusion: It is critical to avoid relying indiscriminately on any available video for educational purposes. The tool generated and validated in our study can determine a video's value. A pilot study of 25 YouTube videos demonstrated that the available videos are of fair to good quality, underscoring the need for high-quality video production.
    Keywords:  education; skin surgery; tool validation; video evaluation; youtube videos
    DOI:  https://doi.org/10.2147/CCID.S469592
  13. Endocr Connect. 2024 Jun 01. pii: EC-24-0059. [Epub ahead of print]
      YouTube® is one of the leading platforms for health information. However, the lack of regulation of content and quality raises concerns about accuracy and reliability. CoMICs (Concise Medical Information Cines) are evidence-based short videos created by medical students and junior doctors and reviewed by experts to ensure clinical accuracy. We performed a systematic review to understand the impact of videos on knowledge and awareness about diabetes and PCOS. We then evaluated the quality of YouTube® videos about diabetes and PCOS using various validated quality assessment tools and compared these with CoMICs videos on the same topics. Quality assessment tools like DISCERN, JAMA benchmark criteria, and Global Quality Score (GQS) were employed. Some of the authors of this study also co-authored the creation of some of the CoMICs evaluated. Our study revealed that while videos effectively improve understanding of diabetes and PCOS, there are notable differences in the quality and reliability of the videos on YouTube®. For diabetes, CoMICs videos had higher DISCERN scores (CoMICs vs YouTube®: 2.4 vs 1.6), superior reliability (p<0.01) and treatment quality (p<0.01), and met JAMA criteria for authorship (100% vs. 30.6%) and currency (100% vs. 53.1%). For PCOS, CoMICs had higher DISCERN scores (2.9 vs. 1.9), reliability (p<0.01), and treatment quality (p<0.01); met JAMA criteria for authorship (100% vs. 34.0%) and currency (100% vs. 54.0%); and had higher GQS scores (4.0 vs 3.0). In conclusion, CoMICs outperformed other similar sources on YouTube® in providing reliable, evidence-based medical information which may be used for patient education.
    DOI:  https://doi.org/10.1530/EC-24-0059
  14. Aliment Pharmacol Ther. 2024 Jun 10.
       BACKGROUND: TikTok is one of the fastest growing social media platforms. Irritable bowel syndrome (IBS) has recently become a trending topic of interest among TikTok users.
    AIM: To better understand the quality and accuracy of information presented in the most popular IBS-relevant videos on TikTok.
    METHODS: We reviewed videos with the tag 'IBS'. We excluded those not relevant to IBS or lasting <10 s or >10 min. Baseline characteristics about the videos were collected. Two independent reviewers assessed each video using DISCERN and Patient Education Materials and Assessment Tool (PEMAT) tools, two validated instruments to assess the quality of patient education materials.
    RESULTS: Of 100 videos, 33% were uploaded by participants with a defined medical background. The median DISCERN score of videos uploaded by participants with a medical background was 2.43 (2.00-3.10); from participants with a non-medical background, it was 1.37 (1.23-1.70) (p < 0.01). The median PEMAT Understandability scores of videos uploaded by participants with or without a medical background were 92.86 (86.61-95.00) and 80.95 (75.76-89.58), respectively (p < 0.01). The median PEMAT Actionability scores of videos uploaded by participants with or without a medical background were 100.00 (66.67-100.00) and 0.00 (0.00-45.83), respectively (p < 0.01).
    CONCLUSION: Videos posted by medical professionals are easier to understand and act on, more reliable and unbiased, and more likely to recommend shared decision-making about treatment.
    DOI:  https://doi.org/10.1111/apt.18096
  15. BMC Public Health. 2024 Jun 14. 24(1): 1594
       BACKGROUND: YouTube, a widely recognized global video platform, is inaccessible in China, whereas Bilibili and TikTok are popular platforms for long and short videos, respectively. There are many videos related to laryngeal carcinoma on these platforms. This study aims to identify upload sources, contents, and feature information of these videos on YouTube, Bilibili, and TikTok, and further evaluate the video quality.
    METHODS: On January 1, 2024, we searched the top 100 videos by default sort order (300 videos in total) with the terms "laryngeal carcinoma" and "throat cancer" on YouTube and the corresponding Chinese-language term on Bilibili and TikTok. Videos were screened for relevance and similarity. Video characteristics were documented, and quality was assessed by using the Patient Education Materials Assessment Tool (PEMAT), Video Information and Quality Index (VIQI), Global Quality Score (GQS), and modified DISCERN (mDISCERN).
    RESULTS: The analysis included 99 YouTube videos, 76 from Bilibili, and 73 from TikTok. Median video lengths were 193 s (YouTube), 136 s (Bilibili), and 42 s (TikTok). TikTok videos demonstrated higher audience interaction. Bilibili had the lowest proportion of original content (69.7%). Treatment was the most popular topic on YouTube and Bilibili, whereas prognosis was the most popular on TikTok. Solo narration was the most common video style across all platforms. Video uploaders were predominantly non-profit organizations (YouTube), self-media (Bilibili), and doctors (TikTok), with TikTok authors having the highest certification rate (83.3%). Video quality, assessed using PEMAT, VIQI, GQS, and mDISCERN, varied across platforms, with YouTube generally showing the highest scores. Videos from professional authors performed better than videos from non-professionals based on the GQS and mDISCERN scores. Spearman correlation analysis showed no strong relationship between video quality and audience interaction.
    CONCLUSIONS: Videos on social media platforms can help the public learn about laryngeal cancer to some extent. TikTok videos attract the most audience engagement, but YouTube videos are of the best quality. However, video quality across all platforms still needs improvement. More professional uploaders are needed to improve the quality of videos on laryngeal carcinoma, and content creators should also pay attention to certification, originality, and the style of video shooting. As for the platforms, refining their recommendation algorithms would allow users to receive more high-quality videos.
    Keywords:  Bilibili; Information quality; Laryngeal cancer; Patient education; Public education; Public health; Social media; TikTok; YouTube
    DOI:  https://doi.org/10.1186/s12889-024-19077-6
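    The Spearman analysis mentioned above correlates the rank order of two quantities rather than their raw values; a minimal sketch with hypothetical per-video figures (using SciPy's spearmanr) is shown below.
      from scipy.stats import spearmanr

      quality_scores = [3, 4, 2, 5, 3, 4, 2, 3]             # e.g. GQS per video (hypothetical)
      interactions = [120, 90, 300, 60, 450, 80, 200, 150]  # e.g. likes per video (hypothetical)

      rho, p_value = spearmanr(quality_scores, interactions)
      print(f"Spearman rho = {rho:.2f}, p = {p_value:.3f}")  # |rho| near 0 = no strong relationship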
  16. Front Public Health. 2024; 12: 1400749
       Background: Positive lifestyle adjustments have become effective methods in treating gastroesophageal reflux disease (GERD). Utilizing short video platforms to encourage GERD patients for effective self-disease management is a convenient and cost-effective approach. However, the quality of GERD-related videos on short video platforms is yet to be determined, and these videos may contain misinformation that patients cannot recognize. This study aims to assess the information quality of GERD-related short videos on TikTok and Bilibili in China.
    Methods: We searched and filtered the top 100 GERD-related videos on TikTok and Bilibili based on comprehensive rankings. Two independent gastroenterologists conducted a comprehensive evaluation of the video quality using the Global Quality Score and the modified DISCERN tool. Simultaneously, the content of the videos was analyzed across six aspects: definition, symptoms, risk factors, diagnosis, treatment, and outcomes.
    Results: A total of 164 GERD-related videos were collected in this study; videos from non-gastrointestinal health professionals constituted the majority (56.71%), with only 28.66% originating from gastroenterology health professionals. The overall quality and reliability of the videos were relatively low, with DISCERN and GQS scores of 2 (IQR: 2-3) and 3 (IQR: 2-3), respectively. Videos from gastrointestinal health professionals exhibited the highest reliability and quality, with DISCERN scores of 3 (IQR: 3-4) and GQS scores of 3 (IQR: 3-4).
    Conclusion: Overall, the information content and quality of GERD-related videos still need improvement. In the future, health professionals are required to provide high-quality videos to facilitate effective self-disease management for GERD patients.
    Keywords:  Bilibili; TikTok; gastroesophageal reflux disease; health information; short videos
    DOI:  https://doi.org/10.3389/fpubh.2024.1400749
  17. J Cardiopulm Rehabil Prev. 2024 Jun 17.
       PURPOSE: There is a growing concern surrounding the utility of medical content on social media. In this study, the popularity metrics and content quality of cardiac rehabilitation (CR) videos on YouTube regarding patient education were examined.
    METHODS: Using the search key word "cardiac rehabilitation," we analyzed the 50 most relevant videos. Our video popularity analytics encompassed viewing rate, like ratio, number of comments, and the video power index (VPI). We assessed content quality using the Global Quality Scale (GQS), the modified DISCERN questionnaire, Journal of the American Medical Association (JAMA) benchmark criteria, Patient Education Materials Assessment Tool for Audio/Visual Materials (PEMAT-A/V), and a novel tool, the Cardiac Rehabilitation Specific Scale (CRSS).
    RESULTS: Notably, 78% of the videos were uploaded by medical organizations. The average viewing rate was 4.6 views per day. There were positive correlations between the scores from different content quality scales. Median scores for the GQS, the modified DISCERN questionnaire, JAMA benchmark criteria, and the CRSS were 3, 3.5, 2, and 5, respectively. Mean PEMAT-A/V scores were 60.4% for understandability and 38.3% for actionability. Videos published by entities other than medical centers predicted lower CRSS and GQS scores. High JAMA benchmark criteria scores were negative predictors of VPI, view rate, and number of comments.
    CONCLUSION: Our findings suggest that CR-related videos on YouTube are characterized by low popularity, average content quality and understandability, but a lack of reliability and actionability. To ensure individuals seek accurate CR information on social media platforms, we recommend directing them to videos uploaded by medical centers.
    DOI:  https://doi.org/10.1097/HCR.0000000000000864