bims-librar Biomed News
on Biomedical librarianship
Issue of 2024‒11‒03
thirty-one papers selected by
Thomas Krichel, Open Library Society



  1. Med Ref Serv Q. 2024 Oct 29. 1-16
      This paper uses the concept of resilience engineering as an organizing principle to discuss best practices that evolved within health science/medical libraries in the United States during the COVID-19 crisis, focusing on the period March–August 2020. Protection of library staff, assistance to medical staff, reduction of misinformation circulation, and public health consumerism all required substantial changes to standard processes. These process changes had to arise in the context of both physical isolation and information overload. Some practices became widespread due to their utility, and these are the focus of this report.
    Keywords:  Accessibility; COVID-19; HIPAA; health science library; instruction; misinformation; public health; remote work; resilience
    DOI:  https://doi.org/10.1080/02763869.2024.2420045
  2. Nucleic Acids Res. 2024 Oct 29. pii: gkae967. [Epub ahead of print]
      The NCBI Taxonomy resource (https://www.ncbi.nlm.nih.gov/taxonomy) has long been a trusted, curated hub for organism names, classifications, and links to related data for all taxonomic nodes. NCBI Datasets (https://www.ncbi.nlm.nih.gov/datasets/) is an improved way to leverage the rich data available at NCBI so users can effectively browse, search, and download information. While taxonomy data have been a cornerstone of NCBI Datasets since its inception, we recently extended the taxonomy information available via NCBI Datasets by updating the existing NCBI Datasets taxonomy page, implementing a new taxonomy name details page, expanding programmatic access to taxonomic information via command-line tools and APIs, and improving the way we handle taxonomic queries to connect users to gene and genome data. This paper highlights these improvements and provides examples to help users effectively harness these new features.
    DOI:  https://doi.org/10.1093/nar/gkae967
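    The taxonomy data above can also be reached programmatically; a minimal sketch in Python, assuming the public NCBI Datasets v2alpha REST endpoint (the endpoint path and JSON field names are drawn from the public API documentation, not from this paper, and should be treated as assumptions):
      import json
      from urllib.parse import quote
      from urllib.request import urlopen

      def taxon_summary(name: str) -> dict:
          # Query the assumed NCBI Datasets taxonomy endpoint for one taxon name.
          url = ("https://api.ncbi.nlm.nih.gov/datasets/v2alpha/taxonomy/taxon/"
                 + quote(name))
          with urlopen(url) as resp:
              return json.load(resp)

      # Print tax_id, organism name, and rank for each node matching the query.
      for node in taxon_summary("homo sapiens").get("taxonomy_nodes", []):
          tax = node.get("taxonomy", {})
          print(tax.get("tax_id"), tax.get("organism_name"), tax.get("rank"))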
  3. BMJ Evid Based Med. 2024 Oct 31. pii: bmjebm-2023-112617. [Epub ahead of print]
    Cochrane Rapid Reviews Methods Group
      
    Keywords:  Evidence-Based Practice; Methods; Systematic Reviews as Topic
    DOI:  https://doi.org/10.1136/bmjebm-2023-112617
  4. J Foot Ankle Surg. 2024 Nov-Dec;63(6): 623. pii: S1067-2516(24)00207-2.
      
    DOI:  https://doi.org/10.1053/j.jfas.2024.08.009
  5. PLoS One. 2024;19(10): e0308782
      In 2017, the National Library of Medicine (NLM) added a voluntary field for conflict of interest (COI) statements ("posted COI") on the abstract page of PubMed, but the extent to which it is used is unknown. This repeated cross-sectional study examined journals and articles indexed on PubMed from 2016 through 2021. We described the proportion of all journals with at least one article that included a posted COI and the percentage of all articles that included a posted COI over time. We also examined 100 randomly selected articles published between June 2021 and May 2022 from each of the 40 highest impact journals. For these, we established whether the articles had published COIs, and, of these, the proportion that included a posted COI. Among approximately 7,000 journals publishing articles each year, the proportion of journals with at least one article with a posted COI statement increased from 25.9% in 2016 to 33.2% in 2021. Among nearly 400,000 articles published each year, the proportion of articles that included a posted COI also increased from 9.0% in 2016 to 43.0% in 2021. Among 3,888 articles published in the 40 highest impact journals in 2021-2022, 30.2% (95% CI: 28.7%-31.6%) had published COIs; of these, 63.3% (95% CI: 60.4%-66.0%) included a posted COI. Use of the PubMed COI statement has increased since it became available in 2017, but adoption is still limited, even among high impact journals. NLM should carry out additional outreach to journals that are not using the statement to promote greater transparency of COIs.
    DOI:  https://doi.org/10.1371/journal.pone.0308782
  6. bioRxiv. 2024 Oct 21. pii: 2024.10.16.618663. [Epub ahead of print]
      We created GNQA, a generative pre-trained transformer (GPT) knowledge base driven by a performant retrieval-augmented generation (RAG) system with a focus on aging, dementia, Alzheimer's, and diabetes. We uploaded a corpus of three thousand peer-reviewed publications on these topics into the RAG. To address concerns about inaccurate responses and GPT 'hallucinations', we implemented a context provenance tracking mechanism that enables researchers to validate responses against the original material and to get references to the original papers. To assess the effectiveness of contextual information, we collected evaluations and feedback from both domain expert users and 'citizen scientists' on the relevance of GPT responses. A key innovation of our study is automated evaluation by way of a RAG assessment system (RAGAS). RAGAS combines human expert assessment with AI-driven evaluation to measure the effectiveness of RAG systems. When evaluating the responses to their questions, human respondents give a "thumbs-up" 76% of the time, while RAGAS scores 90% on answer relevance for questions posed by experts and 74% for GPT-generated questions. With RAGAS we created a benchmark that can be used to continuously assess the performance of our knowledge base. Full GNQA functionality is embedded in the free GeneNetwork.org web service, an open-source system containing over 25 years of experimental data on model organisms and humans. The code developed for this study is published under a free and open-source software license at https://git.genenetwork.org/gn-ai/tree/README.md.
    Keywords:  FAIR; GPT; RAG; aging, dementia, Alzheimer’s and diabetes; artificial intelligence; systems genetics
    DOI:  https://doi.org/10.1101/2024.10.16.618663
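    Readers wanting to reproduce RAGAS-style scoring can use the open-source ragas Python package; a minimal sketch, assuming the ragas ~0.1 evaluate interface and a configured LLM backend (illustrative only, not the GNQA pipeline itself):
      from datasets import Dataset
      from ragas import evaluate
      from ragas.metrics import answer_relevancy, faithfulness

      # One question with the model's answer and its retrieved contexts (toy data).
      ds = Dataset.from_dict({
          "question": ["What gene variants are linked to Alzheimer's risk?"],
          "answer": ["APOE e4 is the strongest common genetic risk factor..."],
          "contexts": [["APOE e4 carriers show elevated risk of late-onset AD."]],
      })

      # Scores fall in [0, 1]; answer_relevancy is the metric the abstract
      # reports as a percentage.
      result = evaluate(ds, metrics=[answer_relevancy, faithfulness])
      print(result)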
  7. Proc (IEEE Conf Multimed Inf Process Retr). 2024 Aug;2024: 471-476
      Information processing and retrieval in literature are critical for advancing scientific research and knowledge discovery. The inherent multimodality and diverse literature formats, including text, tables, and figures, present significant challenges in literature information retrieval. This paper introduces LitAI, a novel approach that employs readily available generative AI tools to enhance multimodal information retrieval from literature documents. By integrating tools such as optical character recognition (OCR) with generative AI services, LitAI facilitates the retrieval of text, tables, and figures from PDF documents. We have developed specific prompts that leverage in-context learning and prompt engineering within Generative AI to achieve precise information extraction. Our empirical evaluations, conducted on datasets from the ecological and biological sciences, demonstrate the superiority of our approach over several established baselines including Tesseract-OCR and GPT-4. The implementation of LitAI is accessible at https://github.com/ResponsibleAILab/LitAI.
    Keywords:  ChatGPT; GPT-4; Generative AI; Literature Mining; OCR; Prompt Engineering
    DOI:  https://doi.org/10.1109/mipr62202.2024.00080
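    The OCR-plus-prompting pattern described above can be sketched as follows; pytesseract and pdf2image are stand-ins and the prompt is hypothetical (see the LitAI repository above for the authors' actual implementation):
      import pytesseract
      from pdf2image import convert_from_path

      PROMPT = (
          "You are given raw OCR text from one page of a scientific paper.\n"
          "Correct obvious OCR errors and return any table as tab-separated rows.\n\n"
          "OCR text:\n{page_text}"
      )

      def page_prompts(pdf_path: str) -> list[str]:
          # Render each PDF page to an image, OCR it, and wrap the text in a
          # prompt that would then be sent to a generative AI service.
          pages = convert_from_path(pdf_path)
          return [PROMPT.format(page_text=pytesseract.image_to_string(p))
                  for p in pages]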
  8. Vet Rec. 2024 Nov;195(Suppl 1): 9
      Type 'myths about artificial intelligence' into an online search engine and there will be plenty of suggestions. But is there any substance behind them?
    DOI:  https://doi.org/10.1002/vetr.4849
  9. Sci Adv. 2024 Nov;10(44): eadn3750
      Do search engine algorithms systematically expose users to content from unreliable sites? There is widespread concern that they do, but little systematic evidence that search engine algorithms, rather than user-expressed preferences, are driving current exposure to and engagement with unreliable information sources. Using two datasets totaling roughly 14 billion search engine result pages (SERPs) from Bing, the second most popular search engine in the U.S., we show that search exposes users to few unreliable information sources. The vast majority of engagement with unreliable information sources from search occurs when users are explicitly searching for information from those sites, despite those searches being an extremely small share of the overall search volume. Our findings highlight the importance of accounting for user preference when examining engagement with unreliable sources from web search.
    DOI:  https://doi.org/10.1126/sciadv.adn3750
  10. J Craniofac Surg. 2024 Nov 01.
      Cleft lip with or without cleft palate (CL/P) is a common congenital facial pathology that occurs at higher incidences in Hispanic communities. The authors analyzed the availability and readability of Spanish-written patient education materials (PEMs) on CL/P from top-ranking U.S. children's hospitals to determine the presence of health literacy barriers. Availability of PEM was evaluated by 2 methods: (1) Google search and (2) evaluation of the official hospital websites. For each institution, a Google search was conducted using the phrase "labio leporino y/o paladar hendido (translation: CL/P) + (hospital name)." In addition, each hospital website was assessed for Spanish PEM availability. Spanish PEMs were then categorized by whether they had been generated by an automated translation function or independently written as Spanish text. English PEM readability was assessed using the Simple Measure of Gobbledygook (SMOG). Spanish PEM readability was assessed using the Spanish Orthographic Length (SOL), the SMOG converted for the Spanish language. Unpaired 2-tailed t tests were used to compare readability. Of a total of 85 pediatric hospitals, 28 (37.3%) had Spanish PEM. Five (6.7%) hospitals created their own Spanish-language document. The average SOL reading level was 9.49, compared with an average SMOG of 11.38 (P < 0.001). Institutions that did not provide Spanish PEM in any format had a significantly higher English PEM SMOG of 12.13, compared with 11.38 for those that did (P = 0.04). Health literacy barriers exist not only for Spanish PEM but also for English PEM, indicating an opportunity to improve communication.
    DOI:  https://doi.org/10.1097/SCS.0000000000010838
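    A worked sketch of the two readability measures used above. The SMOG grade is McLaughlin's standard formula; the SOL conversion constants shown are those commonly reported in the readability literature and are an assumption here, since the abstract does not restate them:
      import math

      def smog_grade(polysyllables: int, sentences: int) -> float:
          # SMOG: grade level from the count of 3+-syllable words,
          # normalized to a 30-sentence sample.
          return 3.1291 + 1.0430 * math.sqrt(polysyllables * (30 / sentences))

      def sol_from_smog(smog: float) -> float:
          # SOL (Spanish Orthographic Length) as a linear conversion of SMOG
          # (constants assumed from the literature, not from this abstract).
          return 0.74 * smog - 2.51

      g = smog_grade(polysyllables=60, sentences=30)   # toy counts
      print(round(g, 2), round(sol_from_smog(g), 2))   # 11.21 5.78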
  11. Cureus. 2024 Sep;16(9): e70270
      INTRODUCTION: There is a growing need for inter-professional education (IPE) to reduce the burden of oral diseases and address oral health disparities. Professional websites associated with inter-professional education can serve as a reliable source of information about oral health. Hence, the study was conducted to determine the prevalence of oral health content on non-dental health professional associations' websites in India.
    METHODS: Eighty-nine organizations were selected from five types of health professional associations. Six dental search terms were used on searchable websites. The keywords were dental, oral, dentistry, mouth, teeth, and fluoride. Websites were assessed for any oral health content.
    RESULTS: Only four websites (4.5%) had any oral health content, and all four were physician-related.
    CONCLUSION: The study highlights the limited and inconsistent oral health content on non-dental professional association websites. Improving the availability, accuracy, and comprehensiveness of oral health information on these platforms is crucial to enhancing oral health literacy and promoting better oral health outcomes.
    Keywords:  allied health personnel; india; interprofessional education; oral health; physicians
    DOI:  https://doi.org/10.7759/cureus.70270
  12. BMC Health Serv Res. 2024 Oct 30. 24(1): 1311
      PURPOSE: This article outlines a research study that ranked health information quality criteria on social media from experts' perspectives.
    METHODOLOGY: A mixed-methods approach (qualitative-quantitative) was used in the current research. In the qualitative phase, a literature review explored existing dimensions for evaluating social media content quality, focusing on identifying common dimensions and attributes. A quantitative method involving experts was then utilized to rank the health information quality criteria for social media.
    RESULTS: The findings indicated various dimensions of health information quality in the literature. Out of 17 criteria, accuracy, credibility, and reliability had the highest ranks, while originality, value-added, and amount of data had the lowest ranks, respectively, according to experts.
    CONCLUSION: The endeavor to bolster the dissemination of reliable health information on social media demands a sustained commitment to enhancing accountability, transparency, and accuracy, ensuring that users have access to information that is not only informative but also trustworthy.
    Keywords:  Consumer health information; Criteria; Information sources; Quality indicators; Social media
    DOI:  https://doi.org/10.1186/s12913-024-11838-8
  13. Eplasty. 2024;24: e49
      Background: Chat Generative Pretrained Transformer (ChatGPT), a newly developed pretrained artificial intelligence (AI) chatbot, is able to interpret and respond to user-generated questions. As such, many questions have been raised about its potential uses and limitations. While preliminary literature suggests that ChatGPT can be used in medicine as a research assistant and patient consultant, its reliability in providing original and accurate information is still unknown. Therefore, the purpose of this project was to conduct a review on the utility of ChatGPT in plastic surgery.
    Methods: On August 25, 2023, a thorough literature search was conducted on PubMed. Papers involving ChatGPT and medical research were included. Papers that were not written in English were excluded. Related papers were evaluated and synthesized into 3 information domains: generating original research topics, summarizing and extracting information from medical literature and databases, and conducting patient consultation.
    Results: Out of 57 initial papers, 8 met inclusion criteria. An additional 2 were added based on the references of relevant papers, bringing the total number to 10. ChatGPT can be useful in helping clinicians brainstorm and gain a general understanding of the literature landscape. However, its inability to give patient-specific information and act as a reliable source of information limit its use in patient consultation.
    Conclusion: ChatGPT can be a useful tool in the conception and execution of literature searches and research information retrieval (with increased reliability when queries are specific); however, the technology is currently not reliable enough to be implemented in a clinical setting.
    Keywords:  Artificial Intelligence; ChatGPT; Patient Consultation; Plastic and Reconstructive Surgery; Research Assistant
  14. Am J Gastroenterol. 2024 Oct 31.
      BACKGROUND AND AIMS: Artificial intelligence-based chatbots offer a potential avenue for delivering personalized counselling to Autoimmune Hepatitis (AIH) patients. We assessed the accuracy, completeness, comprehensiveness, and safety of ChatGPT-4 responses to 12 inquiries, out of a pool of 40 questions posed by four AIH patients.
    METHODS: Questions were categorized into three areas: Diagnosis (1-3), Quality of Life (4-8), and Medical Treatment (9-12). Eleven Key Opinion Leaders (KOLs) evaluated responses using a Likert scale with 6 points for accuracy, 5 points for safety, and 3 points for completeness and comprehensiveness.
    RESULTS: Median scores for accuracy, completeness, comprehensiveness and safety were 5 (4-6), 2 (2-2) and 3 (2-3); no domain exhibited superior evaluation. The post-diagnosis follow-up question was the trickiest, with low accuracy and completeness but safe and comprehensive features. Agreement among KOLs (Fleiss' kappa) was slight for accuracy (0.05) but poor for the remaining features (-0.05, -0.06, and -0.02, respectively).
    CONCLUSIONS: Chatbots show good comprehensibility but lack reliability. Further studies are needed to integrate ChatGPT into clinical practice.
    DOI:  https://doi.org/10.14309/ajg.0000000000003179
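    The near-zero agreement statistics above are Fleiss' kappa values; a minimal sketch of computing them with the statsmodels package, using toy ratings in place of the study's data:
      import numpy as np
      from statsmodels.stats.inter_rater import aggregate_raters, fleiss_kappa

      # Toy data: 12 questions (rows) rated by 11 raters (columns)
      # on a 6-point accuracy scale; replace with real ratings.
      rng = np.random.default_rng(0)
      ratings = rng.integers(1, 7, size=(12, 11))

      counts, _ = aggregate_raters(ratings)  # rows: items; cols: category counts
      print(fleiss_kappa(counts))            # near 0 = slight/poor agreement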
  15. J Pediatr Ophthalmol Strabismus. 2024 Oct 28. 1-12
      PURPOSE: To assess the appropriateness and readability of responses provided by four large language models (LLMs) (ChatGPT-4, Claude 3, Gemini, and Microsoft Copilot) to parents' queries pertaining to retinopathy of prematurity (ROP).
    METHODS: A total of 60 frequently asked questions were collated and categorized into six distinct sections. The responses generated by the LLMs were evaluated by three experienced ROP specialists to determine their appropriateness and comprehensiveness. Additionally, the readability of the responses was assessed using a range of metrics, including the Flesch-Kincaid Grade Level (FKGL), Gunning Fog (GF) Index, Coleman-Liau (CL) Index, Simple Measure of Gobbledygook (SMOG) Index, and Flesch Reading Ease (FRE) score.
    RESULTS: ChatGPT-4 demonstrated the highest level of appropriateness (100%) and performed exceptionally well in the Likert analysis, scoring 5 points on 96% of questions. The CL Index and FRE scores identified Gemini as the most readable LLM, whereas the GF Index and SMOG Index rated Microsoft Copilot as the most readable. Nevertheless, ChatGPT-4 exhibited the most intricate text structure, with scores of 18.56 on the GF Index, 18.56 on the CL Index, 17.2 on the SMOG Index, and 9.45 on the FRE score, suggesting that its responses demand college-level comprehension.
    CONCLUSIONS: ChatGPT-4 demonstrated higher performance than other LLMs in responding to questions related to ROP; however, its texts were more complex. In terms of readability, Gemini and Microsoft Copilot were found to be more successful. [J Pediatr Ophthalmol Strabismus. 20XX;XX(X):XXX-XXX.].
    DOI:  https://doi.org/10.3928/01913913-20240911-05
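    All five readability metrics used above have open-source implementations; a minimal sketch, assuming the third-party textstat package:
      import textstat  # implements the standard readability formulas

      text = ("Retinopathy of prematurity is an eye disease that can "
              "affect babies born early.")

      print("FKGL:", textstat.flesch_kincaid_grade(text))
      print("GF:  ", textstat.gunning_fog(text))
      print("CL:  ", textstat.coleman_liau_index(text))
      print("SMOG:", textstat.smog_index(text))
      print("FRE: ", textstat.flesch_reading_ease(text))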
  16. JMIR Form Res. 2024 Oct 30. 8: e60939
      BACKGROUND: In the digital age, large language models (LLMs) like ChatGPT have emerged as important sources of health care information. Their interactive capabilities offer promise for enhancing health access, particularly for groups facing traditional barriers such as insurance and language constraints. Despite their growing public health use, with millions of medical queries processed weekly, the quality of LLM-provided information remains inconsistent. Previous studies have predominantly assessed ChatGPT's English responses, overlooking the needs of non-English speakers in the United States. This study addresses this gap by evaluating the quality and linguistic parity of vaccination information from ChatGPT and the Centers for Disease Control and Prevention (CDC), emphasizing health equity.
    OBJECTIVE: This study aims to assess the quality and language equity of vaccination information provided by ChatGPT and the CDC in English and Spanish. It highlights the critical need for cross-language evaluation to ensure equitable health information access for all linguistic groups.
    METHODS: We conducted a comparative analysis of ChatGPT's and CDC's responses to frequently asked vaccination-related questions in both languages. The evaluation encompassed quantitative and qualitative assessments of accuracy, readability, and understandability. Accuracy was gauged by the perceived level of misinformation; readability, by the Flesch-Kincaid grade level and readability score; and understandability, by items from the National Institutes of Health's Patient Education Materials Assessment Tool (PEMAT) instrument.
    RESULTS: The study found that both ChatGPT and CDC provided mostly accurate and understandable (eg, scores over 95 out of 100) responses. However, Flesch-Kincaid grade levels often exceeded the American Medical Association's recommended levels, particularly in English (eg, average grade level in English for ChatGPT=12.84, Spanish=7.93, recommended=6). CDC responses outperformed ChatGPT in readability across both languages. Notably, some Spanish responses appeared to be direct translations from English, leading to unnatural phrasing. The findings underscore the potential and challenges of using ChatGPT for health care access.
    CONCLUSIONS: ChatGPT holds potential as a health information resource but requires improvements in readability and linguistic equity to be truly effective for diverse populations. Crucially, the default user experience with ChatGPT, typically encountered by those without advanced language and prompting skills, can significantly shape health perceptions. This is vital from a public health standpoint, as the majority of users will interact with LLMs in their most accessible form. Ensuring that default responses are accurate, understandable, and equitable is imperative for fostering informed health decisions across diverse communities.
    Keywords:  artificial intelligence; conversational agents; health equity; health information; health literacy; language equity; large language models; multilingualism; online health information; public health; vaccination
    DOI:  https://doi.org/10.2196/60939
  17. J Paediatr Child Health. 2024 Oct 29.
      BACKGROUND: Artificial intelligence (AI) systems hold great promise for improving medical care and addressing health problems.
    AIM: We aimed to evaluate ChatGPT's answers to the most frequently asked questions about the prediction and treatment of fever, a major problem in children.
    METHODS: The 50 questions most frequently asked about fever in children were identified and posed to ChatGPT. We evaluated the responses using quality and readability scales.
    RESULTS: ChatGPT's responses demonstrated good quality, and they also scored well on the readability scales and the Patient Education Materials Assessment Tool (PEMAT) used with materials appearing on the site. Among the scales used to evaluate ChatGPT's responses, a weak positive relationship was found between Gunning Fog (GFOG) and Simple Measure of Gobbledygook (SMOG) scores (r = 0.379), and a significant positive relationship was found between FGL and SMOG scores (r = 0.899).
    CONCLUSION: This study sheds light on the quality and readability of information provided by AI tools, such as ChatGPT, regarding fever, a common complaint in children. We determined that the answers to the most frequently asked questions about fever were high-quality, reliable, easy to read, and understandable.
    Keywords:  ChatGPT; children; fever; paediatric
    DOI:  https://doi.org/10.1111/jpc.16710
  18. Rom J Ophthalmol. 2024 Jul-Sep;68(3): 243-248
      Aim: To evaluate the appropriateness and readability of the medical knowledge provided by ChatGPT-3.5 and Google Bard, artificial-intelligence-powered conversational search engines, regarding surgical treatment for glaucoma.
    Methods: In this retrospective, cross-sectional study, 25 common questions related to the surgical management of glaucoma were asked on ChatGPT-3.5 and Google Bard. Glaucoma specialists graded the responses' appropriateness, and different scores assessed readability.
    Results: Appropriate answers to the posed questions were obtained in 68% of the responses with Google Bard and 96% with ChatGPT-3.5. On average, the responses generated by Google Bard had a significantly lower proportion of sentences with more than 30 and more than 20 syllables (23% and 52%, respectively) compared with ChatGPT-3.5 (66% and 82%, respectively). Google Bard had significantly (p<0.0001) lower readability grade scores and a significantly higher Flesch Reading Ease score, implying greater ease of readability among the answers generated by Google Bard.
    Discussion: Many patients and their families turn to LLM chatbots for information, necessitating clear and accurate content. Assessments of online glaucoma information have shown variability in quality and readability, with institutional websites generally performing better than private ones. We found that ChatGPT-3.5, while precise, has lower readability than Google Bard, which is more accessible but less precise. For example, the Flesch Reading Ease Score was 57.6 for Google Bard and 22.6 for ChatGPT, indicating Google Bard's content is easier to read. Moreover, the Gunning Fog Index scores suggested that Google Bard's text is more suitable for a broader audience. ChatGPT's knowledge is limited to data up to 2021, whereas Google Bard, trained with real-time data, offers more current information. Further research is needed to evaluate these tools across various medical topics.
    Conclusion: The answers generated by ChatGPT-3.5 are more accurate than those given by Google Bard. However, comprehension of ChatGPT-3.5's answers may be difficult for the public with glaucoma. This study emphasized the importance of verifying the accuracy and clarity of online information that glaucoma patients rely on to make informed decisions about their ocular health. This is an exciting new area for patient education and health literacy.
    Keywords:  AI = Artificial Intelligence; AMA = American Medical Association; ChatGPT; GDD = Glaucoma Drainage Devices; GPT-3 = Generative Pre-trained Transformer 3; Google Bard; IOP = intraocular pressure; LLM = Large Language Model; LaMBDA = Language Model for Dialogue Applications; Large Language Model; SMOG = Simple Measure of Gobbledygook; glaucoma
    DOI:  https://doi.org/10.22336/rjo.2024.45
  19. Can Urol Assoc J. 2024 Aug 30.
      INTRODUCTION: The purpose of this study is to evaluate YouTube content about metoidioplasty on completeness of perioperative information, actionability, understandability, degree of misinformation, quality, and presence of commercial bias.
    METHODS: A YouTube search for "Metoidioplasty" was conducted and the first 100 video results were watched by five independent reviewers. Videos in English <30 minutes in length were included and videos primarily showing surgical footage were excluded. Videos were evaluated between January 2022 and June 2022. Each video was evaluated for presenter demographics, channel/video statistics, whether it covered topics including anatomy, treatment options, outcomes, procedure risks, and misinformation, and whether it had a clickbait title. Calculated scores for the validated DISCERN and Patient Education Materials Assessment Tool (PEMAT) metrics were the primary outcome variables used to quantify quality, actionability, and understandability. For PEMAT, a cutoff of 75% was used to differentiate between "poor" and "good/sufficient." Multivariate and univariate logistic regressions were performed to assess correlations among primary outcome variables and other variables.
    RESULTS: Of the 79 videos analyzed, 24% (n=19) were of high quality; 99% (n=78) had poor understandability and 100% (n=79) had poor actionability. Patients/consumers were the most common publisher type (n=71, 90%).
    CONCLUSIONS: This study demonstrates that metoidioplasty content available on YouTube is not comprehensive and has poor quality, actionability, and understandability, demonstrating a clear need for more relevant, accessible, comprehensible, and accurate content.
    DOI:  https://doi.org/10.5489/cuaj.8872
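    PEMAT scores are conventionally computed as the percentage of applicable items rated "agree," dichotomized in this study at 75%; a minimal sketch of that scoring rule:
      def pemat_score(agree: int, applicable: int) -> float:
          # PEMAT convention: percent of applicable items rated "agree".
          return 100.0 * agree / applicable

      def is_good(score: float, cutoff: float = 75.0) -> bool:
          # The study's dichotomization: >= 75% counts as good/sufficient.
          return score >= cutoff

      s = pemat_score(agree=9, applicable=13)  # toy counts
      print(round(s, 1), is_good(s))           # 69.2 False -> "poor"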
  20. Qual Manag Health Care. 2024 Oct 22.
      BACKGROUND AND OBJECTIVES: We evaluated the prevalence of potential reinforcement of common unhealthy misinterpretations of bodily sensations in social media (YouTube videos) addressing elbow enthesopathy (eECRB, enthesopathy of the extensor carpi radialis brevis, tennis elbow).
    METHODS: We recorded video metric data on 139 unique YouTube videos when searching "lateral epicondylitis" and "tennis elbow." We designed a rubric to assess the level of potential reinforcement of unhelpful thinking in videos about eECRB. Informational quality was scored with an adapted version of the DISCERN instrument. We then assessed the factors associated with these scores.
    RESULTS: Sixty-five percent (91 of 139) of videos contained information reinforcing at least one common misconception regarding eECRB. Potential reinforcement of misconceptions was associated with longer video duration, higher likes per day, and higher likes per view. No factors were associated with information quality scores.
    CONCLUSIONS: These findings of a high prevalence of potential reinforcement of misconceptions in YouTube videos, combined with the known associations of misconceptions with greater discomfort and incapability, point to the potential of such videos to harm health. Producers of patient-facing health material can add avoiding the reinforcement of unhelpful thinking to readability, accuracy, and relevance as guiding principles.
    DOI:  https://doi.org/10.1097/QMH.0000000000000478
  21. Emerg Med Int. 2024;2024: 7077469
      Background: Trauma is one of the leading causes of mortality worldwide, and online platforms have become essential sources of information for trauma management. YouTube can play a significant role in helping people access medical information. Methods: YouTube was searched using the keywords "management of trauma" and "assessment of trauma" to identify relevant videos. Two authors independently evaluated the videos according to the ATLS (10th edition) guidelines, the modified DISCERN (m-DISCERN) scale, and the Global Quality Scale (GQS) criteria. The videos that met the study criteria were evaluated based on the provider, video length, and view count. Results: Out of 939 videos, 667 were excluded, leaving 272 videos included in the study. According to the ATLS (10th edition) guidelines, the median score for videos was 8 (IQR 7-8). Videos uploaded by official institutions and healthcare professionals received higher scores than those from uncertain sources (p = 0.003). According to the GQS, 86% of the videos were of low or moderate quality; uncertain sources uploaded 78% of low-quality videos. Conclusion: YouTube is an information source about trauma management that contains videos of varying quality and has a broad audience. Official institutions and healthcare professionals should be aware of this evolving technology and publish up-to-date, accurate content to increase awareness about trauma management and help patients distinguish helpful information from misleading content.
    Keywords:  ATLS; YouTube; assessment of trauma; management of trauma; trauma; video
    DOI:  https://doi.org/10.1155/2024/7077469
  22. Ear Nose Throat J. 2024 Oct 26. 1455613241293867
      Objective: The aim of this study was to evaluate the educational quality of endonasal endoscopic dacryocystorhinostomy (EE-DCR) videos on YouTube with Instructional Videos in Otorhinolaryngology by YO-IFOS (IVORY) and LAParoscopic surgery Video Educational GuidelineS (LAP-VEGaS) guidelines and to evaluate the correlation of the 2 guidelines. Methods: EE-DCR videos were queried using search terms on YouTube. Views, likes, likes/dislikes ratio, age, and length of videos were noted. Videos were evaluated using the IVORY and LAP-VEGaS guidelines. Two IVORY scores were created: total (IVORY-1) and organ-specific (IVORY-2). The correlation analysis between video features and guideline scores was performed. Results: A total of 61 EE-DCR videos were evaluated. The mean score of LAP-VEGaS was 10.3 (±SD 2.7), the mean IVORY-1 score was 22.5 (±SD 5.5), and the mean IVORY-2 score was 10.6 (±SD 1.94). Correlation analysis revealed a statistically significant correlation between the IVORY-1 total score and the number of likes, the duration of the video, the age of the video, and the LAP-VEGaS score. Linear regression analysis showed that higher IVORY-1 scores predicted longer video duration, newer video age, and higher LAP-VEGaS scores. There was a significant association between LAP-VEGaS categories and the IVORY-1 total score (P < .001). Conclusion: The quality of EE-DCR videos is generally low to moderate. The IVORY and LAP-VEGaS guidelines were found to be correlated with each other. Both guidelines can be used to evaluate EE-DCR videos and otolaryngology surgical education videos in general. We believe that scales such as IVORY and LAP-VEGaS may be improved according to specific surgical procedures.
    Keywords:  IVORY guidelines; LAP-VEGaS guidelines; YouTube; dacryocystorhinostomy; quality; social media
    DOI:  https://doi.org/10.1177/01455613241293867
  23. Integr Cancer Ther. 2024 Jan-Dec;23: 15347354241293417
      BACKGROUND: The global burden of cancer continues to rise, and complementary and alternative medicine (CAM) is attracting a lot of interest. However, the quality of online information on CAM, particularly on platforms like YouTube, remains questionable. This study aimed to create a comprehensive assessment tool for the quality of CAM-related YouTube videos, crucial for informed decision-making in oncology.
    METHODS: The assessment tool was developed by adapting existing criteria for website content analysis to video rating. A YouTube search was conducted using German-language terms related to CAM ("complementary medicine (CM) for cancer" and "alternative medicine (AM) for cancer"). In total, 25 videos were assessed based on the defined criteria and assigned to five different types of providers (journalism, healthcare organization, hospital or health insurance, independent person, non-medical organization). Statistical analysis was conducted using IBM SPSS Statistics 27.
    RESULTS: Interrater reliability analysis showed an Intraclass Correlation Coefficient (ICC) of .91, indicating good to excellent agreement. The average video was of poor quality, with none of the videos meeting all criteria. The videos achieved a mean rating of 38.2 points (SD: 6.5 points; possible range: 20-60 points). Journalism-based videos showed the most views per day, particularly surpassing those from hospitals or health insurance providers (Kruskal-Wallis test: z = 3.14, P = .02). However, there was no statistically significant correlation between video quality and the type of provider or interaction indices. Videos retrieved under the search term "CM" generally scored higher in quality than those under "AM" (Mann-Whitney U test: U = 39.5, P = .04). Nonetheless, "CM" videos were less frequently viewed (Mann-Whitney U test: U = 31.0, P = .01).
    CONCLUSION: This study, the first of its kind focusing on CAM in cancer care, emphasized the challenges in identifying credible sources on social media platforms such as YouTube. The developed assessment tool offers a more detailed evaluation method for health-related videos but requires further refinement and testing. Collaboration between healthcare and media entities is suggested to improve the dissemination of reliable information on platforms like YouTube.
    Keywords:  YouTube; assessment tool; cancer; complementary and alternative medicine (CAM); health literacy; quality; videos
    DOI:  https://doi.org/10.1177/15347354241293417
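    An interrater ICC like the .91 reported above can be computed with the pingouin package; a minimal sketch on toy long-format data (the abstract does not state which ICC form the authors used):
      import pandas as pd
      import pingouin as pg

      # Toy data: 5 videos each scored by 2 raters; replace with real ratings.
      df = pd.DataFrame({
          "video": [1, 1, 2, 2, 3, 3, 4, 4, 5, 5],
          "rater": ["A", "B"] * 5,
          "score": [38, 40, 31, 30, 45, 44, 36, 39, 42, 41],
      })

      icc = pg.intraclass_corr(data=df, targets="video",
                               raters="rater", ratings="score")
      print(icc[["Type", "ICC"]])  # single- and average-score ICC variants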
  24. J Pediatr Ophthalmol Strabismus. 2024 Oct 28. 1-8
      PURPOSE: To investigate the quality and reliability of YouTube videos about retinopathy of prematurity (ROP) in order to direct parents of infants with the disease to more accurate content.
    METHODS: The term "retinopathy of prematurity" was searched on YouTube, including all videos available between January 2 and February 2, 2024. The first 200 videos were evaluated by two ophthalmologists. Duplicated or split videos, videos shorter than 60 seconds, videos presented in languages other than English or with an incomprehensible accent, and videos unrelated to ROP were excluded. Video uploaders, types, continental origins, durations, and viewer interactions were noted. The DISCERN, Journal of the American Medical Association (JAMA), and Global Quality Score (GQS) scoring systems were used to evaluate the quality of the videos.
    RESULTS: The mean quality of all videos was poor in all scoring systems. Among uploader types, academic societies and medical institutes scored highest, followed by physicians; patient experience videos had the lowest quality. Of the video types, medical education seminars were of the highest quality. Although a strong positive correlation was detected between video duration and video quality, no such correlation was observed between viewer interactions and video quality. There was no significant difference between video origins in terms of video quality.
    CONCLUSIONS: It would be wise to direct the parents of patients with ROP toward longer videos uploaded by academic societies, medical institutes, or physicians, and toward medical education seminars. It may also be important to warn them not to take user interactions too seriously. [J Pediatr Ophthalmol Strabismus. 20XX;XX(X):XXX-XXX.].
    DOI:  https://doi.org/10.3928/01913913-20240911-04
  25. Int J Dent Hyg. 2024 Oct 27.
      INTRODUCTION: To examine the content of Turkish videos on YouTube about tooth brushing and to evaluate their usefulness in providing information to the public.
    METHODS: Two keywords were determined using Google Trends. For each keyword, the top 100 most-watched videos were identified from searches made on YouTube, and a total of 200 videos were examined. According to the inclusion and exclusion criteria, 99 videos were included in the study. For each included video, the URL address, number of views, number of likes, number of dislikes, video duration, time elapsed since the upload date (days), and whether the video was narrated by a dentist, commercial institution, or individual user were recorded. In addition, the interaction index and viewing rates of the videos were calculated. The contents of the included videos were evaluated by a specialist dentist.
    RESULTS: Based on the evaluation of the information content of the videos, 54.5% contained minimal information, 38.4% poor information, 5.1% good information, and 2% excellent information. The two videos containing excellent information covered 10 of the 12 topics evaluated, and both were narrated by a dentist.
    CONCLUSION: Turkish videos about tooth brushing on YouTube are insufficient in terms of content and accurate information. For this reason, individuals should be directed to professional sources to receive accurate and up-to-date information, and dentists should share videos with sufficient content and quality information on this platform.
    Keywords:  Turkish videos; internet; oral health; preventive dentistry; tooth brushing
    DOI:  https://doi.org/10.1111/idh.12859
  26. Cureus. 2024 Sep;16(9): e70227
      Introduction Breast self-examination (BSE) is essential for early detection of breast cancer to lower the disease's morbidity and mortality. Education about proper application reinforces its effectiveness. YouTube is an emerging modality for education distribution. Thus, we aimed to evaluate the quality and reliability of BSE videos on YouTube. Materials and methods A web search of YouTube was conducted using the term "breast self-examination". The first 50 relevant videos found through this search were compiled and evaluated. Video reliability was evaluated by applying benchmark criteria from the Journal of the American Medical Association (JAMA). The educational quality of the videos was evaluated using the Global Quality Score (GQS) and a comprehensiveness score for BSE-specific instructions. Results The mean number of views was 311,625.9. Medical sources were the most common upload sources, accounting for 60% of the analyzed videos (30 videos), while examination demonstration was the most common type of video content (33 videos, 66%), followed by examination information (15 videos, 30%). A significant association was found between better educational quality and videos containing both examination information and a demonstration. Regarding video reliability, 34% of videos (17 videos) scored 0, and only 2% (one video) scored 4. According to the GQS, only 8% (four videos) were of excellent quality, while the plurality (20 videos, 40%) were of suboptimal quality. Based on the BSE comprehensiveness score, the mean score was seven out of nine. Conclusions Videos containing examination information and demonstrations showed the best educational quality. Although most of the YouTube videos on BSE showed a high comprehensiveness score for BSE-specific instructions, their JAMA reliability and GQS scores were poor.
    Keywords:  breast cancer; breast examination; education; video; youtube
    DOI:  https://doi.org/10.7759/cureus.70227
  27. Sex Transm Dis. 2024 Oct 29.
      BACKGROUND: Sexually transmitted infections (STIs), including syphilis, pose a significant public health challenge. The advent of social media platforms has revolutionized health information dissemination, with YouTube and TikTok emerging as prominent sources. However, concerns persist regarding the reliability of syphilis-related content on these platforms. This study aimed to evaluate the quality and accuracy of syphilis-related content on TikTok and YouTube, employing established tools such as DISCERN, the Accuracy in Digital-health Instrument (ANDI), and the Global Quality Scale (GQS).
    METHODOLOGY: We conducted a thorough search on TikTok and YouTube on November 26, 2023, using the keyword "syphilis." Inclusion criteria comprised videos in English, less than 20 minutes in duration, and relevant to syphilis. Two dermatologists independently rated 98 eligible videos using DISCERN, ANDI, and GQS. Statistical analyses included Chi-square tests, mean comparisons, and interclass correlation.
    RESULTS: TikTok videos exhibited higher mean views (222,519 ± 412,746) than YouTube videos (127,527 ± 223,622). However, TikTok videos had lower mean GQS (2.3 ± 0.9), ANDI (2.19 ± 0.99), and DISCERN (28.7 ± 6.56) scores than YouTube videos (GQS: 2.9 ± 1.1; ANDI: 2.90 ± 0.97; DISCERN: 38.8 ± 9). Non-professional uploaders accounted for 40.8% of videos on TikTok and 53.1% on YouTube.
    CONCLUSION: This study reveals disparities in the quality and accuracy of syphilis-related content on TikTok and YouTube. Despite higher popularity on TikTok, content quality, as assessed by DISCERN, ANDI, and GQS, was generally lower compared to YouTube. Targeted interventions are needed to improve the reliability of syphilis-related information on social media platforms.
    DOI:  https://doi.org/10.1097/OLQ.0000000000002090
  28. J Low Genit Tract Dis. 2024 Oct 28.
      OBJECTIVES/PURPOSES OF THE STUDY: The purpose of this study is to evaluate the content, delivery, and quality of medical information for vulvar lichen sclerosus on the social media platform TikTok.
    MATERIALS AND METHODS: This is a descriptive, cross-sectional study. Using the third-party data scraping tool Apify, TikTok videos tagged with #lichensclerosus or "lichen sclerosus" were identified and sorted by view count. A sample of 100 videos was reviewed by 2 independent reviewers, excluding those not discussing lichen sclerosus. Videos were assessed using a coding document, the Patient Education Materials Assessment Tool, and the DISCERN instrument. Interrater reliability was measured, and statistical analyses included Fleiss' kappa, intraclass correlation coefficient, t tests, and Wilcoxon rank sum test with Holm-Bonferroni correction.
    RESULTS: Content creators included patients (46%), health care professionals (30%), and others. Topics focused on clinical disease (52%) and treatment (48%). Evidence-based medicine was discussed in 71.7% of treatment-related videos, while 51.7% included non-evidence-based approaches with a neutral or positive sentiment. Videos discussing topical steroids often had negative sentiments. Quality assessment revealed that 61% of videos were understandable, 27% actionable, and 46% contained misinformation. Videos by health care professionals had less misinformation and higher quality scores compared with patient-generated content. Commercially biased videos were more understandable but contained more misinformation.
    CONCLUSIONS: TikTok serves as a significant platform for sharing information on lichen sclerosus, but nearly half of the content contains misinformation. Health care professionals need to engage in social media to provide accurate information and counteract misinformation. Enhanced collaboration with patient advocates and careful resource sharing can improve the quality and reliability of medical information available online.
    DOI:  https://doi.org/10.1097/LGT.0000000000000846
  29. J Med Internet Res. 2024 Oct 29. 26: e51655
      BACKGROUND: Short videos have demonstrated huge potential in disseminating health information in recent years. However, to our knowledge, no study has examined information about colorectal polyps on short-video sharing platforms.
    OBJECTIVE: This study aimed to analyze the content and quality of colorectal polyps-related videos on short-video sharing platforms.
    METHODS: The Chinese-language terms for "intestinal polyps," "colonic polyps," "rectal polyps," "colorectal polyps," and "polyps of the large intestine" were used to search TikTok (ByteDance), WeChat (Tencent Holdings Limited), and Xiaohongshu (Xingyin Information Technology Limited) between May 26 and June 8, 2024, and the top 100 videos for each search term on each platform were then included and recorded. The Journal of the American Medical Association (JAMA) score, the Global Quality Scale (GQS), the modified DISCERN, and the Patient Education Materials Assessment Tool (PEMAT) were used by 2 independent researchers to evaluate the content and quality of the selected videos. SPSS (version 22.0; IBM Corp) and GraphPad Prism (version 9.0; Dotmatics) were used to analyze the data. Descriptive statistics were generated, and differences between groups were compared. Spearman correlation analysis was used to evaluate the relationship between quantitative variables.
    RESULTS: A total of 816 eligible videos were included for further analysis; these mainly conveyed disease-related knowledge (n=635, 77.8%). Most videos were uploaded by physicians (n=709, 86.9%). The videos had an average JAMA score of 2.0 (SD 0.6), GQS score of 2.5 (SD 0.8), modified DISCERN score of 2.5 (SD 0.8), understandability of 80.4% (SD 15.6%), and actionability of 42.2% (SD 36.1%). Videos uploaded by news agencies were of higher quality and received more likes and comments (all P<.05). Videos about post-treatment caveats were collected and shared more than videos about other content (P=.03 and P=.006). There was a positive correlation among the numbers of likes, comments, collections, and shares (all P<.001). Video duration and the number of fans were positively correlated with video quality (all P<.05).
    CONCLUSIONS: There are numerous videos about colorectal polyps on short-video sharing platforms, but the reliability and quality of these videos are not good enough and need to be improved.
    Keywords:  colorectal polyps; health information; quality assessment; reliability; short videos
    DOI:  https://doi.org/10.2196/51655
  30. BMC Oral Health. 2024 Oct 28. 24(1): 1307
      BACKGROUND: To investigate the current status of health information-seeking behavior (HISB) among periodontitis patients and to identify its main influencing factors using the Comprehensive Model of Information Seeking (CMIS).
    METHODS: In total, 274 periodontitis patients were recruited from a specialized dental hospital in Hangzhou by purposive sampling for a cross-sectional study. Demographics, direct experience, salience, beliefs, characteristics, and utility were the six variables of the CMIS. Data were collected from the patients using a general information questionnaire, the Health Information Seeking Behavior Scale, the Self-Efficacy Scale for Self-care (SESS) for measuring beliefs, the Short Form of the Health Literacy Dental Scale (HeLD-14) for measuring direct experience, and the Brief Illness Perception Questionnaire (BIPQ) for measuring salience. Univariate analysis and regression analysis were utilized to determine the factors influencing HISB.
    RESULTS: The HISB score of periodontitis patients in this study was 3.68 ± 0.40. A low level of HISB was negatively associated with multiple factors, including age 40-59 (odds ratio [OR] 0.041, 95% confidence interval [CI] 0.006-0.299), age 18-39 (OR 0.053, 95% CI 0.008-0.364), a low level of understandability of information (characteristics; OR 0.317, 95% CI 0.119-0.840), and a low level of satisfaction with information (utility; OR 0.027, 95% CI 0.008-0.089). However, a low level of HISB was positively correlated with a medium self-efficacy level (OR 3.112, 95% CI 1.463-6.747) and a low self-efficacy level (OR 8.061, 95% CI 1.981-32.807).
    CONCLUSIONS: According to the CMIS, we identified several factors influencing HISB. Lower levels of HISB were closely associated with older age and with lower understandability of and satisfaction with information. Conversely, higher self-efficacy may encourage patients to seek health information more actively. Therefore, it is essential to focus on elderly patients and assess their information expectations and needs in a timely manner, while also working to enhance their self-efficacy to promote more effective access to health information.
    Keywords:  Health information-seeking behavior; Illness perception; Influencing factors; Periodontitis; Self-efficacy
    DOI:  https://doi.org/10.1186/s12903-024-05068-x
  31. J Racial Ethn Health Disparities. 2024 Oct 28.
      INTRODUCTION: Unmet medical needs in rural areas are of grave concern in the U.S. With the advent of digital technologies, the Internet has become a critical means for accessing essential health information. However, racial/ethnic minority rural communities experiencing scarcity in healthcare services and access to the Internet are underrepresented in digital health studies. This study examined the association between online health information-seeking behaviors and unmet medical needs in a sample of African/Black American adults living in a rural region of the U.S.
    METHODS: Among a sample of 191 adults, we used descriptive analyses to document the level of unmet medical needs and online health information-seeking behaviors of this population, and conducted logistic regressions to test the association between online health information-seeking behaviors and unmet medical needs.
    RESULTS: Most participants were older than 50 years (60.2%), female (68.1%), and unemployed (57.6%), and had an annual income of less than $25,000 (60.2%). About 20% of participants experienced unmet medical needs. The mean score of online health information-seeking behaviors was 2.37 (range 0-12). Increasing online health information-seeking behaviors was associated with 5.95-fold higher odds of experiencing unmet medical needs (OR = 5.95, 95% CI 1.27-27.77).
    DISCUSSION: The findings highlight the need to develop targeted programs for populations with high unmet medical needs, focusing on providing accessible health information and resources. Further research is warranted to investigate the motivations for engaging in online health information-seeking behaviors, to inform structural and workforce interventions that address unmet medical needs in this under-resourced region.
    Keywords:  African/Black American; Digital health; Online health information–seeking behaviors; Rural community; Unmet medical needs
    DOI:  https://doi.org/10.1007/s40615-024-02207-6