bims-librar Biomed News
on Biomedical librarianship
Issue of 2024‒06‒30
25 papers selected by
Thomas Krichel, Open Library Society



  1. J Med Libr Assoc. 2024 Jan 16. 112(1): 42-47
      Background: By defining search strategies and related database exports as code/scripts and data, librarians and information professionals can expand the mandate of research data management (RDM) infrastructure to include this work. This new initiative aimed to create a space in McGill University's institutional data repository for our librarians to deposit and share their search strategies for knowledge syntheses (KS). Case Presentation: The authors, a health sciences librarian and an RDM specialist, created a repository collection of librarian-authored KS searches in McGill University's Borealis Dataverse collection. We developed and hosted a half-day "Dataverse-a-thon" where we worked with a team of health sciences librarians to develop a standardized KS data management plan (DMP), search reporting documentation, Dataverse software training, and how-to guidance for the repository.
    Conclusion: In addition to better documentation and tracking of KS searches at our institution, the KS Dataverse collection enables sharing of searches among colleagues with discoverable metadata fields for searching within deposited searches. While the initial creation of the DMP and documentation took about six hours, the subsequent deposit of search strategies into the institutional data repository requires minimal effort (e.g., 5-10 minutes on average per deposit). The Dataverse collection also empowers librarians to retain intellectual ownership over search strategies as valuable stand-alone research outputs and raise the visibility of their labor. Overall, institutional data repositories provide specific benefits in facilitating compliance both with PRISMA-S guidance and with RDM best practices.
    Keywords:  Data Deposit; Data Repository; Expert Searching; Knowledge Synthesis; Research Data Management; Research Reproducibility; Systematic Review Methodology
    DOI:  https://doi.org/10.5195/jmla.2024.1791
  2. J Med Libr Assoc. 2024 Jan 16. 112(1): 33-41
      Objective: With exponential growth in the publication of interprofessional education (IPE) research studies, it has become more difficult to find relevant literature and stay abreast of the latest research. To address this gap, we developed, evaluated, and validated search strategies for IPE studies in PubMed to improve future access to and synthesis of IPE research. These search strategies, or search hedges, provide comprehensive, validated sets of search terms for IPE publications. Methods: The search strategies were created for PubMed using relative recall methodology. The research methods followed the guidance of previous search hedge and search filter validation studies: creating a gold standard set of relevant references from systematic reviews, having expert searchers identify and test search terms, and using relative recall calculations to validate the searches' performance against the gold standard set.
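    Relative recall is the share of a gold-standard set of known relevant records that a candidate search retrieves. A minimal sketch of the calculation follows; the PMID sets and hedge names are made-up placeholders, not the study's data:
```python
# Relative recall: fraction of gold-standard relevant records retrieved.
# PMIDs below are placeholders, not the study's actual data.

gold_standard = {"101", "102", "103", "104", "105", "106", "107", "108", "109", "110"}

candidate_hedges = {
    "focused":       {"101", "102", "103", "104", "105", "107", "108"},
    "middle":        {"101", "102", "103", "104", "105", "107", "108", "109"},
    "comprehensive": {"101", "102", "103", "104", "105", "106", "107", "108", "109"},
}

for name, retrieved in candidate_hedges.items():
    relative_recall = len(retrieved & gold_standard) / len(gold_standard)
    print(f"{name}: {relative_recall:.1%}")
```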
    Results: The three recommended search hedges for IPE studies had recall of 71.5%, 82.7%, and 95.1%: the first is more focused for efficient literature searching, the last offers high recall for comprehensive literature searching, and the middle hedge is a compromise between the other two.
    Conclusion: These validated search hedges can be used in PubMed to expedite finding relevant scholarship, staying up to date with IPE research, and conducting literature reviews and evidence syntheses.
    Keywords:  Interprofessional education; relative recall; search hedge validation; systematic reviews as topic
    DOI:  https://doi.org/10.5195/jmla.2024.1742
  3. J Am Med Inform Assoc. 2024 Jun 25. pii: ocae127. [Epub ahead of print]
      OBJECTIVE: Author name incompleteness, where only a first initial is available instead of the full first name, is a long-standing problem in MEDLINE and has a negative impact on biomedical literature systems. The purpose of this study is to create an Enhanced Author Names (EAN) dataset for MEDLINE that maximizes the number of complete author names. MATERIALS AND METHODS: The EAN dataset is built through large-scale name comparison and restoration, using author names collected from multiple literature databases such as MEDLINE, Microsoft Academic Graph, and Semantic Scholar. We assess the impact of EAN on biomedical literature systems by conducting comparative and statistical analyses between EAN and MEDLINE's author names dataset (MAN) on 2 important tasks: author name search and author name disambiguation.
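    The core matching idea can be illustrated with a simplified sketch; this is not the authors' actual pipeline, and the records and field names are hypothetical. An abbreviated MEDLINE name can be restored when an external record for the same article carries a full first name agreeing on last name and first initial:
```python
# Simplified illustration of first-name restoration across databases.
# Records and field names are hypothetical; the real EAN pipeline also
# handles multiple sources, conflicts, and matching at scale.

def restore_name(medline_author, external_authors):
    """Return a full first name for an abbreviated MEDLINE author if the
    external authors on the same article agree on one full first name
    matching the last name and first initial."""
    last = medline_author["last"].lower()
    initial = medline_author["first"][0].lower()
    candidates = {
        a["first"] for a in external_authors
        if a["last"].lower() == last
        and a["first"][0].lower() == initial
        and len(a["first"]) > 1          # must be a full name, not an initial
    }
    return candidates.pop() if len(candidates) == 1 else None

medline_author = {"last": "Smith", "first": "J"}
external_authors = [{"last": "Smith", "first": "Jane"},
                    {"last": "Lee", "first": "Kyu"}]
print(restore_name(medline_author, external_authors))  # -> Jane
```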
    RESULTS: Evaluation results show that EAN increases the number of full author names in MEDLINE from 69.73 million to 110.9 million. EAN not only restores a substantial number of names abbreviated before 2002, when the NLM changed its author name indexing policy, but also improves the availability of full author names in articles published afterward. The evaluation of the author name search and author name disambiguation tasks reveals that EAN significantly enhances both tasks compared to MAN.
    CONCLUSION: The extensive coverage of full names in EAN suggests that the name incompleteness issue can be largely mitigated. This has significant implications for the development of an improved biomedical literature system. EAN is available at https://zenodo.org/record/10251358, and an updated version is available at https://zenodo.org/records/10663234.
    Keywords:  MEDLINE; author name completeness; author name disambiguation; author name search
    DOI:  https://doi.org/10.1093/jamia/ocae127
  4. Front Public Health. 2024; 12: 1418627
      Digital health disparities continue to affect marginalized populations, especially older adults, individuals with low income, and racial/ethnic minorities, intensifying the challenges these populations face in accessing healthcare. Bridging this digital divide is essential, as digital access and literacy are social determinants of health that can impact digital health use and access to care. This article discusses the potential of leveraging community Wi-Fi and spaces to improve digital access and digital health use, as well as the challenges and opportunities associated with this strategy. The limited existing evidence has shown the possibility of using community Wi-Fi and spaces, such as public libraries, to facilitate telehealth services. However, privacy and security issues arising from the use of public Wi-Fi and spaces remain a concern for librarians and healthcare professionals. To advance digital equity, efforts from multilevel stakeholders to improve users' digital access and literacy and to offer tailored technology support in the community are required. Ultimately, leveraging community Wi-Fi and spaces offers a promising avenue to expand digital health accessibility and use, highlighting the critical role of collaborative efforts in overcoming digital health disparities.
    Keywords:  digital health; health care disparities; health services accessibility; internet access; public health; telehealth; telemedicine
    DOI:  https://doi.org/10.3389/fpubh.2024.1418627
  5. J Med Libr Assoc. 2024 Jan 16. 112(1): 13-21
      Objective: To evaluate the ability of DynaMedex, an evidence-based drug and disease Point of Care Information (POCI) resource, to answer clinical queries using keyword searches. Methods: Real-world disease-related questions compiled from clinicians at an academic medical center, DynaMedex search query data, and medical board review resources were categorized into five clinical categories (complications & prognosis, diagnosis & clinical presentation, epidemiology, prevention & screening/monitoring, and treatment) and six specialties (cardiology, endocrinology, hematology-oncology, infectious disease, internal medicine, and neurology). A total of 265 disease-related questions were evaluated by pharmacist reviewers based on whether an answer was found (yes, no), whether the answer was relevant (yes, no), difficulty in finding the answer (easy, not easy), whether the best available evidence was cited (yes, no), whether clinical practice guidelines were included (yes, no), and the level of detail provided (detailed, limited details).
    Results: An answer was found for 259/265 questions (98%). Both reviewers found an answer for 241 questions (91%), neither found the answer for 6 questions (2%), and only one reviewer found an answer for 18 questions (7%). Both reviewers found a relevant answer 97% of the time when an answer was found. Of all relevant answers found, 68% were easy to find, 97% cited best quality of evidence available, 72% included clinical guidelines, and 95% were detailed. Recommendations for areas of resource improvement were identified.
    Conclusions: The resource enabled reviewers to answer most questions easily with the best quality of evidence available, providing detailed answers and clinical guidelines, with a high level of replication of results across users.
    Keywords:  Clinical Decision Support Systems; Evidence-based information; Information Retrieval; point of care resources
    DOI:  https://doi.org/10.5195/jmla.2024.1770
  6. Arthroscopy. 2024 Jun 24. pii: S0749-8063(24)00462-6. [Epub ahead of print]
      Surgeons have dealt with the negative effects of misinformation from "Dr. Google" since patients started using search engines to seek out medical information. With the advent of natural language processing software such as ChatGPT, patients may have a seemingly real conversation with AI software. However, ChatGPT provides misinformation in response to medical questions and responds at the reading level of a college freshman, whereas the US National Institutes of Health recommends that medical information be written at a 6th grade level. The flaw of ChatGPT is that it recycles information from the internet. It is "artificially intelligent" because of its ability to mimic natural language - not because of its ability to understand and synthesize content. It fails to understand nuance or critically analyze new inputs. Ultimately, these skills require human intelligence, while ChatGPT provides responses that are exactly what you might expect - artificial.
    DOI:  https://doi.org/10.1016/j.arthro.2024.06.027
  7. Arthroscopy. 2024 Jun 22. pii: S0749-8063(24)00452-3. [Epub ahead of print]
      PURPOSE: To assess the ability of ChatGPT to answer common patient questions regarding hip arthroscopy and to analyze the accuracy and appropriateness of its responses. METHODS: Ten questions were selected from well-known patient education websites, and ChatGPT (version 3.5) responses to these questions were graded by two fellowship-trained hip preservation surgeons. Responses were analyzed, compared to the current literature, and graded from A to D (A being the highest and D the lowest) on a grading scale based on the accuracy and completeness of the response. If the grading differed between the two surgeons, a consensus was reached. Inter-rater agreement was calculated. The readability of responses was also assessed using the Flesch Reading Ease Score (FRES) and Flesch-Kincaid Grade Level (FKGL).
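    Both instruments are simple functions of average sentence length and average syllables per word. A minimal sketch of the two formulas follows; the syllable counter is a rough heuristic, and dedicated readability tools tokenize more carefully:
```python
import re

def count_syllables(word):
    # Rough heuristic: count groups of consecutive vowels (minimum 1).
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def readability(text):
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    words_per_sentence = len(words) / sentences
    syllables_per_word = syllables / len(words)
    # Flesch Reading Ease: higher = easier (60-70 is roughly plain English).
    fres = 206.835 - 1.015 * words_per_sentence - 84.6 * syllables_per_word
    # Flesch-Kincaid Grade Level: approximate US school grade.
    fkgl = 0.39 * words_per_sentence + 11.8 * syllables_per_word - 15.59
    return fres, fkgl

fres, fkgl = readability(
    "Hip arthroscopy is a minimally invasive procedure. "
    "A surgeon inspects and repairs the joint through small incisions."
)
print(f"FRES {fres:.1f}, FKGL {fkgl:.1f}")
```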
    RESULTS: Responses received the following consensus grades: A (50%, n=5), B (30%, n=3), C (10%, n=1), and D (10%, n=1). Inter-rater agreement based on initial individual grading was 30%. The mean FRES was 28.2 (SD ± 9.2; range 11.7 to 42.5), corresponding to a college graduate reading level. The mean FKGL was 14.4 (SD ± 1.8; range 12.1 to 18), indicating a college student reading level.
    CONCLUSION: ChatGPT can answer common patient questions regarding hip arthroscopy with satisfactory accuracy, as graded by two high-volume hip arthroscopists; however, incorrect information was identified in more than one instance. Caution must be observed when using ChatGPT for patient education related to hip arthroscopy.
    CLINICAL RELEVANCE: Given the increasing number of hip arthroscopies being performed annually, ChatGPT has the potential to aid physicians in educating their patients about this procedure and address any questions they may have.
    Keywords:  ChatGPT; artificial intelligence; hip arthroscopy; machine learning
    DOI:  https://doi.org/10.1016/j.arthro.2024.06.017
  8. Prostate Cancer Prostatic Dis. 2024 Jun 26.
      BACKGROUND/OBJECTIVES: Patients often face uncertainty about what they should know after a prostate cancer diagnosis. Web-based information is common but is at risk of being of poor quality or readability. SUBJECTS/METHODS: We used ChatGPT, a freely available artificial intelligence (AI) platform, to generate enquiries about prostate cancer that a newly diagnosed patient might ask and compared them to Google search trends. We then evaluated ChatGPT's responses to these questions for clinical appropriateness and quality using standardised tools.
    RESULTS: ChatGPT generates broad and representative questions, and provides understandable, clinically sound advice.
    CONCLUSIONS: AI can guide and empower patients after prostate cancer diagnosis through education. However, the limitations of the ChatGPT language model must not be ignored; they require further evaluation and optimisation in the healthcare field.
    DOI:  https://doi.org/10.1038/s41391-024-00864-6
  9. Arthroscopy. 2024 Jun 25. pii: S0749-8063(24)00407-9. [Epub ahead of print]
      PURPOSE: To assess the ability of ChatGPT-4, an automated chatbot powered by artificial intelligence (AI), to answer common patient questions concerning the Latarjet procedure for patients with anterior shoulder instability, and to compare this performance to Google Search Engine. METHODS: Using previously validated methods, a Google search was first performed using the query "Latarjet." Subsequently, the top ten frequently asked questions (FAQs) and associated sources were extracted. ChatGPT-4 was then prompted to provide the top ten FAQs and answers concerning the procedure. This process was repeated to identify additional FAQs requiring discrete numeric answers, allowing a comparison between ChatGPT-4 and Google. Discrete numeric answers were subsequently assessed for accuracy based on the clinical judgement of two fellowship-trained sports medicine surgeons blinded to search platform.
    RESULTS: Mean (± standard deviation) accuracy for numeric-based answers was 2.9 ± 0.9 for ChatGPT-4 versus 2.5 ± 1.4 for Google (p=0.65). ChatGPT-4 derived its answers exclusively from academic sources, a significant difference from Google Search Engine (p=0.003), which used only 30% academic sources alongside websites from individual surgeons (50%) and larger medical practices (20%). For general FAQs, 40% of FAQs were identical between ChatGPT-4 and Google Search Engine. In terms of sources used to answer these questions, ChatGPT-4 again used 100% academic resources, while Google Search Engine used 60% academic resources, 20% surgeon personal websites, and 20% medical practice websites (p=0.087).
    CONCLUSION: ChatGPT-4 demonstrated the ability to provide accurate and reliable information about the Latarjet procedure in response to patient queries, using multiple academic sources in all cases. This was in contrast to Google Search Engine, which more frequently used single surgeon and large medical practice websites. Despite differences in the resources accessed to perform information retrieval tasks, the clinical relevance and accuracy of information provided did not significantly differ between ChatGPT-4 and Google Search Engine.
    Keywords:  AI; Generative; LLM; Latarjet; artificial intelligence; chatbot; large language model; shoulder instability
    DOI:  https://doi.org/10.1016/j.arthro.2024.05.025
  10. Br J Hosp Med (Lond). 2024 Jun 30. 85(6): 1-9
      Aims/Background Seroma formation is the most common complication following breast surgery. However, there is little evidence on the readability of online patient education materials on this issue. This study aimed to assess the accessibility and readability of the relevant online information. Methods This systematic review of the literature identified 37 relevant websites for further analysis. The readability of each online article was assessed using a range of readability formulae. Results The average Flesch Reading Ease score for all patient education materials was 53.9 (± 21.9) and the average Flesch-Kincaid reading grade level was 7.32 (± 3.1), suggesting they were 'fairly difficult' to read and above the recommended reading level. Conclusion Online patient education materials regarding post-surgery breast seroma are written at a higher-than-recommended reading grade level for the public. Improvement would allow all patients, regardless of literacy level, to use such resources to aid decision-making around undergoing breast surgery.
    Keywords:  Breast surgery; Patient information; Post-breast surgery seroma; Readability; Seroma
    DOI:  https://doi.org/10.12968/hmed.2024.0058
  11. Neuroophthalmology. 2024; 48(4): 257-266
      Most cases of optic neuritis (ON) occur in women and in patients between the ages of 15 and 45 years, a key demographic of individuals who seek health information on the internet. As clinical providers strive to ensure patients have accessible information to understand their condition, assessing the standard of online resources is essential. The aim was to assess the quality, content, accountability, and readability of freely available online information on optic neuritis. This cross-sectional study analyzed 11 freely available medical sites with information on optic neuritis and used PubMed as a gold standard for comparison. Twelve questions were composed to cover the information most relevant to patients, and each website was independently examined by four neuro-ophthalmologists. Readability was analyzed using an online readability tool, and the Journal of the American Medical Association (JAMA) benchmarks, four criteria designed to assess the quality of health information, were used to evaluate the accountability of each website. On average, websites scored 27.98 (SD ± 9.93, 95% CI 24.96-31.00) of 48 potential points (58.3%) on the twelve questions. There were significant differences in the comprehensiveness and accuracy of content across websites (p < .001). The mean reading grade level of websites was 11.90 (SD ± 2.52, 95% CI 8.83-15.25). No website achieved all four JAMA benchmarks. Interobserver reliability was robust between three of the four neuro-ophthalmologist (NO) reviewers (ρ = 0.77 between NO3 and NO2, ρ = 0.91 between NO3 and NO1, ρ = 0.74 between NO2 and NO1; all p < .05). The quality of freely available online information detailing optic neuritis varies by source, with significant room for improvement. The material presented is difficult to interpret and exceeds the recommended reading level for health information. Most websites reviewed did not provide comprehensive information on non-therapeutic aspects of the disease. Ophthalmology organizations should be encouraged to create content that is more accessible to the general public.
    Keywords:  Optic neuritis; online resources; patient education; patient information; readability
    DOI:  https://doi.org/10.1080/01658107.2024.2301728
  12. Iowa Orthop J. 2024; 44(1): 47-58
      Background: Patients often access online resources to educate themselves prior to undergoing elective surgery such as carpal tunnel release (CTR). The purpose of this study was to evaluate available online resources regarding CTR on objective measures of readability (syntax reading grade level), understandability (ability to convey key messages in a comprehensible manner), and actionability (providing actions the reader may take). Methods: We conducted two independent Google searches for "Carpal Tunnel Surgery" and, among the top 50 results, analyzed articles aimed at educating patients about CTR. Readability was assessed using six different indices: Flesch-Kincaid Grade Level Index, Flesch Reading Ease, Gunning Fog Index, Simple Measure of Gobbledygook (SMOG) Index, Coleman-Liau Index, and Automated Readability Index. The Patient Education Materials Assessment Tool (PEMAT) evaluated understandability and actionability on a 0-100% scale. Spearman's correlation assessed relationships between these metrics and Google search ranks, with p<0.05 indicating statistical significance.
    Results: Of the 39 websites meeting the inclusion criteria, the mean readability grade level exceeded 9, with the lowest being 9.4 ± 1.5 (SMOG index). Readability did not correlate with Google search ranking (lowest p=0.25). Mean understandability and actionability were 59% ± 15 and 26% ± 24, respectively. Only 28% of the articles used visual aids, and few provided concise summaries or clear, actionable steps. Notably, lower grade reading levels were linked to higher actionability scores (p ≤ 0.02 in several indices), but no readability metrics significantly correlated with understandability. Google search rankings showed no significant association with either understandability or actionability scores.
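    The rank-correlation step is straightforward to reproduce. Below is a minimal sketch using scipy's spearmanr; the search ranks and readability grades are illustrative placeholders, not the study's data:
```python
# Spearman's rank correlation between Google search rank and readability.
# The numbers below are made-up illustrations, not the study's data.
from scipy.stats import spearmanr

search_rank = [1, 2, 3, 4, 5, 6, 7, 8]                      # 1 = top result
fkgl_grade = [11.2, 9.8, 12.5, 10.1, 9.4, 13.0, 10.7, 11.9]  # grade levels

rho, p_value = spearmanr(search_rank, fkgl_grade)
print(f"Spearman rho = {rho:.2f}, p = {p_value:.2f}")
```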
    Conclusion: Online educational materials for CTR score poorly in readability, understandability, and actionability. Quality metrics do not appear to affect Google search rankings. The poor quality metric scores found in our study highlight a need for hand specialists to improve online patient resources, especially in an era emphasizing shared decision-making in healthcare. Level of Evidence: IV.
    Keywords:  PEMAT; carpal tunnel release; hand surgery; patient education; readability
  13. Iowa Orthop J. 2024; 44(1): 151-158
      Background: The National Institutes of Health (NIH) and American Medical Association (AMA) recommend that online health information be written at a maximum 6th grade reading level. The aim was to evaluate online resources regarding shoulder arthroscopy using measures of readability, understandability, and actionability: syntax reading grade level and the Patient Education Materials Assessment Tool (PEMAT-P). Methods: An online Google™ search for "shoulder arthroscopy" was performed. From the top 50 results, websites directed at educating patients were included. News and scientific articles, audiovisual materials, industry websites, and unrelated materials were excluded. Readability was calculated using objective algorithms: Flesch-Kincaid Grade Level (FKGL), Simple Measure of Gobbledygook (SMOG) grade, Coleman-Liau Index (CLI), and Gunning Fog Index (GFI). The PEMAT-P was used to assess understandability and actionability, with a 70% score threshold. Scores were compared across academic institutions, private practices, and commercial health publishers. The correlation between search rank and readability, understandability, and actionability was calculated.
    Results: Two independent searches yielded 53 websites, with 44 (83.02%) meeting inclusion criteria. No mean readability score fell below a 10th grade reading level, and only one website scored at or below the 6th grade reading level. Mean understandability and actionability scores were 63.02% ± 12.09 and 29.77% ± 20.63, neither of which met the PEMAT threshold. Twelve (27.27%) websites met the understandability threshold, while none met the actionability threshold. Institution categories scored similarly in understandability (61.71%, 62.68%, and 63.67% for academic, private practice, and commercial health publishers, respectively; p=0.9536). No readability or PEMAT score correlated with search rank.
    Conclusion: Online shoulder arthroscopy patient education materials score poorly in readability, understandability, and actionability. One website scored at the NIH and AMA recommended reading level, and 27.27% of websites scored above the 70% PEMAT score for understandability. None met the actionability threshold. Future efforts should improve online resources to optimize patient education and facilitate informed decision-making. Level of Evidence: IV.
    Keywords:  health information; patient education; readability; shoulder arthroscopy
  14. OTO Open. 2024 Apr-Jun. 8(2): e158
      Objective: Obstructive sleep apnea (OSA) has many treatment options, and the Internet is an important resource for patients. The quality of information patients review about sleep surgery is unknown. We assessed the readability, accessibility, actionability, and quality of online content for OSA surgeries. Study Design: Review of webpages by 2 independent reviewers.
    Setting: Internet-based search.
    Methods: We queried Google for sleep apnea surgery and included the top 100 English-language webpages. Content was scored by 2 reviewers using the Flesch-Kincaid (FK), Simple Measure of Gobbledygook (SMOG), JAMA benchmarks, CDC Clear Communication Index (CCI), and Patient Education Materials Assessment Tool (PEMAT) understandability and actionability scores.
    Results: Eighty-seven webpages were evaluated: 40 hosted by academic hospitals, 23 by private practices, 10 general knowledge sites, 4 by national organizations, 3 by industry, 3 by non-profit hospitals, and 2 government-sponsored. Mean CCI ranged from 22.7% to 84.9%; no source met the 90% CCI cutoff. The average PEMAT understandability score was 80.4% (±7.8; range 62.5%-93.3%), with 91% of pages meeting the 70% standard score. The average PEMAT actionability score was 38.4% (±16.5; range 0%-70%), with 5% meeting the standard score. The average readability of webpages was at the 10th grade reading level, and only 5% of pages met the recommended 6th grade reading level or lower. Only 21% of pages addressed surgical risks.
    Conclusion: Most online resources regarding OSA surgery do not meet recommended standards for communication. Providers should be aware of limitations of materials when counseling patients on sleep surgery treatments. Future patient education resources should meet health communication and readability standards.
    Keywords:  internet; obstructive sleep apnea; patient education; sleep surgery
    DOI:  https://doi.org/10.1002/oto2.158
  15. Int Ophthalmol. 2024 Jun 25. 44(1): 279
      PURPOSE: YouTube, a popular source for diverse information, hosts a wealth of content on aesthetic canthoplasty. Yet concerns linger about the accuracy and reliability of these videos, with potential for inaccuracies, biases, or misleading information. This study aims to evaluate the quality and reliability of YouTube content on this sought-after facial enhancement procedure. METHODS: The study employs four distinct scoring tools: the Global Quality Score (GQS), the Medical Quality Video Evaluation Tool (MQ-VET), the Patient Education Materials Assessment Tool for Audiovisual Materials (PEMAT-A/V), and the Video Power Index (VPI).
    RESULTS: Analysis of 173 YouTube videos relevant to aesthetic canthoplasty revealed scores primarily indicative of poor quality and reliability (mean score ± SD: PEMAT-A/V 30.75 ± 28.8, MQ-VET 28.57 ± 12.6, GQS 1.7 ± 1). Notably, these videos were predominantly uploaded by healthcare professionals (82.1%), and they focused more on advertisements (46.2%) than on scientific or educational information. Their elevated viewership and engagement metrics (likes, comments, and shares) attest to their significant popularity and influence (mean VPI score: 176.6 ± 635.8).
    CONCLUSION: YouTube's influence on aesthetic eyelid surgery is undeniable, shaping patient choices and expectations. However, unrealistic beauty ideals, heightened body dissatisfaction, and social comparisons lurk within its content, potentially harming psychological well-being and surgical decisions. Prioritizing qualified medical guidance and critical evaluation of online information are crucial for patients. Authors and platforms must act responsibly: authors by producing high-quality content, platforms by tackling misinformation.
    Keywords:  Aesthetic eye surgery; Canthoplasty; Fox-eye surgery; Global quality score; Patient education; Social media influence; YouTube
    DOI:  https://doi.org/10.1007/s10792-024-03197-4
  16. J ISAKOS. 2024 Jun 20. pii: S2059-7754(24)00124-X. [Epub ahead of print]
      OBJECTIVES: The purpose of this study was to assess the educational reliability and quality of videos shared on YouTube regarding medial collateral ligament (MCL) injuries of the knee. METHODS: Using the search keywords "medial collateral ligament" on YouTube, the first 50 videos were evaluated by two independent reviewers. Video characteristics were extracted, and each video was categorized by upload source and content type. Three scoring systems were used to evaluate the videos: the Journal of the American Medical Association (JAMA) Benchmark Score to assess reliability, the Global Quality Score (GQS) to assess educational quality, and the novel MCL Specific Score (MCL-SS) to assess MCL-specific content quality. Linear regression analyses were conducted to explore relationships between video characteristics and scores.
    RESULTS: Collectively, the videos were viewed 5,759,427 times, with a mean of 115,189 ± 177,861 views per video. The mean JAMA score was 1.8, the mean GQS 2.1, and the mean MCL-SS 5.6, indicating both poor reliability and poor quality. Videos uploaded by physicians had a statistically significantly higher mean MCL-SS (9.2 ± 5.9; P = .032) but were still of low quality. Multivariate linear regression revealed that physician upload was a statistically significant predictor of greater MCL-SS (β = 4.108; P = .029), and longer video duration was a statistically significant predictor of greater GQS (β = .001; P = .002) and MCL-SS (β = .007; P < .001).
    CONCLUSIONS: YouTube videos regarding MCL injuries, despite their popularity, were found to be on average of poor overall reliability and quality as measured by JAMA, GQS, and MCL-SS.
    LEVEL OF EVIDENCE: III - Cross-sectional Study.
    Keywords:  Knee Injury; Medial Collateral Ligament; Patient Education; Quality; Reliability; YouTube
    DOI:  https://doi.org/10.1016/j.jisako.2024.06.007
  17. Health Care Women Int. 2024 Jun 27. 1-15
      This descriptive study aimed to determine the content, quality, and reliability of YouTube videos on breast cancer-related lymphedema exercises. A total of 127 videos were independently assessed, and 103 of them were categorized into either the informative or the misleading content group. The content (mean score: 4.07 ± 2.29) and quality (mean score: 3.15 ± 1.46) of videos concerning lymphedema exercises were moderate, while reliability (mean score: 2.27 ± 1.64) was low. Among the 103 videos categorized using the content checklist, Global Quality Scale, and DISCERN reliability instrument, 57.3% (n = 59) were informative and 42.7% (n = 44) contained misleading information. The mean content, quality, and reliability scores of the informative videos were substantially higher than those of the misleading videos, and videos uploaded by universities/professional organizations/health care professionals/medical advertisements scored higher than videos uploaded by other sources. Overall, this study found that the content and quality of YouTube videos on lymphedema exercises were moderate and their reliability was low.
    DOI:  https://doi.org/10.1080/07399332.2024.2368499
  18. Cureus. 2024 Jun;16(6): e62752
      OBJECTIVES: This study aims to systematically evaluate the quality and reliability of YouTube videos on cardiac rehabilitation, addressing a gap in the literature regarding the assessment of online health resources in this field. DESIGN AND SETTING: The study is a cross-sectional analysis. This research was conducted entirely online, utilizing the YouTube platform for data collection.
    MAIN MEASURES: The videos were assessed for educational quality and reliability using modified versions of the DISCERN, Journal of the American Medical Association (JAMA), and Global Quality Scale (GQS) benchmarks. Specific data points such as upload date, length, uploader and narrator identity, and engagement metrics (views, likes, and dislikes) were also collected.
    RESULTS: Out of 300 videos initially reviewed, 140 met the inclusion criteria. The majority of videos were of low quality (67.9%), with medium (12.9%) and high-quality (19.3%) content being less common. Videos were predominantly uploaded by academic, university, or hospital sources (63.6%) and narrated by non-physician health professionals (41.4%). The content mainly provided general information about cardiac rehabilitation.
    CONCLUSIONS: The study revealed a concerning predominance of low-quality YouTube content on cardiac rehabilitation, underscoring the necessity for healthcare professionals and academic institutions to enhance the quality of online resources.
    Keywords:  cardiac rehabilitation; health information literacy; quality of health care; social media; video recording
    DOI:  https://doi.org/10.7759/cureus.62752
  19. Cureus. 2024 May;16(5): e60904
      BACKGROUND: YouTube is a widely used source of information on autism; however, the reliability and quality of such content remain uncertain. This study aimed to evaluate the reliability and quality of autism-related information presented in YouTube videos using the Global Quality Score (GQS) and Reliability Score. Methods: A cross-sectional observational study was conducted in November 2023. A total of 48 autism-related videos on YouTube were sourced using keywords such as 'autism', 'autism cause', 'autism treatment', and 'autism kids'. The authors viewed the videos and collected data on the number of views, likes, and comments, uploader type, and type of information disseminated. The authors used the GQS and a modified DISCERN score to assess the quality and reliability of information in the videos. The data were then analyzed statistically using the Kruskal-Wallis test in IBM SPSS Statistics for Windows, Version 22 (Released 2013; IBM Corp., Armonk, New York, United States). Results: Of the 48 videos, seven were excluded, leaving 41 for analysis. The included videos amassed 25,540,635 views, 304,557 likes, and 37,039 comments. The majority of videos were uploaded by hospitals (n=15; 36.59%), followed by news channels (n=12; 29.27%). Most videos described autism symptoms (n=26; 63.41%), with fewer addressing potential etiology (n=16; 39.02%). The median GQS was highest for videos uploaded by healthcare professionals (n=5), in contrast with news channels, and the Kruskal-Wallis test revealed significant differences (p=0.02). Conclusion: These videos collectively garnered substantial viewership, likes, and comments. Most described autism symptoms, although fewer addressed potential causes. Notably, videos uploaded by healthcare professionals achieved the highest GQSs, highlighting their significance in disseminating reliable autism information. Healthcare professionals therefore play a crucial role in disseminating reliable autism information via YouTube, and encouraging their involvement in creating informative videos can enhance public understanding of autism.
    Keywords:  autism; global quality score; reliability score; social media; youtube
    DOI:  https://doi.org/10.7759/cureus.60904
  20. Oral Maxillofac Surg. 2024 Jun 24.
      PURPOSE: In the digital era, the internet is the go-to source of information, and patients often seek insights on medical conditions like TMJ ankylosis. YouTube, a popular platform, is widely used for this purpose. However, YouTube's lack of regulation means it can host unreliable content. Hence, the primary objective of this study is to assess the scientific quality of YouTube videos concerning TMJ ankylosis. MATERIALS AND METHODS: This study analyzed 59 TMJ ankylosis-related videos, which were assessed by two Oral and Maxillofacial Surgery specialists. Data on the video source, duration, upload date, time elapsed since upload, total views, likes, dislikes, comments, interaction index, and viewing rate were collected and analyzed. Video quality was assessed using the Global Quality Scale (GQS) and the Quality Criteria for Consumer Health Information (DISCERN), comparing videos from health professionals and non-health professionals.
    RESULTS: Videos from health professionals scored better (GQS 3.21 ± 0.94, DISCERN 3.03 ± 0.75) than videos from non-health professionals (GQS 3.0 ± 1.04, DISCERN 2.81 ± 1.13). Videos from the health professional group had greater reliability and better quality than those from the non-health professional group (p < 0.01).
    CONCLUSION: YouTube should not be relied on as a trustworthy source of high-quality, reliable information regarding TMJ ankylosis. Healthcare professionals must be prepared to address ambiguous or misleading information and to prioritize building trustworthy relationships with patients through accurate diagnostic and therapeutic processes.
    Keywords:  DISCERN; Educational video; GQS; Online learning; TMJ ankylosis; YouTube
    DOI:  https://doi.org/10.1007/s10006-024-01270-x
  21. Patient Educ Couns. 2024 Jun 24. pii: S0738-3991(24)00225-8. [Epub ahead of print] 127: 108358
      OBJECTIVE: To better understand cancer clinical trials (CCT) information-seeking, a necessary precursor to patient and provider engagement with CCT. METHODS: Data from the National Cancer Institute's Cancer Information Service (CIS) were used to examine CCT information-seeking patterns over a 5-year period. Descriptive and logistic regression analyses were conducted to examine characteristics of CIS inquiries and their associations with having a CCT discussion.
    RESULTS: Between September 2018 and August 2023, 117,016 CIS inquiries originated from cancer survivors, caregivers, health professionals, and the general public; 27.5% of these inquiries included a CCT discussion (n = 32,160). Among CCT discussions, 35.5% originated from survivors, 53.5% from caregivers, 6.1% from the public, and 4.9% from health professionals. Inquiries in Spanish had lower odds of a CCT discussion (OR=.26, [.25-.28]), whereas inquiries arriving through the CIS instant messaging (OR=2.29, [2.22-2.37]) and email (OR=1.24, [1.18-1.30]) platforms were associated with higher odds of discussing CCT compared to the telephone. Individuals who were male, younger, insured, and had higher income and education had significantly higher odds of a CCT discussion, while those who were non-Hispanic Black and living in rural locales had significantly lower odds.
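    As an illustration of how such odds ratios are obtained, here is a minimal sketch of a logistic regression with statsmodels; the column names, categories, and data are hypothetical placeholders, not the study's variables or model specification:
```python
# Logistic regression of whether an inquiry included a CCT discussion;
# columns and data are hypothetical placeholders for illustration only.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

df = pd.DataFrame({
    "cct_discussion": [1, 0, 0, 1, 1, 0, 1, 1, 1, 0, 1, 1, 0, 0, 1, 0],
    "language": ["en", "es", "en", "en", "en", "es", "en", "es",
                 "en", "en", "en", "en", "es", "en", "en", "es"],
    "channel":  ["phone", "phone", "email", "chat", "chat", "phone", "chat", "phone",
                 "email", "phone", "chat", "email", "phone", "chat", "chat", "phone"],
})

# Reference categories: English inquiries and the telephone channel.
model = smf.logit("cct_discussion ~ C(language, Treatment('en'))"
                  " + C(channel, Treatment('phone'))", data=df).fit(disp=0)
print(np.exp(model.params))  # exponentiated coefficients = odds ratios
```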
    CONCLUSIONS: Disparities in CCT information-seeking may contribute to downstream CCT participation.
    PRACTICE IMPLICATIONS: Quality, language-concordant health information is needed to enable equitable awareness of - and ultimately engagement in - CCT.
    Keywords:  Cancer clinical trials; Cancer disparities; Health information-seeking
    DOI:  https://doi.org/10.1016/j.pec.2024.108358
  22. Front Public Health. 2024; 12: 1377017
      Introduction: During the COVID-19 pandemic, older adults faced more mental health issues, which may have complex impacts on pandemic prevention, and turning to the internet for health information is a double-edged sword for them. This study aimed to investigate the reciprocal relationship between negative emotions and prevention behaviors in older adults, as well as the direct and moderating effects of online health information seeking (OHIS) on negative emotions and prevention behaviors. Methods: Based on the common-sense model of self-regulation (CSM) and a sample of more than 20,000 participants from the Survey of Health, Aging and Retirement in Europe (SHARE), this study first used an autoregressive cross-lagged panel model (CLPM) to analyze the longitudinal effect of negative emotions on prevention behaviors. Second, the study used ordinary least squares (OLS) regression to explore the influence of changes in OHIS usage frequency on negative emotions and prevention behaviors. Third, the study used multigroup analysis to examine the moderating effect of changes in OHIS usage frequency on the CLPM.
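    For intuition, a two-wave cross-lagged panel structure can be approximated with a pair of lagged regressions, as in the minimal sketch below; the data are simulated, the variable names are placeholders, and the study's actual SEM specification includes controls and a multigroup structure not shown here:
```python
# Two-wave cross-lagged panel structure approximated as paired lagged
# regressions on simulated data (illustration only, not the study's model).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 500
negemo_w1 = rng.normal(size=n)
prevent_w1 = rng.normal(size=n)
# Simulate a small cross-lagged effect of wave-1 negative emotions
# on wave-2 prevention behaviors.
prevent_w2 = 0.5 * prevent_w1 + 0.04 * negemo_w1 + rng.normal(size=n)
negemo_w2 = 0.5 * negemo_w1 + rng.normal(size=n)

df = pd.DataFrame(dict(negemo_w1=negemo_w1, prevent_w1=prevent_w1,
                       negemo_w2=negemo_w2, prevent_w2=prevent_w2))

# Cross-lagged paths: regress each wave-2 outcome on both wave-1 variables.
m1 = smf.ols("prevent_w2 ~ prevent_w1 + negemo_w1", df).fit()
m2 = smf.ols("negemo_w2 ~ negemo_w1 + prevent_w1", df).fit()
print("emotions -> prevention:", round(m1.params["negemo_w1"], 3))
print("prevention -> emotions:", round(m2.params["prevent_w1"], 3))
```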
    Results: The findings indicate a significant longitudinal association where initial negative emotions predicted later prevention behaviors (β = 0.038, p < 0.001), and increased OHIS frequency was linked to positive changes in prevention behavior (β = 0.109, p < 0.001). Multigroup analysis revealed that the connection between negative emotions or increased negative emotions and prevention behaviors remained significant for those with no change or an increase in OHIS frequency but not for those with a decrease.
    Conclusion: This study suggested that negative emotions may drive older adults to engage more in prevention behaviors and that OHIS can augment this effect. These results underscore the importance of addressing mental health and providing reliable online health information to support older adults in managing infectious disease risks.
    Keywords:  COVID-19; SHARE; longitudinal study; negative emotion; older adults; online health information seeking
    DOI:  https://doi.org/10.3389/fpubh.2024.1377017
  23. Artif Intell Med. 2024 Jun 05. pii: S0933-3657(24)00146-5. [Epub ahead of print] 154: 102904
      With the rapid progress in Natural Language Processing (NLP), Pre-trained Language Models (PLMs) such as BERT, BioBERT, and ChatGPT have shown great potential in various medical NLP tasks. This paper surveys the cutting-edge achievements in applying PLMs to these tasks. Specifically, we first briefly introduce PLMs and outline the research on PLMs in medicine. Next, we categorise and discuss the types of tasks in medical NLP, covering text summarisation, question-answering, machine translation, sentiment analysis, named entity recognition, information extraction, medical education, relation extraction, and text mining. For each type of task, we provide an overview of the basic concepts, the main methodologies, the advantages of applying PLMs, the basic steps of applying PLMs, the datasets for training and testing, and the metrics for task evaluation. Subsequently, a summary of recent important research findings is presented, analysing their motivations, strengths and weaknesses, and similarities and differences, and discussing potential limitations. We also assess the quality and influence of the research reviewed by comparing citation counts and the reputation and impact of the conferences and journals where the papers were published; through these indicators, we identify the research topics currently attracting the most attention. Finally, we look forward to future research directions, including enhancing models' reliability, explainability, and fairness, to promote the application of PLMs in clinical practice. In addition, this survey collects download links for model code and the relevant datasets, which are valuable references for researchers applying NLP techniques in medicine and for medical professionals seeking to enhance their expertise and healthcare services through AI technology.
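    As a concrete illustration of one surveyed task, a PLM fine-tuned for token classification can be applied to biomedical named entity recognition in a few lines with the Hugging Face transformers pipeline. The checkpoint name in the sketch below is a hypothetical placeholder; substitute any token-classification model fine-tuned for biomedical NER:
```python
# Applying a pre-trained language model to biomedical NER via the
# transformers pipeline. The model name is a placeholder: substitute
# any token-classification checkpoint fine-tuned for biomedical NER.
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="some-org/biomedical-ner-model",  # hypothetical checkpoint name
    aggregation_strategy="simple",          # merge word pieces into entity spans
)

text = "The patient was started on metformin for type 2 diabetes mellitus."
for entity in ner(text):
    print(entity["entity_group"], entity["word"], round(entity["score"], 2))
```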
    Keywords:  BERT; GPT; Healthcare; Medical science; Natural language processing; Pre-trained language model
    DOI:  https://doi.org/10.1016/j.artmed.2024.102904