bims-librar Biomed News
on Biomedical librarianship
Issue of 2024‒05‒12
39 papers selected by
Thomas Krichel, Open Library Society



  1. Arq Bras Cardiol. 2024; pii: S0066-782X2024000300204. [Epub ahead of print] 121(3): e20240106
      
    DOI:  https://doi.org/10.36660/abc.20240106
  2. Sci Rep. 2024 May 06. 14(1): 10413
      Generative artificial intelligence technologies, especially large language models (LLMs) like ChatGPT, are revolutionizing information acquisition and content production across a variety of domains. These technologies have a significant potential to impact participation and content production in online knowledge communities. We provide initial evidence of this, analyzing data from Stack Overflow and Reddit developer communities between October 2021 and March 2023, documenting ChatGPT's influence on user activity in the former. We observe significant declines in both website visits and question volumes at Stack Overflow, particularly around topics where ChatGPT excels. By contrast, activity in Reddit communities shows no evidence of decline, suggesting the importance of social fabric as a buffer against the community-degrading effects of LLMs. Finally, the decline in participation on Stack Overflow is found to be concentrated among newer users, indicating that more junior, less socially embedded users are particularly likely to exit.
    DOI:  https://doi.org/10.1038/s41598-024-61221-0
  3. Med Ref Serv Q. 2024 Apr-Jun; 43(2): 95-105
      To help address the well-being of the campus and contribute to empathy building amongst students pursuing careers as healthcare providers, an academic health sciences library built a graphic novel collection focused on comics that discuss medical conditions and health-related topics. The collection contains the experiences of patients, providers, and caregivers. The reader-friendly format of graphic novels provides an easy entry point for discussing empathy with health professions faculty and students. The collection has been used in the classroom during library instruction sessions, with the idea of integrating it within the curriculum.
    Keywords:  Comics; empathy; graphic medicine; graphic novels; wellness
    DOI:  https://doi.org/10.1080/02763869.2024.2329016
  4. Med Ref Serv Q. 2024 Apr-Jun; 43(2): 196-202
      Named entity recognition (NER) is a natural language processing technique that has been used since the early 1990s to extract information from raw text. With rapid advances in AI and computing, NER models have gained significant attention and now serve as foundational tools across numerous professional domains, organizing unstructured data for research and practical applications. This is particularly evident in medicine and healthcare, where NER models efficiently extract critical information from complex documents that are challenging to review manually. Despite these successes, NER still falls short of fully comprehending the nuances of natural language. However, the development of more advanced and user-friendly models promises to significantly improve the work experience of professional users.
    Keywords:  Artificial intelligence (AI); information-extraction techniques; named entity recognition (NER); natural language processing (NLP); structured data
    DOI:  https://doi.org/10.1080/02763869.2024.2335139
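    For readers who want to see the technique in action, here is a minimal NER sketch using the open-source spaCy library; the model name and sample sentence are illustrative assumptions, not tools used by the article above.
      # Requires: pip install spacy && python -m spacy download en_core_web_sm
      import spacy

      nlp = spacy.load("en_core_web_sm")
      doc = nlp("Aspirin was prescribed at Johns Hopkins Hospital in Baltimore in 2023.")

      # Each recognized entity carries its surface text and a predicted label,
      # e.g. ORG for organizations, GPE for places, DATE for dates.
      for ent in doc.ents:
          print(ent.text, ent.label_)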
  5. Med Ref Serv Q. 2024 Apr-Jun; 43(2): 182-190
      Created by the NIH in 2015, the Common Data Elements (CDE) Repository provides free online access to search and use Common Data Elements. This tool helps to ensure consistent data collection, saves time and resources, and ultimately improves the accuracy of and interoperability among datasets. The purpose of this column is to provide an overview of the database, discuss why it is important for researchers and relevant for health sciences librarians, and review the basic layout of the website, including sample searches that will demonstrate how it can be used.
    Keywords:  Common data elements (CDEs); data; data elements; data science; online database; review
    DOI:  https://doi.org/10.1080/02763869.2024.2323896
  6. Health Info Libr J. 2024 May 10.
      The traditional qualifications and work of a health librarian may not, at first glance, seem like they readily lend themselves to the wider work of an organisation. Too often librarians are seen as experts in a small specialist field. However, as librarians, we know that at our core is extensive digital experience and knowledge as well as a core set of transferrable skills that can be adapted to meet the ever-changing needs of the organisation. This article describes how the library evidence team became part of a wider board project to develop a governance system for Apps. It also describes how the skills of librarians can be developed to work in this area and raise the profile of the team within the board.
    Keywords:  National Health Service (NHS); advocacy; collaboration; digital information resources; education and training; governance; information and communication technologies (ICT); librarians
    DOI:  https://doi.org/10.1111/hir.12537
  7. Med Ref Serv Q. 2024 Apr-Jun; 43(2): 152-163
      Health sciences library public services underwent profound changes due to the COVID-19 pandemic. Circulation, reference services, instruction, interlibrary loan, and programming were all significantly affected. Libraries adapted by moving to virtual services, featuring online workshops, video consultations, and digital information sharing. Reference services moved to virtual consultations for a streamlined experience, and instruction transitioned to interactive video tutorials. Interlibrary loan services saw a decrease in print material lending but an increase in electronic subscriptions. Library programming shifted from in-person to virtual, focusing on wellness activities. This post-pandemic transformation underscores the importance of ongoing adaptation to meet changing user needs.
    Keywords:  COVID-19 pandemic; health sciences library; public services; virtual programming; virtual services
    DOI:  https://doi.org/10.1080/02763869.2024.2330244
  8. Med Ref Serv Q. 2024 Apr-Jun; 43(2): 130-151
      While LibGuides are widely used in libraries to curate resources for users, there are a number of common problems, including maintenance, design and layout, and curating relevant and concise content. One health sciences library sought to improve its LibGuides, consulting usage statistics, user feedback, and recommendations from the literature to inform decision making. The team recommended a number of changes to make LibGuides more usable, including creating robust maintenance and content guidelines, scheduling regular updates, and various changes to the format of the guides themselves to make them more user-friendly.
    Keywords:  Academic libraries; health sciences libraries; libguides; subject guides; usability; user research
    DOI:  https://doi.org/10.1080/02763869.2024.2335138
  9. Med Ref Serv Q. 2024 Apr-Jun; 43(2): 164-181
      Systems librarianship, when merged with the position of informationist, evolves into the identity of the systems informationist in the hospital setting. The Health Sciences Library at Geisinger has successfully implemented a systems informationist role within an open systems framework. The duties of the systems informationist are framed here using: input for information-seeking behavior; throughput of clinical support for patient care; output by user experience in research and education; and feedback to elevate operational excellence. This case report contributes a focused approach to systems librarianship, providing examples for other hospital libraries that may be interested in developing their own Systems Services.
    Keywords:  Systems librarians; hospital libraries; informationists; open systems framework; systems informationists; systems services
    DOI:  https://doi.org/10.1080/02763869.2024.2333181
  10. Med Ref Serv Q. 2024 Apr-Jun; 43(2): 106-118
      The objective of this study was to examine the accuracy of indexing for "Appalachian Region"[Mesh]. Researchers performed a search in PubMed for articles published in 2019 using "Appalachian Region"[Mesh] or "Appalachia" or "Appalachian" in the title or abstract. Only 17.88% of the articles retrieved by the search were about Appalachia according to the Appalachian Regional Commission (ARC) definition. Most articles retrieved appeared because they were indexed with state terms that were included as part of the MeSH term. Database indexing and searching transparency is of growing importance as indexers rely increasingly on automated systems to catalog information and publications.
    Keywords:  Appalachia; databases; indexing; search strategies
    DOI:  https://doi.org/10.1080/02763869.2024.2326768
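    As an illustration of how such a search can be reproduced programmatically, the sketch below uses Biopython's Entrez interface; the query string, date limits, and email address are plausible assumptions, not the authors' verbatim strategy.
      from Bio import Entrez

      Entrez.email = "you@example.org"  # NCBI asks for a contact address
      query = ('"Appalachian Region"[Mesh] OR Appalachia[Title/Abstract] '
               'OR Appalachian[Title/Abstract]')
      handle = Entrez.esearch(db="pubmed", term=query, datetype="pdat",
                              mindate="2019", maxdate="2019", retmax=20)
      record = Entrez.read(handle)
      print(record["Count"])   # number of matching articles
      print(record["IdList"])  # first 20 PMIDs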
  11. BMC Med Res Methodol. 2024 May 09. 24(1): 108
      OBJECTIVE: Systematic literature reviews (SLRs) are critical for life-science research. However, the manual selection and retrieval of relevant publications can be a time-consuming process. This study aims to (1) develop two disease-specific annotated corpora, one for human papillomavirus (HPV) associated diseases and the other for pneumococcal-associated pediatric diseases (PAPD), and (2) optimize machine- and deep-learning models to facilitate automation of the SLR abstract screening.
    METHODS: This study constructed two disease-specific SLR screening corpora for HPV and PAPD, which contained citation metadata and corresponding abstracts. Performance was evaluated using precision, recall, accuracy, and F1-score of multiple combinations of machine- and deep-learning algorithms and features such as keywords and MeSH terms.
    RESULTS AND CONCLUSIONS: The HPV corpus contained 1697 entries, with 538 relevant and 1159 irrelevant articles. The PAPD corpus included 2865 entries, with 711 relevant and 2154 irrelevant articles. Adding features beyond title and abstract improved the performance (measured in accuracy) of machine learning models by 3% for the HPV corpus and 2% for the PAPD corpus. Transformer-based deep learning models consistently outperformed conventional machine learning algorithms, highlighting the strength of domain-specific pre-trained language models for SLR abstract screening. This study provides a foundation for the development of more intelligent SLR systems.
    Keywords:  Article screening; Deep learning; Machine learning; Systematic literature review; Text classification
    DOI:  https://doi.org/10.1186/s12874-024-02224-3
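    As a concrete sketch of the kind of screening baseline the study compares against transformers, the snippet below trains a TF-IDF plus logistic-regression classifier and reports precision, recall, and F1; the toy texts and labels are invented, since the corpora are not reproduced here.
      from sklearn.feature_extraction.text import TfidfVectorizer
      from sklearn.linear_model import LogisticRegression
      from sklearn.metrics import precision_recall_fscore_support
      from sklearn.model_selection import train_test_split

      texts = ["HPV vaccination uptake cohort ...", "unrelated cardiology trial ..."] * 50
      labels = [1, 0] * 50  # 1 = relevant to the review, 0 = irrelevant

      X_tr, X_te, y_tr, y_te = train_test_split(
          texts, labels, test_size=0.3, random_state=0, stratify=labels)

      vec = TfidfVectorizer(ngram_range=(1, 2))
      clf = LogisticRegression(max_iter=1000)
      clf.fit(vec.fit_transform(X_tr), y_tr)

      pred = clf.predict(vec.transform(X_te))
      p, r, f1, _ = precision_recall_fscore_support(y_te, pred, average="binary")
      print(f"precision={p:.2f} recall={r:.2f} F1={f1:.2f}")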
  12. Eur Arch Otorhinolaryngol. 2024 May 04.
      BACKGROUND: The widespread diffusion of Artificial Intelligence (AI) platforms is revolutionizing how health-related information is disseminated, thereby highlighting the need for tools to evaluate the quality of such information. This study aimed to propose and validate the Quality Assessment of Medical Artificial Intelligence (QAMAI), a tool specifically designed to assess the quality of health information provided by AI platforms.
    METHODS: The QAMAI tool was developed by a panel of experts following guidelines for the development of new questionnaires. A total of 30 responses from ChatGPT4, addressing patient queries, theoretical questions, and clinical head and neck surgery scenarios, were assessed by 27 reviewers from 25 academic centers worldwide. Construct validity, internal consistency, inter-rater and test-retest reliability were assessed to validate the tool.
    RESULTS: The validation was conducted on the basis of 792 assessments for the 30 responses given by ChatGPT4. The results of the exploratory factor analysis revealed a unidimensional structure of the QAMAI with a single factor comprising all the items, which explained 51.1% of the variance with factor loadings ranging from 0.449 to 0.856. Overall internal consistency was high (Cronbach's alpha = 0.837). The intraclass correlation coefficient was 0.983 (95% CI 0.973-0.991; F(29,542) = 68.3; p < 0.001), indicating excellent reliability. Test-retest reliability analysis revealed a moderate-to-strong correlation with a Pearson's coefficient of 0.876 (95% CI 0.859-0.891; p < 0.001).
    CONCLUSIONS: The QAMAI tool demonstrated significant reliability and validity in assessing the quality of health information provided by AI platforms. Such a tool might become particularly important/useful for physicians as patients increasingly seek medical information on AI platforms.
    Keywords:  AI; Artificial intelligence; ChatGPT; Head and neck surgery; Health-related information quality; Machine learning; Maxillofacial surgery; Natural language processing; Neural networks; Otorhinolaryngology
    DOI:  https://doi.org/10.1007/s00405-024-08710-0
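    For reference, the Cronbach's alpha reported above can be computed from first principles as below; the rating matrix is a made-up example, not the study's data.
      import numpy as np

      def cronbach_alpha(scores):
          """scores: respondents in rows, questionnaire items in columns."""
          k = scores.shape[1]                         # number of items
          item_vars = scores.var(axis=0, ddof=1)      # variance of each item
          total_var = scores.sum(axis=1).var(ddof=1)  # variance of total scores
          return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

      ratings = np.array([[4, 5, 4], [3, 4, 4], [5, 5, 5], [2, 3, 2]])
      print(round(cronbach_alpha(ratings), 3))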
  13. Int J Impot Res. 2024 May 07.
      The present study assessed the accuracy of artificial intelligence-generated responses to frequently asked questions on erectile dysfunction. A cross-sectional analysis involved 56 erectile dysfunction-related questions searched on Google, categorized into nine sections: causes, diagnosis, treatment options, treatment complications, protective measures, relationship with other illnesses, treatment costs, treatment with herbal agents, and appointments. Responses from ChatGPT 3.5, ChatGPT 4, and BARD were evaluated by two experienced urology experts using the F1 and global quality scores (GQS) for accuracy, relevance, and comprehensibility. ChatGPT 3.5 and ChatGPT 4 achieved higher GQS than BARD in categories such as causes (4.5 ± 0.54, 4.5 ± 0.51, 3.15 ± 1.01, respectively, p < 0.001), treatment options (4.35 ± 0.6, 4.5 ± 0.43, 2.71 ± 1.38, respectively, p < 0.001), protective measures (5.0 ± 0, 5.0 ± 0, 4 ± 0.5, respectively, p = 0.013), relationships with other illnesses (4.58 ± 0.58, 4.83 ± 0.25, 3.58 ± 0.8, respectively, p = 0.006), and treatment with herbal agents (3 ± 0.61, 3.33 ± 0.83, 1.8 ± 1.09, respectively, p = 0.043). F1 scores in the causes (1), diagnosis (0.857), treatment options (0.726), and protective measures (1) categories indicated alignment with the guidelines. There was no significant difference between ChatGPT 3.5 and ChatGPT 4 in answer quality, but both outperformed BARD on the GQS. These results emphasize the need to continually enhance and validate AI-generated medical information, underscoring the importance of artificial intelligence systems in delivering reliable information on erectile dysfunction.
    DOI:  https://doi.org/10.1038/s41443-024-00898-3
  14. Cureus. 2024 May;16(5): e59960
      Background Large language models (LLMs), such as ChatGPT-4, Gemini, and Microsoft Copilot, have been instrumental in various domains, including healthcare, where they enhance health literacy and aid in patient decision-making. Given the complexities involved in breast imaging procedures, accurate and comprehensible information is vital for patient engagement and compliance. This study aims to evaluate the readability and accuracy of the information provided by three prominent LLMs, ChatGPT-4, Gemini, and Microsoft Copilot, in response to frequently asked questions in breast imaging, assessing their potential to improve patient understanding and facilitate healthcare communication.
    Methodology We collected the most common questions on breast imaging from clinical practice and posed them to the three LLMs. Responses were analyzed for readability using the Flesch Reading Ease and Flesch-Kincaid Grade Level tests and for accuracy using a radiologist-developed Likert-type scale.
    Results The study found significant variations among the LLMs. Gemini and Microsoft Copilot scored higher on readability scales (p < 0.001), indicating their responses were easier to understand. In contrast, ChatGPT-4 demonstrated greater accuracy in its responses (p < 0.001).
    Conclusions While LLMs such as ChatGPT-4 show promise in providing accurate responses, readability issues may limit their utility in patient education. Conversely, Gemini and Microsoft Copilot, despite being less accurate, are more accessible to a broader patient audience. Ongoing adjustments and evaluations of these models are essential to ensure they meet the diverse needs of patients, emphasizing the need for continuous improvement and oversight in the deployment of artificial intelligence technologies in healthcare.
    Keywords:  artificial intelligence; breast imaging; chatgpt; gemini; large language models; microsoft copilot
    DOI:  https://doi.org/10.7759/cureus.59960
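    For reference, the two readability formulas used here (and in several other studies in this issue) reduce to simple arithmetic over sentence, word, and syllable counts; the sketch below uses a deliberately naive syllable counter, whereas dedicated tools apply more careful heuristics.
      import re

      def count_syllables(word):
          # Approximation: count groups of consecutive vowels.
          return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

      def flesch_scores(text):
          sentences = max(1, len(re.findall(r"[.!?]+", text)))
          words = re.findall(r"[A-Za-z']+", text)
          syllables = sum(count_syllables(w) for w in words)
          wps = len(words) / sentences   # words per sentence
          spw = syllables / len(words)   # syllables per word
          reading_ease = 206.835 - 1.015 * wps - 84.6 * spw
          grade_level = 0.39 * wps + 11.8 * spw - 15.59
          return reading_ease, grade_level

      print(flesch_scores("Mammography uses low-dose X-rays to image the breast."))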
  15. J Dent Educ. 2024 May 07.
      PURPOSE/OBJECTIVES: This study proposes the utilization of a natural language processing tool to create a semantic search engine for dental education while addressing the increasing concerns of accuracy, bias, and hallucination in outputs generated by AI tools. The paper focuses on developing and evaluating DentQA, a specialized question-answering tool that makes it easy for students to access information located in handouts or study material distributed by an institution.
    METHODS: DentQA is structured upon the GPT3.5 language model, utilizing prompt engineering to extract information from external dental documents that experts have verified. Evaluation involves non-human metrics (BLEU scores) and human metrics for the tool's performance, relevance, accuracy, and functionality.
    RESULTS: Non-human metrics confirm DentQA's linguistic proficiency, achieving a unigram BLEU score of 0.85. Human metrics reveal DentQA's superiority over GPT3.5 in terms of accuracy (p = 0.00004) and absence of hallucination (p = 0.026). Additional metrics confirmed consistent performance across different question types (χ²(4, N = 200) = 13.0378, p = 0.012). User satisfaction and performance metrics support DentQA's usability and effectiveness, with a response time of 3.5 s and over 70% satisfaction across all evaluated parameters.
    CONCLUSIONS: The study advocates using a semantic search engine in dental education, mitigating concerns of misinformation and hallucination. By outlining the workflow and the utilization of open-source tools and methods, the study encourages the utilization of similar tools for dental education while underscoring the importance of customizing AI models for dentistry. Further optimizations, testing, and utilization of recent advances can contribute to dental education significantly.
    Keywords:  artificial intelligence; conversational agent; dentQA; dental; education; natural language processing; search engine
    DOI:  https://doi.org/10.1002/jdd.13560
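    For readers unfamiliar with the metric, the unigram BLEU score used above can be computed with NLTK as below; the reference and candidate answers are invented, since DentQA's materials are not public.
      from nltk.translate.bleu_score import sentence_bleu

      reference = "composite resins are placed in incremental layers".split()
      candidate = "composite resins are placed in thin incremental layers".split()

      # weights=(1, 0, 0, 0) restricts BLEU to unigram precision.
      score = sentence_bleu([reference], candidate, weights=(1, 0, 0, 0))
      print(round(score, 2))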
  16. Foot Ankle Spec. 2024 May 07. 19386400241249810
      BACKGROUND: Artificial intelligence (AI) large language models (LLMs), such as Chat Generative Pre-trained Transformer (ChatGPT), have gained traction both as augmentative tools in patient care and as powerful synthesizing machines. The use of ChatGPT in orthopaedic foot and ankle surgery, particularly as an informative resource for patients, has not been described to date. The purpose of this study was to assess the quality of information provided by ChatGPT in response to commonly asked questions about total ankle replacement (TAR).
    METHODS: ChatGPT was asked 10 frequently asked questions about TAR in a conversational thread. Responses were recorded without follow-up and subsequently graded A, B, C, or F, corresponding with "excellent response," "adequate response needing mild clarification," "inadequate response needing moderate clarification," and "poor response needing severe clarification."
    RESULTS: Of the 10 responses, 2 were grade "A," 6 were grade "B," 2 were grade "C," and none were grade "F." Overall, the LLM provided good-quality responses to the posed prompts.
    CONCLUSION: The provided responses were understandable and representative of the current literature surrounding TAR. This study highlights the potential role of LLMs in augmenting patient understanding of foot and ankle operative procedures.
    LEVELS OF EVIDENCE: IV.
    Keywords:  ankle arthritis; ankle arthroplasty; artificial intelligence; large language model; patient comprehension; quality of information
    DOI:  https://doi.org/10.1177/19386400241249810
  17. Sci Rep. 2024 May 04. 14(1): 10273
      Many people in the advanced stages of dementia require full-time caregivers, most of whom are family members who provide informal (non-specialized) care. It is important to provide these caregivers with high-quality information to help them understand and manage the symptoms and behaviors of dementia patients. This study aims to evaluate ChatGPT, a chatbot built using the Generative Pre-trained Transformer (GPT) large language model, in responding to information needs and information seeking of such informal caregivers. We identified the information needs of dementia patients based on the relevant literature (22 articles were selected from 2442 retrieved articles). From this analysis, we created a list of 31 items that describe these information needs, and used them to formulate 118 relevant questions. We then asked these questions to ChatGPT and investigated its responses. In the next phase, we asked 15 informal and 15 formal dementia-patient caregivers to analyze and evaluate these ChatGPT responses, using both quantitative (questionnaire) and qualitative (interview) approaches. In the interviews conducted, informal caregivers were more positive towards the use of ChatGPT to obtain non-specialized information about dementia compared to formal caregivers. However, ChatGPT struggled to provide satisfactory responses to more specialized (clinical) inquiries. In the questionnaire study, informal caregivers gave higher ratings to ChatGPT's responsiveness on the 31 items describing information needs, giving an overall mean score of 3.77 (SD 0.98) out of 5; the mean score among formal caregivers was 3.13 (SD 0.65), indicating that formal caregivers showed less trust in ChatGPT's responses compared to informal caregivers. ChatGPT's responses to non-clinical information needs related to dementia patients were generally satisfactory at this stage. As this tool is still under heavy development, it holds promise for providing even higher-quality information in response to information needs, particularly when developed in collaboration with healthcare professionals. Thus, large language models such as ChatGPT can serve as valuable sources of information for informal caregivers, although they may not fully meet the needs of formal caregivers who seek specialized (clinical) answers. Nevertheless, even in its current state, ChatGPT was able to provide responses to some of the clinical questions related to dementia that were asked.
    Keywords:  ChatGPT; Dementia; Information need; Information seeking; Large language model
    DOI:  https://doi.org/10.1038/s41598-024-61068-5
  18. Oral Surg Oral Med Oral Pathol Oral Radiol. 2024 Apr 19. pii: S2212-4403(24)00164-0. [Epub ahead of print]
      OBJECTIVES: To examine the quality, reliability, readability, and usefulness of ChatGPT in promoting oral cancer early detection.
    STUDY DESIGN: A total of 108 patient-oriented questions about oral cancer early detection were compiled from expert panels, professional societies, and web-based tools. Questions were categorized into 4 topic domains, and ChatGPT 3.5 was asked each question independently. ChatGPT answers were evaluated regarding quality, readability, actionability, and usefulness; two experienced reviewers independently assessed each response.
    RESULTS: Questions related to clinical appearance constituted 36.1% (n = 39) of the total questions. ChatGPT provided "very useful" responses to the majority of questions (75%; n = 81). The mean Global Quality Score was 4.24 ± 1.3 of 5. The mean reliability score was 23.17 ± 9.87 of 25. The mean understandability score was 76.6% ± 25.9% of 100, while the mean actionability score was 47.3% ± 18.9% of 100. The mean FKS reading ease score was 38.4% ± 29.9%, while the mean SMOG index readability score was 11.65 ± 8.4. No misleading information was identified among ChatGPT responses.
    CONCLUSION: ChatGPT is an attractive and potentially useful resource for informing patients about early detection of oral cancer. Nevertheless, concerns do exist about readability and actionability of the offered information.
    DOI:  https://doi.org/10.1016/j.oooo.2024.04.010
  19. J Am Pharm Assoc (2003). 2024 May 08. pii: S1544-3191(24)00139-0. [Epub ahead of print] 102119
      BACKGROUND: ChatGPT is a conversational artificial intelligence (AI) technology that has shown application in various facets of healthcare. With the increased use of AI, it is imperative to assess the accuracy and comprehensibility of AI platforms.
    OBJECTIVE: This pilot project aimed to assess the understandability, readability, and accuracy of ChatGPT as a source of medication-related patient education as compared with an evidence-based medicine tertiary reference resource, LexiComp®.
    METHODS: Patient education materials (PEMs) were obtained from ChatGPT and LexiComp® for eight common medications (albuterol, apixaban, atorvastatin, hydrocodone/acetaminophen, insulin glargine, levofloxacin, omeprazole, and sacubitril/valsartan). PEMs were extracted, blinded, and assessed by two investigators independently. The primary outcome was a comparison of the Patient Education Materials Assessment Tool-printable (PEMAT-P). Secondary outcomes included Flesch reading ease, Flesch Kincaid grade level, percent passive sentences, word count, and accuracy. A 7-item accuracy checklist for each medication was generated by expert consensus among pharmacist investigators, with LexiComp® PEMs serving as the control. PEMAT-P interrater reliability was determined via intraclass correlation coefficient (ICC). Flesch reading ease, Flesch Kincaid grade level, percent passive sentences, and word count were calculated by Microsoft® Word®. Continuous data were assessed using the Student's t-test via SPSS (version 20.0).
    RESULTS: No difference was found in the PEMAT-P understandability score of PEMs produced by ChatGPT versus LexiComp® [77.9% (11.0) vs. 72.5% (2.4), P=0.193]. Reading level was higher with ChatGPT [8.6 (1.2) vs. 5.6 (0.3), P<0.001]. ChatGPT PEMs had a lower percentage of passive sentences and a lower word count. The average accuracy score of ChatGPT PEMs was 4.25/7 (61%), with scores ranging from 29% to 86%.
    CONCLUSION: Despite comparable PEMAT-P scores, ChatGPT PEMs did not meet grade level targets. Lower word count and passive text with ChatGPT PEMs could benefit patients, but the variable accuracy scores prevent routine use of ChatGPT to produce medication-related PEMs at this time.
    Keywords:  artificial intelligence; counseling; e-health; literacy; patient education; pharmacists
    DOI:  https://doi.org/10.1016/j.japh.2024.102119
  20. Eur Arch Otorhinolaryngol. 2024 May 06.
      
    Keywords:  Artificial intelligence; ChatGPT; Patient education; Readability
    DOI:  https://doi.org/10.1007/s00405-024-08716-8
  21. Health Info Libr J. 2024 May 08.
      BACKGROUND: Recently, public health data dashboards have gained popularity as trusted, up-to-date sources of health information. However, their usability and usefulness may be limited.
    OBJECTIVE: To identify the requirements of usable public health data dashboards through a case study with domain experts.
    METHODS: Paired-user virtual data collection sessions were conducted with 20 experts in three steps: (1) a monitored use of an existing dashboard to complete tasks and discuss the usability problems, (2) a survey rating user experience, and (3) an interview regarding the users and use cases. Data analysis included quantitative analysis of the survey findings and thematic analysis of the audio transcripts.
    RESULTS: Analyses yielded several findings: (1) familiar charts with clear legends and labels should be used to focus users' attention on the content; (2) charts should be organized in a simple and consistent layout; (3) contextual information should be provided to help with interpretations; (4) data limitations should be clearly communicated; (5) guidance should be provided to lead user interactions.
    DISCUSSION: The identified requirements guide health librarians and information professionals in evaluating public health data dashboards.
    CONCLUSION: Public health data dashboards should be designed based on users' needs to provide useful up-to-date information sources for health information consumers.
    Keywords:  consumer health information; data visualization; health literacy; human‐computer interaction; information literacy; patient education; public health
    DOI:  https://doi.org/10.1111/hir.12532
  22. Med Ref Serv Q. 2024 Apr-Jun; 43(2): 119-129
      Evidence-based medicine (EBM) instruction is required for physician assistant (PA) students. As a follow-up to an initial didactic-year survey, this study seeks to understand which attributes of EBM resources clinical PA students find most and least useful, their self-efficacy utilizing medical literature, and their usage of EBM tools in the clinic. Results indicate that students preferred UpToDate and PubMed. PA students valued ease of use, which can inform instructors and librarians. Respondents utilized EBM tools daily or a few days a week, underscoring the importance of EBM tools in real-world scenarios. After their clinical year, students felt moderately confident utilizing the medical literature, highlighting the continued need for EBM training.
    Keywords:  Bibliographic database; clinical instruction; didactic instruction; evidence-based medicine; evidence-based practice; information-seeking behavior; physician assistant student; self-efficacy; user experience
    DOI:  https://doi.org/10.1080/02763869.2024.2329012
  23. J Am Acad Orthop Surg Glob Res Rev. 2024 May 01. 8(5):
      INTRODUCTION: Rotator cuff injuries (RCIs) are incredibly common in the US adult population. Forty-three percent of adults have basic or below-basic literacy levels; nonetheless, patient educational materials (PEMs) are frequently composed at levels exceeding these reading capabilities. This study investigates the readability of PEMs on RCIs published by leading US orthopaedic institutions.
    METHODS: The top 25 orthopaedic institutions on the 2022 U.S. News & World Report Best Hospitals Specialty Ranking were selected. Readability scores of PEMs related to RCI were calculated using the www.readabilityformulas.com website.
    RESULTS: Among the 25 analyzed PEM texts, all exceeded the sixth-grade reading level. Only four of 168 scores (2.4%) were below the eighth-grade level.
    DISCUSSION: This study indicates that PEMs on rotator cuff injuries from top orthopaedic institutions are too complex for many Americans, with readability levels ranging from grade 8.5 to grade 16, well above the CDC-recommended eighth-grade level. The research highlights a widespread issue with high reading levels across healthcare information and underscores the need for healthcare providers to adopt patient-centered communication strategies to improve comprehension and accessibility.
    CONCLUSION: PEMs on rotator cuff injuries from leading orthopaedic institutions often have a reading level beyond that of many Americans, exceeding guidelines from the NIH and CDC that recommend PEMs be written at an eighth-grade reading level. To increase accessibility, enhance healthcare literacy, and improve patient outcomes, institutions should simplify these materials to meet recommended readability standards.
    DOI:  https://doi.org/e24.00085
  24. Eur Arch Otorhinolaryngol. 2024 May 06.
      INTRODUCTION: The treatment of patients with a cochlear implant (CI) is usually an elective, complex and interdisciplinary process. As an important source of information, patients often access the internet prior to treatment. The quality of internet-based information regarding thematic coverage has not yet been analysed in detail. Therefore, the aim of this study was to analyse the information on CI care available on the internet regarding its thematic coverage and readability.
    MATERIALS AND METHODS: Eight search phrases related to CI care were defined as part of the study. A checklist for completeness of thematic coverage was then created for each search phrase. The current German CI clinical practice guideline and the white paper on CI care in Germany were used as a basis. As a further parameter, readability was assessed using Flesch Reading Ease scores. The search phrases were used for an internet search with Google. The first ten results were then analysed with regard to thematic coverage, readability and the provider of the website.
    RESULTS: A total of 80 websites were identified, which were set up by 54 different providers (16 providers were found in multiple entries) from eight different provider groups. The average completeness of thematic coverage was 41.6 ± 28.2%. Readability according to the Flesch Reading Ease score was categorised as "hard to read" on average (34.7 ± 14.2 points, range: 0-72). There was a statistically significant negative correlation between the thematic coverage of content and readability (Spearman's rank correlation: r = -0.413, p = 0.00014). The completeness of thematic coverage of information on CI care available on the internet was highly heterogeneous and correlated significantly and negatively with readability. This result should be taken into account both by providers of internet information and by patients using internet-based information on CI care, and it should help to further improve the quality of web-based information.
    Keywords:  Cochlear implantation; Internet search; Patient information; Quality; Readability; Thematic coverage
    DOI:  https://doi.org/10.1007/s00405-024-08694-x
  25. Arch Bone Jt Surg. 2024; 12(4): 264-274
      Objectives: While the internet provides accessible medical information, it often does not cater to the average patient's ability to understand medical text at a 6th-to-8th-grade reading level, per American Medical Association (AMA) and National Institutes of Health (NIH) recommendations. This study analyzes current online materials relating to posterior cruciate ligament (PCL) surgery for their readability, understandability, and actionability.
    Methods: The top 100 Google search results for "PCL surgery" were compiled. Research papers, procedural protocols, advertisements, and videos were excluded from the data collection. Readability was examined using 7 algorithms: the Flesch Reading Ease Score, Gunning Fog, Flesch-Kincaid Grade Level, Coleman-Liau Index, SMOG Index, Automated Readability Index, and the Linsear Write Formula. Two evaluators assessed understandability and actionability of the results with the Patient Education Materials Assessment Tool (PEMAT). Outcome measures included reading grade level, readers' minimum and maximum age, understandability, and actionability.
    Results: Of the 100 results, 16 were excluded based on the exclusion criteria. There was a statistically significant difference between the readability of the results from all algorithms and the current recommendation by the AMA and NIH. Subgroup analysis demonstrated no difference in readability by the page of the Google search results on which a site appeared. There was also no difference in readability between individual websites and organizational websites (hospital and non-hospital educational websites). Three articles were at the recommended 8th-grade reading level, and all three were from healthcare institutes.
    Conclusion: There is a discrepancy in readability between the recommendation of AMA/NIH and online educational materials regarding PCL surgeries, regardless of where they appear on Google and across different forums. The understandability and actionability were equally poor. Future research can focus on the readability and validity of video and social media as they are becoming increasingly popular sources of medical information.
    Keywords:  Knee; PCL surgery; Patient education materials; Readability; Understandability
    DOI:  https://doi.org/10.22038/ABJS.2024.75361.3492
  26. J Craniofac Surg. 2024 May 06.
      Although the lateral window approach allows for greater graft material delivery and bone formation, it is more challenging and invasive, prompting keen interest among dentists to master this method. YouTube is increasingly used for medical training; however, concerns regarding the quality of instructional videos exist. This study proposes new criteria for evaluating YouTube videos on maxillary sinus elevation with the aim of establishing standards for assessing instructional content in the field. We sourced 100 maxillary sinus elevation videos from YouTube and, following exclusion criteria, analyzed 65 remaining videos. The video characteristics, content quality, and newly developed criteria were evaluated. Statistical analyses, employing ordinal logistic regression, identified the factors influencing the quality of instructional videos and evaluated the significance of our new criteria. Although video interaction and view rate exhibited positive relations to content quality, they were not significant (P=0.818 and 0.826, respectively). Notably, videos of fair and poor quality showed a significant negative relation (P<0.001). Audio commentary, written commentary, and descriptions of preoperative data displayed positive but statistically insignificant relationships (P=0.088, 0.228, and 0.612, respectively). The comparison of video evaluation results based on the developed criteria with content quality scores revealed significant negative relationships for good, fair, and poor videos (P<0.001, Exp(B)=-4.306, -7.853, -10.722, respectively). Among the various video characteristics, only image quality showed a significant relationship with content quality. Importantly, our newly developed criteria demonstrated a significant relationship with video content quality, providing valuable insights for assessing instructional videos on maxillary sinus elevation and laying the foundation for robust standards.
    DOI:  https://doi.org/10.1097/SCS.0000000000010169
  27. Acta Neurol Belg. 2024 May 06.
      INTRODUCTION: The purpose of this study was to evaluate YouTube videos on meralgia paresthetica (MP) for reliability, quality, and differences between quality levels.
    METHODS: We analyzed 59 videos related to MP. We evaluated several video characteristics, including views, likes, dislikes, duration, and speaker profile. We used view ratio, like ratio, Video Power Index (VPI), Global Quality Scale (GQS), JAMA criteria, and modified DISCERN (mDISCERN) to assess viewer engagement, popularity, educational quality, and reliability.
    RESULTS: The videos received a total of 4,009,141 views (average 67,951.54), with 25.4% focused on exercise training and 23.7% focused on disease information. Mean scores were mDISCERN 2.4, GQS 2.8, and JAMA 2.1. Physician-led videos had higher mDISCERN scores, while videos led by allied health workers had more views, likes, and dislikes, and higher view ratios and VPI. Poor- and high-quality videos differed in views, likes, view ratio, VPI, and duration. Positive correlations existed among mDISCERN, JAMA, and GQS scores, with video duration positively correlated with GQS.
    CONCLUSION: The content of YouTube videos discussing diseases significantly influences viewer engagement and popularity. To enhance the availability of valuable content on YouTube, which lacks a peer review process, medical professionals must contribute high-quality educational materials tailored to their target audience.
    Keywords:  Lateral femoral cutaneous nerve entrapment; Meralgia paresthetica; Reliability; Video Power Index
    DOI:  https://doi.org/10.1007/s13760-024-02567-0
  28. Cureus. 2024 Apr;16(4): e57887
      Background This study aimed to assess the reliability, quality, and content of the information provided by YouTube™ videos on oral health during pregnancy to reveal the effectiveness of the videos for patients.
    Methodology This cross-sectional study was conducted by two experienced dental specialists. They initiated the study by searching for YouTube™ videos using the keyword 'pregnancy oral health'. The videos were then assessed based on various parameters, including origin, type, number of days since upload, duration, number of views, number of likes and dislikes, and number of comments. The specialists also calculated the interaction index and viewing rate. The reliability and quality of the videos were evaluated using the global quality scale (GQS) and modified DISCERN (mDISCERN) scales, while the content was assessed with a tailor-made comprehensiveness index. The data were analyzed with the Shapiro-Wilk, Kruskal-Wallis, post-hoc Bonferroni, and Fisher's exact tests. The significance level was set at P < 0.05.
    Results After initially reviewing 224 videos, 129 were included in the study. Health professionals were the publishers of most videos. A statistically significant positive correlation was found between content scores and video duration, number of comments, interaction index, and total mDISCERN scores (p<0.05) (r=0.445, r=0.186, r=0.552, r=0.241, r=0.200, r=0.681, respectively). Statistically significant associations were found between GQS scores and video duration, number of comments, and total mDISCERN scores (p<0.05) (r=0.510, r=0.225, r=0.156, r=0.768, respectively). Statistically significant relationships were identified between the total content score, video source, and GQS (p<0.05). According to the total content score, 57.4% of the videos had a score of 2, 35.7% had a score of 1, and only 7% had a score of 0.
    Conclusions This study's findings underscore the significant variability in the scientific accuracy, content, and quality of health information on the Internet, particularly on YouTube™. It reveals that, while there are videos that provide rich content and high-quality information, there are also poor-quality and inadequate videos that may mislead patients. Health professionals should be aware of misinformation found on YouTube™ and ensure that patients always have access to accurate and reliable information.
    Keywords:  e-health; gingivitis; oral health; pregnancy; youtube
    DOI:  https://doi.org/10.7759/cureus.57887
  29. Healthcare (Basel). 2024 Apr 26. pii: 897. [Epub ahead of print] 12(9):
      Effective public health interventions rely on understanding how individuals access, interpret, and utilise health information. Studying the health information-seeking behaviour (HISB) of a community can provide valuable insights to inform strategies that address community health needs and challenges. This study explored the online HISBs of People of African Descent (PoAD) in the United Kingdom (UK), a demographic that comprises four percent of the UK population and has a 92.8% active Internet usage rate. Data on the HISB were collected from 21 PoAD across various UK regions through online semi-structured interviews before being analysed using reflexive Thematic Analysis (TA). The participants ranged in age from 20 to 70 years and had a mean age of 42.8 years (SD 11.4). Our analysis of the interview transcripts revealed five key themes: Internet usage and preferences, attitudes toward social media, barriers to seeking health information online, trust in online health information, and cultural influences on online HISB. Our findings indicate a proactive engagement among PoAD in seeking health information online that is underscored by a preference for professional sources over ethnic congruence. However, concerns about misinformation exist, and there are barriers to accessing health information online, including data privacy, unreliable information, and information relevance and overload. We also found that cultural factors and traditional beliefs impact the adoption of Internet-based interventions among PoAD, highlighting the need for culturally sensitive approaches. Preferences regarding the frequency and delivery of online health information varied among participants, with a majority preferring a weekly update. This study emphasises the critical need for accessible, culturally appropriate, secure, and reliable online health resources tailored to the needs and preferences of the PoAD.
    Keywords:  cultural sensitivity; culturally tailored interventions; health information-seeking behaviour; internet-based intervention; online health; people of African descent
    DOI:  https://doi.org/10.3390/healthcare12090897
  30. JSES Rev Rep Tech. 2024 May;4(2): 175-181
      Background: Management of acromioclavicular (AC) joint injuries has been an ongoing source of debate, with over 150 variations of surgery described in the literature. Without a consensus on surgical technique, patients are seeking answers to common questions through internet resources. This study investigates the most common online patient questions pertaining to AC joint injuries and the quality of the websites providing information.
    Hypothesis: (1) Question topics will pertain to surgical indications, pain management, and success of surgery; and (2) the quality and transparency of online information are largely heterogeneous.
    Methods: Three AC joint search queries were entered into the Google Web Search. Questions under the "People also ask" tab were expanded in order and 100 results for each query were included (300 total). Questions were categorized based on Rothwell's classification. Websites were categorized by source. Website quality was evaluated by the Journal of the American Medical Association (JAMA) Benchmark Criteria.
    Results: Most questions fell into the Rothwell Fact category (48.0%). The most common question topics were surgical indications (28.0%), timeline of recovery (13.0%), and diagnosis/evaluation (12.0%). The least common question topics were anatomy/function (3.3%), evaluation of surgery (3.3%), injury comparison (1.0%), and cost (1.0%). The most common websites were medical practice (44.0%), academic (22.3%), and single-surgeon personal (12.3%). The average JAMA score for all websites was 1.0 ± 1.3. Government websites had the highest JAMA score (4.0 ± 0.0) and constituted 45.8% of all websites with a score of 4/4. PubMed articles constituted 63.6% (7/11) of government websites. Comparatively, medical practice websites had the lowest JAMA score (0.3 ± 0.7, range [0-3]).
    Conclusion: Online patient AC joint injury questions pertain to surgical indications, timeline of recovery, and diagnosis/evaluation. Government websites and PubMed articles provide the highest-quality sources of reliable, up-to-date information but constitute the smallest proportion of resources. In contrast, medical practice websites are the most visited yet recorded the lowest quality scores. Physicians should utilize this information to answer frequently asked questions, guide patient expectations, and help provide and identify reliable online resources.
    Keywords:  AC joint; Acromioclavicular joint; Orthopedic; Rockwood; Shoulder; Sports; Trauma
    DOI:  https://doi.org/10.1016/j.xrrt.2024.02.001
  31. Digit Health. 2024 Jan-Dec; 10: 20552076241253473
      Objective: As the demand and supply sides of popular health services increasingly rely on the Internet, mastering e-health literacy should become an essential skill for older adults. The aim of this article is to analyse the effects of Internet health information usage habits on older adults' e-health literacy and to investigate the influencing mechanisms.
    Methods: Using a combination of random sampling and convenience sampling, data were collected through questionnaire surveys. Data from 776 older adults were analysed using correlation and hierarchical regression.
    Results: The mean scores for all aspects of older adults' habits of using health information on the Internet and electronic health literacy were relatively high. There was no statistically significant difference in the predictive power of the three aspects of electronic health literacy among older adults with different genders, health statuses, education levels and ages (p > 0.05). The four factors of older adults' habits of using Internet health information can increase the explanatory power of application ability, judgment ability and decision-making ability in Model 2 by 53.7%, 46.2% and 57%, respectively, with statistical significance (p < 0.001).
    Conclusion: The better the habits of older adults in using health information on the Internet, the higher their level of electronic health literacy. Families, communities and social groups should help older adults use online health resources to improve their e-health literacy. Older adults can use WeChat or other interpersonal information platforms to share online health information with each other.
    Keywords:  Habit of using internet health information; application ability; decision-making ability; e-Health; judgment ability
    DOI:  https://doi.org/10.1177/20552076241253473
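    The hierarchical-regression logic described above (the added explanatory power of the habit factors over a baseline model) can be sketched as below; the variables and data are simulated assumptions, not the study's dataset.
      import numpy as np
      import statsmodels.api as sm

      rng = np.random.default_rng(0)
      n = 776
      demographics = rng.normal(size=(n, 2))  # e.g. age, education
      habits = rng.normal(size=(n, 4))        # four habit factors
      y = habits @ np.array([0.4, 0.3, 0.2, 0.1]) + rng.normal(size=n)

      base = sm.OLS(y, sm.add_constant(demographics)).fit()                       # Model 1
      full = sm.OLS(y, sm.add_constant(np.hstack([demographics, habits]))).fit()  # Model 2
      print(f"R-squared change from adding habit factors: {full.rsquared - base.rsquared:.3f}")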
  32. J Orthod. 2024 May 08. 14653125241249494
      OBJECTIVE: To explore how orthognathic patients seek information during decision-making.
    DESIGN: Qualitative, cross-sectional study.
    SETTING: A hospital in Cumbria, UK.
    PARTICIPANTS: Prospective orthognathic patients.
    METHODS: Participants were purposively recruited from joint orthognathic clinics after the original consultation. Semi-structured interviews were conducted via remote video call with nine participants aged 18-30 years. Data collection and reflexive thematic analysis occurred in parallel until thematic saturation was achieved.
    RESULTS: The central finding of this research was that patients were making informed decisions about orthognathic surgery. Four themes supported this central finding: (1) selective engagement with orthognathic information sources; (2) the central role of patient-specific information from professionals and peers; (3) Internet use to supplement standard information resources; and (4) concerns over information found online. The preferred source of information was verbal information from the clinical team, as it was trusted and person-specific. Past patients were identified as valued sources of information, and establishing contact through digital social media networks was found to be a convenient alternative to face-to-face contact. Online information was valued, but concerns included information overload, difficulty establishing applicability, and doubts about credibility.
    CONCLUSION: Orthognathic patients were making informed decisions about their treatment. This study highlights the central role of the patient-clinician interaction in decision-making, especially in providing patient-specific information. Insight into the nuances of information-seeking behaviours will better inform clinical care. Since patients frequently access online information that is decision-relevant, encouraging patients to discuss online searches will support the shared decision-making process and alleviate any concerns with information found. Explaining the purpose of an information aid during the consultation, rather than expecting patients to read it separately, may further enhance its usefulness in decision-making. This study identified an unmet need for visual aids, such as real-time images of postoperative recovery. These findings can inform the design of future information resources.
    Keywords:  consent; decision-making; orthognathic; patient information
    DOI:  https://doi.org/10.1177/14653125241249494
  33. J Med Internet Res. 2024 May 08. 26: e49928
      BACKGROUND: Alpha-gal syndrome is an emerging allergy characterized by an immune reaction to the carbohydrate molecule alpha-gal found in red meat. This unique food allergy is likely triggered by a tick bite. Cases of the allergy are on the rise, but prevalence estimates do not currently exist. Furthermore, varying symptoms and limited awareness of the allergy among health care providers contribute to delayed diagnosis, leading individuals to seek out their own information and potentially self-diagnose.
    OBJECTIVE: The study aimed to (1) describe the volume and patterns of information-seeking related to alpha-gal, (2) explore correlations between alpha-gal and lone star ticks, and (3) identify specific areas of interest that individuals are searching for in relation to alpha-gal.
    METHODS: Google Trends Supercharged-Glimpse, a new extension of Google Trends, provides estimates of the absolute volume of searches and related search queries. This extension was used to assess trends in searches for alpha-gal and lone star ticks (lone star tick, alpha gal, and meat allergy, as well as food allergy for comparison) in the United States. Time series analyses were used to examine search volume trends over time, and Spearman correlation matrices and choropleth maps were used to explore geographic and temporal correlations between alpha-gal and lone star tick searches. Content analysis was performed on related search queries to identify themes and subcategories that are of interest to information seekers.
    RESULTS: Time series analysis revealed a rapidly increasing trend in search volumes for alpha-gal beginning in 2015. After adjusting for long-term trends, seasonal trends, and media coverage, from 2015 to 2022, the predicted adjusted average annual percent change in search volume for alpha-gal was 33.78%. The estimated overall change in average search volume was 627%. In comparison, the average annual percent change was 9.23% for lone star tick, 7.34% for meat allergy, and 2.45% for food allergy during this time. Geographic analysis showed strong significant correlations between alpha-gal and lone star tick searches especially in recent years (ρ=0.80; P<.001), with primary overlap and highest search rates found in the southeastern region of the United States. Content analysis identified 10 themes of primary interest: diet, diagnosis or testing, treatment, medications or contraindications of medications, symptoms, tick related, specific sources of information and locations, general education information, alternative words for alpha-gal, and unrelated or other.
    CONCLUSIONS: The study provides insights into the changing information-seeking patterns for alpha-gal, indicating growing awareness and interest. Alpha-gal search volume is increasing at a rapid rate. Understanding specific questions and concerns can help health care providers and public health educators to tailor communication strategies. The Google Trends Supercharged-Glimpse tool offers enhanced features for analyzing information-seeking behavior and can be valuable for infodemiology research. Further research is needed to explore the evolving prevalence and impact of alpha-gal syndrome.
    Keywords:  Google Trends; allergic; allergy; alpha gal; alpha-gal; alpha-gal syndrome; content analysis; geographic; immune; immunological; immunology; infodemiology; information behavior; information behaviour; information seeking; lone star tick; time series
    DOI:  https://doi.org/10.2196/49928
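    For readers curious about the "average annual percent change" statistic, one standard way to estimate it is from the slope of a log-linear trend, as sketched below; the search volumes are invented, not Glimpse data.
      import numpy as np

      years = np.arange(2015, 2023)
      volumes = np.array([1.0, 1.4, 1.9, 2.5, 3.4, 4.6, 6.1, 8.2])  # index units

      slope, _ = np.polyfit(years, np.log(volumes), 1)  # ln(volume) = a + b*year
      aapc = (np.exp(slope) - 1) * 100
      print(f"AAPC = {aapc:.1f}% per year")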
  34. Genet Med. 2024 May 07. pii: S1098-3600(24)00095-9. [Epub ahead of print] 101161
      
    Keywords:  ACMG; Mastermind; VUS; variant classification
    DOI:  https://doi.org/10.1016/j.gim.2024.101161