bims-librar Biomed News
on Biomedical librarianship
Issue of 2025-11-16
39 papers selected by
Thomas Krichel, Open Library Society



  1. Sci Rep. 2025 Nov 12. 15(1): 39619
      Smart, modern libraries need robust, sophisticated systems to maximize space use. Traditional library designs are built on fixed spatial arrangements, making it difficult to manage dynamic user demands and leading to overcrowded study zones, poor navigation, and wasteful storage. Because current designs ignore real-time fluctuations in user behavior, space is underutilized in some areas while high-traffic areas become overcrowded. This paper introduces a Reinforcement Learning Maximize Space Utilization (RLMSU) methodology for managing dynamic space in modern libraries. The RLMSU platform collects data from IoT sensors, historical usage patterns, and computer vision to improve bookshelf layout, seating, and navigation pathways. Within the RL method, an agent-action (AA) paradigm predicts user occupancy and movement, optimizing space allocation and resource use while maintaining accessibility. Under the AA principle, every agent's actions are rewarded using feedback gathered from the library environment. The RLMSU framework is developed in Python and evaluated on the Full Library Services Dataset. Effective use of the AA idea increases seating availability by 30% and reduces congestion by 25%. The system performs well in dynamic scenarios and improves user satisfaction during peak and off-peak hours. The approach has been integrated at research institutions, university libraries, and public libraries to improve space use and operational efficiency.
    Keywords:  Action-agent; User satisfaction; Operational efficiency; Reinforcement learning; Seat availability; Smart libraries; Space utilization
    DOI:  https://doi.org/10.1038/s41598-025-23218-1
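    The abstract names the agent-action (AA) reward loop but not a concrete algorithm. A minimal tabular Q-learning sketch of such a loop follows; the zone states, action set, and reward function are hypothetical stand-ins for illustration, not the authors' design.

```python
import random
from collections import defaultdict

# Hypothetical sketch of an agent-action reward loop for library space
# allocation; states, actions, and rewards are illustrative assumptions.
ZONES = ["quiet", "group", "computer"]
ACTIONS = ["open_overflow", "close_overflow", "no_op"]
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1  # learning rate, discount, exploration

Q = defaultdict(float)  # Q[(state, action)] -> estimated return

def observe_state(occupancy):
    # Discretize per-zone occupancy fractions (0..1) into low/med/high.
    return tuple(min(int(o * 3), 2) for o in occupancy)

def reward(occupancy):
    # Penalize congestion above 90% occupancy; reward balanced utilization.
    return min(occupancy) - sum(max(0.0, o - 0.9) for o in occupancy)

def choose_action(state):
    # Epsilon-greedy selection over the Q-table.
    if random.random() < EPSILON:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: Q[(state, a)])

def update(state, action, r, next_state):
    # Standard one-step Q-learning backup.
    best_next = max(Q[(next_state, a)] for a in ACTIONS)
    Q[(state, action)] += ALPHA * (r + GAMMA * best_next - Q[(state, action)])
```

    In a deployment like the one described, the occupancy vector would come from the IoT and computer vision feeds, and the actions would drive seating and overflow-zone reconfiguration.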
  2. J Med Libr Assoc. 2025 Oct 23. 113(4): 349-357
       Background: New York State (NYS) residents living in rural communities experience multiple barriers to accessing healthcare. Telehealth, or remote provision of healthcare services, could address these barriers. However, telehealth remains underutilized in rural communities due to limited access to broadband and lack of provider/patient awareness. Rural libraries could serve as telehealth hubs and thereby increase telehealth uptake.
    Case Presentation: A community-academic partnership was formed between the University of Rochester Wilmot Cancer Institute and the Community Cancer Action Council, a group of 29 community stakeholders. The partnership surveyed libraries across NYS to assess telehealth capacity. After identifying a library to pilot a telehealth hub, surveys were sent to that library's patrons and staff to assess perspectives on telehealth. Fifty-three libraries (19.4%) responded to the initial survey, 92.2% of whom felt libraries could beneficially host telehealth hubs. The Macedon Public Library was chosen as the pilot location because it had constructed a private telehealth booth. Of 48 Macedon community members surveyed, 60% indicated they would use telehealth in the library, while 89% of 9 Macedon library staff agreed they were committed to implementing telehealth services.
    Conclusions: We found high community interest in establishing a community telehealth hub in a library. In the next phase of the project, the community-academic partnership will promote use of telehealth to oncology providers.
    Keywords:  Community Engagement; Disparities; Libraries; Rural; Telehealth
    DOI:  https://doi.org/10.5195/jmla.2025.2132
  3. J Med Libr Assoc. 2025 Oct 23. 113(4): 358-365
       Background: Medical educators are increasingly aware of the need for patient-centred and inclusive curricula. Collaboration paired with sound evidence can facilitate efforts in this area. Librarians are well-equipped to help move this work forward, as their skills and expertise can support educators through the process of revising learning materials that will incorporate timely and socially accountable information.
    Case Presentation: This case report describes an initiative at one Canadian medical school, whereby a health sciences librarian joined an interdisciplinary working group to support the updating of case-based learning materials for the undergraduate medical curriculum. These materials were revised with an anti-oppressive and patient-centred lens. As an embedded member of the working group, the librarian provided on-demand literature searches, participated in conversations about the importance of critical appraisal skills, and consulted on sustainable access to electronic materials used in the cases. From this experience and close collaboration, the librarian gained lessons that enhanced their practice and built stronger relationships.
    Conclusions: Involving librarians' expertise in updating learning materials provides many benefits to curriculum developers and presents opportunities for liaison librarians to engage with their faculties more closely. Promoting patient-centredness and inclusivity is an ongoing process, and academic health sciences librarians can apply their expertise to curricular initiatives such as the one described here, while librarians working in clinical settings can support these efforts through specialized forms of teaching and outreach.
    Keywords:  Academic librarianship; Case-Based Learning; EDIA; Medical education; Patient-Centred Care
    DOI:  https://doi.org/10.5195/jmla.2025.2106
  4. Cochrane Database Syst Rev. 2025 Nov 12. 11 CD015679
       OBJECTIVES: This is a protocol for a Cochrane Review (methodology). The objectives are as follows: To assess the effectiveness and resource requirements of supplementary search methods compared with bibliographic database searching for identifying studies and study reports. The supplementary search methods we will consider are: citation searching; contacting study authors; handsearching; regulatory agency sources and clinical study reports; clinical trials registries; web searching.
    DOI:  https://doi.org/10.1002/14651858.CD015679
  5. J Med Libr Assoc. 2025 Oct 23. 113(4): 327-335
       Objective: Predatory journal articles do not undergo rigorous peer review and so their quality is potentially lower. Citing them disseminates the unreliable data they may contain and may undermine the integrity of science. Using citation analysis techniques, this study investigates the influence of predatory journals in the health sciences.
    Methods: The twenty-six journals in the "Medical Sciences" category of a known predatory publisher were selected. The number of articles published by these journals was recorded based on the information from their websites. The "Cited References" search function in Web of Science was used to retrieve citation data for these journals.
    Results: Of the 3,671 articles published in these predatory journals, 1,151 (31.4%) were cited at least once by 3,613 articles indexed in Web of Science. The number of articles citing articles published in predatory journals increased significantly, from 64 in 2014 to 665 in 2022, a roughly tenfold increase in nine years. The citing articles were published by researchers from all over the world (from high-, middle-, and lower-income countries) and in the journals of traditional and open access publishers. Forty-three percent (1,560/3,613) of the citing articles were supported by research funds.
    Conclusions: The content from articles published in predatory journals has infiltrated reputable health sciences journals to a substantial extent. It is crucial to develop strategies to prevent citing such articles.
    Keywords:  Predatory journals; Web of Science; citation analysis; health sciences
    DOI:  https://doi.org/10.5195/jmla.2025.2024
  6. J Med Libr Assoc. 2025 Oct 23. 113(4): 318-326
       Objective: This case study identifies the presence and prevalence of precision indexing errors in a subset of automatically indexed MEDLINE records in PubMed (specifically, all MEDLINE records automatically indexed with the MeSH term Malus, the genus name for apple trees). In short, how well does automatic indexing compare [figurative] apples to [literal] apples?
    Methods: 1,705 MEDLINE records automatically indexed with the MeSH term Malus underwent title/abstract and full text screening to determine whether they were correctly indexed (i.e., the records were about Malus, meaning they discussed the literal fruit or tree) or incorrectly indexed (i.e., they were not about Malus, meaning they did not discuss the literal fruit or tree). The context and type of indexing error were documented for each erroneously indexed record.
    Results: 135 (7.9%) records were incorrectly indexed with the MeSH term Malus. The most common indexing error was due to the word "apple" being used in similes, metaphors, and idioms (80, or 59.2%), with the next most common error being due to "apple" being present in a name or term (50, or 37%). Additional indexing errors were attributed to the use of "apple" in acronyms, and, in one case, a reference to Sir Isaac Newton.
    Conclusion: As this study's findings indicate, automatic indexing can commit errors when indexing records whose titles or abstracts contain words with non-literal or alternative meanings. Librarians should be mindful of the existence of automatic indexing errors and instruct authors on how best to ameliorate their effects within their own manuscripts.
    Keywords:  Abstract and Indexing; MEDLINE; Medical subject heading; PubMed; automatic indexing
    DOI:  https://doi.org/10.5195/jmla.2025.2110
  7. J Med Libr Assoc. 2025 Oct 23. 113(4): 383-386
      This article briefly documents the history and significance of PubMed Central (PMC) Journal Backfiles Digitization, 2004-2024, to raise awareness of this open access project among researchers, who will find much in it to advance understanding of the human condition across time and place. The success of PMC Journal Backfiles Digitization, including the interdisciplinary teamwork and partnerships underpinning it, provides a blueprint for future efforts to make the globally appreciated collections of the National Library of Medicine (NLM) accessible to all. By continuing to prioritize open access, teamwork, and partnerships, NLM and like-minded institutions can ensure that knowledge and data inform the advancement of medicine and public health.
    Keywords:  Articles; Biomedical journals; Digitization; Historical Medical Archive; PubMed Central
    DOI:  https://doi.org/10.5195/jmla.2025.2235
  8. J Med Libr Assoc. 2025 Oct 23. 113(4): 336-341
       Background: The Early Career Librarians Initiative (ECLI) of the South Central Chapter of the Medical Library Association offered a webinar series that addressed topics of interest to new professionals, such as networking, goal setting, and salary negotiation. Additionally, the ECLI assessed participant feedback on the series through a program evaluation survey.
    Case Presentation: ECLI partnered with the Network of the National Library of Medicine (NNLM), Region 3, to offer six webinars over the course of two years. Attendees were asked to complete a survey. Quantitative results were analyzed, and qualitative free-text responses were thematically coded. A total of 567 people attended the webinars, and 154 completed the survey. The major themes that emerged as the most useful aspects of the webinar series included practical tips, encouragement, and real-life experience.
    Conclusion: Early career librarians often feel overwhelmed and are interested in guidance on career planning and on building professional soft skills. The high attendance and positive evaluation feedback for this webinar series demonstrate the value of accessible online professional development opportunities for early career and transitioning librarians, offering information and support in key areas of need.
    Keywords:  Career development; ECLI; continuing education; early career librarian; emerging librarian; health science librarian; information overload; library professional; medical librarian; onboarding overload; professional development; sense of community; transitioning librarian
    DOI:  https://doi.org/10.5195/jmla.2025.2071
  9. J Med Libr Assoc. 2025 Oct 23. 113(4): 298-309
       Objective: Nurses must evaluate and sift through large quantities of information of varying quality as part of patient care. This study sought to determine nurses' evaluation criteria when encountering health information, including consumer health information written for the general public and scholarly sources, such as journal articles.
    Methods: We employed a mixed-methods approach with a survey and follow-up individual interviews. In both the survey and interviews, nurses were asked to evaluate information written for the general public or a scholarly audience. Interviewees were encouraged to think aloud to elucidate their criteria. We analyzed data using descriptive statistics and inductive thematic analysis.
    Results: Criteria used for both consumer and scholarly information were similar, with accuracy, relevance, authority, purpose, and currency as the most frequently reported. Nurses often relied on easily identifiable characteristics, such as where information came from, funding sources, intended audience, or its concordance with their prior knowledge. Nurses demonstrated awareness of the need to evaluate methodology in studies, especially empirical studies, for accuracy and relevance. However, they were less likely to evaluate methodology in review articles.
    Conclusions: Nurses value accurate, relevant information; however, their evaluation criteria are often superficial. Educators should encourage nursing students to engage more deeply with the nuances of evaluation. While many nurses pointed to research and peer review as evidence of accuracy, fewer demonstrated a deeper understanding of how to evaluate particular research methodologies, such as systematic reviews.
    Keywords:  Critical Appraisal; Evaluation Criteria; Evidence-Based Nursing; Evidence-Based Practice; Information Literacy; Nurses
    DOI:  https://doi.org/10.5195/jmla.2025.2163
  10. J Med Libr Assoc. 2025 Oct 23. 113(4): 269-280
       Objectives: To identify the most frequently-observed forms of cognitive bias among Health Information Professionals (HIPs) during decision-making processes. To determine if number of years in the profession influences the types of cognitive biases perceived in others' decisions.
    Method: This cross-sectional study invited participation of 498 elected and appointed leaders at the national, caucus, and chapter levels of the Medical Library Association. The 149 participants (32%) were presented with 24 cognitive biases often associated with expected decision-making contexts among HIPs.
    Results: The most frequently observed forms of cognitive bias in decision-making situations were: Status Quo, Sunk Costs, Novelty, Professionology, Authority, Worst-Case Scenario, and Group Think. Four of these overlapped with a previous 2007 study. Results were analyzed by length of time in the profession. Four forms of cognitive bias showed statistically significant differences in frequency by years in the profession: Authority, Naïve Realism, Overconfidence, and Status Quo.
    Discussion: This study identified commonly observed cognitive biases that disrupt decision-making processes; these results provide a first step toward developing solutions. Mitigation strategies for the seven most commonly identified forms of cognitive bias are explored, along with more general recommendations for all forms of cognitive bias. This study should guide the profession on the forms of cognitive bias most commonly perceived in decision-making environments, with an eye toward possible mitigation strategies.
    Keywords:  Cognitive Bias; Decision Making; Evidence Based Practice; Group Processes; Health Information Professionals; Health Sciences Librarianship; Informaticists; Leadership; Medical Library Association
    DOI:  https://doi.org/10.5195/jmla.2025.2209
  11. J Med Libr Assoc. 2025 Oct 23. 113(4): 366-373
       Background: This case report describes the integration of a capstone Evidence-Based Medicine (EBM) assignment into a first-year medical student curriculum and presents an analysis of the correlation between search strategy quality and article selection quality within that assignment.
    Case Presentation: A whole-task EBM assignment, requiring students to address a clinical scenario by completing all EBM steps, was implemented after a curriculum-integrated EBM course. Student performance on their search strategy and article selection was assessed using a rubric (1-4 scale). Spearman's rank correlation coefficient was used to assess the relationship between these two variables. Eighty-two students completed the assignment. Fifty-nine percent received a score of 3 for their search strategy, while 77% received a score of 4 for article selection. Spearman's rank correlation coefficient was 0.19 (p-value = 0.086).
    Conclusions: While a weak, non-statistically significant correlation was observed between search quality and article selection, the analysis revealed patterns that may inform future instructional design. Educators should consider emphasizing the importance of selecting up-to-date and high-quality evidence and addressing common search errors. Further research, incorporating direct observation and baseline assessments, is needed to draw more definitive conclusions.
    Keywords:  Assessment; Curriculum; Evidence-Based Medicine; Librarians; Medical Students
    DOI:  https://doi.org/10.5195/jmla.2025.2213
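    The statistic reported above, Spearman's rank correlation between search strategy and article selection scores, can be computed with SciPy. The rubric scores below are illustrative placeholders, not the study's data.

```python
from scipy.stats import spearmanr

# Hypothetical rubric scores (1-4 scale) for ten students; placeholders only.
search_scores = [3, 2, 4, 3, 3, 4, 2, 3, 4, 3]
article_scores = [4, 4, 4, 3, 4, 4, 3, 4, 4, 4]

# Spearman's rho is rank-based, matching the ordinal rubric used in the study.
rho, p_value = spearmanr(search_scores, article_scores)
print(f"Spearman rho = {rho:.2f}, p = {p_value:.3f}")
```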
  12. J Med Libr Assoc. 2025 Oct 23. 113(4): 342-348
       Background: Systematic reviews are increasingly appearing in doctoral theses and being supported by librarians. There is, however, evidence that students struggle to undertake systematic reviews.
    Case Presentation: We sought to understand trainees' perspectives on, and confidence in utilising, systematic review search methods following an online escape room teaching intervention delivered as part of our in-person orientation session for Doctorate in Clinical Psychology trainees. Following the session, trainees were invited to participate in an online survey, to which we received a 90% response rate (n=35). Most trainees enjoyed the escape room, with many using the words "fun" and "engaging" to describe the intervention, even though many participants found it difficult. The average scores for confidence in utilising search syntax were positive, but the scores ranged widely. Many of the trainees' comments centred on the time pressure to escape. We believe that allowing the trainees more time would increase their enjoyment of the game and aid their learning.
    Conclusion: Our systematic review escape room demonstrates that key methodological concepts and search skills can be taught in an active, fun, and engaging way that helps introduce and scaffold learning for later in-depth teaching.
    Keywords:  Active learning; Escape Rooms; Research students; Search skills; Systematic reviews
    DOI:  https://doi.org/10.5195/jmla.2025.2167
  13. J Med Libr Assoc. 2025 Oct 23. 113(4): 290-297
       Objective: Research published in languages other than English (LOTE) is often ignored in evidence syntheses, marginalising diverse knowledge and global perspectives. While the extent of LOTE inclusion, and authors' attitudes toward it, has been well characterised for systematic reviews, LOTE inclusion in other forms of evidence synthesis has yet to be explored. Scoping reviews, in comparison to systematic reviews, examine a broader range of sources to build a conceptual summary of a field of inquiry, making LOTE literature an important source of information for scoping review authors. This study therefore aimed to characterise the current state of LOTE inclusion intentions in scoping reviews.
    Methods: Peer-reviewed, PubMed indexed scoping review protocols published from 01-Jan-2024 to 11-Aug-2024 were analysed for LOTE inclusion. Author affiliation, which LOTEs (if any) were included, and what methods authors planned to use to read LOTE literature were recorded.
    Results: Overall, LOTE inclusion intentions and attitudes were diverse, with just under half of the 249 protocols analysed including a LOTE. Many LOTE-inclusive protocols relied on the authorship team's own LOTE proficiency to gather evidence, and machine translation was planned in one quarter of them. Only 30% of the LOTE-excluding protocols planned to exclude LOTEs at the screening stage, which would allow readers to identify the number of excluded LOTE articles.
    Conclusion: This analysis demonstrates the need for increased LOTE inclusion and reporting guidelines for scoping reviews, as well as the importance of analysing LOTE inclusion for other forms of evidence synthesis.
    Keywords:  Evidence Synthesis; Language Bias; Scoping Review
    DOI:  https://doi.org/10.5195/jmla.2025.2170
  14. Cureus. 2025 Oct;17(10): e93984
       INTRODUCTION: Accurate and up-to-date educational resources are vital for medical professionals treating pneumonia to ensure alignment with evolving clinical guidelines, improve diagnostic precision, and support effective, evidence-based care that enhances patient outcomes.
    METHODS: The readability of pneumonia-related content generated by ChatGPT and published on UpToDate was compared using the Flesch-Kincaid Reading Ease and Grade Level metrics via an online calculator, evaluating parameters such as word and sentence counts, average sentence length, and proportion of difficult words. Data were compiled in Excel (Microsoft Corp., Redmond, WA, USA), and statistical tools were used for analysis, with significance set at a p-value < 0.05.
    RESULTS: In this limited-scope study, ChatGPT (OpenAI, San Francisco, CA, USA)-generated content on pneumonia was found to be significantly shorter and denser than UpToDate, with more complex vocabulary, though both sources showed comparable readability scores across standard metrics.
    CONCLUSION: This suggests ChatGPT may offer quicker, more accessible summaries, while UpToDate provides more balanced, clinically grounded content, highlighting the potential of a combined approach for effective medical education. The clinical accuracy of the AI-generated content was not reviewed by human experts, thus underlining the need for broader studies across diverse clinical topics with multiple reviewers.
    Keywords:  artificial intelligence; chatgpt; clinical decision support; educational content; medical education; pneumonia; uptodate
    DOI:  https://doi.org/10.7759/cureus.93984
  15. J Laparoendosc Adv Surg Tech A. 2025 Nov 04.
       Background: Smoking is associated with higher complication and recurrence rates in ventral and inguinal hernia repairs, but evidence is fragmented. This study evaluated the efficacy of AI-based large language models (LLMs) for identifying literature on the impact of smoking on hernia repairs.
    Methods: ChatGPT 4.0, ChatGPT 4o, Microsoft Copilot, and Google Gemini were instructed to search PubMed, Embase, and Scopus for retrospective/prospective studies and randomized controlled trials regarding smoking's effects on ventral and inguinal hernia repairs. The models' outputs were cross-checked against previous systematic reviews to assess accuracy.
    Results: The artificial intelligence (AI) tools generated 24 citations, of which only nine (37.5%) proved valid and relevant. Thirteen (54.2%) were fabricated references, and two (8.3%) cited studies that did not match the specified criteria. Additionally, the AIs identified two studies missed by previous systematic reviews but overlooked 35 (79.5%) recognized by those reviews.
    Conclusions: Although LLMs can quickly compile potentially relevant references, they are prone to fabricating or omitting crucial studies. Human verification remains essential for conducting reliable, comprehensive literature searches in systematic reviews and meta-analyses.
    Keywords:  artificial intelligence; hernia repair; large language models; smoking
    DOI:  https://doi.org/10.1177/10926429251393122
  16. J Adolesc Health. 2025 Nov 13. pii: S1054-139X(25)00443-4. [Epub ahead of print]
       PURPOSE: The study aims to evaluate the ability of ChatGPT-4 to generate reliable and accurate responses concerning exercise and rehabilitation strategies for adolescent patients with myositis.
    METHODS: Seventy frequently asked questions related to exercise and rehabilitation in adolescent myositis were developed and classified into 7 thematic categories. Information reliability was assessed using the modified DISCERN (mDISCERN) tool, quality was evaluated with the Global Quality Scale (GQS), accuracy was measured using a five-point Likert scale, and readability was assessed with the Flesch Reading Ease scale. Two independent physiotherapists with expertise in rheumatologic rehabilitation independently evaluated the responses.
    RESULTS: The mDISCERN scale scores ranged from 3 to 3.67, with a mean of 3.36. The GQS scores varied from 3.46 to 5.0, with a mean of 3.86. Accuracy scale scores ranged from 3.9 to 5.0, with a mean of 4.26. The Flesch Reading Ease scores ranged from 42.92 to 55.61, with a mean of 47.79. The intraclass correlation coefficients for the mDISCERN, GQS, and accuracy scales were 0.773, 0.712, and 0.710, respectively.
    DISCUSSION: This study emphasized that ChatGPT-4 responses regarding adolescent myositis are generally accurate, with moderate to good reliability. However, the high reading level, requiring college-level education, may limit accessibility. These limitations underscore the need for healthcare professionals to supervise exercise planning and for further development of domain-specific AI models with enhanced reliability and readability.
    Keywords:  Artificial intelligence; ChatGPT4; Chatbot; Education; Exercise; Patient information; Readability; Rheumatic diseases
    DOI:  https://doi.org/10.1016/j.jadohealth.2025.09.015
  17. Healthcare (Basel). 2025 Oct 23. 13(21): pii: 2670. [Epub ahead of print]
       BACKGROUND/OBJECTIVES: Rotator cuff (RC) tears are a leading cause of shoulder pain and disability. Artificial intelligence (AI)-based chatbots are increasingly applied in healthcare for diagnostic support and patient education, but the reliability, quality, and readability of their outputs remain uncertain. International guidelines (AMA, NIH, European health communication frameworks) recommend that patient materials be written at a 6th-8th grade reading level, yet most online and AI-generated content exceeds this threshold.
    METHODS: We compared responses from three AI chatbots, ChatGPT-4o (OpenAI), Gemini 1.5 Flash (Google), and DeepSeek-V3 (DeepSeek AI), to 20 frequently asked patient questions about RC tears. Four orthopedic surgeons independently rated reliability and usefulness (7-point Likert) and overall quality (5-point Global Quality Scale). Readability was assessed using six validated indices. Statistical analysis included Kruskal-Wallis and ANOVA with Bonferroni correction; inter-rater agreement was measured using intraclass correlation coefficients (ICCs).
    RESULTS: Inter-rater reliability was good to excellent (ICC 0.726-0.900). Gemini 1.5 Flash achieved the highest reliability and quality, ChatGPT-4o performed comparably but slightly lower in diagnostic content, and DeepSeek-V3 consistently scored lowest in reliability and quality but produced the most readable text (FKGL ≈ 6.5, within the 6th-8th grade target). None of the models reached a Flesch Reading Ease (FRE) score above 60, indicating that even the most readable outputs remained more complex than plain-language standards.
    CONCLUSIONS: Gemini 1.5 Flash and ChatGPT-4o generated more accurate and higher-quality responses, whereas DeepSeek-V3 provided more accessible content. No single model fully balanced accuracy and readability.
    CLINICAL IMPLICATIONS: Hybrid use of AI platforms, leveraging high-accuracy models alongside more readable outputs with clinician oversight, may optimize patient education by ensuring both accuracy and accessibility. Future work should assess real-world comprehension and address the legal, ethical, and generalizability challenges of AI-driven patient education.
    Keywords:  artificial intelligence; chatbots; digital health; health literacy; large language models; patient education; rotator cuff injuries
    DOI:  https://doi.org/10.3390/healthcare13212670
  18. J Med Syst. 2025 Nov 10. 49(1): 158
      The cost-effective, open-source artificial intelligence (AI) model DeepSeek-R1, developed in China, holds significant potential for healthcare applications. As a health education tool, it could help patients acquire health science knowledge and improve health literacy. Low back pain (LBP), the most common musculoskeletal problem globally, has seen increasing use of large language model (LLM)-based AI chatbots by patients to access health information, making it critical to examine the quality of such information. This study aimed to evaluate the response quality and readability of answers generated by DeepSeek-R1 to common patient questions about LBP. Ten questions were formulated using inductive methods based on literature analysis and Baidu Index data and were presented to DeepSeek-R1 on March 10, 2025. The evaluation spanned readability, understandability, actionability, clinician assessment, and reference assessment. Readability was measured using the Flesch-Kincaid Grade Level, Flesch Reading Ease Scale, Gunning Fog Index, Coleman-Liau Index, and Simple Measure of Gobbledygook (SMOG) Index. Understandability and actionability were assessed via the Patient Education Materials Assessment Tool for Printable Materials (PEMAT-P). Clinicians evaluated accuracy, completeness, and relevance. A reference evaluation tool was used to assess reference quality and the presence of hallucinations. Readability analysis indicated that DeepSeek's responses were overall "difficult to read", with Flesch-Kincaid Grade Level (mean 12.39, SD 1.91), Flesch Reading Ease Scale (mean 19.55, Q1 12.94, Q3 29.78), Gunning Fog Index (mean 13.95, SD 2.61), Coleman-Liau Index (mean 17.46, SD 2.30), and SMOG Index (mean 11.04, SD 1.37). PEMAT-P revealed good understandability but weak actionability. Consensus among five clinicians confirmed satisfactory accuracy, completeness, and relevance. The reference assessment identified 9 instances (14.8%) of hallucinated references, while reference support was rated as moderate, with most references sourced from authoritative platforms. Our study demonstrates the potential of DeepSeek-R1 for educational content for patients with LBP; it can serve as a supplement to patient education tools rather than a substitute for clinical judgment.
    Keywords:  Artificial intelligence; DeepSeek-R1; Large language models; Low back pain; Patient education; Readability assessment
    DOI:  https://doi.org/10.1007/s10916-025-02282-0
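    The five readability indices this study reports (and that recur in several neighboring entries) are implemented in the open-source Python package textstat; a minimal sketch, assuming textstat is installed, follows. The sample passage is invented for illustration.

```python
import textstat

# Invented sample passage standing in for a chatbot answer about low back pain.
text = (
    "Low back pain usually improves within a few weeks. "
    "Stay active, avoid prolonged bed rest, and see a clinician if the pain "
    "persists or is accompanied by numbness or weakness."
)

# The same five indices reported in the study above. The four grade-level
# indices estimate a US school grade; Flesch Reading Ease runs 0-100,
# with higher scores indicating easier text.
print("Flesch-Kincaid Grade Level:", textstat.flesch_kincaid_grade(text))
print("Flesch Reading Ease:", textstat.flesch_reading_ease(text))
print("Gunning Fog Index:", textstat.gunning_fog(text))
print("Coleman-Liau Index:", textstat.coleman_liau_index(text))
print("SMOG Index:", textstat.smog_index(text))
```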
  19. Clin Rheumatol. 2025 Nov 10.
       INTRODUCTION: The study assesses the quality, readability, reliability, and usefulness of exercise-related information generated by two large language models (LLMs), ChatGPT-4 and DeepSeek-V3, in response to frequently asked questions by patients with ankylosing spondylitis (AS).
    METHOD: This cross-sectional comparative study developed a structured assessment framework using a set of exercise and rehabilitation-related questions, distributed across four key domains: exercise and physical activity (C1; 33 items), posture and mobility (C2; 6 items), breathing and pulmonary health (C3; 6 items), and general topics (C4; 5 items). Information quality was assessed using the modified DISCERN (mDISCERN) tool, while content reliability was evaluated with the Reliability Score and perceived usefulness was measured using the Usefulness Score. Readability was assessed using the Flesch Reading Ease (FRE) scale. Three independent physiotherapists with expertise in rheumatologic rehabilitation independently evaluated the responses.
    RESULTS: In total score comparisons, DeepSeek-V3 achieved significantly higher scores than ChatGPT-4 on the mDISCERN (4(3-4) vs. 3(3-3); p < 0.001), reliability (5(5-6) vs. 5(4-5); p < 0.001), and usefulness (6(5-6) vs. 5(5-6); p < 0.001). Domain-specific analysis showed higher usefulness scores for DeepSeek-V3 in C1 (p = 0.004), C2 (p = 0.019), and C4 (p = 0.005). Mean FRE scores were 30.4 ± 14.37 for ChatGPT-4 and 28.77 ± 17.77 for DeepSeek-V3, both classified as very difficult (p > 0.05).
    CONCLUSION: This study highlighted that responses generated by DeepSeek-V3 related to AS were generally more accurate and more reliable than those produced by ChatGPT-4. However, the complex language used by both LLMs may reduce accessibility for patients with limited health literacy. These limitations highlight the importance of healthcare professional oversight in exercise planning.
    Key Points:
    • DeepSeek-V3 provided more accurate and reliable responses than ChatGPT-4 regarding exercise in AS.
    • Domain-specific analysis showed DeepSeek-V3 was particularly more useful in exercise, posture, and general topics.
    • Both LLMs generated content with very difficult readability, requiring college-level comprehension.
    • Healthcare professional supervision is essential when using LLMs in patient education.
    Keywords:  Artificial intelligence; Chatbot; Exercise; Patient information; Readability; Rheumatic diseases
    DOI:  https://doi.org/10.1007/s10067-025-07789-y
  20. J Clin Med. 2025 Nov 03. 14(21): pii: 7804. [Epub ahead of print]
      Background/Objectives: Artificial Intelligence (AI)-based chatbots such as ChatGPT are easily available and are quickly becoming a source of information for patients as opposed to traditional Google searches. We assessed the quality of information on bladder cancer provided by various AI chatbots: ChatGPT 4o, Google Gemini 2.0 Flash, Grok 3, Claude Sonnet 3.7, and DeepSeek R1. Their responses were analysed in terms of readability indices, and two consultant urologists rated the quality of information provided using the validated DISCERN tool.
    Methods: The top 10 most frequently asked questions about bladder cancer were identified using Google Trends. These questions were then provided to five different AI chatbots, and their responses were collected. No prompts were used, reflecting natural language queries that patients would use. The responses were analysed for readability using five validated indices: Flesch Reading Ease (FRE), the Flesch-Kincaid Reading Grade Level (FKRGL), the Gunning Fog Index, the Coleman-Liau Index, and the SMOG Index. Two consultant urologists then independently assessed the chatbots' responses using the DISCERN tool, which rates the quality of health information on a five-point Likert scale. Inter-rater agreement was calculated using Cohen's kappa and the intraclass correlation coefficient (ICC).
    Results: ChatGPT 4o was the overall winner on readability, with the highest Flesch Reading Ease score (59.4) and the lowest average reading grade level (7.0) required to understand the material. Grok 3 was a close second (FRE 58.3, grade level 8.7). Claude 3.7 Sonnet used the most complex language, scoring the lowest FRE (44.9) and the highest grade level (9.5), as well as the highest complexity on the other indices. In the DISCERN analysis, Grok 3 received the highest average score (52.0), followed closely by ChatGPT 4o (50.5). Inter-rater agreement was highest for ChatGPT 4o (ICC: 0.791; kappa: 0.437) and lowest for Grok 3 (ICC: 0.339; kappa: 0.0; weighted kappa: 0.335).
    Conclusions: All the AI chatbots provided generally good-quality answers to questions about bladder cancer, with zero hallucinations. ChatGPT 4o was the overall winner, with the best readability metrics, strong DISCERN ratings, and the highest inter-rater agreement.
    Keywords:  AI chatbot; LLM; bladder cancer; readability matrix
    DOI:  https://doi.org/10.3390/jcm14217804
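    The agreement statistics this study reports include Cohen's kappa in plain and weighted forms, which can be computed with scikit-learn (the ICC is available in other packages, such as pingouin). The two raters' DISCERN-style ratings below are fabricated placeholders, not the study's data.

```python
from sklearn.metrics import cohen_kappa_score

# Fabricated DISCERN item ratings (1-5) from two raters; placeholders only.
rater_a = [4, 3, 5, 4, 2, 4, 3, 5, 4, 4]
rater_b = [4, 3, 4, 4, 3, 4, 3, 5, 4, 3]

# Unweighted kappa treats the scale as nominal; quadratic weights credit
# near-misses on an ordinal scale, as with the weighted kappa reported above.
print("Cohen's kappa:", cohen_kappa_score(rater_a, rater_b))
print("Weighted kappa:", cohen_kappa_score(rater_a, rater_b, weights="quadratic"))
```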
  21. J Burn Care Res. 2025 Nov 12. pii: iraf211. [Epub ahead of print]
       INTRODUCTION: This study aims to evaluate the accuracy and quality of responses generated by ChatGPT-4o® to frequently asked questions (FAQs) posed by practicing physicians regarding the initial assessment of pediatric burn injuries, as assessed by pediatric burn specialists.
    MATERIAL AND METHODS: Thirty-four FAQs about pediatric burn care were posed to ChatGPT-4o® twice, two weeks apart, in a blinded manner by four experienced pediatric surgeons who work at a national tertiary referral burn center. Questions were divided into five subgroups: initial assessment and triage; fluid resuscitation and hemodynamic management; wound care and infection prevention; pain management and sedation; and special situations and follow-up. The reliability of ChatGPT-4o's answers was evaluated using the modified five-point DISCERN tool (mDISCERN). The overall quality of the answers was assessed using the Global Quality Score (GQS). Inter-rater reliability was measured using intraclass correlation coefficients (ICC).
    RESULTS: ChatGPT-4o® demonstrated high-quality and reliable responses to questions. The median GQS was 4.75 (range: 3.50-5.00). The median mDISCERN score was 9.25 (range: 7.00-10.00), reflecting strong informational reliability. There was a very strong correlation between GQS and mDISCERN scores (r = 0.858, p < .001), indicating consistent alignment between content quality and reliability. Inter-rater reliability analysis showed excellent consistency for average scores (ICC = 0.87, p < .001), supporting the robustness of the reviewers' assessments.
    CONCLUSIONS: ChatGPT-4o® demonstrated itself to be a high-quality and reliable source of information for the initial evaluation of pediatric burn patients, providing substantial support for healthcare professionals in clinical decision-making.
    Keywords:  ChatGPT-4o®; Pediatric burn; artificial intelligence; burn assessment
    DOI:  https://doi.org/10.1093/jbcr/iraf211
  22. Dermatol Pract Concept. 2025 Oct 01. 15(4):
       INTRODUCTION: Androgenetic alopecia (AGA) is a common cause of hair loss worldwide. Accurate patient education may improve treatment adherence and outcomes.
    OBJECTIVE: To compare the accuracy, readability, and user experience of ChatGPT 4.0, Gemini 1.5 Flash, and Deepseek R1 in answering common patient questions about AGA.
    METHODS: In February 2025, a cross-sectional study was conducted using 12 frequently asked patient questions on AGA, sourced from online platforms. The questions were submitted to ChatGPT 4.0, Gemini 1.5 Flash, and Deepseek R1. Two dermatologists independently assessed responses using a validated 4-point accuracy scale. Readability was measured with the Flesch-Kincaid Grade Level and Flesch Reading Ease Score. User experience was evaluated based on response speed, presence of visual aids, citation usage, and overall satisfaction. Inter-rater reliability was analyzed via Cohen's kappa, and statistical comparisons were made between models.
    RESULTS: ChatGPT 4.0 and Gemini 1.5 Flash successfully answered all 12 questions, with most responses rated as "satisfactory with minimal corrections." Deepseek R1 answered only five questions and frequently provided inaccurate content, especially when differentiating between AGA and cicatricial alopecia. It also lacked warnings about potential misinformation. Gemini 1.5 Flash included visual aids and citations, improving interpretability. All models generated responses at a high school reading level. In terms of user experience, ChatGPT 4.0 and Gemini 1.5 Flash outperformed Deepseek R1.
    CONCLUSIONS: ChatGPT 4.0 and Gemini 1.5 Flash provided accurate, readable, and user-friendly responses on AGA-related questions, making them promising tools for patient education under physician guidance. Deepseek R1's limitations highlight the need for cautious implementation.
    DOI:  https://doi.org/10.5826/dpc.1504a5929
  23. Aesthetic Plast Surg. 2025 Nov 13.
       BACKGROUND: Children with microtia and their parents require comprehensive information to make informed decisions about treatment options.
    OBJECTIVE: We evaluated the effectiveness of various large language models (LLMs) in providing preoperative education for congenital microtia reconstruction (CMR) by analyzing their responses to related inquiries.
    METHODS: Ten plastic surgeons developed 13 CMR-related preoperative education strategies and input 14 text commands into Claude-3-Opus, GPT-4-Turbo, and Gemini-1.5-Pro during an online session. Five experts evaluated these language models' responses for correctness, completeness, logic, and potential harm, while five parents of postoperative patients reviewed the education materials for readability and value. All responses were also analyzed for readability using the context package.
    RESULTS: The results showed no statistically significant differences among Gemini, Claude, and GPT in the evaluation metrics of accuracy, completeness, and potential risk. In terms of logicality and overall rating, Gemini's responses were significantly superior to GPT's. Preoperative patient education materials generated by GPT received the highest DISCERN scores, significantly outperforming those from Claude and Gemini. From the parents' perspective, there were no statistically significant differences among Gemini, Claude, and GPT. Objective assessments of readability confirmed that Claude's materials were easier to understand than those from the other models.
    CONCLUSION: Claude-3-Opus, GPT-4-Turbo, and Gemini-1.5-Pro effectively addressed patient inquiries and produced clear pre-surgical education materials. However, these LLMs should not be used independently for patient education without expert supervision to ensure accuracy and completeness.
    LEVEL OF EVIDENCE IV: This journal requires that authors assign a level of evidence to each article. For a full description of these Evidence-Based Medicine ratings, please refer to the Table of Contents or the online Instructions to Authors at www.springer.com/00266.
    Keywords:  Artificial intelligence; ChatGPT; LLMs; Large language models; Microtia; Online medical consultation; Patients education
    DOI:  https://doi.org/10.1007/s00266-025-05314-9
  24. Healthcare (Basel). 2025 Oct 30. 13(21): pii: 2758. [Epub ahead of print]
      Background/Objectives: Autism is one of the most prevalent neurodevelopmental conditions globally, and healthcare professionals, including pediatricians, developmental specialists, and speech-language pathologists, play a central role in guiding families through diagnosis, treatment, and support. As caregivers increasingly turn to digital platforms for autism-related information, artificial intelligence (AI) tools such as ChatGPT, Gemini, and Microsoft Copilot are emerging as popular sources of guidance. However, little is known about the quality, readability, and reliability of the information these tools provide. This study conducted a detailed comparative analysis of three widely used AI models within defined linguistic and geographic contexts to examine the quality of autism-related information they generate.
    Methods: Responses to 44 caregiver-focused questions spanning two key domains, foundational knowledge and practical supports, were evaluated across three countries (USA, England, and Türkiye) and two languages (English and Turkish). Responses were coded for accuracy, readability, actionability, language framing, and reference quality.
    Results: ChatGPT generated the most accurate content but lacked reference transparency; Gemini produced the most actionable and well-referenced responses, particularly in Turkish; and Copilot used more accessible language but demonstrated lower overall accuracy. Across tools, responses often used medicalized language and exceeded recommended readability levels for health communication.
    Conclusions: These findings have critical implications for healthcare providers, who are increasingly tasked with helping families evaluate and navigate AI-generated information. This study offers practical recommendations for how providers can leverage the strengths and mitigate the limitations of AI tools when supporting families in autism care, especially across linguistic and cultural contexts.
    Keywords:  Artificial Intelligence (AI); ChatGPT; Copilot; Gemini; Large Language Models (LLMs); autism; healthcare communication
    DOI:  https://doi.org/10.3390/healthcare13212758
  25. J Orthod. 2025 Nov 09. 14653125251391435
       OBJECTIVE: To evaluate the content and quality of videos created by artificial intelligence (AI) video generator platforms, specifically focused on oral hygiene maintenance and dietary advice for orthodontic patients.
    METHODS: This mixed-method study evaluated five AI video generation platforms: InVideo, VEED.IO, VideoGen, Lumen5, and Steve.AI. A standardised base prompt was used across all platforms, with minor modifications to accommodate each tool's limitations and functionality. Two orthodontists assessed the content of the voice-overs and captions using an oral hygiene checklist (OHC) and a dietary advice checklist (DAC), as well as the Global Quality Scale (GQS) for overall quality. Descriptive analysis evaluated the consistency between voice-over, captions, and visuals. A thematic qualitative analysis was conducted, and quantitative comparisons of checklist scores across platforms were made using non-parametric tests.
    RESULTS: All videos were rated poorly, each receiving a GQS score of 2, indicating limited usefulness for patient education. VideoGen recorded the highest OHC score, while Steve.AI (Live) recorded the lowest. For the DAC, Lumen5 achieved the highest score, whereas InVideo and VEED.IO obtained the lowest. However, DAC and OHC scores did not differ statistically across platforms (P > 0.05). Although flossing advice and fluoride toothpaste recommendations were generally included, key details, such as brushing duration and specific dietary instructions, were often missing. Many videos also contained irrelevant or inaccurate visuals. Thematic analysis identified three main themes: oral hygiene maintenance, dietary advice, and orthodontic appointments.
    CONCLUSION: This study found that AI video generation platforms produce content of poor quality and relevance for oral hygiene maintenance and dietary advice for orthodontic patients.
    Keywords:  artificial intelligence; dietary advice; oral hygiene; orthodontics; video
    DOI:  https://doi.org/10.1177/14653125251391435
  26. Cancer Med. 2025 Nov;14(21): e71364
       BACKGROUND: Artificial intelligence (AI) chatbots perform well in answering English cancer questions. For Spanish, their performance is unknown and may differ between free and paywalled versions.
    METHODS: We evaluated the quality (range: 1-5 points), actionability (range: 0-100%), and readability (range: 1-13 grades) of six popular AI chatbots in responding to the 15 most searched Spanish questions regarding breast, prostate, and colon cancer.
    RESULTS: The quality of overall AI chatbot responses was good (mean [95% CI]: 3.5 [3.4-3.6] points), while the actionability was low (mean [95% CI]: 35.6% [30.8%-40.3%]). The readability was at a high-school level (mean [95% CI]: 9.2 [8.8-9.6] grades), not concordant with the American Medical Association recommendation (≤ 6th grade). Quality, actionability, and readability did not differ between free and paywalled versions (p > 0.05).
    CONCLUSION: Our findings suggest that AI chatbots can generate good-quality responses to Spanish cancer questions, regardless of whether free or paywalled versions are used. However, further improvement in actionability and readability is needed to benefit Spanish-speaking patients.
    Keywords:  Hispanic Americans; artificial intelligence; health literacy; natural language processing; neoplasms; patient education
    DOI:  https://doi.org/10.1002/cam4.71364
  27. J Cancer Educ. 2025 Nov 09.
      
    Keywords:  Artificial intelligence; ChatGPT; Natural language processing; Pathology; Radiation oncology
    DOI:  https://doi.org/10.1007/s13187-025-02783-z
  28. Sci Rep. 2025 Nov 13. 15(1): 39869
      Video-assisted thoracoscopic segmentectomy (VATS) is increasingly performed as a parenchyma-sparing procedure for early-stage lung cancer, yet standardized educational resources remain limited. YouTube is widely accessed by surgeons and trainees, but the educational quality of its content is largely unregulated. This study systematically evaluated YouTube videos on VATS segmentectomy using the validated LAParoscopic surgery Video Educational GuidelineS (LAP-VEGaS) tool. A structured search was performed on June 12, 2025, and 34 videos with ≥ 2500 views were included. Two experienced thoracic surgeons independently assessed all videos, and inter-rater agreement was measured using Cohen's kappa. The mean LAP-VEGaS score was 6.6 (range 2-14), with only 23.5% of videos reaching the validated threshold (≥ 11) for adequate educational quality. No significant correlation was observed between LAP-VEGaS scores and popularity metrics such as views, likes, or duration, although narration was strongly associated with higher scores. To our knowledge, this is the first study systematically evaluating VATS segmentectomy videos on YouTube using LAP-VEGaS. These findings demonstrate that most YouTube videos on VATS segmentectomy are educationally inadequate and highlight the need for peer-reviewed, curated repositories to ensure reliable and high-quality training materials for thoracic surgical education.
    Keywords:  LAP VEGaS; Surgical education; Thoracic surgery; VATS segmentectomy; Video assessment; YouTube
    DOI:  https://doi.org/10.1038/s41598-025-23479-w
  29. Int Ophthalmol. 2025 Nov 14. 45(1): 475
       PURPOSE: YouTube is a widely accessed platform for health-related information, yet its ophthalmologic content remains largely unregulated. Despite growing interest in video evaluations, no prior study has systematically assessed the quality of YouTube videos on retinal vein occlusion (RVO).
    METHODS: A cross-sectional analysis of the top 100 English-language YouTube videos on RVO was performed. Videos were independently scored using the Global Quality Score (GQS), DISCERN, and JAMA criteria.
    RESULTS: The mean GQS, DISCERN, and JAMA scores were 3.49 ± 1.14, 17.17 ± 3.93, and 2.34 ± 1.17, respectively. Videos uploaded by professionals had significantly higher quality scores (p < 0.05), while no significant differences were observed across RVO subtypes (BRVO, CRVO, general RVO). Approximately 51% of videos were rated below high-quality thresholds.
    CONCLUSIONS: The study reveals substantial variation in the reliability of YouTube content related to RVO. There is a critical need for expert involvement in producing and curating online ophthalmic education materials to improve digital health literacy.
    Keywords:  Digital health literacy; Patient education; Retinal vein occlusion; Social media; Video analysis; Video quality assessment; YouTube
    DOI:  https://doi.org/10.1007/s10792-025-03854-2
  30. Front Public Health. 2025. 13: 1627885
       Background: Bipolar disorder is a prevalent mental health issue characterized by recurrent episodes of mania and depression, significantly impacting patients' quality of life. With the rise of short video sharing platforms, there is an urgent need to evaluate the quality and reliability of the medical information disseminated regarding this disorder.
    Objective: This study aimed to assess the quality and reliability of videos related to bipolar disorder available on popular Chinese short video platforms, including TikTok, Kwai, Bilibili, WeChat, Xiaohongshu, and Baidu.
    Methods: A cross-sectional content analysis was conducted in May 2025, using keywords related to bipolar disorder to retrieve relevant videos from selected platforms. The quality of the videos was evaluated using multiple standardized assessment tools, including the JAMA Benchmarking Criteria, GQS, modified DISCERN, PEMAT, and HONCODE.
    Results: Significant differences in video quality and audience engagement metrics were observed across platforms. TikTok and Kwai had higher quality scores, while WeChat videos attracted more comments. Most videos were created by medical professionals, although independent users also contributed content. Overall, video quality was inconsistent and not necessarily correlated with engagement metrics, highlighting the need for improved standards in disseminating health-related information on social media.
    Conclusion: On Chinese short video platforms, clinical practitioners are the main creators of bipolar disorder-related content, but the scientific rigor, production quality, and information transparency of this content still need improvement. Improvements in platform management, creator training, and algorithm optimization are suggested as ways to promote public mental health literacy.
    Keywords:  bipolar disorder; content analysis; health information; quality and reliability assessment; short videos
    DOI:  https://doi.org/10.3389/fpubh.2025.1627885
  31. JMIR Infodemiology. 2025 Nov 13. 5: e75973
       Background: Social media platforms are increasingly used for both sharing and seeking health-related information online. TikTok has become one of the most widely used social networking platforms. One health-related topic trending on TikTok recently is attention-deficit/hyperactivity disorder (ADHD). However, the accuracy of health-related information on TikTok remains a significant concern. Misleading information about ADHD on TikTok can increase stigmatization and lead to false "self-diagnosis," pathologizing of normal behavior, and overuse of care.
    Objective: This study aims to investigate the quality and usefulness of popular TikTok videos about ADHD and to explore how this content is perceived by the viewers based on an in-depth analysis of the video comments.
    Methods: We scraped data from the 125 most liked ADHD-related TikTok videos uploaded between July 2021 and November 2023 using commercial scraping software. We categorized videos based on the usefulness of their content as "misleading," "personal experience," or "useful" and used the Patient Education Materials Assessment Tool for Audiovisual Materials to evaluate video quality regarding understandability and actionability. By purposive sampling, we selected 6 videos and analyzed the content of 100 randomly selected user comments per video to understand the extent of self-identification with ADHD behavior among the viewers. All qualitative analyses were carried out independently by at least 2 authors; disagreements were resolved by discussion. Using SPSS (version 27; IBM Corp), we calculated the interrater reliability between the raters and descriptive statistics for video and creator characteristics. We used one-way ANOVA to compare the usefulness of the videos.
    Results: We assessed 50.4% (63/125) of the videos as misleading, 30.4% (38/125) as personal experience, and 19.2% (24/125) as useful. Across all videos, the Patient Education Materials Assessment Tool for Audiovisual Materials scores were 79.5% for understandability and 5.1% for actionability. With a score of 92.3%, useful videos scored significantly higher for understandability than misleading and personal experience videos (P<.001). For actionability, there was no statistically significant difference depending on the videos' usefulness (P=.415). Viewers resonated with the ADHD-related behaviors depicted in the videos in 220 of 600 (36.7%) comments and with ADHD itself in 32 of 600 (5.3%) comments. Self-attribution of behavioral patterns varied significantly depending on the usefulness of the videos, with personal experience videos drawing the most comments on self-attribution of behavioral patterns (102/600, 17% of comments; P<.001). For self-attribution of ADHD, we found no significant difference depending on the usefulness of the videos (P=.359).
    Conclusions: The high number of misleading videos on ADHD on TikTok and the high percentage of users who self-identify with the symptoms and behaviors presented in these videos can potentially increase misdiagnosis. This highlights the need to critically evaluate health information on social media and for health care professionals to address misconceptions arising from these platforms.
    Keywords:  ADHD; TikTok; attention-deficit/hyperactivity disorder; health information; misinformation; social media
    DOI:  https://doi.org/10.2196/75973
  32. J Med Libr Assoc. 2025 Oct 23. 113(4): 310-317
    Objectives: This interview study is a follow-up to a state-wide survey of school nurses' information needs conducted in 2022. Few studies have explored school nurses' information needs, and fewer still have focused on their searching behaviors or barriers to practice.
    Methods: The principal investigator interviewed participants online about their thoughts on survey results, how they find information, and challenges within the profession.
    Results: After interviewing school nurses in rural, suburban, and urban districts across the state, the authors found that school nurses required information on a finite set of topics but had little access to subscription resources, little training in critical analysis, and little time for professional development.
    Conclusion: School nurses within Illinois have routine information needs, most of which can be answered using a series of go-to resources. They are understaffed and overworked, leaving them little time for more than surface-level searching on care-related queries. Medical librarians may be able to assist this oft-overlooked population with their information needs by providing workshops and resources.
    Keywords:  Continuing Nursing Education; School nurse; information literacy; searching behavior
    DOI:  https://doi.org/10.5195/jmla.2025.2137
  33. J Health Commun. 2025 Nov 13. 1-8
      Research suggests associations between public understanding and support for evidence-based responses to the ongoing opioid crisis, yet communication inequality theory indicates that social position may systematically influence access to health information. This study examines demographic correlates of both active information seeking and passive information exposure across multiple channels, analyzing a nationally representative sample of 6,543 US adults. The findings advance communication inequality theory by revealing distinct information pathways: lower educational attainment was associated with actively seeking information on television, while higher attainment correlated with web searches. Political affiliations aligned with distinct media ecosystems, and racial identity corresponded with significantly different rates of passive exposure from sources like healthcare professionals and television. These patterns, along with higher information seeking from personal and medical networks among those with a family or personal history of opioid use, suggest that social groups inhabit fundamentally different information realities, potentially contributing to divergent understandings of the crisis. These findings highlight the need to design communication strategies that account for how social position shapes information pathways. Future research should examine whether exposure through these channels relates to public understanding and policy support.
    Keywords:  Health communication; communication inequality; health disparities; information seeking; opioid crisis
    DOI:  https://doi.org/10.1080/10810730.2025.2588343
  34. JMIR Form Res. 2025 Nov 12. 9 e75395
       Background: The prevalence of food and drug allergies has been steadily increasing in Germany. These conditions not only impair the quality of life of those affected but also place an additional burden on the health care system. At the same time, an increasing number of people are using the internet and other digital sources to seek health-related information.
    Objective: This study aimed to use the Google Ads Keyword Planner to identify the information needs and knowledge gaps of the internet-using population in Germany and to provide a foundation for future prevention and educational strategies regarding food and drug allergies.
    Methods: Relevant keywords related to selected food and drug allergies were extracted using the Google Ads Keyword Planner and analyzed according to predefined criteria. The observation period was from September 2022 to October 2024.
    Results: A total of 633 keywords related to specific types of food and drug allergies were identified, generating a combined search volume of 3,649,390 queries. The most frequently searched terms nationwide were "histamine allergy" (368,980/3,649,390, 10.1%), "penicillin allergy" (266,410/3,649,390, 7.3%), and "nut allergy" (103,850/3,649,390, 2.8%). Although "histamine allergy" was the most frequently searched term in this analysis, most searches for "histamine allergy" likely referred to an intolerance rather than a true immunoglobulin E-mediated allergy. Seasonal patterns were also observed, with increased searches for the categories "nut" and "penicillin" in the winter months and for "histamine" in the spring months.
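    (Editor's note, illustrative only.) The percentages above follow directly from dividing each keyword's search volume by the combined volume of all 633 keywords; a minimal Python sketch using only the figures quoted in this abstract:

      # Search volumes copied from the abstract; the remaining 630 keywords are omitted.
      volumes = {
          "histamine allergy": 368_980,
          "penicillin allergy": 266_410,
          "nut allergy": 103_850,
      }
      total_volume = 3_649_390  # combined volume of all 633 keywords

      for keyword, volume in volumes.items():
          print(f"{keyword}: {volume:,} queries ({volume / total_volume:.1%})")
      # histamine allergy: 368,980 queries (10.1%)
      # penicillin allergy: 266,410 queries (7.3%)
      # nut allergy: 103,850 queries (2.8%)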
    Conclusions: This study demonstrates the potential of Google search query data analysis in a medical context and, in particular, underscores its relevance for understanding the public interest in food and drug allergies in Germany. The findings highlight the need for improved, easily accessible educational resources and for implementing allergy-specific, socially relevant health campaigns to address the unmet information needs of the population living in Germany regarding food and drug allergies.
    Keywords:  digital health literacy; drug allergies; food allergies; public health informatics; web search analysis
    DOI:  https://doi.org/10.2196/75395
  35. Front Public Health. 2025 ;13 1542448
    Introduction: Information design and the design process are vital parts of a health communication strategy to tackle and prevent antimicrobial resistance. Various methods have been developed to tackle antimicrobial resistance holistically. In primary healthcare and low-resource settings, involving community healthcare workers and end users allows interventions to better meet the target population's demands and needs.
    Methods: During this study, an antimicrobial resistance health information leaflet and a trainer's manual were designed in Makana Local Municipality's primary healthcare settings. The developed materials were assessed for readability using seven readability formulas and suitability using the Patient Education Materials Assessment Tool and the Suitability Assessment of Materials instrument.
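    (Editor's note, illustrative only.) The abstract does not name the seven readability formulas used; as a hedged sketch, Python's textstat package implements several widely used grade-level formulas plus a consensus estimate, which is one way such a composite grade could be computed. The leaflet text below is a placeholder, not the study's material.

      import textstat

      # Placeholder text; in the study this would be the full leaflet body.
      leaflet_text = (
          "Antimicrobial resistance happens when bacteria stop responding "
          "to the medicines designed to kill them."
      )

      # Widely used grade-level formulas (not necessarily the study's seven)
      print("Flesch-Kincaid grade:", textstat.flesch_kincaid_grade(leaflet_text))
      print("Gunning Fog index:", textstat.gunning_fog(leaflet_text))
      print("SMOG index:", textstat.smog_index(leaflet_text))
      print("Coleman-Liau index:", textstat.coleman_liau_index(leaflet_text))
      print("Automated Readability Index:", textstat.automated_readability_index(leaflet_text))

      # textstat's consensus grade across its built-in formulas
      print("Consensus grade:", textstat.text_standard(leaflet_text, float_output=True))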
    Results: The health information leaflet received a final readability score of grade 14, classifying it as 'difficult' to read, because some medical terms could not be substituted. However, thanks to the written and verbal explanations provided, the community healthcare workers and pharmacist assistants found the leaflet easy to understand and requested no further changes. The finalized health information leaflet obtained a Patient Education Materials Assessment Tool understandability score of 92%, a Patient Education Materials Assessment Tool actionability score of 97%, and a Suitability Assessment of Materials instrument score of 91%, indicating that it was suitable for its target population.
    Discussion: The workshops and trainer's manual produced a significant increase in the peer educators' antimicrobial resistance-related knowledge. The participants felt empowered and prepared to be change agents among their peers and communities because of the collaborative approach used in the study. The health information leaflet and trainer's manual on antimicrobial resistance can serve as resources for community healthcare workers and peer educators in future home visits and awareness-raising campaigns.
    Keywords:  Makana municipality; antimicrobial resistance; communicative ecology; community healthcare workers; health information materials; health promotion; peer educators; readability
    DOI:  https://doi.org/10.3389/fpubh.2025.1542448
  36. Public Health. 2025 Nov 07. pii: S0033-3506(25)00482-2. [Epub ahead of print]249 106036
    OBJECTIVES: Using Bandura's self-efficacy theory and the health belief model, the aim of this study is to identify the information sources related to the promotion of healthy lifestyles during pregnancy, analysing their accessibility, relevance, and reliability.
    STUDY DESIGN: Qualitative Study.
    METHODS: A thematic analysis of 25 semi-structured interviews with pregnant women at different stages of pregnancy was carried out, considering sociodemographic and clinical variables such as age, educational level, parity, and access to social networks, among others. The discourses and data were processed through coding and categorization, as per the objectives of the study. Validation was carried out through researcher triangulation.
    RESULTS: Health professionals, the internet, and social media are among the variety of information sources used by pregnant women, with substantial variability in the accessibility and reliability of these sources and in their direct influence on acceptance of and adherence to recommendations. The results of the study showed strong reliance on and trust in the recommendations of healthcare professionals, but also a growing influence of social media.
    CONCLUSIONS: Information provided by healthcare professionals is considered the most reliable and accepted, but pregnant women supplement this information by consulting other sources, where social networks are gaining ground, especially among young pregnant women. This highlights the need to develop strategies to improve the quality of online information.
    Keywords:  Healthy lifestyles; Information sources; Pregnancy; Qualitative research; Self-care
    DOI:  https://doi.org/10.1016/j.puhe.2025.106036
  37. J Med Libr Assoc. 2025 Oct 23. 113(4): 378-382
    In Fall 2019, the Midcontinental Chapter of the Medical Library Association (MCMLA) welcomed a new incoming chair who outlined four priorities for their tenure, including "adopting Diversity & Inclusion (D&I) values, policies, and practices in every aspect of the organization" [1]. These priorities led the MCMLA Executive Committee to approve the creation of the Diversity and Inclusion (D&I) Task Force. The task force created a survey to capture the makeup of the current MCMLA membership and to assess the diversity climate of the organization.
    DOI:  https://doi.org/10.5195/jmla.2025.2159
  38. J Med Libr Assoc. 2025 Oct 23. 113(4): 374-377
       Background: In 2023, JJ Pionke became President of the Midwest Chapter of the Medical Library Association (MWCMLA). He determined that for his presidential year, he would form a task force to determine the accessibility levels of the chapter and remediate accessibility issues as appropriate.
    Case Presentation: To accomplish the accessibility audit of the MWCMLA, Pionke formed an Accessibility Task Force that was time-limited to one year. Task force meetings were held once a month to keep people accountable and to share progress and requests for assistance. The task force was divided into four teams: annual meeting, policy, social media, and website. Task force members could serve on more than one team. The goals of each team were generally the same: determine what other organizations are doing, identify what the chapter already has, if anything, and develop best practices, policies, and related documents as needed.
    Conclusions: The teams fulfilled their mandate by creating best-practices, guidelines, and policy documents. Some accessibility remediation was needed for the chapter website. The task force's findings and materials were shared with the MWCMLA membership and passed on to the presidents of the other chapters, many of whom had expressed interest in the results.
    Keywords:  Disability; Midwest Chapter of the Medical Library Association; accessibility; policy; presidential project; project management; task force
    DOI:  https://doi.org/10.5195/jmla.2025.2092