bims-librar Biomed News
on Biomedical librarianship
Issue of 2025-08-31
34 papers selected by
Thomas Krichel, Open Library Society



  1. Int J Environ Res Public Health. 2025 Aug 19;22(8):1298. [Epub ahead of print]
      Public libraries serve as vital community hubs that foster engagement, empowerment, and education, particularly for vulnerable populations, including refugee children and families. This study examines how Oklahoma's public libraries contribute to refugee resilience and identifies challenges they face in providing these essential services. Using a qualitative approach, including 20 semi-structured interviews with library staff, questionnaire surveys, and observations conducted across three Oklahoma library systems (Metropolitan, Pioneer, and Tulsa City-County), the study explored programs, services, and strategies that support refugee adaptation and integration. Findings reveal that libraries excel in three key areas: cognitive services (language literacy, digital access, educational resources), socio-cultural services (community building, cultural exchange), and physiological services (safe spaces, welcoming environments). These services contribute to building human, social, and economic capital, with human capital consistently ranked as most crucial for refugee resilience. However, libraries face significant challenges, with language barriers, program gaps, and outreach limitations being the most prevalent obstacles. Additional barriers include facility constraints, transportation difficulties, resource limitations, and privacy concerns. The study proposes nine comprehensive guidelines for creating sustainable pathways to refugee resilience through enhanced library services, emphasizing proactive community engagement, staff training, multilingual resources, advocacy, strategic partnerships, tailored programming, transportation solutions, cultural competence, and welcoming environments. This study contributes to understanding how public libraries can function as inclusive institutions that support refugee children's successful integration and development in their new communities.
    Keywords:  Oklahoma; community integration; inclusive design; public libraries; refugee children; resilience
    DOI:  https://doi.org/10.3390/ijerph22081298
  2. J Med Libr Assoc. 2025 Jul 01. 113(3): 259-264
       Background: Health science libraries have invested in virtual reality technology and spaces to support use of this technology for teaching, learning, and research. Virtual reality has many uses within health sciences education such as simulation, exploration and learning, and soft skills development. It can also be used to build empathy in health sciences students through applications that provide an immersive, first-person perspective.
    Case Presentation: This case describes how a health sciences library and liaison librarians partnered with a course instructor to support a class utilizing the library's virtual reality resources. Librarians were collaborators in the development of the class and facilitated class sessions in the Virtual Reality Studio. Class sessions utilized the Beatriz Lab by Embodied Labs to increase empathy in medical students who were interested in working with geriatric or Alzheimer's patients.
    Conclusion: Liaison librarians support teaching and learning through a variety of tools and resources, including virtual reality. By partnering with instructors, librarians can use their instruction and collection knowledge to design and facilitate classes that are meaningful and interactive. Virtual reality applications provide another resource that librarians can incorporate into their course-integrated instruction sessions.
    Keywords:  Instruction; Librarians; Libraries; Medical Education; Virtual Reality
    DOI:  https://doi.org/10.5195/jmla.2025.2090
  3. J Med Libr Assoc. 2025 Jul 01. 113(3): 241-246
       Background: This project compared the library's health information service usage area and customer topics with the hospital's reasons for hospitalization, to examine commonalities and explore potential growth opportunities within the community.
    Case Presentation: Researchers partnered with the hospital for this project. IRB approval was received. Researchers gathered the health information service's 2022 data, which was de-identified. Data analyzed included zip codes and customer topics, which were coded according to the hospital's business lines, defined as the reasons patients were hospitalized or used the ED. The health information service's business lines were then compared with the hospital's. Lastly, researchers reviewed the hospital's targeted zip codes to see whether those overlapped with the top zip codes using the health information service. The top zip codes that used the library's health information service were 37920, 37918, 37917, 37919, and 37876. Usage of the health information service varied across zip codes and topics. The most requested topic for the health information service, and the most common reason for hospitalizations/ED visits, was General Medicine in three of the five zip codes. Based on these results, librarians performed outreach to organizations in the targeted zip codes to increase the visibility of the library's services.
    Conclusion: The reasons people requested health information from the library aligned with hospitalizations and ED visits in most of the zip codes. Providing further outreach to the hospital's targeted zip codes will benefit both the hospital and the library by increasing usage of the health information service.
    Keywords:  Health literacy; Hospital; Social determinants of health; consumer library; hospital librarianship; outreach
    DOI:  https://doi.org/10.5195/jmla.2025.2053
  4. Health Info Libr J. 2025 Aug 24.
      Adopting a collaborative partnership approach to designing and delivering E-Learning programmes is an effective way to enhance the delivery of information skills training for end users. The experiences of the national NHS England Knowledge and Library Services Team working collaboratively in partnership to develop three E-Learning programmes are described. These cover skills development for the health care workforce in the areas of literature searching, critical appraisal and knowledge mobilisation. Working with subject matter experts, partners based in knowledge and library service teams, learning technologists and specialist media training design teams has led to improvements in E-Learning planning, design and delivery. As an enhancement to more traditional face-to-face information training sessions, the E-Learning modules have been launched a total of 24,029 times between April 2023 and July 2024.
    Keywords:  collaboration; eLearning; information skills training
    DOI:  https://doi.org/10.1111/hir.70001
  5. Bioinform Adv. 2025;5(1):vbaf155
       Motivation: Biocuration workflows often rely on comprehensive literature searches for specific biological entities. However, standard search engines such as MEDLINE and PubMed Central provide an incomplete picture of the scientific literature because they do not index the increasing amount of valuable information published in supplementary data files. Over two years, we addressed this gap by systematically extracting text from a large proportion (85%) of these files, resulting in 35 million searchable documents. To assess the information gain provided by supplementary data files beyond the manuscripts, we searched both sources for mentions of dozens of Global Core Biodata Resources (GCBRs), fundamental biological databases essential for the life sciences. Specifically, we searched for GCBR names and accession numbers, which uniquely identify biological entities within these resources.
    Results: The recall gain from using the supplementary data files to search for articles mentioning resource names is 6%. In addition, 97% of all accession numbers identified were published in the supplementary data files, highlighting their increasing importance for highly specific topics or curation pipelines. We show that the number of accession numbers published in the supplementary data files is increasing year on year, but that 87% of these are published in Excel files. This format facilitates human readability and accessibility, but severely limits machine reusability and interoperability. We therefore discuss alternative and complementary approaches to the publication of research data.
    Availability and implementation: All extracted data are accessible and searchable as a collection on the BiodiversityPMC platform (https://biodiversitypmc.sibils.org/).
    DOI:  https://doi.org/10.1093/bioadv/vbaf155
  6. J Med Libr Assoc. 2025 Jul 01. 113(3): 195-203
       Background: The authors sought to develop and validate a search filter to retrieve research about acute mental health concerns during public health emergencies. They did so as a response to a recommendation from a previously published paper on searching for evidence in emergency contexts.
    Methods: The definition of acute mental health was adapted from the DSM-5 and the DynaMed entries on acute stress and posttraumatic stress disorder. The definition of public health emergencies was adapted from the Canadian Medical Protective Association. The authors retrieved systematic reviews on mental health concerns pertaining to people in the community and healthcare workers during public health emergencies from MEDLINE. The authors formulated gold standard sets for each population group using articles included in these reviews. The authors then separated the articles into development and validation sets. Keywords and Medical Subject Heading (MeSH) terms from the title and abstracts in the Ovid records in the development sets were used to formulate the filter. The filter was tested via the relative recall method using the validation sets. The authors then tested the filter for precision by conducting MEDLINE (Ovid) searches for the following topics for acute mental health: (i) children/adolescents and earthquakes; (ii) children/adolescents and Ebola outbreaks; (iii) healthcare workers and earthquakes; and (iv) healthcare workers and Ebola outbreaks.
    Results: The MEDLINE filter demonstrated 100% recall against the people in the community validation set and 98% recall against the healthcare worker validation set. The filter demonstrated the following percentages for the precision tests: (i) 94% for children/adolescents and earthquakes; (ii) 81% for children/adolescents and Ebola outbreaks; (iii) 81% for healthcare workers and earthquakes; and (iv) 79% for healthcare workers and Ebola outbreaks.
    Conclusion: The authors developed a validated search filter that could be used to find evidence related to acute mental health concerns in public health emergencies. The authors recommend that researchers adapt and modify the search filter to reflect the unique mental health issues of their population groups.
    Keywords:  Search filter validation; emergencies; search hedge validation
    DOI:  https://doi.org/10.5195/jmla.2025.2081
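The recall and precision percentages reported above are simple set ratios over retrieved records and gold-standard records. A minimal sketch of how such a filter might be scored, assuming record identifiers are available as sets (the PMID values below are hypothetical, for illustration only):

```python
def relative_recall(retrieved, gold):
    """Share of gold-standard records that the search filter retrieves."""
    if not gold:
        raise ValueError("gold-standard set is empty")
    return len(retrieved & gold) / len(gold)

def precision(retrieved, relevant):
    """Share of retrieved records judged relevant."""
    if not retrieved:
        raise ValueError("retrieved set is empty")
    return len(retrieved & relevant) / len(retrieved)

# Hypothetical PMID sets for illustration only.
gold = {"101", "102", "103", "104", "105"}
retrieved = {"101", "102", "103", "104", "999"}

print(relative_recall(retrieved, gold))  # 4 of 5 gold records found -> 0.8
print(precision(retrieved, gold))        # 4 of 5 retrieved records relevant -> 0.8
```

In the study, precision was instead assessed by screening the results of topic searches by hand; the set arithmetic is the same once relevance judgments exist.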
  7. J Med Libr Assoc. 2025 Jul 01. 113(3): 233-240
       Objective: Academic health science library catalogs were analyzed to determine the presence and frequency of graphic medicine titles in print format in the collections. The secondary objectives were to gauge if students could access graphic medicine titles, through other libraries within the same system or as eBooks, and to examine if libraries highlighted graphic medicine collections and their uses on their websites.
    Methods: A convenience sample of health science libraries was created from the Association of Academic Health Science Libraries member list. A title list was developed from collection resources and award lists for graphic medicine and graphic novels. Data was collected from public-facing library catalogs.
    Results: Fifty-six percent of the libraries analyzed had at least one title from the list in their collections available in print. An additional thirty percent had at least one title available as an eBook, leaving only fourteen percent with no graphic medicine titles.
    Conclusions: This study provides a starting point in describing the prevalence and breadth of graphic medicine collections in academic health science libraries. Although their presence may be small, our findings suggest that graphic medicine is being collected by academic health science libraries. Academic librarians can support the growing interest in the comic art format by incorporating graphic medicine into their collections and educating their patrons on this important genre.
    Keywords:  Graphic medicine; collection development; health science libraries
    DOI:  https://doi.org/10.5195/jmla.2025.1962
  8. Eur J Obstet Gynecol Reprod Biol. 2025 Aug 18;314:114660. [Epub ahead of print]
       INTRODUCTION: Reduced fetal movements (RFM) is a warning sign during pregnancy and must be assessed immediately. As pregnant women search online for RFM information before consulting with healthcare professionals, the accuracy of online information is crucial. Misinformation can lead to delayed medical attention and adverse outcomes.
    AIM: The study aimed to examine the accuracy of RFM online information by analysing Google search results for misinformation.
    METHODS: Seven misinformation targets were identified. Google searches were conducted to identify webpages with RFM content. Webpages were evaluated against each target.
    RESULTS: Half of the search results contained misinformation, driven largely by commercial US webpages. The most common misinformation targets were advice to conduct a kick count, claims that a set number of kicks in a timeframe indicates the baby is well, and suggestions for ways to stimulate movements, appearing in 49.5%, 43.2% and 25.0% of the results, respectively. Public health websites from the UK and Ireland appeared less frequently but were more accurate and ranked higher in the search results than less accurate webpages.
    CONCLUSION: The poor quality of online information about RFM is concerning. The commonality of kick counting content on US commercial webpages might be due to the lack of US clinical guidelines. To improve the quality of online information, Google should deprioritise webpages containing misinformation, add a warning to RFM searches and have medical experts vet results. Healthcare providers should offer accurate information and advise against using commercial webpages. Public health and professional organisations should collaborate with websites to counter misinformation.
    Keywords:  Antenatal education; Evidence-based practice; Internet; Pregnancy; Reduced fetal movements; Stillbirth
    DOI:  https://doi.org/10.1016/j.ejogrb.2025.114660
  9. Eur Arch Otorhinolaryngol. 2025 Aug 23.
       OBJECTIVE: This study aims to evaluate the accuracy of ChatGPT-4.0 in providing information on tympanostomy tube indications in children, comparing its responses with established clinical guidelines and examining its ability to update itself over time.
    METHODS: Sixteen clinical scenarios from the American Academy of Otolaryngology-Head and Neck Surgery Foundation (AAO-HNSF) guidelines were assessed using 18 specific questions. Responses were evaluated by two otolaryngologists and ChatGPT itself. The final validation was conducted by a senior otolaryngologist. Cohen's Kappa analysis was performed to assess inter-rater reliability.
    RESULTS: ChatGPT-4.0 correctly answered 15.5 out of 16 scenarios (96.8%). The second-stage question of scenario 7 was evaluated as incorrect. When current literature was referenced, all responses reached 100% accuracy. Among the correct answers, 4 scenarios were not fully aligned with the guidelines. However, when responses were based on current literature, all of these answers were found to be fully compliant. The agreement among the three evaluators was perfect, as confirmed by Cohen's Kappa analysis. Despite the use of an updated version (ChatGPT-4.0) and the passage of more than a year, the model repeated the same incorrect answer that ChatGPT-3.5 had previously given to one scenario. This suggests that the model may have limited capacity for self-updating over time. These findings are consistent with previous research indicating that ChatGPT provides highly accurate responses regarding tympanostomy tube placement and largely aligns with existing guidelines.
    CONCLUSION: ChatGPT-4.0 demonstrates high accuracy in providing guideline-based medical information, but its ability to update itself over time appears to be limited. However, when prompted to reference current literature, its accuracy improves significantly. These findings highlight the importance of structured prompting and critical evaluation of AI-generated medical guidance.
    Keywords:  Accuracy evaluation; ChatGPT; Clinical guidelines; Medical AI; Tympanostomy tube
    DOI:  https://doi.org/10.1007/s00405-025-09630-3
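Cohen's Kappa, used above for inter-rater reliability, corrects raw agreement for the agreement expected by chance: κ = (p_o − p_e) / (1 − p_e). A minimal sketch for two raters over the same items (the correct/incorrect judgments below are hypothetical, not the study's data):

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two raters labeling the same items."""
    assert len(rater_a) == len(rater_b) and rater_a
    n = len(rater_a)
    # Observed agreement: proportion of items with identical labels.
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Chance agreement: product of each rater's marginal label frequencies.
    ca, cb = Counter(rater_a), Counter(rater_b)
    p_e = sum(ca[k] * cb[k] for k in ca) / (n * n)
    if p_e == 1.0:
        return 1.0  # both raters constant and identical
    return (p_o - p_e) / (1 - p_e)

# Hypothetical correct/incorrect judgments on 16 scenarios.
a = ["correct"] * 15 + ["incorrect"]
b = ["correct"] * 14 + ["incorrect", "incorrect"]
print(round(cohens_kappa(a, b), 3))
```

Note how 15/16 raw agreement shrinks once the high base rate of "correct" labels is discounted; this is why kappa is preferred over raw percent agreement for skewed label distributions.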
  10. J Fluency Disord. 2025 Aug 15;85:106149. [Epub ahead of print]
       OBJECTIVE: This study aimed to examine how frequently asked questions regarding stuttering were comprehended and answered by ChatGPT.
    METHODS: In this exploratory study, eleven common questions about stuttering were asked in a single conversation with GPT-4o mini. While blinded to the source of the answers (whether AI or SLPs), a panel of five certified speech-language pathologists (SLPs) was asked to judge whether each response was produced by the ChatGPT chatbot or provided by SLPs. Additionally, they were instructed to evaluate the responses on several criteria, including the presence of inaccuracies, the potential for causing harm and the degree of harm that could result, and alignment with the prevailing consensus within the SLP community. All ChatGPT responses were also evaluated using various readability features, including the Flesch Reading Ease Score (FRES), Gunning Fog Scale Level (GFSL), and Dale-Chall Score (D-CS), as well as the number of words, number of sentences, words per sentence (WPS), characters per word (CPW), and the percentage of difficult words. Furthermore, Spearman's rank correlation coefficient was employed to examine the relationship between the evaluations conducted by the panel of certified SLPs and the readability features.
    RESULTS: A substantial proportion of the AI-generated responses (45.50%) were incorrectly identified by the SLP panel as being written by other SLPs, indicating high perceived human-likeness (origin). Regarding content quality, 83.60% of the responses were found to be accurate (incorrectness), 63.60% were rated as harmless (harm), and 38.20% were considered to cause only minor to moderate impact (extent of harm). In terms of professional alignment, 62% of the responses reflected the prevailing views within the SLP community (consensus). The means ± standard deviations of FRES, GFSL, and D-CS were 26.52 ± 13.94 (readable for college graduates), 18.17 ± 3.39 (readable for graduate students), and 9.90 ± 1.08 (readable for 13th to 15th grade [college]), respectively. Furthermore, each response contained an average of 99.73 words, 6.80 sentences, 17.44 WPS, 5.79 CPW, and 27.96% difficult words. The correlation coefficients ranged from a large negative value (r = -0.909, p < 0.05) to a very large positive value (r = 0.918, p < 0.05).
    CONCLUSION: The results revealed that ChatGPT possesses a promising capability to provide appropriate responses to frequently asked questions in the field of stuttering, attested by the fact that the panel of certified SLPs perceived about 45% of its responses to be generated by SLPs. However, given the increasing accessibility of AI tools, particularly among individuals with limited access to professional services, it is crucial to emphasize that such tools are intended solely for educational purposes and should not replace diagnosis or treatment by qualified SLPs.
    Keywords:  Artificial intelligence; ChatGPT; Health literacy; Patient education; Stuttering
    DOI:  https://doi.org/10.1016/j.jfludis.2025.106149
  11. J Clin Med. 2025 Aug 12;14(16):5697. [Epub ahead of print]
      Background: Large language models (LLMs) such as ChatGPT, Google Gemini, and Microsoft Copilot are increasingly used by patients seeking medical information online. While these tools provide accessible and conversational explanations, their accuracy and safety in emotionally sensitive scenarios-such as an incidental cancer diagnosis-remain uncertain. Objective: To evaluate the quality, completeness, readability, and safety of responses generated by three state-of-the-art LLMs to common patient questions following the incidental discovery of a kidney tumor. Methods: A standardized use-case scenario was developed: a patient learns of a suspicious renal mass following a computed tomography (CT) scan for back pain. Ten plain-language prompts reflecting typical patient concerns were submitted to ChatGPT-4o, Microsoft Copilot, and Google Gemini 2.5 Pro without additional context. Responses were independently assessed by five board-certified urologists using a validated six-domain rubric (accuracy, completeness, clarity, currency, risk of harm, hallucinations), scored on a 1-5 Likert scale. Two statistical approaches were applied to calculate descriptive scores and inter-rater reliability (Fleiss' Kappa). Readability was analyzed using the Flesch Reading Ease (FRE) and Flesch-Kincaid Grade Level (FKGL) metrics. Results: Google Gemini 2.5 Pro achieved the highest mean ratings across most domains, notably in accuracy (4.3), completeness (4.3), and low hallucination rate (4.6). Microsoft Copilot was noted for empathetic language and consistent disclaimers but showed slightly lower clarity and currency scores. ChatGPT-4o demonstrated strengths in conversational flow but displayed more variability in clinical precision. Overall, 14% of responses were flagged as potentially misleading or incomplete. Inter-rater agreement was substantial across all domains (κ = 0.68). 
Readability varied between models: ChatGPT responses were easiest to understand (FRE = 48.5; FKGL = 11.94), while Gemini's were the most complex (FRE = 29.9; FKGL = 13.3). Conclusions: LLMs show promise in patient-facing communication but currently fall short of providing consistently accurate, complete, and guideline-conformant information in high-stakes contexts such as incidental cancer diagnoses. While their tone and structure may support patient engagement, they should not be used autonomously for counseling. Further fine-tuning, clinical validation, and supervision are essential for safe integration into patient care.
    Keywords:  AI in healthcare; ChatGPT; Google Gemini; Microsoft Copilot; incidental kidney tumor; large language models; medical misinformation; patient communication
    DOI:  https://doi.org/10.3390/jcm14165697
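The Flesch Reading Ease and Flesch-Kincaid Grade Level scores reported in several of these studies are linear functions of average sentence length and average syllables per word. A minimal sketch using the standard coefficients; the syllable counter is a rough vowel-group heuristic (real tools use dictionaries or better heuristics), and the sample sentence is illustrative only:

```python
import re

def count_syllables(word):
    """Rough heuristic: count vowel groups, with a silent-e adjustment."""
    word = word.lower()
    groups = len(re.findall(r"[aeiouy]+", word))
    if word.endswith("e") and groups > 1:
        groups -= 1
    return max(groups, 1)

def readability(text):
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    wps = len(words) / len(sentences)                     # words per sentence
    spw = sum(map(count_syllables, words)) / len(words)   # syllables per word
    fre = 206.835 - 1.015 * wps - 84.6 * spw              # Flesch Reading Ease
    fkgl = 0.39 * wps + 11.8 * spw - 15.59                # Flesch-Kincaid Grade Level
    return round(fre, 1), round(fkgl, 1)

print(readability("The mass was found incidentally. Further imaging is recommended."))
```

Lower FRE and higher FKGL both indicate harder text, which is why the scores above move in opposite directions across models.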
  12. J Endod. 2025 Aug 22. pii: S0099-2399(25)00524-2. [Epub ahead of print]
       INTRODUCTION: This study aims to evaluate and compare the performance of three advanced chatbots-ChatGPT-4 Omni (ChatGPT-4o), DeepSeek, and Gemini Advanced-on answering questions related to pulp therapies for immature permanent teeth. The primary outcomes assessed were accuracy, completeness, and readability, while secondary outcomes focused on response time and potential correlations between these parameters.
    METHODS: A total of 21 questions were developed based on clinical resources provided by the American Association of Endodontists, including position statements, clinical considerations, and treatment options guides, and assessed by three experienced pediatric dentists and three endodontists. Accuracy and completeness scores, as well as response times, were recorded, and readability was evaluated using Flesch Kincaid Reading Ease Score, Flesch Kincaid Grade Level, Gunning Fog Score, SMOG Index, and Coleman Liau Index.
    RESULTS: Results revealed significant differences in accuracy (P < .05) and completeness (P < .05) scores among the chatbots, with ChatGPT-4o and DeepSeek outperforming Gemini Advanced in both categories. Significant differences in response times were also observed, with Gemini Advanced providing the quickest responses (P < .001). Additionally, correlations were found between accuracy and completeness scores (ρ: .719, P < .001), while response time showed a positive correlation with completeness (ρ: .144, P < .05). No significant correlation was found between accuracy and readability (P > .05).
    CONCLUSIONS: ChatGPT-4o and DeepSeek demonstrated superior performance in terms of accuracy and completeness when compared to Gemini Advanced. Regarding readability, DeepSeek scored the highest, while ChatGPT-4o showed the lowest. These findings highlight the importance of considering both the quality and readability of artificial intelligence-driven responses, in addition to response time, in clinical applications.
    Keywords:  Artificial intelligence; ChatGPT; DeepSeek; Gemini; endodontics; large language models
    DOI:  https://doi.org/10.1016/j.joen.2025.08.011
  13. Facial Plast Surg. 2025 Aug 25.
       INTRODUCTION: Patients frequently ask questions after Mohs facial reconstruction. AI tools, particularly large language models (LLMs), may optimize this communication. Objectives & Hypotheses: We evaluated four LLMs-Claude AI, ChatGPT, Microsoft Copilot, and Google Gemini-on responses to post-operative questions, hypothesizing variation in quality, accuracy, comprehensiveness, and readability.
    STUDY DESIGN: Prospective observational study following STROBE guidelines.
    METHODS: 31 common post-operative questions were created. Each was submitted to all four LLMs using a standardized prompt. Responses were evaluated by blinded facial plastic surgeons using validated scoring tools (EQIP, Likert scales, readability formulas). IRB exemption was granted.
    RESULTS: Claude AI outperformed others in quality (EQIP: 90.3), accuracy (4.55/5), and comprehensiveness (4.60/5). All LLMs exceeded the recommended 6th-grade reading level.
    CONCLUSIONS: LLMs show potential for supporting post-operative communication, but variation in readability and content depth highlights the continued need for physician oversight.
    Keywords:  Mohs reconstruction; large language models; artificial intelligence
    Acknowledgments: AI was used to assist with editing and formatting of this manuscript.
    DOI:  https://doi.org/10.1055/a-2689-2685
  14. Medicine (Baltimore). 2025 Aug 22. 104(34): e43951
       BACKGROUND: Total knee arthroplasty (TKA) is a surgical intervention that significantly improves patients' quality of life, but the preoperative process can cause uncertainty, anxiety, and a lack of information. In recent years, artificial intelligence (AI)-powered chatbots and large language models have begun to play important roles in patient information processes in the healthcare field. In this study, the answers given by chat generative pretrained transformer (ChatGPT)-4.0 and DeepSeek-V3 AI programs to the 10 most frequent questions about TKA asked by patients before surgery were compared, and the effectiveness of AI in the patient information process was analyzed with the evaluations of orthopedists.
    METHODS: Using Google Trends, patient forums, and clinical experience, the 10 questions that TKA patients are most curious about in the preoperative, intraoperative, and postoperative periods were determined. These questions were submitted to ChatGPT-4.0 and DeepSeek-V3, and the answers were recorded. Five orthopedists (each with a minimum of 5 years of surgical experience) evaluated the answers using a Likert scale (1-5) according to criteria such as scientific accuracy, explanatory power, understandability for the patient, and detail of content.
    RESULTS: The mean Likert score of ChatGPT-4.0 (4.7 ± 0.2) was found higher than the mean Likert score of DeepSeek-V3 (3.5 ± 0.3) (P < .05). ChatGPT-4.0 provided more comprehensive and detailed information, while DeepSeek-V3 provided superficial answers, especially in the answers to questions such as "life of the prosthesis," "postoperative complications," and "return to daily activities."
    CONCLUSION: Our study showed that ChatGPT-4.0 is more effective than DeepSeek-V3 for patient information regarding total knee replacement. It is emphasized that AI-supported systems are a fast and accessible source of information for patient education; however, this information must be checked by medical authorities for accuracy. Future studies should be conducted with larger patient populations to increase the reliability of AI-based patient information systems and ensure their integration into clinical practice.
    LEVEL OF EVIDENCE: Level 5.
    Keywords:  ChatGPT-4.0; DeepSeek-V3; artificial intelligence; patient education; total knee arthroplasty
    DOI:  https://doi.org/10.1097/MD.0000000000043951
  15. Dent J (Basel). 2025 Jul 24;13(8):343. [Epub ahead of print]
      Objectives: The present cross-sectional analysis aimed to investigate whether Large Language Model-based chatbots can be used as reliable sources of information in orthodontics by evaluating chatbot responses and comparing them to those of dental practitioners with different levels of knowledge. Methods: Eight true-or-false frequently asked orthodontic questions were submitted to five leading chatbots (ChatGPT-4, Claude-3-Opus, Gemini 2.0 Flash Experimental, Microsoft Copilot, and DeepSeek). The consistency of the answers given by the chatbots at four different times was assessed using Cronbach's α. The chi-squared test was used to compare chatbot responses with those given by two groups of clinicians, i.e., general dental practitioners (GDPs) and orthodontic specialists (Os) recruited in an online survey via social media, and differences were considered significant when p < 0.05. Additionally, the chatbots were asked to provide a justification for their dichotomous responses using a chain-of-thought prompting approach, and the educational value of these justifications was rated according to the Global Quality Scale (GQS). Results: A high degree of consistency in answering was found for all analyzed chatbots (α > 0.80). When comparing chatbot answers with GDP and O ones, statistically significant differences were found for almost all the questions (p < 0.05). When evaluating the educational value of chatbot responses, DeepSeek achieved the highest GQS score (median 4.00; interquartile range 0.00), whereas Copilot had the lowest (median 2.00; interquartile range 2.00). Conclusions: Although chatbots yield somewhat useful information about orthodontics, they can provide misleading information when dealing with controversial topics.
    Keywords:  artificial intelligence; chatbots; health information; malocclusion; orthodontics
    DOI:  https://doi.org/10.3390/dj13080343
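Cronbach's α, used above for consistency across repeated administrations, is α = k/(k−1) · (1 − Σσ²ᵢ / σ²ₜ), where the σ²ᵢ are per-administration variances and σ²ₜ is the variance of the total scores. A minimal sketch with a hypothetical true/false answer pattern (1 = true, 0 = false) for one chatbot over four runs; rows are time points, columns are the eight questions:

```python
def cronbach_alpha(items):
    """Cronbach's alpha for k item rows scored over the same cases."""
    k = len(items)
    n = len(items[0])

    def var(xs):  # population variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)

    sum_item_vars = sum(var(item) for item in items)
    totals = [sum(item[j] for item in items) for j in range(n)]
    return k / (k - 1) * (1 - sum_item_vars / var(totals))

# Hypothetical: four repeated runs over eight true/false questions,
# identical except for one flipped answer in run 3.
runs = [
    [1, 0, 1, 1, 0, 1, 0, 1],
    [1, 0, 1, 1, 0, 1, 0, 1],
    [1, 0, 1, 1, 0, 1, 1, 1],
    [1, 0, 1, 1, 0, 1, 0, 1],
]
print(round(cronbach_alpha(runs), 2))  # near-identical runs give alpha close to 1
```

With this near-identical pattern α comes out around 0.97, comfortably above the 0.80 threshold the study uses for high consistency.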
  16. Cureus. 2025 Jul;17(7): e88824
       OBJECTIVES:  Educating pediatric patients and their caregivers about the disease is crucial for improving treatment adherence, recognizing complications early, and alleviating anxiety. AI tools such as ChatGPT and Google Gemini offer personalized education, benefiting patients and providers, and are increasingly utilized in healthcare. This study aims to compare patient education guides created by ChatGPT and Google Gemini for acute otitis media, pneumonia, and pharyngitis.
    METHODS: Patient information guides on pediatric diseases generated by ChatGPT and Google Gemini were evaluated by comparing various variables (words, sentences, average words per sentence, average syllables per word, grade level, and ease score) and further assessed for ease using the Flesch-Kincaid calculator, similarity using Quillbot, and reliability using the Modified Discern score. Statistical analysis was done using R v4.3.2.
    RESULTS: Both tools' responses were statistically compared. No significant difference was found in word count (ChatGPT: 477.3; Google Gemini: 394.0; p=0.0765) or sentence count (ChatGPT: 35.33; Google Gemini: 46.33; p=0.184). Google Gemini scored higher on reading ease (ChatGPT: 37.79; Google Gemini: 57.10) and lower on grade level (ChatGPT: 11.40; Google Gemini: 7.43), but these differences were not statistically significant (p>0.05), indicating no clear superiority.
    CONCLUSIONS FOR PRACTICE: In a comparison of patient education guides created by both tools for acute otitis media, pneumonia, and pharyngitis, there was no statistically significant difference to determine the superiority of one AI tool over the other. Further studies should comprehensively evaluate various AI tools across a broader range of diseases. It is also important to assess whether AI tools can provide real-time, verifiable content based on the latest medical advancements.
    Keywords:  artificial intelligence; chatgpt; educational tool; google gemini; patient education brochure; pediatric diseases
    DOI:  https://doi.org/10.7759/cureus.88824
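Several entries in this issue, including the one above, report Flesch Reading Ease and Flesch-Kincaid Grade Level scores. Both follow from fixed formulas over word, sentence, and syllable counts; a minimal sketch with hypothetical counts (real use requires tokenization plus a syllable counter):

```python
# Flesch Reading Ease (FRES) and Flesch-Kincaid Grade Level (FKGL) from raw
# counts. The formulas are the standard published ones; the counts below are
# hypothetical, not taken from the study.

def flesch_reading_ease(words, sentences, syllables):
    # Higher scores mean easier text (90+ is very easy, under 30 is very difficult).
    return 206.835 - 1.015 * (words / sentences) - 84.6 * (syllables / words)

def fk_grade_level(words, sentences, syllables):
    # Approximate US school grade needed to understand the text.
    return 0.39 * (words / sentences) + 11.8 * (syllables / words) - 15.59

# Example: a 400-word patient guide written in 30 sentences with 600 syllables.
fres = flesch_reading_ease(400, 30, 600)  # about 66.4: "standard" difficulty
fkgl = fk_grade_level(400, 30, 600)       # about grade 7.3
```

Because both formulas share the same two ratios (words per sentence, syllables per word), a higher FRES generally goes with a lower FKGL, as in the Gemini scores reported above.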
  17. J Prosthodont. 2025 Aug 22.
       PURPOSE: This study aims to evaluate the readability and accuracy of content produced by ChatGPT, Copilot, Gemini, and the American College of Prosthodontists (ACP) for patient education in prosthodontics.
    MATERIALS AND METHODS: A series of 26 questions were selected from the ACP's list of questions (GoToAPro.org FAQs) and their published answers. Answers to the same questions were generated from ChatGPT-3.5, Copilot, and Gemini. The word counts of responses from chatbots and the ACP were recorded. The readability was calculated using the Flesch Reading Ease Scale and Flesch-Kincaid Grade Level. The responses were also evaluated for accuracy, completeness, and overall quality. Descriptive statistics were used to calculate mean and standard deviations (SD). One-way analysis of variance was performed, followed by the Tukey multiple comparisons to test differences across chatbots, ACP, and various selected topics. The Pearson correlation coefficient was used to examine the relationship between each variable. Significance was set at α < 0.05.
    RESULTS: ChatGPT had a higher word count, while ACP had a lower word count (p < 0.001). The cumulative scores of the prosthodontist topic had the lowest Flesch Reading Ease Scale score, while brushing and flossing topics displayed the highest score (p < 0.001). Brushing and flossing topics also had the lowest Flesch-Kincaid Grade Level score, whereas the prosthodontist topic had the highest score (p < 0.001). Accuracy for denture topics was the lowest across the chatbots and ACP, and it was the highest for brushing and flossing topics (p = 0.006).
    CONCLUSIONS: This study highlights the potential for large language models to enhance patient's prosthodontic education. However, the variability in readability and accuracy across platforms underscores the need for dental professionals to critically evaluate the content generated by these tools before recommending them to patients.
    Keywords:  accuracy; dentures; implants; prosthodontics; readability
    DOI:  https://doi.org/10.1111/jopr.70022
  18. J Occup Environ Hyg. 2025 Aug 25. 1-14
      Noise-induced hearing loss and tinnitus are two of the most prevalent service-connected disabilities of United States military veterans. Educational materials meant for hearing conservation program-eligible Airmen were evaluated from active-duty, continental United States (CONUS) Air Force bases for compliance with US Air Force (USAF), Department of Defense (DoD), and Occupational Safety and Health Administration (OSHA) regulations. Understandability and actionability were assessed using the Patient Education Materials Assessment Tool for Audiovisual Materials (PEMAT-A/V), while readability was assessed with Flesch-Kincaid Grade Level (FKGL). Educational materials were received from 44 of 61 (72%) active-duty, CONUS bases, with 27 bases sending one item and 17 bases sending multiple items, for a total of 67 educational materials, which were evaluated by three evaluators. Educational materials were categorized into one of four types: (A) supervisor's guide to hearing conservation (n = 21); (B) new worker hearing conservation training (n = 20); (C) two-page hearing conservation program training pamphlet (n = 14); and (D) other (n = 12). Overall mean compliance was 84% (CI: 63-100) with USAF, 83% (CI: 62-100) with DoD, and 88% (CI: 67-100) with OSHA regulations. Overall mean understandability was 75% (CI: 63-87) and actionability was 89% (CI: 67-100). There was good agreement between the three evaluators for each of the criteria (87-90%). Overall mean readability was grade level 10.68 ± 1.68 on the FKGL scale. Of the educational materials, 65 of 67 (97%) were above the recommended 6th-grade reading level, and 62 of 67 (93%) were above the average American 8th-grade reading level. This study established compliance, understandability, actionability, and readability scores for educational materials that military service members receive upon entry into the hearing conservation program, gathered from active-duty, CONUS Air Force bases. Using the determined scores, recommendations, such as providing active feedback and condensing information, were given to improve future hearing conservation educational materials.
    Keywords:  Flesch-Kincaid Grade Level (FKGL); Patient Education Materials Assessment Tool (PEMAT); military; noise-induced hearing loss; regulation; veterans
    DOI:  https://doi.org/10.1080/15459624.2025.2529983
  19. Ann Allergy Asthma Immunol. 2025 Aug 23. pii: S1081-1206(25)00420-X. [Epub ahead of print]
      
    Keywords:  Accessibility; FPIES; Food Protein-Induced Enterocolitis Syndrome; Readability
    DOI:  https://doi.org/10.1016/j.anai.2025.08.010
  20. JMIR Dermatol. 2025 Aug 22. 8 e72773
       Unlabelled: This research letter evaluates the quality and readability of hidradenitis suppurativa (HS) websites found on Google and Bing with the DISCERN instrument and Flesch-Kincaid Readability metrics. Comprehensive and reliable articles can lead to increased knowledge about HS and further enhance physician-patient relationships and shared decision-making. This study's aim was to identify reliable resources to help bridge knowledge gaps and support informed discussions on management and treatment options.
    Keywords:  hidradenitis suppurativa; online resources; patient education
    DOI:  https://doi.org/10.2196/72773
  21. Phlebology. 2025 Aug 25. 2683555251372218
      Objective: The incidence of lipedema is poorly described due to its confusion with lymphedema. Patient education is crucial for treatment and prevention strategies but also for improving healthcare outcomes. This study assessed and compared the quality of English and Spanish online resources for patients suffering from lipedema using a multimetric approach.
    Methods: A deidentified Google search using the terms "lipedema" and "lipedema español" was conducted. The first 10 academic/organizational websites in each language were selected. Quality assessment was performed using the Patient Education and Materials Assessment Tool (PEMAT), Cultural Sensitivity Assessment Tool (CSAT), Simple Measure of Gobbledygook (SMOG), and facticity criteria to evaluate understandability and actionability, cultural sensitivity, readability, and factual quality, respectively.
    Results: English webpages scored 73.70% for understandability and 35.0% for actionability, while Spanish webpages scored 75.05% and 21.0%, respectively; no significant differences were found between languages in understandability (p = .970) or actionability (p = .895). A significantly higher proportion of Spanish resources than English resources was found to be culturally sensitive (90% vs 70%; p < .001). However, no significant difference was found in the cultural sensitivity score (English 2.87 vs Spanish 3.01; p = .677). The grade reading level for Spanish materials was significantly lower than for English materials (11.08 vs 13.45; p = .006). Factual quality was low across both languages according to the facticity framework, though English materials scored higher than Spanish (2.20 vs 1.00; p = .051).
    Conclusion: Our results suggest that online English and Spanish materials on lipedema have inadequate actionability, facticity, and reading grade levels for patients. Nonetheless, the levels of understandability and cultural sensitivity are acceptable. Enhancing the quality of online health literature for lipedema patients presents an opportunity to alleviate psychosocial burdens and address misconceptions.
    Keywords:  cultural sensitivity; health literacy; lipedema; readability; understandability
    DOI:  https://doi.org/10.1177/02683555251372218
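The entry above reports grade reading levels from the Simple Measure of Gobbledygook (SMOG). The published SMOG formula takes only two counts: the number of polysyllabic words (three or more syllables) and the number of sentences. A minimal sketch with hypothetical counts:

```python
# SMOG grade from counts of polysyllabic words and sentences.
# Standard published formula; the counts below are hypothetical. The formula
# assumes a sample of at least 30 sentences.
import math

def smog_grade(polysyllables, sentences):
    return 1.0430 * math.sqrt(polysyllables * (30 / sentences)) + 3.1291

# Hypothetical sample: 90 polysyllabic words across 30 sentences.
grade = smog_grade(90, 30)  # about grade 13, i.e., college-level text
```

A grade near 13, as in this hypothetical sample and in the English lipedema materials above, is well beyond the 6th-to-8th-grade level usually recommended for patient materials.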
  22. Otolaryngol Head Neck Surg. 2025 Aug 26.
       OBJECTIVE: This study evaluates and compares the readability of pediatric otolaryngology patient education materials generated by ChatGPT4o and those retrieved from Google searches. The goal is to determine whether artificial intelligence (AI)-generated content improves accessibility compared to institutionally affiliated online resources.
    STUDY DESIGN: Cross-sectional readability analysis.
    SETTING: Online educational materials focused on pediatric otolaryngology topics.
    METHODS: Educational articles covering 10 pediatric otolaryngology conditions were sourced either via Google search or generated using ChatGPT4o. All texts were standardized by removing extraneous formatting. Readability was assessed using six validated metrics: Flesch-Kincaid Grade Level (FKGL), Flesch Reading Ease Score (FRES), Gunning-Fog Index, Simple Measure of Gobbledygook (SMOG), Coleman-Liau Index, and Automated Readability Index (ARI). Statistical comparisons were performed using paired t tests or Wilcoxon signed-rank tests to evaluate differences in scores between sources.
    RESULTS: ChatGPT4o-generated content demonstrated significantly higher FKGL, Gunning-Fog, ARI, and SMOG scores and lower FRES scores compared to Google-sourced materials, indicating greater complexity (P < .05). These differences were most pronounced for simpler conditions such as allergic rhinitis and otitis externa. For more complex topics like laryngomalacia and cleft lip and palate, readability scores were not significantly different between the two sources (P > .05).
    CONCLUSION: ChatGPT4o-generated patient education materials are generally more difficult to read than Google-sourced content, especially for less complex conditions. Given the importance of readability in patient education, AI-generated materials may require further refinement to improve accessibility without compromising accuracy. Enhancing clarity could increase the utility of AI tools for educating parents and caregivers in pediatric otolaryngology.
    Keywords:  AI‐generated content; GPT‐4o; health literacy; patient education; pediatric otolaryngology; readability
    DOI:  https://doi.org/10.1002/ohn.70011
  23. Int Dent J. 2025 Aug 21. pii: S0020-6539(25)00241-2. [Epub ahead of print]75(5): 100955
       INTRODUCTION AND AIMS: Nowadays, patients seek medical information online. Patient-oriented content must be easy to read and trustworthy. This study aimed to assess the quality and readability of online information about apical surgery and apicoectomy in English and Spanish.
    METHODS: The authors performed the following systematic searches on Google in February 2023: "apicoectomy", "apicectomía", "apical surgery", and "cirugía apical". The first 100 websites of each query were selected. English readability was assessed using Flesch-Kincaid Reading Grade Level, Flesch Reading Ease Score, Gunning Fog Index, Coleman-Liau Index, Automated Readability Index and Simple Measure of Gobbledygook Index. The Fernández-Huerta Index and INFLESZ were used to assess Spanish readability. Quality was measured using the DISCERN tool.
    RESULTS: A total of 165 sites were included. Readability for English-language sites ("apical surgery" and "apicoectomy") was categorised as "fairly difficult to read" [FRES (apical surgery) = 54.7; FRES (apicoectomy) = 54.3]. Similarly, Spanish sites ("cirugía apical" and "apicectomía") were classified as "relatively difficult to read". Overall, the DISCERN tool showed a low average quality of information for all terms analysed.
    CONCLUSION: English and Spanish online information about apical surgery is difficult for the average patient to understand and presents significant quality deficiencies.
    CLINICAL RELEVANCE: The Internet is a powerful tool for communicating with patients, but available apical surgery information is difficult for laypersons to understand and has a low overall quality. To overcome this issue, endodontists should produce high-quality, patient-relevant materials in plain language.
    Keywords:  Apical surgery; Apicoectomy; DISCERN; Health literacy; Internet; Readability
    DOI:  https://doi.org/10.1016/j.identj.2025.100955
  24. BMC Med Educ. 2025 Aug 21. 25(1): 1181
      
    Keywords:  Education; Endoscopic myringoplasty; Endoscopic tympanoplasty; IVORY; Internet; Surgery; YouTube
    DOI:  https://doi.org/10.1186/s12909-025-07775-7
  25. Cureus. 2025 Aug;17(8): e90522
       BACKGROUND AND OBJECTIVE:  Social media plays a significant role in patient education as many US Internet users obtain health information online. YouTube is a popular search engine among people looking for dermatologic advice. Our study assesses the content on homeopathic remedies for non-melanoma skin cancers (NMSCs) available on YouTube.
    METHODS:  We searched YouTube in a private browsing tab for "natural skin cancer remedies," "alternative skin cancer treatment," and "holistic skin cancer treatment" in separate searches. The top 40 videos meeting inclusion criteria in each search were analyzed. For data extraction, the video characteristics, engagement metrics and content themes were recorded. Duplicates were removed.
    RESULTS:  61 videos were analyzed in total. 22 (36.1%) of the videos were created by homeopathic YouTube channels, 20 (32.8%) by laypeople and/or influencers, 11 (18%) by physicians and pharmacists, 4 (6.7%) by news channels, and 4 (6.7%) by other personnel. The 10 most frequently mentioned remedies included green tea (n=12, 19.7%), turmeric/curcumin (n=12, 19.7%), coconut oil (n=9, 14.8%), black salve (n=7, 11.5%), apple cider vinegar (n=7, 11.5%), baking soda (n=7, 11.5%), garlic (n=7, 11.5%), frankincense oil (n=6, 9.8%), eggplant (n=6, 9.8%), and milk thistle (n=6, 9.8%).
    DISCUSSION:  Although most videos were created by homeopathic channels, they had the lowest engagement. Videos created by healthcare professionals achieved significantly higher engagement. Thus, even when seeking natural remedies on social media, viewers prefer content created by professionals. There is some existing literature on the role the recommended remedies play in preventing and/or treating skin cancers.
    CONCLUSION:  Dermatologists should be aware of the various at-home therapies patients may try for their skin cancer. They should consider creating their own reliable and accurate social media content to educate the public about the risks of these dangerous trends and emphasize the importance of seeking evaluation for suspicious skin lesions.
    Keywords:  alternative medicine; basal cell carcinoma; homeopathic remedies; skin cancer; social media; squamous cell carcinoma
    DOI:  https://doi.org/10.7759/cureus.90522
  26. BMJ Open. 2025 Aug 21. 15(8): e102818
       OBJECTIVES: The prevalence of myopia has been rising, whereas prevention efforts have shown limited success. Educational short videos have become crucial sources for health information; however, their quality regarding myopia prevention is uncertain. This study aimed to evaluate the quality and content of short videos on myopia prevention disseminated via major Chinese short video platforms and compare content differences between healthcare professionals and non-professional creators.
    DESIGN: A cross-sectional content analysis.
    SETTING: Top-ranked videos from three dominant Chinese platforms (TikTok, Kwai and BiliBili), retrieved 6-10 August 2024.
    PARTICIPANTS: 284 eligible videos screened from 300 initial results using predefined exclusion criteria, including 97 videos from TikTok, 94 from BiliBili and 93 from Kwai.
    METHODS: Videos were assessed using the Global Quality Scale and a modified DISCERN tool. Content completeness was evaluated across six predefined domains. Videos were categorised by source (healthcare professionals vs non-healthcare professionals), and intergroup differences were statistically analysed.
    RESULTS: Of the 284 videos, 48.9% were uploaded by healthcare professionals and 51.1% by non-healthcare professionals. Overall video quality was suboptimal. Videos by ophthalmologists had significantly higher quality scores than those by other creators. Healthcare professionals focused more on definitions, symptoms and risk factors of myopia, whereas non-healthcare professionals emphasised prevention and treatment outcomes. Ophthalmologists more frequently recommended corrective lenses (including both standard spectacles and specially designed lenses for myopia control) and low-dose atropine, whereas non-healthcare professionals favoured vision training.
    CONCLUSIONS: Significant quality gaps exist in myopia prevention videos. Healthcare professionals, particularly ophthalmologists, produce higher-quality and more comprehensive content. Strategic engagement by healthcare professionals in digital health communication and platform-level quality control is needed to improve public health literacy on myopia.
    Keywords:  Myopia; OPHTHALMOLOGY; PUBLIC HEALTH
    DOI:  https://doi.org/10.1136/bmjopen-2025-102818
  27. Foot Ankle Spec. 2025 Aug 20. 19386400251359402
      Objectives: This study aimed to examine the content of the most-viewed pes planus exercise videos on YouTube® and evaluate their quality and reliability.
    Methods: YouTube was searched with the keywords "Pes planus exercises," "Pes planus rehabilitation," "Pes planus physiotherapy," "Flat foot exercises," "Flat foot rehabilitation," and "Flat foot physiotherapy." A total of 360 videos were independently reviewed by 2 evaluators. The URL, length, publication date, number of views/likes, number of comments, number of subscribers of the video source, video type, and exercise type of the videos were recorded. Video popularity was assessed with the view rate; the quality and information content of the videos were evaluated with the Global Quality Scale (GQS) and modified DISCERN scale; and reliability was evaluated with the Journal of the American Medical Association (JAMA) comparison score.
    Results: Of the 49 videos that met inclusion criteria, 42.85% were of high quality according to the GQS. Video length, number of comments, modified DISCERN, and JAMA scores were significantly higher in the high-quality group (P < .05). Other video features did not differ (P > .05). The number of likes, comments, views, and subscribers of the videos, and video popularity, were positively correlated with each other at a moderate to high level (P < .001). High quality and reliability were significantly correlated only with longer video length and a higher number of comments (P < .05).
    Conclusion: The overall quality of pes planus exercise videos on YouTube is low; however, longer videos with active viewer engagement tend to be of higher quality. This highlights the need for clinicians to direct patients to reliable digital resources and for content creators to follow standards.
    Level of Evidence: Level V: Systematic review of non-peer-reviewed resources.
    Keywords:  YouTube; exercise; flatfoot; internet; pes planus; physiotherapy; quality; rehabilitation; social media; video analysis
    DOI:  https://doi.org/10.1177/19386400251359402
  28. J Med Libr Assoc. 2025 Jul 01. 113(3): 224-232
       Objective: The purpose of this study is to understand the process of physicians' evidence-based clinical decision-making for new drug prescriptions.
    Methods: Eleven semi-structured interviews were conducted, and thematic coding was used for data analysis.
    Results: Several findings emerged. First, point-of-care information seeking focuses more on accessible and easy-to-use sources, such as medical websites, while out-of-practice searches encompass broader sources such as printed sources and extended networks. Medical websites are becoming preferred sources of information. Second, critical appraisal of information is performed passively by using pre-appraised information sources and referring to professional networks. Third, professional networks (i.e., specialists and senior colleagues) remain essential throughout the process and are pivotal for the decision to change prescription practices.
    Conclusions: Medical information systems that facilitate immediate access to summarized reliable evidence and feature real-time connectivity to the communities of practice can be an effective strategy for improving physicians' evidence-based practice for new drug prescriptions.
    Keywords:  Clinical decision-making; Evidence Based Practice; Information-Seeking
    DOI:  https://doi.org/10.5195/jmla.2025.2082
  29. Children (Basel). 2025 Aug 10. pii: 1049. [Epub ahead of print]12(8):
      The internet is now the primary mode of information exchange worldwide. Online health information-seeking behavior (e-HISB) has become a prevalent practice, especially among parents concerned with their children's health, creating both opportunities and risks. Objective: The present study aims to translate and culturally adapt the CHIRPI questionnaire into Greek and conduct a comprehensive psychometric validation, including analyses of internal consistency, test-retest reliability (temporal stability), and inter-rater reliability. The adapted tool is further pilot-tested for its utility in measuring parental internet use concerning child health information. Methods: The translation, validation, and pilot study of the questionnaire were conducted in accordance with internationally recommended procedures. CHIRPI was translated into Greek using forward-backward translation and was culturally adapted. A pilot sample of 105 parents (children aged 0-10) participated. The majority of participants were mothers (66.7%), aged 31-40 years, residing in urban areas, and they held tertiary or postgraduate degrees. Internal consistency was measured with Cronbach's alpha, test-retest reliability with the ICC, and inter-rater reliability with the kappa coefficient. Item responses were also analyzed in relation to demographic factors. Results: The CHIRPI Greek version demonstrated excellent internal consistency (Cronbach's α = 0.91; all subscales had α values greater than 0.70). Test-retest reliability (ICC = 0.632-1.000) and inter-rater reliability (kappa = 0.615-1.000) indicated moderate to excellent agreement. The scale showed satisfactory psychometric properties, supporting its use in Greek populations. Higher education was linked to more frequent health-related internet searches and increased distress (p < 0.001). 
Conclusions: The CHIRPI Greek version is a valid and reliable tool for assessing parental online health information-seeking behavior related to children's health among Greek-speaking populations. As the first standardized tool in Greek, it fills a critical methodological gap in eHealth research.
    Keywords:  CHIRPI; Greek translation; online health information; parental behavior; pediatric information-seeking; psychometric validation
    DOI:  https://doi.org/10.3390/children12081049
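The CHIRPI validation above reports inter-rater reliability with the kappa coefficient. A minimal sketch of Cohen's kappa for two raters; the ratings below are hypothetical, not CHIRPI data.

```python
# Cohen's kappa: agreement between two raters, corrected for chance.
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    n = len(rater_a)
    # Observed proportion of agreement.
    p_obs = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Chance agreement from each rater's marginal category frequencies.
    ca, cb = Counter(rater_a), Counter(rater_b)
    p_exp = sum(ca[c] * cb[c] for c in set(rater_a) | set(rater_b)) / (n * n)
    return (p_obs - p_exp) / (1 - p_exp)

a = [1, 2, 2, 3, 1, 2, 3, 3, 1, 2]  # rater A's scores on 10 items
b = [1, 2, 2, 3, 1, 2, 3, 2, 1, 2]  # rater B disagrees on one item
kappa = cohens_kappa(a, b)  # about 0.85, in the "excellent agreement" range
```

The chance-correction term is why kappa (here about 0.85) sits below the raw 90% agreement: some of that agreement would occur even if the raters scored items at random from their own marginal distributions.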
  30. J Med Internet Res. 2025 Aug 27. 27 e69606
       Background: Access to high-quality internet plays an increasingly important role in supporting care delivery and health information access. Although internet access has the potential to alleviate some inequities in health care, the digital divide negatively impacts cancer across the continuum. While subscription to high-speed internet has been previously assessed, satisfaction with home internet to meet the health needs of users is a lesser-known, important indicator of satisfactory access to internet-based health information and digital health technology use.
    Objective: This study aimed to assess differences in perceptions of quality of at-home internet connection and its association to cancer health information-seeking experiences and use of digital health technologies in a nationally representative sample of US adults.
    Methods: Secondary analysis of data from the National Cancer Institute's Health Information National Trends Survey (HINTS) 2022 (n=6252) was conducted. The primary predictor, "how satisfied are you with your Internet connection at home to meet health-related needs?," a novel item on HINTS 6, was dichotomized into "high" (extremely satisfied or very satisfied) and "low" (somewhat satisfied, not very satisfied, or not at all satisfied) satisfaction. Outcomes variables included 3 items assessing cancer information-seeking experiences and 2 items measuring access to telehealth and patient portals over the past 12 months. Adjusted logistic regression models (P<.05) were performed, including age, race and ethnicity, education, income, health insurance access, geography, and difficulty understanding cancer information, a proxy for health literacy, as covariates.
    Results: Those reporting low satisfaction with their home internet had higher odds of agreeing that searching for cancer information took a lot of effort (odds ratio [OR] 1.59, 95% CI 1.16-2.19) and that they felt frustrated searching for cancer information (OR 1.46, 95% CI 1.07-1.98). Respondents with lower satisfaction with their home internet had lower odds of accessing their patient portal at least once in the past year (OR 0.54, 95% CI 0.33-0.89). While the relationship between internet satisfaction and concern over information quality was not significant, respondents aged 18-34 years reported higher odds to be concerned compared with those aged 75 years and older (OR 1.74, 95% CI 1.04-2.90), and those with lower education reported less concern over the quality of information compared with those with postbaccalaureate degrees (high school graduate: OR 0.56, 95% CI 0.31-0.99; college graduate: OR 0.67, 95% CI 0.48-0.95). Finally, while the association between satisfaction with internet and telehealth use over the past 12 months was not significant, those without health insurance were significantly less likely to have had a telehealth appointment in the last year (OR 0.39, 95% CI 0.19-0.81).
    Conclusions: Satisfaction with internet at home to meet health needs is correlated with cancer information-seeking experiences and usage of some available health technology. These findings underscore the value of high-quality internet services toward successful implementation of health care technology and better patient experiences in health information seeking.
    Keywords:  EHR; cancer; digital health; electronic health record; health information-seeking; telehealth
    DOI:  https://doi.org/10.2196/69606
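The HINTS analysis above reports adjusted odds ratios with 95% confidence intervals from logistic regression models. As a simpler, unadjusted illustration of where such numbers come from, the sketch below computes an odds ratio and Wald interval from a 2x2 table; the counts are hypothetical and the study's actual models adjusted for covariates.

```python
# Unadjusted odds ratio with a Wald 95% CI from a 2x2 table.
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """a = exposed with outcome, b = exposed without,
    c = unexposed with outcome, d = unexposed without."""
    or_ = (a * d) / (b * c)
    se_log = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of log(OR)
    lo = math.exp(math.log(or_) - z * se_log)
    hi = math.exp(math.log(or_) + z * se_log)
    return or_, lo, hi

# Hypothetical: 120/300 low-satisfaction vs 400/1300 high-satisfaction
# respondents reported frustration searching for cancer information.
or_, lo, hi = odds_ratio_ci(120, 180, 400, 900)  # OR 1.50, CI roughly 1.16-1.94
```

An interval that excludes 1.0, as here, corresponds to a statistically significant association at the 5% level, which is how the ORs in the abstract should be read.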
  31. Int J Sex Health. 2025 ;37(3): 325-337
       Objectives: Sexual expression is important to many older adults, but this population may be overlooked by sexual health campaigns and services. This study sought to understand the sexual health information-seeking behaviors and preferences of older adults, including whether and where they seek this information, the characteristics associated with seeking it, as well as satisfaction, preferences, and reasons for not seeking it.
    Methods: The data were gathered in 2021 via a cross-sectional online survey of Australians aged 60 and over. There were seven quantitative outcomes and one set of free-text comments. Quantitative outcomes were analyzed using descriptive statistics, the chi-squared test, and logistic regression. The free-text comments were analyzed using qualitative content analysis.
    Results: The survey sample comprised 1,470 respondents with an equal balance of men (49.9%) and women (49.7%) and a median age of 69 years (range 60-92 years). Findings showed that 41.2% (602/1,461; 95% CI 38.7-43.7) had sought information, and 63.6% were satisfied with the information found. Being male, STI testing, online dating, age 70-79, and urban living were associated with information-seeking. Healthcare providers were the most utilized and trusted information source, and many respondents were willing to look online. One in five did not seek information when they needed it, outlining various barriers preventing them from doing so.
    Conclusions: Many older adults seek sexual health information, and with some experiencing access barriers and one-third unsatisfied, there is room for improvement. Relevant, accessible information should be provided by healthcare professionals and credible websites.
    Keywords:  Older adults; health promotion; primary care; sexual health
    DOI:  https://doi.org/10.1080/19317611.2025.2527050
  32. J Med Libr Assoc. 2025 Jul 01. 113(3): 252-258
       Background: Many researchers benefit from training and assistance with their data management practices. The release of the Office of Science and Technology Policy's Nelson Memo and the National Institutes of Health's new Data Management and Sharing Policy created opportunities for librarians to engage with researchers regarding their data workflows. Within this environment, we-an interdisciplinary team of librarians and informationists at the University of Michigan (U-M)-recognized an opportunity to develop a series of data workshops that we then taught during the summer of 2023.
    Case Presentation: The series was primarily aimed at graduate students and early career researchers, with a focus on the disciplines served by the authors in the Health Sciences - Science, Technology, Engineering, and Mathematics (HS-STEM) unit of the U-M Library. We identified three topics to focus on: data management plans, organizing and managing data, and sharing data. Workshops on these topics were offered in June, July, and August 2023.
    Conclusion: The number of registrants and attendees exceeded our expectations with 497 registrations across the three workshops (174/169/154, respectively), and 178 attendees (79/49/50, respectively). Registrants included faculty, staff, students, and more, and were primarily from the health sciences clinical and academic units. We received a total of 45 evaluations from the three workshops which were very positive. The slides and evaluation forms from each workshop are available through U-M's institutional repository. We developed these workshops at an opportune time on campus and successfully reached many researchers.
    Keywords:  Data Management; Data education; Data management and sharing plans; Workshops; data services; data sharing; library workshops
    DOI:  https://doi.org/10.5195/jmla.2025.2070
  33. J Med Libr Assoc. 2025 Jul 01. 113(3): 193-194
      In our editorial in the January/April 2023 issue of the Journal of the Medical Library Association (JMLA), we spoke of the challenges we faced when we took on the co-lead editor roles. At the end of that editorial, we stated our intention to get the publishing schedule back on track and to finally tackle other projects. And while it took us some time to report it, we are pleased to share that, in the publication year of 2024, JMLA resumed its regular quarterly publishing schedule.
    DOI:  https://doi.org/10.5195/jmla.2025.2289