bims-librar Biomed News
on Biomedical librarianship
Issue of 2025-12-28
thirty-one papers selected by
Thomas Krichel, Open Library Society



  1. Health Info Libr J. 2025 Jun;42(2): 203-208
      Artificial intelligence (AI) provides new challenges for knowledge and library professionals (KLPs) working in healthcare. In NHS England, KLPs created the AI Literacy Group, part of the NHS England Workforce, Transformation and Education (NHSE W,T&E) Current & Emerging Technology Community of Practice (CET CoP). The aim of the group was to develop their understanding of the changing landscape and understand how AI could be used in healthcare organisations and knowledge and library services (KLS). The group developed a series of presentations that KLPs could use to develop their own understanding of AI and which could be adapted for use in their local organisations. Case studies from KLPs working in two NHS organisations discuss the use of the presentations to train clinical and non-clinical staff, and consider their impact and next steps. Through collaboration, the group was able to learn together, develop a shared understanding of AI and create training resources to benefit the KLP community. The group has evolved to consider new areas of learning relating to the adoption of AI and to review the content of the presentations to ensure that they are up to date and relevant in this ever-changing landscape.
    Keywords:  Artificial Intelligence (AI); collaboration; digital information resources; information skills
    DOI:  https://doi.org/10.1111/hir.70006
  2. Health Commun. 2025 Dec 23. 1-13
      As generative artificial intelligence (AI) systems such as ChatGPT become increasingly popular sources of health information, understanding what shapes users' trust in AI-generated health information (AI-HI) is essential. Despite growing use, little is known about how human- and source-related factors jointly influence trust across national contexts. Drawing on frameworks from AI trust and online health information research, this study used multi-group structural equation modeling with representative samples from Austria (N = 502), Denmark (N = 507), France (N = 498), and Serbia (N = 483) to predict trust in AI-HI and its effect on intention to use AI-HI. AI literacy and performance expectancy consistently increased trust across countries, while social norms and prior AI-HI experience showed smaller, context-dependent effects. Health literacy, personal innovativeness, effort expectancy, and surveillance risk perceptions were not significant. Informational risk perceptions had only a weak negative effect on trust, indicating that while concerns about inaccuracy can reduce confidence, they play a relatively minor role in shaping it. Trust strongly predicted intention to use AI-HI in all countries, with path-level effects largely stable across contexts. These findings suggest that trust in AI-HI is shaped more by digital capabilities, perceived utility, and social endorsement than by privacy concerns or health literacy. Future research should examine how digital literacy interventions and transparency standards can foster informed trust in these systems.
    DOI:  https://doi.org/10.1080/10410236.2025.2601265
  3. Antioxid Redox Signal. 2025 Dec 11.
      The exponential growth of biomedical literature has rendered traditional search methods inadequate. Artificial intelligence (AI) tools are emerging as transformative solutions for literature search and knowledge mining. This first article of a series, intended to address different components of biomedical research, provides a comprehensive analysis of recent advancements, practical applications, and challenges in deploying AI for biomedical research. The objective of this work is to synthesize the evolution, capabilities, and limitations of AI-driven tools for literature discovery, summarization, and evidence synthesis, offering actionable insights for researchers across disciplines. AI tools have progressed from keyword-based retrieval to semantic and multimodal approaches. Platforms such as Elicit, BioGPT, and PubTator 3.0 enable rapid extraction of gene-disease associations and evidence-based insights, while ResearchRabbit and Connected Papers visualize citation networks. Systematic review tools like Rayyan and Covidence reduce screening time by up to 50%. Variability in output quality, risk of hallucination, and lack of algorithmic transparency pose challenges. Open-source solutions (e.g., BioGPT, DeepChem) and explainability-focused tools (e.g., Scite.ai) offer promising pathways to mitigate these concerns. AI-driven literature workflows can accelerate hypothesis generation, systematic reviews, and translational research. However, close human expert oversight remains indispensable to ensure rigor and interpretive accuracy. These technologies are not a passing trend; they are forging the contours of tomorrow's research landscape. The peril lies as much in reckless adoption as in willful oblivion. This editorial serves as a general roadmap for integrating trustworthy AI tools into biomedical research, fostering high-impact innovation.
    DOI:  https://doi.org/10.1177/15230864251405885
  4. J Intell. 2025 Dec 02;13(12):157. [Epub ahead of print]
      This study reviews 33 meta-analyses and systematic reviews on Computational Thinking (CT), focusing on research quality, intervention effectiveness, and content. Quality assessment of included studies was conducted using the AMSTAR 2 tool. The meta-analyses achieved an average score of 10.9 (out of a possible 16 points), while the systematic reviews averaged 6.1 (out of a possible 11 points). The 15 meta-analyses showed diverse intervention strategies. Project-based learning, text-based programming, and game-based learning demonstrated more pronounced effects in terms of effect size and practical outcomes. Curricular integration, robotics programming, and unplugged strategies offered additional value in certain contexts. Gender and disciplinary background were stable moderators, while grade level and educational stage had more conditional effects. Intervention duration, sample size, instructional tools, and assessment methods were also significant moderators in several studies. The 18 systematic reviews used a five-layer framework based on ecological systems theory, covering educational context (microsystem), tools and strategies (mesosystem), social support (exosystem), macro-level characteristics (macrosystem), and CT development (chronosystem). Future research should focus on standardizing meta-analyses, unifying effect size indicators, and strengthening longitudinal studies with cognitive network analysis. Additionally, systematic reviews should improve evidence credibility by integrating textual synthesis and data-driven reasoning to reduce redundancy and homogeneity.
    Keywords:  computational thinking; literature review; meta-analysis; pedagogical issues; systematic review; umbrella review
    DOI:  https://doi.org/10.3390/jintelligence13120157
  5. Arthritis Care Res (Hoboken). 2025 Dec 26.
      The development of Common Data Elements (CDEs) is a foundational component of supporting research in all diseases, and the National Institutes of Health (NIH) Common Data Elements repository, hosted by the National Library of Medicine (NLM), provides an online resource for investigators to identify CDEs for their research. This manuscript outlines the collaborative efforts of the Office of Autoimmune Disease Research, the Office of Data Science Strategy and the National Library of Medicine to support the development of CDEs for autoimmune disease research.
    DOI:  https://doi.org/10.1002/acr.70026
  6. Curr Oncol. 2025 Nov 28;32(12):668. [Epub ahead of print]
     BACKGROUND: Recently, patients have been using large language models (LLMs) such as ChatGPT, Gemini, and Claude to address their concerns. However, it remains unclear whether the readability, understandability, actionability, and empathy of their responses meet standard guidelines. In this study, we aim to address these concerns and compare the outcomes of the LLMs to those of professional resources.
    METHODS: We conducted a comparative cross-sectional study by following the relevant items of the Strengthening the Reporting of Observational Studies in Epidemiology (STROBE) checklist for cross-sectional studies and using 14 patient-style questions. These questions were collected from professional platforms to represent each domain; we derived the 14 domains from validated quality-of-life instruments (EORTC QLQ-H&N35, UW-QOL, and FACT-H&N). Responses to all 14 questions were obtained from three LLMs (ChatGPT-4o, Gemini 2.5 Pro, and Claude Sonnet 4) and two professional sources (Macmillan Cancer Support and CURE Today). All responses were evaluated using the Patient Education Materials Assessment Tool (PEMAT), the DISCERN instrument, and the Empathic Communication Coding System (ECCS). Readability was assessed using the Flesch Reading Ease and Flesch-Kincaid Grade Level metrics. Statistical analysis included one-way ANOVA and Tukey's HSD test for group comparisons.
    RESULTS: No differences were found in quality (DISCERN), understandability, actionability (PEMAT), or empathy (ECCS) between the LLMs and professional resources. However, professional resources outperformed the LLMs in readability.
    CONCLUSIONS: In our study, we found that LLMs (ChatGPT, Gemini, Claude) can produce patient information that is comparable to professional resources in terms of quality, understandability, actionability, and empathy. However, readability remains a key limitation, as LLM-generated responses often require simplification to align with recommended health-literacy standards.
    Keywords:  CURE Today; Macmillan Cancer Support; artificial intelligence; head and neck cancer; large language models; patient education; patient information resources; quality of life; readability assessment
    DOI:  https://doi.org/10.3390/curroncol32120668
  7. Knee Surg Sports Traumatol Arthrosc. 2025 Dec 26.
       PURPOSE: The aim of this study was to comparatively evaluate the responses generated by three advanced artificial intelligence (AI) models, ChatGPT-4o (OpenAI), Gemini 1.5 Flash (Google) and DeepSeek-V3, to frequently asked patient questions about meniscal tears in terms of reliability, usefulness, quality, and readability.
    METHODS: Responses from three AI chatbots, ChatGPT-4o (OpenAI), Gemini 1.5 Flash (Google) and DeepSeek-V3 (DeepSeek AI), were evaluated for 20 common patient questions regarding meniscal tears. Three orthopaedic specialists independently scored reliability and usefulness on 7-point Likert scales and overall response quality using the 5-point Global Quality Scale. Readability was analysed with six established indices. Inter-rater agreement was examined with intraclass correlation coefficients (ICCs) and Fleiss' Kappa, while between-model differences were tested using Kruskal-Wallis and ANOVA with Bonferroni adjustment (a minimal sketch of this testing pattern follows this entry).
    RESULTS: Gemini 1.5 Flash achieved the highest reliability, significantly outperforming both GPT-4o and DeepSeek-V3 (p = 0.001). While usefulness scores were broadly similar, Gemini was superior to DeepSeek-V3 (p = 0.045). Global Quality Scale scores did not differ significantly among models. In contrast, GPT-4o consistently provided the most readable content (p < 0.001). Inter-rater reliability was excellent across all evaluation domains (ICC > 0.9).
    CONCLUSION: All three AI models generated high-quality educational content regarding meniscal tears. Gemini 1.5 Flash demonstrated the highest reliability and usefulness, while GPT-4o provided significantly more readable responses. These findings highlight the trade-off between reliability and readability in AI-generated patient education materials and emphasise the importance of physician oversight to ensure safe, evidence-based integration of these tools into clinical practice.
    LEVEL OF EVIDENCE: Level V, observation-based, expert opinion-based, or in vitro/artificial intelligence model evaluation.
    Keywords:  ChatGPT; DeepSeek; Gemini; large language models; meniscal tear; patient education
    DOI:  https://doi.org/10.1002/ksa.70247
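A minimal sketch of the omnibus-plus-pairwise testing pattern named above (Kruskal-Wallis across the three models, then Bonferroni-adjusted pairwise follow-ups), assuming Python with SciPy; the pairwise step here uses Mann-Whitney U tests, and all ratings are invented for illustration rather than taken from the study.

```python
from itertools import combinations
from scipy.stats import kruskal, mannwhitneyu

# Hypothetical 7-point Likert reliability ratings, one list per model;
# these are invented for illustration, not data from the study.
scores = {
    "ChatGPT-4o":       [5, 6, 5, 4, 6, 5, 5, 4, 6, 5],
    "Gemini 1.5 Flash": [6, 7, 6, 6, 7, 6, 5, 6, 7, 6],
    "DeepSeek-V3":      [4, 5, 4, 5, 4, 5, 4, 4, 5, 4],
}

# Omnibus test across the three models.
h, p = kruskal(*scores.values())
print(f"Kruskal-Wallis: H = {h:.2f}, p = {p:.4f}")

# Pairwise follow-ups with Bonferroni adjustment: multiply each raw p
# by the number of pairwise tests, capped at 1.
pairs = list(combinations(scores, 2))
for a, b in pairs:
    _, p_raw = mannwhitneyu(scores[a], scores[b])
    print(f"{a} vs {b}: adjusted p = {min(p_raw * len(pairs), 1.0):.4f}")
```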
  8. J Clin Apher. 2025 Dec;40(6): e70085
      People are increasingly using artificial intelligence (AI)-based chatbots to obtain health-related information. However, concerns remain regarding the quality, accuracy, and readability of the information they produce. This study aimed to evaluate and compare the responses of five widely used AI chatbots to the most frequently searched keywords about apheresis. On May 1, 2025, the 25 most searched apheresis-related keywords were identified using Google Trends. Two keywords were excluded due to irrelevance. The remaining 23 queries were submitted to five chatbots: GPT-4o, Gemini 2.5, Grok 3, DeepSeek v3, and Copilot. Responses were assessed using the EQIP tool for content quality, the DISCERN questionnaire for information reliability, and the Flesch-Kincaid grade level (FKGL) and reading ease (FKRE) metrics for readability. Statistical analysis was performed using the Kruskal-Wallis test and Bonferroni correction. Significant differences were found among chatbots in EQIP, DISCERN, FKGL, and FKRE scores (p = 0.001). DeepSeek v3 demonstrated the highest quality and accuracy (EQIP: 95.7%, DISCERN: 71.8), while GPT-4o had the best readability (FKRE: 43.1, FKGL: 9.1). Copilot showed the poorest readability. Overall, chatbot responses were generally written at a college reading level. AI chatbots vary substantially in the quality and comprehensibility of their health information about apheresis. While newer models like DeepSeek offer improved informational accuracy, readability remains a concern across all platforms. Future chatbot development should prioritize plain-language communication to enhance accessibility and health literacy for diverse patient populations. (A readability-scoring sketch follows this entry.)
    Keywords:  GPT; Gemini; apheresis; artificial intelligence; chatbot; health communication; readability
    DOI:  https://doi.org/10.1002/jca.70085
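Several entries in this issue score outputs with the Flesch metrics reported above. A minimal sketch, assuming the open-source textstat package (the study does not name its scoring software) and an invented sample response:

```python
import textstat

# Invented example of a chatbot-style answer; not text from the study.
response = (
    "Apheresis is a procedure that separates blood into its parts. "
    "One part is removed and the rest is returned to the donor or patient."
)

fkre = textstat.flesch_reading_ease(response)   # higher = easier to read
fkgl = textstat.flesch_kincaid_grade(response)  # approximate US grade level
print(f"FKRE = {fkre:.1f}, FKGL = {fkgl:.1f}")
```

For reference, the patient-education guidance cited in these studies targets roughly a sixth-grade level (FKGL <= 6), while the chatbot outputs above scored at a college reading level.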
  9. Knee Surg Sports Traumatol Arthrosc. 2025 Dec 26.
       PURPOSE: The purpose is to analyze and compare the quality and readability of information regarding anterior shoulder instability and shoulder stabilization surgery from three LLMs: ChatGPT 4o, ChatGPT Orthopaedic Expert (OE) and Google Gemini.
    METHODS: ChatGPT 4o, ChatGPT OE and Google Gemini were used to answer 21 commonly asked questions from patients on anterior shoulder instability. The responses were independently rated by three fellowship-trained orthopaedic surgeons using the validated Quality Analysis of Medical Artificial Intelligence (QAMAI) tool. Assessors were blinded to the model, and evaluations were performed twice, 3 weeks apart. Readability was measured using the Flesch Reading Ease Score (FRES) and Flesch-Kincaid Grade Level (FKGL). This study adhered to TRIPOD-LLM. Statistical analysis included the Friedman test, Wilcoxon signed-rank tests and intraclass correlation coefficients.
    RESULTS: Inter-rater reliability among the three surgeons was good to excellent for all LLMs. ChatGPT OE and ChatGPT 4o demonstrated comparable overall performance, each achieving a median QAMAI score of 22 with interquartile ranges (IQRs) of 5.25 and 6.75, respectively, with median (IQR) domain scores for accuracy 4 (1) and 4 (1), clarity 4 (1) and 4 (1), relevance 4 (1) and 4 (1), completeness 4 (1) and 4 (1), provision of sources 1 (0) for both and usefulness 4 (1) and 4 (1), respectively. Google Gemini showed lower scores across these domains (accuracy 3 [1], clarity 3 [1], relevance 3 [1.25], completeness 3 [0.25], sources 3 [3] and usefulness 3 [1.25]), with a median QAMAI score of 19 (5.25) (p < 0.01 vs. each ChatGPT model). Readability was better for Google Gemini (FRES = 36.96, FKGL = 11.92) than for ChatGPT OE (FRES = 21.90, FKGL = 14.94) and ChatGPT 4o (FRES = 24.24, FKGL = 15.11), indicating easier-to-read content (p < 0.01). There was no significant difference between ChatGPT 4o and OE in overall quality or readability.
    CONCLUSIONS: ChatGPT 4o and ChatGPT OE provided statistically higher-quality responses than Google Gemini, though all models showed good-quality responses overall. However, responses generated by ChatGPT 4o and OE were more difficult to read than those generated by Google Gemini.
    LEVEL OF EVIDENCE: Level V, expert opinion.
    Keywords:  ChatGPT; anterior shoulder instability; artificial intelligence; large language models; shoulder stabilization surgery
    DOI:  https://doi.org/10.1002/ksa.70255
  10. Front Public Health. 2025;13:1698596
     Objective: This study assessed the accuracy, quality, and readability of responses from three leading AI chatbots (ChatGPT-3.5, DeepSeek-V3, and Google Gemini-2.5) on the diagnosis, treatment, and long-term risks of adult hypothyroidism, comparing their outputs with current clinical guidelines.
    Methods: Two thyroid specialists developed 27 questions based on the Guideline for the Diagnosis and Management of Hypothyroidism in Adults (2017 edition), covering three categories: diagnosis, treatment, and long-term health risks. Responses from each AI model were independently evaluated by two reviewers. Accuracy was rated using a six-point Likert scale, quality using the DISCERN tool and a five-point Likert scale, and readability using the Flesch Reading Ease (FRE), Flesch-Kincaid Grade Level (FKGL), Gunning Fog Index (GFI), and Simple Measure of Gobbledygook (SMOG).
    Results: All three AI models demonstrated excellent performance in accuracy (mean score > 4.5) and quality (high-quality rate > 94%). According to the DISCERN tool, no significant difference was observed in the overall information quality among the models. However, Gemini-2.5 generated responses of significantly lower quality for treatment-related questions than for diagnostic inquiries. The content generated by all models was relatively difficult to comprehend (low FRE scores and high FKGL/GFI scores), generally requiring a college-level or higher education for adequate understanding.
    Conclusion: All three AI chatbots were capable of producing highly accurate and high-quality medical information regarding hypothyroidism, with their responses showing strong consistency with clinical guidelines. This underscores the substantial potential of AI in supporting medical information delivery. However, the consistently high reading difficulty of their outputs may limit their practical utility in patient education. Future research should focus on improving the readability and patient-friendliness of AI outputs, through prompt engineering and multi-round dialogue optimization, while maintaining professional accuracy, to enable broader application of AI in health education.
    Keywords:  artificial intelligence chatbot; clinical guideline; hypothyroidism; patient education; readability
    DOI:  https://doi.org/10.3389/fpubh.2025.1698596
  11. Indian Dermatol Online J. 2025 Dec 23.
       OBJECTIVE: This study evaluated the accuracy, readability, understandability, and actionability of ChatGPT-3.5 responses to common patient questions about systemic isotretinoin therapy.
    MATERIALS AND METHODS: Thirty questions were developed in five categories (drug information, side effects, pregnancy, daily life, and course of treatment) based on resources from the British Association of Dermatologists and the Turkish Dermatology Association. Questions were presented to ChatGPT-3.5, and responses were evaluated using a four-point Likert scale for accuracy, the Flesch-Kincaid Grade Level (FKGL) and Flesch Reading Ease (FRE) for readability, and the Patient Education Materials Assessment Tool for Printable Materials (PEMAT-P) for understandability and actionability.
    RESULTS: Of the 90 evaluations, 44.4% of responses were comprehensive and correct, 18.8% were correct but insufficient, 32.2% were mixed with outdated data, and 4.4% were completely incorrect. The average FKGL was 13.28 ± 2.38, and the FRE score was 29.34 ± 10.4, indicating a college graduate reading level. PEMAT-P scores for understandability and actionability averaged 48.1% and 35.06%, respectively, falling below the 70% threshold. The "daily life" section had the highest scores for both metrics, while "pregnancy and contraception" scored the lowest.
    LIMITATIONS: This study was limited to ChatGPT-3.5, conducted in English, and based on training data available only up to 2021, which may affect the generalizability and currency of the results.
    CONCLUSION: While ChatGPT-3.5 shows potential as a patient education tool, it struggles to provide accurate, readable, and actionable information on systemic isotretinoin therapy. Its use requires supervision, and further refinement of artificial intelligence tools is needed to improve their utility in healthcare settings.
    DOI:  https://doi.org/10.4103/idoj.idoj_1243_24
  12. J Hip Preserv Surg. 2025 Dec;12(4): 242-247
      This study evaluates the quality and readability of responses given by ChatGPT 4 relating to common patient queries on Developmental Dysplasia of the Hip (DDH) and Periacetabular Osteotomy (PAO). Frequently asked questions on DDH and PAO were selected from online Patient Education Materials and posed to ChatGPT 4. The responses were evaluated by four high-volume PAO surgeons using a well-established evidence-based rating system, categorizing responses from 'excellent response not requiring clarification' to 'unsatisfactory requiring substantial clarification'. Readability assessments were subsequently conducted to determine the required literacy level to understand the content provided. Responses from ChatGPT 4 varied significantly between preoperative and postoperative queries. In the postoperative category, 50% of responses were rated as 'excellent', showing no need for further clarification, while the preoperative responses frequently required minimal to moderate clarification. The overall median response rating was 'satisfactory requiring minimal clarification'. Readability tests showed that the average Reading Grade Level was 13.44, considerably higher than the recommended sixth-grade level for patient education materials, indicating a substantial barrier to comprehension for the general public. While ChatGPT delivers generally reliable information, the complexity of its language is a major barrier to widespread utilization as a tool for patient education. Future iterations of ChatGPT should aim to use simpler language, thereby enhancing accessibility without compromising content quality.
    DOI:  https://doi.org/10.1093/jhps/hnaf025
  13. J Cancer Educ. 2025 Dec 22.
      Online cancer information is a key health resource in Japan, particularly for older adults. However, its quality may vary depending on the source. This study aimed to evaluate Japanese-language cancer webpages across different sources in terms of understandability, actionability, readability, and credibility. We analyzed 100 webpages about five major cancers (breast, colorectal, lung, prostate, and gastric) retrieved via Google. Pages were classified as academic (n = 14), medical (n = 51), or corporate (n = 35) and assessed using PEMAT-P, jReadability, and the JAMA Benchmark Criteria. Overall, 79 pages (79.0%) scored ≥ 70 in understandability, while only 5 pages (5.0%) reached this threshold in actionability. Nearly all corporate (n = 34, 97.1%) and academic (n = 14, 100.0%) pages scored ≥ 70 in understandability, compared with 60.8% (n = 31) of medical pages (p < 0.001). Readability was uniformly low, with 98 pages (98.0%) rated as somewhat to very difficult. Corporate sources also displayed significantly higher credibility scores than other sources (p < 0.001). Corporate websites demonstrated clearer structure and higher transparency of source attribution, potentially reflecting organizational standards. However, actionable content remained limited across all sources. These findings highlight a misalignment between user search needs and information design, underscoring the importance of structuring content not only for clarity and reliability, but also for supporting users in taking informed health actions. This has implications for digital cancer education targeting aging populations.
    Keywords:  Actionability; Cancer website; Health literacy; Japan; PEMAT; Readability
    DOI:  https://doi.org/10.1007/s13187-025-02809-6
  14. Womens Health Rep (New Rochelle). 2025;6(1): 1209-1215
       Introduction: Thousands of Hispanic parturients give birth in the United States annually, necessitating accessible health education resources in Spanish. Given the known Hispanic maternal care disparities and high reading levels of Spanish-written patient education materials (PEMs), this study aims to assess the readability and quality of obstetric (OB) anesthesia Spanish-language PEMs from a general internet search and academic leaders. We hypothesize that the readability and quality of PEMs from academic leaders will be superior to those found via general internet search.
    Methods: To identify Spanish-written PEMs on OB anesthesia, the webpages of 62 academic medical centers (AMCs) recognized as OB anesthesia leaders were screened. A general internet search using "anestesia y alivio del dolor durante el parto" ("anesthesia and pain relief during labor and delivery") was conducted to find an equal number of additional resources. Readability was assessed using the Fernandez-Huerta Readability Index (FHRI) and Indice de Legibilidad de Flesch-Szigriszt (INFLESZ) analyses, while quality was evaluated using the DISCERN instrument and the Health Education Materials Assessment Tool (HEMAT).
    Results: Twenty-eight Spanish-language PEMs from AMCs and 28 from a general internet search were identified. The FHRI and INFLESZ readability analyses revealed that PEMs from both cohorts primarily aligned with a 9-10th grade reading level. These reading levels significantly exceeded the recommended 4-6th grade level (p < 0.001). DISCERN scores indicated no quality difference between cohorts. Both groups achieved high HEMAT scores for understandability.
    Conclusion: Spanish-written OB anesthesia PEMs from AMCs and from a general internet search showed similar readability, and both exceeded recommended reading levels. Quality did not differ between the cohorts. Improvements in readability and quality are needed for better patient-centered care and to emphasize the importance of shared decision-making.
    Keywords:  obstetric anesthesia; patient education materials; quality analysis; readability analysis
    DOI:  https://doi.org/10.1177/26884844251394823
  15. Andrology. 2025 Dec 21. e70161
       BACKGROUND: Ejaculatory dysfunction (EjD) is a prevalent sexual disorder. Because of the private nature of this condition, few patients are willing to discuss it openly or seek medical help, as this may cause them embarrassment. Internet searches are increasingly becoming the primary source of health information for individuals with sexual dysfunctions. However, given the wide variability in online resource quality, rigorous evaluation of both content reliability and textual accessibility becomes essential.
    METHODS: We systematically analyzed the top 100 Google search results for each of the following EjD-related terms: premature ejaculation (PE), delayed ejaculation (DE), retrograde ejaculation (RE), anejaculation, painful ejaculation, anorgasmia, and hematospermia. After applying pre-defined inclusion/exclusion criteria, two board-certified urologists independently evaluated eligible websites using standardized tools: the Journal of the American Medical Association (JAMA) benchmark criteria (for credibility), the DISCERN instrument (for health information quality), and validated readability metrics (Flesch Reading Ease [FRE], Gunning Fog index, and Simple Measure of Gobbledygook [SMOG] index).
    RESULTS: Our systematic evaluation of 345 websites revealed that commercial entities constituted the predominant source (n = 221, 64.1%) of online health information regarding ejaculatory dysfunction. Quality assessment using the DISCERN instrument demonstrated "fair" ratings for resources addressing PE, DE, RE, and anorgasmia, while those covering anejaculation, painful ejaculation, and hematospermia scored in the "poor" range. Analysis of ejaculatory disorder websites revealed mean JAMA benchmark scores of 2.0, with disclosure as the highest-scoring domain. Additionally, FRE scores for these websites indicate a reading difficulty level categorized as "difficult", which is comparable to college-level reading proficiency.
    DISCUSSION AND CONCLUSION: While online resources on EjD may serve as supplementary patient education materials, our analysis reveals significant limitations in their readability. These findings underscore the need for developing standardized quality control measures to systematically monitor and manage health information available online.
    Keywords:  DISCERN; ejaculatory dysfunction; health information; readability
    DOI:  https://doi.org/10.1111/andr.70161
  16. Front Public Health. 2025;13:1721461
       Introduction: Iron deficiency anemia (IDA) is the most widespread nutritional deficiency worldwide. This study aimed to evaluate the quality, reliability, and readability of Arabic web-based resources on IDA.
    Materials and methods: A retrospective, infodemiological, descriptive study was conducted. A web-based search was performed on July 8, 2025, and the collected websites were evaluated using validated assessment tools, including DISCERN, JAMA, FKGL, SMOG, and FRE.
    Results: Of 36 included websites, most were from medical institutions or health portals. Overall reliability was limited (mean JAMA score 1.39 ± 0.96), and no website met all JAMA criteria. Website quality was generally moderate (DISCERN 39.72 ± 8.83), with governmental and public health websites performing poorly (p < 0.001). Readability was high (FKGL 3.65 ± 3.58; SMOG 3.26 ± 0.79; FRE ≥ 80 in 97.2%). JAMA and DISCERN scores were positively correlated (ρ = 0.430, p = 0.009).
    Conclusion: Arabic-language web resources on IDA are easily readable but demonstrate significant deficiencies in quality and reliability. This pronounced gap may contribute to misinformation, delayed care-seeking, and suboptimal management. Collaboration between medical institutions, public health organizations, and digital platforms will be essential for developing standardized, evidence-based patient education materials, which could support earlier intervention and help reduce the public health burden of IDA in Arabic-speaking communities.
    Keywords:  Arabic web resources; digital health literacy; iron deficiency anemia; quality; readability
    DOI:  https://doi.org/10.3389/fpubh.2025.1721461
  17. J Craniofac Surg. 2025 Dec 23.
       INTRODUCTION: Patients and parents increasingly rely on the internet to obtain medical information. The readability of these online webpages is significant, as lower literacy rates have been associated with poorer health outcomes. As such, the American Medical Association (AMA) and National Institutes of Health (NIH) recommend that health information be written between a 6th- and 8th-grade reading level. This study aimed to evaluate the readability and quality of online webpages discussing hemifacial microsomia (HFM).
    METHODS: Three of the largest online search engines were queried by 2 independent reviewers for "hemifacial microsomia." Readability was assessed using 6 readability tests: Flesch Reading Ease Score (FRES), Flesch-Kincaid Grade Level (FKGL), Gunning Fog Index (GFI), Simple Measure of Gobbledygook (SMOG) Index, Coleman-Liau Index (CLI), and Automated Readability Index (ARI). The quality of online webpages was assessed using the DISCERN handbook and scale.
    RESULTS: Thirteen webpages were included for analysis. The mean overall readability level was equivalent to a 13th-grade level. The mean readability grade level for each score used was: FKGL 12.4, GFI 15.7, SMOG Index 11.3, CLI 14.1, and ARI 13.2. The FRES was 36.8 (ie, difficult to read).
    CONCLUSION: Online webpages providing information regarding HFM are too difficult for most Americans to read. The readability of online patient information should be a priority for health care providers and medical organizations that publish this information. By improving the readability and quality of online health information, patients and caregivers will better understand their condition, effectively encouraging active participation in the shared decision-making process.
    Keywords:  Hemifacial microsomia; quality assessment; readability
    DOI:  https://doi.org/10.1097/SCS.0000000000012327
  18. J Am Vet Med Assoc. 2025 Dec 24. 1-5
       Objective: To evaluate the literacy level of online resources for management and recovery after cranial cruciate ligament rupture (CCLR) in dogs.
    Methods: This was a cross-sectional observational study evaluating the readability and suitability of online information sources describing CCLR management and recovery. Websites were queried on June 25, 2025. Websites lacking relevance or limited to videos, graphics, tables, blogs, or discussions were excluded. The first 15 online sources describing CCLR management and the first 15 describing CCLR recovery were analyzed using the Flesch Reading Ease score and the Flesch-Kincaid Grade Level score. Scores were compared with those recommended by the American Medical Association to ensure understanding of medical communications by a broad segment of the US population (sixth-grade reading level).
    Results: For CCLR management, the mean ± SD (95% CI) reading ease score was 47.1 ± 5.6 (-∞ to 49.7) and was < 80 (Cohen d effect size = -5.66), and the mean grade level score was 10.7 ± 1.2 (10.1 to ∞) and was > 6 (d = 3.67). For CCLR recovery instructions, the mean reading ease score was 49.1 ± 9.5 (-∞ to 53.6) and was < 80 (d = -3.15), and the mean grade level score was 10.0 ± 1.7 (9.2 to ∞) and was > 6 (d = 2.31). (A worked sketch of this effect size follows this entry.)
    Conclusions: Online information about CCLR management and recovery is written at a level unlikely to be understood by a broad segment of the US population.
    Clinical Relevance: Veterinarians should simplify wording of medical communications related to CCLR management.
    Keywords:  health communication; health information; health literacy; patient education material; readability
    DOI:  https://doi.org/10.2460/javma.25.10.0680
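A worked sketch of the one-sample Cohen's d used above: the standardized distance between a sample mean and a fixed benchmark, d = (mean - benchmark) / SD, computed here from the published summary statistics. The small differences from the reported values presumably reflect rounding of the inputs or a small-sample correction.

```python
# One-sample Cohen's d against a fixed benchmark.
def cohens_d_one_sample(mean: float, sd: float, benchmark: float) -> float:
    return (mean - benchmark) / sd

# Grade-level scores vs. the AMA's sixth-grade target, per the entry above.
print(cohens_d_one_sample(10.7, 1.2, 6.0))  # management: ~3.92 (reported 3.67)
print(cohens_d_one_sample(10.0, 1.7, 6.0))  # recovery: ~2.35 (reported 2.31)
```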
  19. Alzheimers Dement. 2025 Dec;21(Suppl 9):e110456
       BACKGROUND: Patients are increasingly using YouTube as a source of health-related information. This study assessed the quality and reliability of videos on Alzheimer's disease dementia (ADD) available on the platform.
    METHOD: In October 2023, YouTube was systematically searched for ADD-related videos. Two independent physicians reviewed each video, scoring it using modified DISCERN (mDISCERN) for reliability and the Global Quality Scale (GQS) for content quality. Videos were categorized by goal and assessed for quality, accuracy, comprehensiveness, and specific content.
    RESULT: There were 58 videos included in the study. Using the GQS, 16 videos (28%) were assessed as high quality, 32 videos (55%) as medium quality, and 10 videos (17%) as low quality. Using the mDISCERN scale, 48 videos (83%) were deemed reliable, while 10 videos (17%) were classified as unreliable. Videos from academic institutions and physicians exhibited higher mDISCERN and GQS scores compared to other groups (p = 0.004, p = 0.005, respectively), and a significant correlation was seen between mDISCERN and GQS (p < 0.001).
    CONCLUSION: The majority of YouTube videos on ADD are good to fair quality, covering disease properties, treatment choices, and patient experiences. However, video popularity does not significantly correlate with content reliability and quality. Videos provided by academic institutions and healthcare professionals can help the general population to understand ADD.
    DOI:  https://doi.org/10.1002/alz70863_110456
  20. Psychiatriki. 2025 Dec 20.
      Hypnosis combined with cognitive behavioral therapy (CBT-hypnosis) is a psychological treatment that focuses on how people think and behave across various mental and medical illnesses. It treats behavioral and emotional issues by tapping into the subconscious mind. Hypnotized patients are more open to new ideas and less likely to reject difficult ones, which makes it simpler to adopt the healthy cognitive patterns and habits that CBT tries to promote. YouTube is a widely used resource for health-related education that can strongly influence the choices and actions of medical professionals, patients, and their primary caregivers, who visit the platform to investigate and obtain guidance regarding CBT-hypnosis. However, unreliable and deceptive information on YouTube could encourage undesirable habits and make patients, primary caregivers, and hypnosis practitioners avoid CBT-hypnosis. Thus, the purpose of this study was to assess the quality and reliability of YouTube videos about CBT-hypnosis as a source of supportive information for practitioners, patients, and their primary caregivers. A total of 354 YouTube videos about CBT-hypnosis were analyzed. The videos' reliability and quality were assessed using the Global Quality Scale (GQS) and a modified DISCERN tool. The analysis found that the median overall GQS score was 3 (IQR: 2; min-max: 1-5), indicating that the videos were of moderate quality and that some important information was adequately covered. The modified DISCERN tool yielded a median total score of 3 (IQR: 1; min-max: 0-5), indicating that the videos were moderately reliable and that the information was presented in a balanced and unbiased manner. Most of the included videos came from science and technology sources (academic channels) (57.6%; n = 204), while the remaining 42.4% came from non-profit and activism channels, people and blogs, and other lay public sources. As a supportive source of information, YouTube videos about CBT-hypnosis are of moderate quality and reliability. Formal presenters should therefore promote the distribution of good-quality content, which would help to improve the quality of information available on the YouTube platform.
    Keywords:  Behavioral therapy; YouTube; cognitive behavioral therapy; cognitive therapy; hypnosis
    DOI:  https://doi.org/10.22365/jpsych.2025.025
  21. Alzheimers Dement. 2025 Dec;21(Suppl 6):e102052
       BACKGROUND: Patients are increasingly using YouTube as a source of health-related information. This study assessed the quality and reliability of videos on Alzheimer's disease dementia (ADD) available on the platform.
    METHOD: In October 2023, YouTube was systematically searched for ADD-related videos. Two independent physicians reviewed each video, scoring it using modified DISCERN (mDISCERN) for reliability and the Global Quality Scale (GQS) for content quality. Videos were categorized by goal and assessed for quality, accuracy, comprehensiveness, and specific content.
    RESULT: There were 58 videos included in the study. Using the GQS, 16 videos (28%) were assessed as high quality, 32 videos (55%) as medium quality, and 10 videos (17%) as low quality. Using the mDISCERN scale, 48 videos (83%) were deemed reliable, while 10 videos (17%) were classified as unreliable. Videos from academic institutions and physicians exhibited higher mDISCERN and GQS scores compared to other groups (p = 0.004, p = 0.005, respectively), and a significant correlation was seen between mDISCERN and GQS (p < 0.001).
    CONCLUSION: The majority of YouTube videos on ADD are good to fair quality, covering disease properties, treatment choices, and patient experiences. However, video popularity does not significantly correlate with content reliability and quality. Videos provided by academic institutions and healthcare professionals can help the general population to understand ADD.
    DOI:  https://doi.org/10.1002/alz70860_102052
  22. Digit Health. 2025 Jan-Dec;11:20552076251404501
       Objective: Postpartum depression affects 10-15% of women in the U.S. and up to 20% globally. TikTok, one of the most downloaded apps worldwide, has become an increasingly popular space for sharing health information. This study examines the tone, accuracy, and educational value of TikTok videos related to postpartum mental health (PPMH), with particular attention to narrator type, use of citations, and representation of psychiatric conditions.
    Methods: A cross-sectional analysis was conducted on 80 unique TikTok videos gathered using PPMH-related hashtags from new, unbiased accounts. Videos were coded using a structured qualitative and quantitative codebook. Three independent coders categorized the content by narrator type, presence of anecdotal or educational content, mention of other diagnoses, and citation of peer-reviewed literature. Engagement metrics were also recorded.
    Results: The 80 videos, created by 68 accounts, averaged 884,960 views, 74,420 likes, and 1201 comments per video. Narration was primarily by postpartum individuals (64%). Anecdotal storytelling dominated (78%), and 66% of videos were labeled as educational. However, only 2% cited any academic or peer-reviewed sources. Anxiety was the most frequently mentioned coexisting diagnosis (33%), followed by trauma. A small number of videos referenced suicide, often with coded spelling. Reaction and "stitched" videos were common and often used to validate shared experiences. Videos with more robust provider presence or structured information had higher engagement rates.
    Conclusion: TikTok is a powerful platform for PPMH conversations but is dominated by personal narratives and lacks consistent citation of reliable sources. Early findings point to high engagement but limited accuracy. There is an opportunity for healthcare professionals to contribute evidence-based content to better support and inform postpartum audiences.
    Keywords:  Postpartum depression; Tiktok; maternal mental health; online health education; social media
    DOI:  https://doi.org/10.1177/20552076251404501
  23. Sci Rep. 2025 Dec 22. 15(1): 44260
      Short-video platforms are becoming major sources of health information, but the quality and dissemination patterns of thyroid cancer-related content remain unclear. We conducted a large-scale cross-sectional analysis of 1,248 thyroid carcinoma videos retrieved from TikTok, Kwai, and Rednote between February 5 and 10, 2025. Video characteristics, including source, duration, and engagement metrics, were extracted, and quality was evaluated using JAMA, GQS, mDISCERN and PEMAT. TikTok videos had longer duration, higher engagement, and overall better quality compared with those on Kwai and Rednote. Videos from healthcare professionals and institutions achieved significantly higher information quality, whereas independent creators generated greater audience interaction, particularly comments and shares. Knowledge-focused content showed higher reliability, while patient vlogs attracted more engagement. Engagement indicators were strongly intercorrelated but exhibited weak or negative associations with quality, suggesting that popularity does not fully reflect informational rigor. These findings highlight a divergence between video quality and audience reach, underscoring the need for professional participation, credential verification, and algorithmic refinement to enhance the visibility of evidence-based content and support public health literacy.
    Keywords:  Health communication; Information quality; Short video; Thyroid cancer; User engagement
    DOI:  https://doi.org/10.1038/s41598-025-27833-w
  24. Int Urogynecol J. 2025 Dec 22.
     INTRODUCTION AND HYPOTHESIS: Episiotomy is among the most commonly performed obstetrical procedures globally. While restrictive episiotomy is recommended, patients may be reluctant to consent. TikTok, a rapidly growing video platform, is a popular patient resource for obtaining health information on episiotomy. This study aimed to evaluate the information quality and degree of misinformation contained in TikTok videos about episiotomy and examine the relationship between user engagement and misinformation.
    METHODS: In this cross-sectional study, we identified the top videos with the keyword "episiotomy" on the Canadian TikTok app. Three reviewers scored videos using the DISCERN instrument for health information quality and a 5-point Likert scale for misinformation. We evaluated the correlations between user engagement and misinformation, and between narrator credentials and misinformation, using the Pearson correlation coefficient.
    RESULTS: Forty-seven videos met the inclusion criteria. The median video length was 57 s (IQR 15-89). Most videos were oriented towards education (59.6%) and narrated by healthcare providers (36.2%) or patients (25.5%). Many videos (71.74%) contained low-quality information (DISCERN score < 3), and 36.17% of videos contained misinformation (misinformation rating Likert score > 3). There was no significant correlation between engagement and misinformation (r = 0.06, p = 0.68) nor between narrator credentials and misinformation (r = 0.17, p = 0.29).
    CONCLUSIONS: Low-quality health information and misinformation about episiotomy are prevalent on TikTok. Neither user engagement nor narrator credentials showed a significant correlation with misinformation. Patient education about potential misinformation on these platforms and development of evidence-based resources about episiotomy are essential to support informed decision-making.
    Keywords:  Episiotomy; Misinformation; Patient education; Social Media
    DOI:  https://doi.org/10.1007/s00192-025-06496-1
  25. Sci Rep. 2025 Dec 24. 15(1): 44509
      This study aimed to assess the quality and reliability of short videos about uraemia on Bilibili and TikTok. On June 12, 2025, we searched the top 100 videos, ranked by each platform's default order, using the keyword "uraemia" on both platforms. We collected and analysed their characteristics, content, and uploaders. To evaluate quality, the Global Quality Score (GQS) and the modified DISCERN tool (mDISCERN) were used. This study included 153 videos in total, with 81 from TikTok and 72 from Bilibili. Overall, audience engagement on TikTok was significantly greater than that on Bilibili. Although the median mDISCERN and GQS scores were identical across platforms (2 [2-2] and 2 [2-3], respectively), their score distributions differed significantly (p < 0.05). Interestingly, we found that videos from self-media achieved higher GQS scores than those from doctors or official media. A weak positive correlation was found between video quality and collections and shares on Bilibili alone (p < 0.05). In conclusion, the quality and reliability of health information from uraemia videos on TikTok and Bilibili were found to be suboptimal. Concerted efforts from the public, uploaders, and social media platforms are needed to foster the dissemination of high-quality health information.
    Keywords:  Bilibili; Reliability; Social media; TikTok; Uraemia; Video quality
    DOI:  https://doi.org/10.1038/s41598-025-28155-7
  26. Cartilage. 2025 Dec 23. 19476035251408206
     PURPOSE: To evaluate the quality, reliability, and educational value of TikTok videos on cartilage surgery. It was hypothesized that overall quality would be low but higher in videos by healthcare professionals (HCP) and those with educational content.
    METHODS: TikTok was searched (September 22-25, 2025) using terms related to cartilage surgery and repair. Of 800 retrieved videos, 108 met inclusion criteria. Video metrics, uploader type, and content type were recorded. Quality and reliability were assessed using the DISCERN instrument, Journal of the American Medical Association (JAMA) benchmark criteria, and Global Quality Score (GQS). Associations between video metrics and quality scores were analyzed using Spearman rank correlation, and Mann-Whitney U tests compared scores by uploader and content type.
    RESULTS: Most videos were posted by private users (61.1%) and focused on patient experiences (58.3%). Duration, shares, and views correlated positively with all quality metrics (P < 0.001). HCP videos achieved significantly higher DISCERN (47.5 vs. 26.0), JAMA (2.9 vs. 0.9), and GQS (3.2 vs. 1.8) scores but lower engagement (all P < 0.001). Educational videos outperformed patient experience videos across all quality metrics (all P < 0.01).
    CONCLUSION: TikTok videos on cartilage surgery demonstrated low overall quality and reliability. Greater professional engagement is needed to enhance the accuracy and credibility of cartilage-related information on social media.
    Keywords:  TikTok; cartilage; cartilage repair; knee; social media
    DOI:  https://doi.org/10.1177/19476035251408206
  27. Front Cell Infect Microbiol. 2025;15:1732375
       Objective: The gut-liver axis has emerged as a pivotal focus in hepatology and metabolic disease research. However, the quality of public-facing health information, particularly in short-form video content, remains largely unexamined.
    Methods: Between January 2021 and October 2025, we systematically screened and analyzed 210 short videos (70 per platform) on the gut-liver axis. Basic metadata were extracted, and video quality was assessed using three validated tools: the modified DISCERN instrument, JAMA Benchmark Criteria, and Global Quality Score (GQS). Pearson correlation was used to explore associations between video metrics and quality scores.
    Results: Bilibili videos showed the highest educational quality (mean GQS: 3.79), while TikTok videos had greater engagement (median likes: 74.00). Videos uploaded by healthcare professionals scored significantly higher across all quality measures (SUM score: 9.03 vs 3.87, p < 0.001). No significant correlation was found between engagement metrics and content quality.
    Conclusion: A misalignment exists between user engagement and informational quality in gut-liver-related short videos. Content from verified health professionals delivers superior educational value yet remains algorithmically underprioritized. Efforts to enhance digital health communication should focus on promoting expert-led content, verifying source credentials, and integrating quality-weighted algorithms.
    Keywords:  gut microbiota; gut-liver axis; health information quality; liver disease; short-form videos; social media platforms
    DOI:  https://doi.org/10.3389/fcimb.2025.1732375
  28. J Oral Facial Pain Headache. 2025 Dec;39(4): 242-251
     BACKGROUND: Temporomandibular disorders (TMDs) are typical biopsychosocial conditions often accompanied by anxiety, somatization and even cyberchondria. Targeted online health information is increasingly prominent in the digital era, yet its psychological impact on TMDs remains underexplored. This study aimed to examine the association between targeted online health information and TMD severity, and to explore whether anxiety, cyberchondria and somatization mediate this relationship.
    METHODS: Participants were evaluated using questionnaires including the five TMDs symptoms (TMDs-5T) scale, 7-item Generalised Anxiety Disorder scale (GAD-7), short-form version of the cyberchondria severity scale (CSS-12) and somatic symptom scale-8 (SSS-8). Data on targeted online health information, search frequency and perceived impact were also collected. Analyses were conducted using SPSS and Mplus.
    RESULTS: A total of 588 valid responses were analyzed. Greater TMD severity was significantly associated with targeted online health information delivery (β = 1.373, p < 0.001), anxiety (β = 1.714, p < 0.001) and cyberchondria (β = 1.641, p < 0.001). The chain mediation model revealed that both the total effect and the direct effect (targeted online health information → TMDs-5T) were significant (β = 2.261, p < 0.001; β = 1.003, p < 0.001). A significant indirect pathway was also identified, in which targeted online health information influenced TMD severity through somatization, anxiety, and cyberchondria (β = 0.210, p = 0.026). (A simplified sketch of this mediation logic follows this entry.)
    CONCLUSIONS: Exposure to targeted online health information was associated with greater TMD symptom severity, mediated by psychological factors such as somatization, anxiety, and cyberchondria. These findings underscore the importance of algorithmic ethics, policy oversight, and user education to mitigate psychological risks in digital health environments.
    Keywords:  Anxiety; Chain mediation; Cyberchondria; Online health information; Somatic symptoms; Temporomandibular disorders
    DOI:  https://doi.org/10.22514/jofph.2025.081
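A simplified sketch of the mediation logic in the entry above, reduced to a single mediator (the study fits a serial three-mediator chain in Mplus); the data are simulated, so the coefficients are illustrative only.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 588  # matches the reported sample size; the values themselves are invented
x = rng.normal(size=n)                      # targeted online health information
m = 0.5 * x + rng.normal(size=n)            # single stand-in mediator
y = 0.3 * x + 0.4 * m + rng.normal(size=n)  # TMD symptom severity

# Path a (X -> M), then paths c' (direct) and b (M -> Y) from the full model.
a = sm.OLS(m, sm.add_constant(x)).fit().params[1]
fit_y = sm.OLS(y, sm.add_constant(np.column_stack([x, m]))).fit()
direct, b = fit_y.params[1], fit_y.params[2]
print(f"direct = {direct:.3f}, indirect = {a * b:.3f}")
```

In the serial-chain case the indirect effect is the product of the coefficients along the whole X → M1 → M2 → M3 → Y path, estimated the same way with one regression per step.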
  29. Front Public Health. 2025;13:1713794
       Background: Short-video platforms have become major channels for public access to health information in the digital era. However, the low barriers to content creation and the increasing use of AI-generated content have accelerated the spread of health misinformation, underscoring the need to better understand how users identify health misinformation in short videos.
    Methods: Grounded theory was applied to analyze 47 in-depth interviews and extract core factors influencing users' recognition of health misinformation in short videos. Based on the derived factor structure, a questionnaire survey was conducted and 279 valid samples were collected. Partial least squares structural equation modeling (PLS-SEM) was used to test the proposed relationships, and fuzzy-set qualitative comparative analysis (fsQCA) was further employed to identify the causal configurations through which different factor combinations contribute to users' health misinformation discernment.
    Results: The results identified three key categories: information quality, user characteristics, and external environments. The PLS-SEM model demonstrated acceptable explanatory power (R² = 0.478) for users' health misinformation discernment in short videos. Among the seven proposed hypotheses, content logic (p < 0.05), narrative expression (p < 0.05), information structure (p < 0.01), cognitive level (p < 0.05), and external influences (p < 0.05) were statistically supported, while information reliability and psychological needs showed non-significant effects. The fsQCA further revealed three distinct causal configurations leading to effective discernment. When content logic functioned as the core condition, users tended to rely on central, analytical processing; whereas when external influences were dominant, users were more likely to depend on heuristic processing rather than message logic.
    Discussion: The findings highlight three distinct ways users process health misinformation in short videos, including primarily analytical evaluation, peripheral reliance on content cues, and peripheral reliance on cognitive cues. These results suggest practical strategies for mitigating health misinformation on short-video platforms, emphasizing interventions at individual, platform, and policy levels.
    Keywords:  PLS-SEM; fuzzy-set qualitative comparative analysis (fsQCA); health information governance; health misinformation; influencing factors; short videos
    DOI:  https://doi.org/10.3389/fpubh.2025.1713794
  30. Sex Health. 2025 Dec 24. pii: SH25069. [Epub ahead of print]
     BACKGROUND: Older adults have not traditionally been a priority group for sexual health (SH) promotion; however, recent years have seen increasing interest in this population. Effective SH promotion requires an understanding of older adults' interests, concerns and knowledge gaps.
    METHODS: In 2021, we conducted the 'SHAPE2' online survey of Australians aged 60+ on SH information-seeking. Data are from two questions: i) SH topics participants wanted to know more about, and ii) the last SH topic participants sought information on since turning 60. Quantitative data were collected as Topics organized into Categories. Free-text comments were classified into Categories using Content Analysis. Data were analysed using descriptive statistics and the chi-squared test.
    RESULTS: There were 1,470 respondents with a median age of 69 years and a balance between men and women. The Categories of most interest were 'sexual anatomy and physiology' (1,043/1,248, 83.6%; 95%CI: 81.4-85.6), 'sex and ageing' (942/1,175, 80.2%; 95%CI: 77.8-82.4), and 'sexual difficulties' (937/1,236, 75.8%; 95%CI: 73.3-78.2). The specific Topics of most interest were 'ageing and libido (sex drive)' (771/1175, 65.6%; 95%CI: 62.8-68.3), 'ageing and sexual pleasure' (766/1175, 65.2%; 95%CI: 62.4-67.9), and 'ageing and sexual performance' (765/1175, 65.1%; 95%CI: 62.3-67.8). Men were more likely to have sought information (51.5% versus 30.6%, p < 0.001) and indicated higher levels of interest, whereas women were interested in and/or had sought information on a wider range of issues. Differences were observed between SH issues of interest and those for which participants had sought information.
    CONCLUSION: Older adults seek information on, and are interested in, a variety of SH topics. To improve the sexual wellbeing of older people and address knowledge gaps, the priorities of older adults should be forefront when designing SH promotion strategies for this population.
    DOI:  https://doi.org/10.1071/SH25069
  31. Health Info Libr J. 2025 Dec 22.
     BACKGROUND: Health care is of great importance to individuals and to those who fund it. The academic community is interested in how health care can be delivered and in the role of health information.
    OBJECTIVES: This study uses bibliometrics to identify novel research subjects, highly cited literature, worldwide cooperation relationships, author distributions and cooperative networks, journal distributions, and research hotspots in the field of health information.
    METHODS: Data were collected from the Web of Science database, and the 3525 items of literature retrieved were analysed with word frequency, social network, and cluster analysis methods (a minimal keyword co-occurrence sketch follows this entry).
    RESULTS: The findings indicate that the Internet, Health Information, Health Information Technology, Health Literacy, and Health Information Exchange are the top five health information research topics. There is a close relationship between the research themes of COVID-19, Mental Health, Public Health, and Health Information Seeking. The main cooperative network is centred around the United States and the United Kingdom. Recent research areas include health information for college students, health information for young people, and privacy issues concerning health information.
    CONCLUSIONS: This study can provide some insights for practitioners in libraries and health information institutions, for topic selection in health journals, and for international cooperation among educators.
    Keywords:  bibliometrics; health literacy; information and communication technologies; information seeking behaviour; internet; research networks
    DOI:  https://doi.org/10.1111/hir.70008
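A minimal sketch of the word-frequency and keyword co-occurrence network steps described in the METHODS above, assuming Python with networkx; the records below are invented stand-ins for the study's 3525 Web of Science items.

```python
from collections import Counter
from itertools import combinations
import networkx as nx

# Hypothetical per-article keyword lists; not records from the study.
records = [
    ["internet", "health literacy", "health information seeking"],
    ["covid-19", "mental health", "public health"],
    ["internet", "health information exchange", "privacy"],
    ["health literacy", "covid-19", "health information seeking"],
]

# Word frequency: how often each keyword appears across records.
freq = Counter(kw for kws in records for kw in kws)
print(freq.most_common(3))

# Co-occurrence network: keywords are nodes; an edge's weight counts the
# records in which both keywords appear together.
G = nx.Graph()
for kws in records:
    for a, b in combinations(sorted(set(kws)), 2):
        w = G.get_edge_data(a, b, default={"weight": 0})["weight"]
        G.add_edge(a, b, weight=w + 1)

print(sorted(G.edges(data="weight"), key=lambda e: -e[2])[:3])
```

The cluster and cooperation-network analyses follow the same pattern, with countries or authors as nodes instead of keywords.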