bims-librar Biomed News
on Biomedical librarianship
Issue of 2025-12-07
thirty papers selected by
Thomas Krichel, Open Library Society



  1. Adv Health Inf Sci Pract. 2025;1(2): KLVV3078
       Background: Previous AI literacy studies have been limited to clinical students and professionals and relied on subjective reporting. This survey study explored the extent of AI knowledge among health information professionals using both subjective and objective questions. The objective of the study was to inform the International Federation of Health Information Management Associations (IFHIMA) and national health information bodies about current AI literacy levels and the education needs of their members.
    Methods: A descriptive survey was adapted from two validated, previously published study instruments. Survey data were collected between December 5, 2024, and February 28, 2025, using a self-administered Qualtrics (Seattle, WA) online survey link distributed by email to IFHIMA members. The survey link was also distributed on the LinkedIn professional networking platform (LinkedIn Corp; Sunnyvale, CA) by multiple IFHIMA members. Results were analyzed using Chi-square, ANOVA, and Tukey HSD post-hoc tests to assess the associations between the categorical response variables and the subjective survey question measures.
    Results: A total of 176 participants began the survey. Data were cleaned to exclude 48 incomplete responses, leaving 128 complete and valid responses for analysis. AI knowledge varied by demographics; country of employment or residence and professional association membership were shown to influence familiarity with AI. Many health information professionals reported limited or no AI experience, and those with practical AI experience performed better on foundational AI knowledge questions, suggesting that experiential learning scaffolds AI literacy. Most respondents understood emerging AI-related threats. However, regardless of experiences with everyday AI tools, they struggled with AI modeling and product development.
    Conclusions: The study identified a major gap in AI knowledge. The authors offer input for educators aiming to align educational programs with job market demand by increasing AI knowledge content and addressing gaps through targeted curriculum development and educator training.
    Keywords:  artificial intelligence; education; health informatics; healthcare; literacy
    DOI:  https://doi.org/10.63116/KLVV3078
  2. BMJ Open. 2025 Dec 02. 15(12): e109345
       OBJECTIVES: To explore the perspectives of librarians and information specialists (LIS) on their experience and impact as peer reviewers of systematic reviews (SRs), and on facilitators and barriers to LIS methodological peer review.
    DESIGN: Survey and focus groups.
    SETTING: We surveyed LIS who completed a peer review of an SR in a randomised controlled trial conducted in BMJ, BMJ Open and BMJ Medicine from 3 January 2023 to 2 January 2024. The questionnaire sought to understand their experience, what aspects of manuscripts they focused on, perceived impact on editorial decision-making and authors' revisions and willingness to peer review again. To better understand factors that might impact decisions to review again, we contacted survey respondents to participate in a focus group concentrating on facilitators and barriers to peer reviewing SRs.
    PARTICIPANTS: 88 LIS were eligible for participation. From the survey respondents, 27 LIS who had volunteered were randomly selected and invited to participate in a follow-up focus group.
    RESULTS: Of the 88 LIS invited to participate in the survey, 70 (80%) responded. Most respondents had six or more years of experience as an LIS (67/70; 96%) and advising researchers on doing SRs (55/70; 79%) and had peer reviewed for a journal prior to the study (57/70; 81%). Most focused on the search and SR methods when reviewing but also commented on aspects such as research question formulation, plagiarism, study results and conclusions. Two-thirds (44/66; 67%) believed they impacted editors' decision-making and 59% (39/66) believed they impacted the authors' revisions. Only three factors were considered extremely or very likely to impact their decision to review again: their schedule and/or lack of time, review turnaround time and their sense of professional duty. 17 LIS (63.0%) participated in a focus group. Time was the primary barrier identified in the focus groups, followed by a sense of intimidation. LIS reported being motivated by feeling valued by editors, the enjoyment of peer reviewing, the desire to improve SR quality and peer review as a learning experience. Several expressed surprise and delight at being asked to peer review for the journals.
    CONCLUSIONS: LIS may be an underused peer reviewing resource with methodological experience that can help editors make decisions and improve the quality of SRs. Efforts to engage LIS as peer reviewers by journal editors are likely to be well-received when LIS expertise is clearly valued, sought and heeded. We encourage both journal editors and LIS to creatively advance efforts to promote LIS as methodological peer reviewers.
    TRIAL REGISTRATION NUMBER: https://doi.org/10.17605/OSF.IO/QVTY4.
    Keywords:  capacity building; follow-up studies; qualitative research; statistics & research methods; surveys and questionnaires
    DOI:  https://doi.org/10.1136/bmjopen-2025-109345
  3. Health Info Libr J. 2025 Dec 04.
       BACKGROUND: Health disparities remain a systemic challenge. With the emergence of the Black Lives Matter movement and scant evidence of diversity, equity, and inclusion (DEI) initiatives for workers in health science libraries, this scoping review maps evidence that can be incorporated into a culture of change.
    OBJECTIVES: To identify the extent, type, and location of DEI initiatives being conducted in health science libraries for library workers.
    METHODS: Eight databases were systematically searched for literature from 2014 onwards, including PubMed, Scopus, and Web of Science. Four reviewers were involved in screening and data extraction.
    RESULTS: Reviewers excluded 6712 records at title/abstract screening. A total of 177 articles progressed to full-text screening, where 153 were excluded. The final number of articles that underwent data extraction was 24.
    DISCUSSION: Initiatives primarily occurred in academic libraries, led by library workers. Identities mostly focused on were gender, race, and sexuality, while some initiatives focused on general DEI concepts. Most literature pertained to library patrons, demonstrating a gap in reported initiatives for health science library workers. Assessment of initiatives was lacking, with no validated assessment tools used. All of the articles focused on either the United States or Canada.
    CONCLUSION: Diversity continues to be a challenge within the profession; this should be mitigated through recruitment and retention strategies along with mentorship for new and diverse librarians.
    Keywords:  education and training; librarians, health science; librarians, medical; libraries, health science; library and information professionals; professional development; review, scoping; social justice
    DOI:  https://doi.org/10.1111/hir.70007
  4. Med Ref Serv Q. 2025 Dec 02. 1-12
      Health sciences librarians play a critical role in supporting evidence synthesis, most commonly systematic and scoping reviews. This paper reflects on the author's decade of evidence synthesis services (2015-2025). During this time, the author provided extensive support to health sciences researchers, faculty, students, and staff at a large research university through consultation and collaboration on systematic and scoping reviews. The author discusses the roles undertaken, practice guidelines adhered to, information sources frequently searched, systematic review tools used, challenges encountered, and lessons learned throughout these experiences. The paper concludes with recommendations for enhancing systematic review services in health sciences libraries.
    Keywords:  Evidence synthesis; health sciences librarians; health sciences libraries; library services; scoping review; systematic review
    DOI:  https://doi.org/10.1080/02763869.2025.2595571
  5. Aust Occup Ther J. 2025 Dec;72(6): e70056
       INTRODUCTION: Scoping reviews are being completed by occupational therapists more frequently. Many occupational therapy scoping review search protocols only focus on white peer-reviewed literature accessible from online database searches. The original intent of scoping reviews was to also include searches of grey literature, but this source of evidence is frequently overlooked. This overview aims to discuss the advantages and challenges of grey literature and provide strategies for searching, accessing, and critically appraising it.
    METHODS: This overview discusses the advantages and challenges of grey literature and provides strategies for searching, accessing, and critically appraising it in the context of conducting a scoping review.
    RESULTS: Searching for and sourcing grey literature can be challenging, time-consuming, costly, and labour-intensive. Determining the breadth and inclusivity of a grey literature search can also be daunting. However, there are several significant advantages to incorporating grey literature in an occupational therapy scoping review search protocol. These include reducing the impact of positive-results publication bias, providing a more balanced perspective on the available evidence, and offering more detailed information than journal literature, as grey literature reports are not restricted by publisher-enforced length limitations.
    CONSUMER AND COMMUNITY INVOLVEMENT: No consumer or community involvement occurred during the writing of this manuscript in part due to the nature of the article topic. Any terminology used in this manuscript also does not refer to any specific societal, community, or cultural groups.
    CONCLUSION: Moving forwards, it is strongly recommended that occupational therapists undertaking a scoping review include both white and grey literature sources in their search protocols where appropriate. Doing so will enhance the breadth, depth, and rigour of occupational therapy scoping review outcomes.
    Keywords:  evidence; grey literature; methodology; occupational therapy; scoping reviews; search process
    DOI:  https://doi.org/10.1111/1440-1630.70056
  6. J Exp Orthop. 2025 Oct;12(4): e70521
       Purpose: High tibial osteotomy (HTO) is frequently used to treat knee malalignment in younger patients. Given the rise in online health information-seeking behaviour, this study aimed to evaluate the quality of ChatGPT-generated responses to frequently asked questions (FAQs) about HTO and to assess the reliability of two scoring systems used by orthopaedic surgeons.
    Methods: Twenty-four FAQs were submitted to ChatGPT (GPT-4-turbo). Four orthopaedic surgeons independently rated the responses at two time points using: (1) a 4-point categorical scale (1 = excellent, 4 = poor), and (2) a 100-point numerical scale (0 = worst, 100 = best). Intra-observer reliability was assessed using weighted kappa (κ) and intraclass correlation coefficients (ICC); inter-observer agreement was measured using ICC values.
    Results: Most responses were rated positively, with over 70% considered 'excellent' or requiring minimal clarification. Intra-observer agreement was variable, ranging from κ = 0.333 to 0.864 and ICC = 0.690-0.922. Inter-observer agreement was consistently low across both scales (ICC ≤ 0.390).
    Conclusion: ChatGPT responses to HTO-related FAQs were rated as high quality by most evaluators. However, the low inter-observer agreement highlights the need for standardised evaluation tools and suggests that expert oversight remains essential when integrating AI-generated content into patient education.
    Level of Evidence: Level V.
    Keywords:  ChatGPT; artificial intelligence; high tibial osteotomy; scoring system
    DOI:  https://doi.org/10.1002/jeo2.70521
  7. BMC Oral Health. 2025 Dec 04.
       BACKGROUND: This study aimed to evaluate and compare the quality, reliability, readability, and originality of information provided by two AI chatbots (ChatGPT-4.0 and Google Gemini Pro) regarding primary tooth pulpotomy, a common pediatric dental procedure.
    METHODS: Based on the current AAPD guidelines and frequently asked parental inquiries, a total of 20 questions on primary tooth pulpotomy (10 theoretical and 10 clinical) were prepared. Each question was presented to the AI chatbots (ChatGPT-4.0 and Google Gemini Pro) in a new conversation session without providing any guiding prompts. The responses were evaluated using standardized assessment criteria, including DISCERN, EQIP, GQS, FRES, FKRGL, and the iThenticate similarity index. Two experienced pediatric dentists independently assessed the answers after a calibration process to standardize scoring, and inter-rater reliability was confirmed using the intraclass correlation coefficient (ICC).
    RESULTS: Gemini Pro demonstrated higher reliability and quality scores (DISCERN and EQIP), while ChatGPT-4 produced responses with higher complexity, requiring university-level reading skills (FKRGL). No significant differences were observed in plagiarism or global quality scores (GQS). Gemini Pro's responses were more readable, enhancing accessibility for broader audiences.
    CONCLUSION: While both AI models generated informative and original content, Gemini Pro provided more reliable and accessible responses, making it a potentially valuable resource for patient and parent education on primary tooth pulpotomy; ChatGPT-4's responses, by contrast, required a higher level of education to read. However, AI-generated information should not replace professional dental consultation, and future AI development should focus on improving source transparency, readability, and clinical relevance.
    Keywords:  Artificial intelligence; ChatGPT/Gemini; Oral health information; Pediatric dentistry; Pulpotomy
    DOI:  https://doi.org/10.1186/s12903-025-07449-2
  8. Cureus. 2025 Nov;17(11): e95898
      Introduction: Acute appendicitis, an inflammation of the vermiform appendix, is one of the most common diseases requiring surgical intervention. With the advent of standardized artificial intelligence (AI) tools such as ChatGPT (OpenAI, San Francisco, CA), AI-based search engines have emerged as a secondary means for patients to educate themselves about their health. The readability of each response is important for patients' understanding of both concepts and novel therapeutics.
    Aims: This study aimed to evaluate and compare the readability of medical information on acute appendicitis generated by an AI language model and UpToDate (Wolters Kluwer Health, Waltham, MA) using established readability metrics.
    Methodology: A comparative cross-sectional study was conducted to evaluate the readability of six ChatGPT-4o and six UpToDate responses on acute appendicitis. Readability parameters were assessed using WebFX (WebFX®, Harrisburg, PA), and differences between sources were analyzed using the Mann-Whitney U test in IBM SPSS Statistics software, version 25 (IBM Corp., Armonk, NY) and R (v4.3.2, R Core Team, Vienna, Austria).
    Results: UpToDate had a higher word count and more words per sentence than ChatGPT (both p < 0.05). ChatGPT had a lower absolute difficult-word count (p = 0.002) but a higher difficult-word percentage (p = 0.002). Differences in Flesch Reading Ease (FRE), Flesch-Kincaid Grade Level (FKGL), Simple Measure of Gobbledygook (SMOG), and sentence count were not statistically significant (all p > 0.05).
    Conclusions: ChatGPT produced more concise content than UpToDate, but its higher proportion of difficult words may limit comprehension, highlighting the need to balance brevity with readability in AI-generated medical information.
    Keywords:  acute appendicitis; chatgpt; complex words; grade level; readability; smog index
    DOI:  https://doi.org/10.7759/cureus.95898
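Several of the studies above (items 6, 8, 9, 12, 13) score texts with the Flesch Reading Ease (FRE), Flesch-Kincaid Grade Level (FKGL), and SMOG formulas. As background, here is a minimal Python sketch of these standard formulas; counting words, sentences, and syllables is left to the caller, since tools such as WebFX bundle their own counters:

```python
import math

def flesch_reading_ease(words, sentences, syllables):
    # Higher scores mean easier text; 60-70 is roughly "plain English".
    return 206.835 - 1.015 * (words / sentences) - 84.6 * (syllables / words)

def flesch_kincaid_grade(words, sentences, syllables):
    # Approximates the US school grade needed to read the text.
    return 0.39 * (words / sentences) + 11.8 * (syllables / words) - 15.59

def smog_index(polysyllables, sentences):
    # Based on the count of words with 3+ syllables, scaled to 30 sentences.
    return 1.0430 * math.sqrt(polysyllables * (30 / sentences)) + 3.1291

# A hypothetical 100-word passage with 5 sentences and 150 syllables:
fre = flesch_reading_ease(100, 5, 150)    # about 59.6, "fairly difficult"
fkgl = flesch_kincaid_grade(100, 5, 150)  # about 9.9, roughly 10th grade
```

This makes concrete why the chatbot responses above score as "difficult": long sentences raise the words-per-sentence term, and polysyllabic medical vocabulary raises the syllables-per-word term.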
  9. Clin Transl Allergy. 2025 Dec;15(12): e70130
       BACKGROUND: Chat Generative Pre-Trained Transformer 4 (ChatGPT-4) represents an advancing large language model (LLM) with potential applications in medical education and patient care. While Allergen Immunotherapy (AIT) can change the course of allergic diseases, it can also bring uncertainty to patients, who turn to readily available resources such as ChatGPT-4 to address these doubts. This study aimed to use validated tools to evaluate the information provided by ChatGPT-4 regarding AIT in terms of quality, reliability, and readability.
    METHODS: In accordance with EAACI clinical guidelines about AIT, 24 questions were selected and introduced in ChatGPT-4. Independent reviewers evaluated ChatGPT-4 responses using three validated tools: the DISCERN instrument (quality), JAMA Benchmark criteria (reliability), and Flesch-Kincaid Readability Tests (readability). Descriptive statistics summarized findings across categories.
    RESULTS: ChatGPT-4 responses were generally rated as "fair quality" on DISCERN, with strengths in classification/formulations and special populations. Notably, the tool provided good-quality responses on the preventive effects of AIT in children and premedication to reduce adverse reactions. However, JAMA Benchmark scores consistently indicated "insufficient information" (median = 0-1), primarily due to absent authorship, attribution, disclosure, and currency. Readability analyses revealed a college graduate-level requirement, with most responses classified as "very difficult" to understand. Overall, ChatGPT-4 demonstrated fair quality, insufficient reliability, and difficult readability for patients.
    CONCLUSIONS: ChatGPT-4 provides generally well-structured responses on AIT but lacks reliability and readability for clinical or patient-directed use. Until specialized, reference-based models are developed, healthcare professionals should supervise its use, particularly in sensitive areas such as dosing and safety.
    Keywords:  allergen immunotherapy; allergic rhinitis; artificial intelligence
    DOI:  https://doi.org/10.1002/clt2.70130
  10. Knee Surg Sports Traumatol Arthrosc. 2025 Dec 01.
       PURPOSE: To compare the accuracy, readability and patient-centeredness of responses generated by standard ChatGPT-4o and its retrieval-augmented 'deep research' mode for hip arthroscopy education, addressing the current uncertainty about the reliability of large language models in orthopaedic patient information.
    METHODS: Thirty standardised patient questions were derived through structured searches of reputable orthopaedic health information websites. Both ChatGPT configurations independently generated responses. Two fellowship-trained orthopaedic surgeons assessed each response independently, using 5-point Likert scales (1 = poor, 5 = excellent) for accuracy, clarity, comprehensiveness and readability. Intra- and interrater reliabilities were calculated, and readability metrics were also evaluated using Flesch-Kincaid Grade Level (FKGL) and Flesch Reading Ease Score (FRES).
    RESULTS: Deep Research outperformed the standard model in accuracy (4.7 ± 0.4 vs. 4.0 ± 0.5; p = 0.012) and comprehensiveness (4.8 ± 0.3 vs. 3.9 ± 0.6; p < 0.001). The standard model performed better in clarity (4.6 ± 0.4 vs. 4.4 ± 0.5; p = 0.048). Readability Likert scores were comparable (p = 0.729), but FKGL and FRES favoured the standard model (both p < 0.001). Interrater intraclass correlation coefficients (ICC) ranged from 0.57 to 0.83; intrarater ICCs from 0.63 to 0.79.
    CONCLUSION: Deep Research provides superior scientific rigour, whereas the standard model offers better readability. A hybrid approach combining model strengths may maximise educational effectiveness, though clinical oversight remains essential to mitigate misinformation risks. The observed differences were modest in magnitude, aligning with previously reported accuracy-readability trade-offs in LLMs. These results should be interpreted as exploratory and hypothesis-generating.
    LEVEL OF EVIDENCE: Level IV, cross-sectional, comparative simulation study.
    Keywords:  ChatGPT; artificial intelligence; hip arthroscopy; large language models; patient education; retrieval‐augmented generation
    DOI:  https://doi.org/10.1002/ksa.70207
  11. Sci Rep. 2025 Nov 29.
      Large language model-based (LLM) chatbots are increasingly integrated into healthcare communication, offering accessible and interactive information. These artificial intelligence (AI) tools have the potential to influence caregiver health behaviors when tailored to user needs and literacy levels. In pediatric dentistry, fluoride remains a cornerstone of caries prevention but is also subject to public concerns and online misinformation, underscoring the need for reliable digital communication. This observational and exploratory study evaluated the performance of three advanced AI chatbots (ChatGPT-4o, Google Gemini Pro, and DeepSeek V3) in providing fluoride-related information to parents and caregivers in the context of pediatric oral health. Twenty fluoride-related questions, derived from American Academy of Pediatric Dentistry (AAPD) guideline themes, were presented to each chatbot in standardized sessions. Responses were independently evaluated by three blinded reviewers using validated tools: EQIP, DISCERN, Global Quality Scale (GQS), Flesch Reading Ease Score (FRES), Flesch-Kincaid Reading Grade Level (FKRGL), and the iThenticate similarity index. These instruments assessed quality, reliability, readability, and originality. Inter-rater reliability was confirmed with intraclass correlation coefficients (ICCs). Statistical analyses were conducted using ANOVA or Kruskal-Wallis tests with appropriate post-hoc methods. ChatGPT-4o achieved significantly higher EQIP (M = 4.32, SD = 0.43) and DISCERN (M = 4.20, SD = 0.48) scores than Gemini Pro and DeepSeek V3 (p < 0.001), indicating superior reliability and informational quality. While FRES (median = 68.5, p = 0.12) and similarity index (≤ 10%, p = 0.54) showed no significant differences, ChatGPT consistently produced more readable and original content. FKRGL differences were borderline (p = 0.041) but not retained after correction, and GQS outcomes were comparable. These findings suggest that ChatGPT's superior performance is not only statistically significant but also practically relevant for enhancing parental comprehension of fluoride use. Among the evaluated models, ChatGPT-4o demonstrated the clearest and most reliable fluoride communication. Its higher EQIP and DISCERN scores highlight its potential as a supportive tool for caregiver education in pediatric dentistry. Nonetheless, these systems should be implemented cautiously, complemented with professional oversight, and continuously validated to prevent misinformation and ensure safe clinical integration.
    Keywords:  Artificial intelligence; Chatbots; Fluoride; Pediatric dentistry
    DOI:  https://doi.org/10.1038/s41598-025-28857-y
  12. Reumatologia. 2025;63(5): 313-320
       Introduction: This study aimed to evaluate the readability, quality, reliability, similarity, and length of texts generated by ChatGPT on common rheumatic diseases and compare their content with American College of Rheumatology (ACR) patient education fact sheets.
    Material and methods: Fifteen common rheumatic diseases were included based on the ACR fact sheets. Questions about disease characteristics, symptoms, treatments, and lifestyle recommendations were generated based on ACR content and input into ChatGPT-4 for comparison. Readability was assessed using the Flesch-Kincaid Grade Level (FKGL), Flesch Reading Ease (FRE), and the Simple Measure of Gobbledygook (SMOG) index. Quality and reliability were evaluated using the DISCERN questionnaire and the Ensuring Quality Information for Patients (EQIP) tool. Text similarity was measured using cosine similarity, and word count was obtained using Microsoft Word.
    Results: ChatGPT-generated texts had significantly higher FKGL scores (14.3 vs. 12.7; p = 0.007) and SMOG scores (p < 0.001), indicating greater linguistic complexity. They also had lower FRE scores (35.8 vs. 43.7; p < 0.001). The mean DISCERN score for ChatGPT was significantly lower than for ACR fact sheets (46 vs. 52; p < 0.001), suggesting reduced reliability. However, no significant difference was found in EQIP quality scores (p = 0.744). Cosine similarity between ChatGPT and ACR texts averaged 0.69 (range: 0.57-0.76), indicating moderate content overlap. ChatGPT texts were more than twice as long, with a median word count of 1,109 compared to 450 for ACR materials (p < 0.001).
    Conclusions: Despite the moderate similarity, ChatGPT-generated texts on rheumatic diseases were more complex, less reliable, and longer than ACR fact sheets. These findings highlight the need for improvements in artificial intelligence-driven healthcare tools to ensure readability, accuracy, and reliability, making them more aligned with expert-reviewed resources.
    Keywords:  American College of Rheumatology; ChatGPT; health information; rheumatic diseases
    DOI:  https://doi.org/10.5114/reum/207526
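Item 12 above quantifies content overlap between ChatGPT and ACR texts via cosine similarity. For readers unfamiliar with the measure, here is a minimal bag-of-words sketch; published analyses typically add proper tokenization, stop-word removal, and TF-IDF weighting:

```python
import math
from collections import Counter

def cosine_similarity(text_a, text_b):
    """Cosine similarity between the term-frequency vectors of two texts.

    Returns 1.0 for identical word distributions, 0.0 for no shared words.
    """
    a = Counter(text_a.lower().split())
    b = Counter(text_b.lower().split())
    # Dot product over the shared vocabulary only (other terms contribute 0).
    dot = sum(a[w] * b[w] for w in set(a) & set(b))
    norm = math.sqrt(sum(v * v for v in a.values())) * \
           math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0
```

On this scale, the reported average of 0.69 between ChatGPT and ACR texts indicates substantial but incomplete vocabulary overlap.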
  13. Fr J Urol. 2025 Dec 03. pii: S2950-3930(25)00209-8. [Epub ahead of print] 103062
       PURPOSE: Nocturnal enuresis, defined as involuntary urination during sleep in children aged five years and older, is a prevalent condition affecting millions of children worldwide with significant psychosocial implications. While artificial intelligence-powered chatbots have rapidly emerged as accessible health guidance tools, the clarity, consistency, and reliability of their information remain uncertain.
    METHODS: This study systematically compared the quality, readability, and clinical reliability of responses from three leading large language models (OpenAI GPT-4o, Google Gemini 2.5 Pro, and DeepSeek R1) to the 40 most frequently asked questions about childhood nocturnal enuresis. Designed as a cross-sectional observational study, questions identified through decade-long search engine data analysis were organized into four thematic categories and posed in Turkish to the chatbots. As the study exclusively analyzed publicly available AI outputs without involving human participants, institutional review board approval was not required. Responses were evaluated using the Flesch Reading Ease Score (FRES) and Flesch-Kincaid Grade Level (FKGL) for readability; Ensuring Quality Information for Patients (EQIP) and modified DISCERN (mDISCERN) tools for quality, applying a double-blind methodology. Group comparisons were conducted using ANOVA and post-hoc tests.
    RESULTS: All chatbots generated texts in the "difficult to read" range (FRES: 33.6-40.9), requiring university-level comprehension (FKGL: 20.3-21.9), thereby substantially limiting accessibility for the target parent audience. DeepSeek demonstrated significantly superior performance on EQIP criteria (70.4 ± 9.2), outperforming both Gemini (57.8 ± 6.3) and GPT-4o (54.7 ± 4.8) (p = 0.003). However, mDISCERN scores remained low across all models (2.10-2.30, p = 0.183).
    CONCLUSION: Current AI chatbots offer only limited potential as reliable and accessible health information sources on nocturnal enuresis and are not yet adequate for clinical use. Future developments must prioritize plain language implementation, structured information delivery, and alignment with current pediatric urology guidelines to transform digital health tools into genuinely beneficial and clinically reliable resources for families.
    Keywords:  Nocturnal enuresis; artificial intelligence; digital health; health literacy; large language models; pediatric urology; readability
    DOI:  https://doi.org/10.1016/j.fjurol.2025.103062
  14. Oral Oncol. 2025 Dec 04. pii: S1368-8375(25)00642-6. [Epub ahead of print] 172: 107813
       OBJECTIVE: To compare the quality of online information about human papillomavirus (HPV)-associated oropharyngeal cancer generated by a large language model with content retrieved from conventional web search and authoritative guideline-based sources.
    METHODS: Twenty high-volume patient search queries were identified using global Google Trends data. For each question, responses were obtained from GPT-4 (OpenAI), the highest-ranked non-sponsored Google Search result, and leading governmental or guideline-based websites. Responses were anonymized and evaluated in a blinded manner by seven otolaryngology specialists and ten adult laypersons. Experts assessed accuracy, clarity, completeness, relevance, and usefulness; laypersons rated clarity, trustworthiness, and usefulness. Comparative analyses were performed using Friedman and Bonferroni-corrected Wilcoxon signed-rank tests, with inter-rater agreement estimated using intraclass correlation coefficients (ICC).
    RESULTS: ChatGPT-generated responses received higher mean ratings than Google Search across all domains for both rater cohorts (p < 0.001 for all comparisons). Experts rated GPT-4 and guideline-based content similarly for accuracy, completeness, and usefulness, while GPT-4 scored significantly higher for clarity and relevance (p < 0.01). Laypersons rated GPT-4 responses highest across all domains, with median scores of 5 versus 4 for the other sources. Inter-rater agreement was modest for subjective domains.
    CONCLUSION: ChatGPT-generated information on HPV-associated oropharyngeal cancer matched the accuracy and completeness of authoritative guideline-based content and demonstrated significantly greater clarity and relevance, while outperforming conventional web search results. LLMs may help improve accessibility and consistency of online patient education when implemented with expert oversight, transparent sourcing, and ongoing quality monitoring.
    Keywords:  Artificial intelligence; Digital health literacy; GPT-4; Google search; Health communication; Human papillomavirus; Large language model; Online health information; Oropharyngeal cancer; Patient education
    DOI:  https://doi.org/10.1016/j.oraloncology.2025.107813
  15. Menopause. 2025 Dec 02.
       OBJECTIVE: Generative artificial intelligence is rapidly evolving and is now being explored in health care to support patient and clinician education. This study evaluated the accuracy, completeness, and readability of four large language models (LLMs) (ChatGPT 3.5, Gemini, ChatGPT 4.0, and OpenEvidence) in answering questions about menopause and hormone therapy.
    METHODS: A total of 35 questions (20 patient-level, 15 clinician-level) were entered into each LLM. OpenEvidence was only used for clinician-level questions. Four blinded expert reviewers rated responses as accurate and complete, accurate but incomplete, or inaccurate. Readability of patient-level responses was assessed using the Flesch Reading Ease Score (FRES) and word count. Analyses used ANOVA for readability and odds ratios for accuracy comparisons.
    RESULTS: For patient-level questions, ChatGPT 3.5 achieved the highest accuracy (70%), followed by ChatGPT 4.0 (60%) and Gemini (30%); Gemini had significantly lower odds of accuracy compared with ChatGPT 3.5 (OR=0.18, 95% CI=0.05-0.71; P=0.014). FRES scores differed significantly (P<0.001): Gemini scored 38.9±7.3 ("difficult"), ChatGPT 3.5 scored 31.0±11.2, and ChatGPT 4.0 scored 26.5±8.6 (both "very difficult"). For clinician-level questions, ChatGPT 4.0 achieved the highest accuracy (67%), followed by ChatGPT 3.5 and OpenEvidence (60% each) and Gemini (47%); no significant differences were observed among models (all P>0.05).
    CONCLUSION: LLMs demonstrated limited accuracy and frequent incorrect or incomplete responses to menopause-related queries, highlighting the need to improve model performance to ensure accurate and reliable information for both patients and clinicians.
    Keywords:  Artificial intelligence; Clinician education; Large language models; Menopause education; Patient education.
    DOI:  https://doi.org/10.1097/GME.0000000000002695
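The odds-ratio comparison reported above (OR=0.18, 95% CI=0.05-0.71) can be illustrated from the accuracy percentages. A minimal Python sketch, assuming hypothetical counts of 6/20 vs. 14/20 correct answers (consistent with the 30% and 70% accuracies, though the paper's exact 2x2 table is not given) and a standard Wald confidence interval:

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio for a 2x2 table [[a, b], [c, d]] with a Wald 95% CI.
    a/b: outcome present/absent in group 1; c/d: same in group 2."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1/a + 1/b + 1/c + 1/d)   # standard error of log(OR)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

# Hypothetical counts: model A accurate on 6/20 questions, model B on 14/20.
or_, lo, hi = odds_ratio_ci(6, 14, 14, 6)
```

With these assumed counts the sketch yields an odds ratio of about 0.18 with a Wald 95% CI of roughly 0.05-0.71, in line with the figures reported above.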
  16. Psychooncology. 2025 Dec;34(12): e70349
       OBJECTIVE: Sarcoma is a rare cancer with complex treatment phases, leaving people with sarcoma and their carers with unmet information and support needs. This review provides an evaluation of sarcoma websites internationally to inform the development of online resources for the SUN-SHINE sarcoma project, aimed at addressing the unmet needs of people with a sarcoma diagnosis.
    METHODS: A review of sarcoma information websites was conducted via Google search, with the first 3 pages of results of 19 searches undergoing eligibility screening. Of 95 websites yielded by the initial screening, 40 were eligible for assessment using the Flesch-Kincaid Grade Level (FKGL), Gunning-Fog Index (GFI), Coleman-Liau Index (CLI), Simple Measure of Gobbledygook (SMOG) Index, and Flesch-Kincaid Reading Ease (FRE) for readability, and the Patient Education Materials Assessment Tool (PEMAT) for understandability and actionability. Readability was assessed against the Australian recommendation of a grade 8 reading level for health information; higher PEMAT scores indicated greater understandability and actionability.
    RESULTS: The 40 websites reviewed were based in Europe (n = 13), Oceania (n = 12), North America (n = 10), and Asia (n = 4), with one multinational site. Websites generally contained pages on sarcoma definitions, diagnosis, and treatments, but lacked information on supportive and psychosocial care. Readability assessments exceeded (were less readable than) the general population reading level (M = 10.5; SD = 2.0). PEMAT scoring of websites revealed understandability averaging 70.7% (SD = 14.8), but lower actionability (M = 29.1%; SD = 26.4).
    CONCLUSIONS: Limited sarcoma supportive care information exists, with no caregiver-focused websites and little tailoring for specific populations. Websites contained some components to support their readability, such as sub-headings and summaries, but more inclusive and accessible websites are warranted.
    Keywords:  environmental scan; oncological resources; patient and caregiver resources; rare cancers; sarcoma; supportive care
    DOI:  https://doi.org/10.1002/pon.70349
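Several of the studies in this issue score materials against readability indices such as FKGL, SMOG, the Gunning Fog Index, and Flesch Reading Ease. These are simple closed-form formulas over word, sentence, and syllable counts; a minimal Python sketch using the standard published coefficients (note that counting conventions, e.g. what qualifies as a "complex" word, vary between implementations):

```python
import math

def flesch_reading_ease(words, sentences, syllables):
    """Flesch Reading Ease: higher = easier (90+ very easy, <30 very difficult)."""
    return 206.835 - 1.015 * (words / sentences) - 84.6 * (syllables / words)

def flesch_kincaid_grade(words, sentences, syllables):
    """Flesch-Kincaid Grade Level: approximate US school grade."""
    return 0.39 * (words / sentences) + 11.8 * (syllables / words) - 15.59

def smog_index(polysyllables, sentences):
    """SMOG: based on words of 3+ syllables, normalised to a 30-sentence sample."""
    return 1.0430 * math.sqrt(polysyllables * 30 / sentences) + 3.1291

def gunning_fog(words, sentences, complex_words):
    """Gunning Fog Index: average sentence length plus share of complex words."""
    return 0.4 * ((words / sentences) + 100 * (complex_words / words))

# Hypothetical passage counts: 100 words, 10 sentences, 150 syllables.
fre = flesch_reading_ease(100, 10, 150)
fkgl = flesch_kincaid_grade(100, 10, 150)
```

For the hypothetical passage above, the sketch gives an FRE of roughly 69.8 ("standard") and an FKGL of about grade 6, under the grade-8 ceiling recommended for health information.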
  17. Int J Dent Hyg. 2025 Dec 04.
       BACKGROUND: The internet is an important source of health information for the population. There is robust evidence about the efficacy of mouthwashes in the prevention and treatment of oral diseases. However, many websites containing mouthwash-related content may display misinformation and be challenging to read and understand. Thus, this study evaluates the quality of information available about mouthwashes on Brazilian websites.
    METHODS: A total of 100 websites were evaluated across Google, Bing, and Yahoo!. The websites were ranked according to their order of appearance on each search engine. Two independent examiners assessed the quality of the websites using the DISCERN questionnaire and the Journal of the American Medical Association (JAMA) benchmark criteria. The readability of the sites was assessed with the Flesch Reading Ease index adapted to Brazilian Portuguese (FRE-BP). Website content was categorised according to the presence or absence of information relevant to the theme. Statistical analysis was performed using the Spearman rank correlation coefficient, Mann-Whitney U test, Kruskal-Wallis test, and Dunn post hoc test.
    RESULTS: A total of 32 sites were analysed. Web content was considered of poor quality by DISCERN (mean 37.46 ± 8.28) and JAMA (mean 1.37 ± 0.87) scores, presenting difficult reading levels (FRE-BP: mean 44.04 ± 9.89).
    CONCLUSIONS: The mouthwash-related content available on Brazilian websites was considered of low quality and difficult to read.
    Keywords:  COVID-19; consumer health information; dental informatics; internet use; mouthwashes
    DOI:  https://doi.org/10.1111/idh.70023
  18. Addict Sci Clin Pract. 2025 Dec 02. 20(1): 93
       BACKGROUND: The utilisation of online evidence-based written educational resources is crucial in addressing problematic alcohol and other drugs (AOD) use through prevention, treatment, and intervention strategies. However, low health literacy among one in five Australian adults raises concerns regarding the effective understanding of health information. This study aims to evaluate the content, suitability, and readability of AOD resources in New South Wales (Australia), recognising the importance of accessible and informative resources in supporting AOD demand reduction strategies.
    METHODS: In this research, a comprehensive desktop search was conducted to analyse one- to two-page AOD resources readily accessible through the internet in New South Wales, published by government and not-for-profit organisations. The content was thoroughly analysed for its coverage of key AOD topics. The Suitability Assessment of Materials (SAM) instrument evaluated visual and written elements, examining aspects like layout, typography, and illustrations. Readability was assessed using the Flesch-Kincaid Grade Level (FKGL), Gunning Fog Index (FOG), Simplified Measure of Gobbledygook (SMOG), and Flesch Reading Ease tools. Descriptive statistics, including frequency, percentage, and standard deviation, were calculated.
    RESULTS: The study analysed 88 AOD resources. Most resources had a target audience, but only three resources involved consumers in the development process. The content analysis showed 66% focused on drug-related topics, 20% on alcohol-related topics, and 14% covered both. Topics such as alcohol use during pregnancy and breastfeeding were well addressed in alcohol resources. Additionally, 90% of the resources had headings and subheadings. However, only 28% scored 'superior' for layout, and none achieved 'superior' ratings for typography. Furthermore, 74% did not use illustrations to highlight key messages. Most resources used an active voice and conversational style, but complex sentences were common. The average reading grade level of the resources was 9 ± 2.6 with FOG and Flesch's reading ease indicating 10th-grade difficulty, while FKGL and SMOG suggested a 7th-grade level.
    CONCLUSIONS: The evidence strongly suggests the need for the development of AOD resources that are accessible to individuals with low literacy levels without sacrificing content coverage. A key recommendation is to involve consumers in both developing and reviewing these resources.
    Keywords:  Alcohol and other drugs; Content analysis; Health literacy; Patient education; Readability
    DOI:  https://doi.org/10.1186/s13722-025-00615-5
  19. Front Public Health. 2025 ;13 1666853
       Objective: This study aimed to evaluate the information quality and content of dementia prevention on WeChat.
    Methods: The search term "dementia prevention" was used on WeChat, resulting in 125 samples being included. Information quality was assessed using GQS and PEMAT-P. The content was evaluated based on dementia prevention guidelines and article characteristics.
    Results: Information quality was moderate (median 3.0), with high understandability and actionability. Most articles were published by medical institutions (37.6%), but governmental organizations achieved the highest scores (p < 0.05). Content completeness was low, with healthy lifestyle being mentioned most frequently (98.4%), while sensory organ protection and improving air environment were mentioned least frequently (both at 3.2%). Articles with more complete content and fewer advertisements demonstrated significantly higher information quality (p < 0.001 and p = 0.016, respectively).
    Conclusion: Overall, the information quality of dementia prevention on WeChat was medium, with high understandability and actionability but low content completeness. Articles with more complete content and fewer advertisements have better information quality. It is recommended that publishers provide more complete articles, while platforms should strengthen advertisement supervision.
    Keywords:  WeChat; content analysis; dementia prevention; health information; information quality
    DOI:  https://doi.org/10.3389/fpubh.2025.1666853
  20. Front Public Health. 2025 ;13 1696018
       Background: Bowel sounds are a valuable indicator of monitoring and reflecting intestinal motility. Information about bowel sounds is significant for assessing physical condition. Short video-sharing platforms facilitate such information but must be validated regarding the quality and reliability of the content.
    Objective: This study aimed to assess the reliability and quality of bowel sounds-related information available on Chinese short video-sharing platforms.
    Methods: A total of 132 video samples were collected on the three most popular Chinese video sharing platforms: TikTok, Bilibili, and WeChat. Each video was assessed by two independent physicians in terms of content comprehensiveness, quality (using the Global Quality Score) and reliability (using the DISCERN tool). Furthermore, comparisons were made across different video sources.
    Results: Of the 132 videos analyzed, 78 (59.09%) were uploaded by medical professionals, including gastroenterologists, non-gastroenterologists, and clinical nutritionists, while 54 (40.91%) were shared by non-medical professionals such as science bloggers, nonprofit organizations, and patients. Videos uploaded by gastroenterologists received the highest engagement, with median likes of 150 (IQR: 31-1,198), favorites of 90 (IQR: 19-412), and shares of 50 (IQR: 14-225); in general, medical professionals' videos showed higher engagement than those of non-medical professionals. The GQS and modified DISCERN tool were used to assess video quality and reliability, respectively, with medical professionals scoring higher on both metrics (z = 4.448, p < 0.001; z = 2.209, p < 0.05). The median GQS and DISCERN scores were both 2 for the videos analyzed in this study; videos from gastroenterologists had the highest GQS scores, with a median of 3, although their DISCERN scores leave room for improvement.
    Conclusion: The study shows that medical professionals generally provide better-quality and more accurate content than non-professionals, and that videos uploaded by clinical nutritionists offer more comprehensive health education and treatment options. To ensure public access to reliable information, medical professionals should be encouraged to produce such videos, and basic content standards must be established.
    Keywords:  bowel sounds; clinical nutritionists; information quality; social media; video platforms
    DOI:  https://doi.org/10.3389/fpubh.2025.1696018
  21. Digit Health. 2025 Jan-Dec;11: 20552076251404516
       Background: Cervical cancer is a significant global health concern with over 662,000 new cases and approximately 349,000 deaths in 2022. Despite the clear benefits of screening, a portion of the population remains unaware of its importance. In China, short video platforms such as Kuaishou, Bilibili, and TikTok host numerous related videos, but the quality varies significantly.
    Method: Using the keyword "cervical cancer screening," the top 100 videos on each platform were searched (totaling 300), with 259 meeting the criteria. A comparative analysis was conducted on video duration, engagement metrics (likes, favorites, comments, shares), follower count, uploader identity, and video type. The Global Quality Score (GQS) and modified DISCERN tool were used for evaluation.
    Results: The study included 82 Kuaishou videos, 93 Bilibili videos, and 84 TikTok videos. Bilibili had the longest median video duration (109 s), while Kuaishou had the shortest (54.5 s). Kuaishou outperformed TikTok and Bilibili in engagement metrics. TikTok had a higher proportion of videos on the importance, process, considerations, and timing of screening. Professional uploaders (obstetricians and gynecologists whose expertise directly pertains to cervical cancer screening) were most prevalent on TikTok (74%). TikTok videos had the highest quality scores in GQS and mDISCERN, followed by Bilibili and Kuaishou. Significant differences in mDISCERN scores were found among the platforms (all pairwise comparisons p < .001). Spearman rank correlation analysis showed that higher-quality videos (measured by GQS and mDISCERN) were more likely to achieve higher audience engagement. Still, video duration did not affect quality or engagement.
    Conclusion: Social media platforms provide accessible health information, but the quality and reliability of cervical cancer screening videos vary significantly. Professionally uploaded videos generally have higher engagement and information reliability. Content creators should prioritize high-quality, accurate videos, and platforms should enhance content quality control to prevent misinformation dissemination.
    Keywords:  Bilibili; Cervical cancer screening; Kuaishou; TikTok; information quality; short videos platforms
    DOI:  https://doi.org/10.1177/20552076251404516
  22. Front Public Health. 2025 ;13 1663977
     Background: The incidence of adolescent depression has been increasing globally in recent years, raising public concern about this condition. Videos on adolescent depression are disseminated through TikTok and Bilibili, both of which have gained popularity in recent years as easily accessible sources of health information. However, no researchers have conducted a professional inspection and evaluation of depression-related videos targeting adolescents, and some of these videos may even disseminate misleading information.
    Methods: We retrieved the top 100 adolescent depression related videos from TikTok and Bilibili. Data on video characteristics, including engagement metrics and content, were also collected. Video quality was assessed using three rating tools: the Journal of the American Medical Association (JAMA), Global Quality Score (GQS), and the Modified DISCERN (mDISCERN). The independent t-test, Mann-Whitney U test, and Kruskal-Wallis test were used for comparison and analysis.
    Results: The analysis included 188 videos, with 95 from TikTok and 93 from Bilibili. TikTok videos were shorter and exhibited higher audience interaction. The most popular topic on TikTok and Bilibili was "Symptoms of adolescent depression." Video creators were predominantly experts on TikTok (72.63%), and general users on Bilibili (56.99%). Video quality, assessed using JAMA, GQS, and mDISCERN, varied across platforms. There were statistically significant differences in the three quality scores among different types of creators on TikTok and Bilibili (P < 0.005). No significant differences were observed in views, likes, comments, and collections data across different video publishers on TikTok and Bilibili.
    Conclusion: Videos on social media platforms can help the public gain knowledge about adolescent depression. However, the quality of videos on both platforms requires improvement. Strengthening collaboration among content creators, mental health experts, and platform administrators may enhance video quality and ensure more accurate and effective dissemination of information.
    Keywords:  Bilibili; TikTok; adolescents depression; quality analysis; social media
    DOI:  https://doi.org/10.3389/fpubh.2025.1663977
  23. Medicine (Baltimore). 2025 Nov 28. 104(48): e46045
      Melanoma is the most aggressive form of skin cancer, with a rising incidence worldwide. Social media platforms such as TikTok are increasingly serving as important channels for health information dissemination, yet the quality and reliability of melanoma-related content remain unclear. This study evaluated the characteristics, content distribution, quality, and reliability of Chinese-language melanoma-related short videos on TikTok. From August 7 to 10, 2025, 113 melanoma-related videos were collected. Video characteristics were recorded, and the global quality score and modified DISCERN (mDISCERN) tool were used for evaluation. The videos were generally short (median: 58.00 seconds) and had high engagement. Most content focused on risk factors, clinical manifestations, and treatment, with limited coverage of prevention and recurrence. Overall quality and reliability were low, with median global quality score and mDISCERN scores of 2.00 (interquartile range: 2.00-2.00). Significant differences in quality were observed among uploader types, with dermatologists producing the highest-quality content. No significant correlation was found between engagement metrics and quality. The quality and reliability of melanoma-related videos on TikTok are suboptimal. This study provides empirical evidence on the current status of melanoma health information in the social media environment and offers a reference for optimizing digital health communication strategies. Future efforts should enhance the comprehensiveness and scientific rigor of health content, increase the participation of healthcare professionals, and establish platform-level quality control mechanisms to ensure the accuracy and reliability of health information.
    Keywords:  health communication; information quality; melanoma; social media; tiktok
    DOI:  https://doi.org/10.1097/MD.0000000000046045
  24. J Pediatr Ophthalmol Strabismus. 2025 Dec 02. 1-5
       PURPOSE: To analyze the origin and quality of health information on one social media platform on the use of vision therapy in treating pediatric visual and learning disorders.
    METHODS: Nine hashtags were selected, and the 20 most-liked videos for each were independently assessed using the DISCERN and Global Quality Scoring (GQS) systems, totaling 147 videos with 11,194,000 views.
    RESULTS: Across all hashtags, vision therapy content was predominantly created by behavioral optometrists (32.2%) and vision therapy clinics (21.2%). This content scored poorly on both the DISCERN and GQS scoring systems, demonstrating widespread misinformation. With the increasing influence of social media on health care decisions, misinformation may lead to misguided treatment choices, potentially delaying evidence-based care.
    CONCLUSIONS: These findings underscore the need for increased engagement from ophthalmologists and other medical professionals to provide accurate, research-backed information and counter the spread of misleading claims about vision therapy in treating pediatric visual and learning disorders. Future research should explore content across multiple platforms and broaden the scope of analysis to enhance the understanding of online health misinformation.
    DOI:  https://doi.org/10.3928/01913913-20251008-02
  25. JMIR Dermatol. 2025 Dec 01. 8 e70010
       Background: TikTok, with more than 2 billion users worldwide, has become an influential venue for health information, including dermatologic advice. However, concerns remain about the accuracy and impact of sunscreen-related content.
    Objective: This study aimed to assess the quality, accuracy, and themes of popular TikTok videos about sunscreen; evaluate associations with creator credentials and promotional content; and identify implications for public health.
    Methods: We conducted a cross-sectional content analysis of the 100 most-liked English-language TikTok videos generated by the search term "sunscreen." Metadata, creator characteristics, Global Quality Score (GQS), accuracy, attitudes, promotional disclosures, and reference use were extracted using a structured codebook. Thematic and statistical analyses (ie, Pearson correlations, χ2, 2-tailed t tests, and ANOVA) were conducted, with significance defined as P<.05.
    Results: Of the top 100 videos, 74 (74%) expressed a positive attitude toward sunscreen, 35 (35%) were accurate, 57 (57%) were opinion based, and 6 (6%) were inaccurate. None of the videos cited references. GQS ratings were low: 40 (40%) videos were rated poor (score=1), 31 (31%) below average (score=2), and only 2 (2%) excellent (score=5). Promotional content appeared in 27 (27%) videos. Accuracy was negatively correlated with likes (r=-0.229; P=.02) and views (r=-0.242; P=.02), while GQS correlated positively with accuracy (r=0.270; P=.007) but not with engagement. Likes and views were strongly correlated (r=0.726; P<.001).
    Conclusions: Despite broadly positive sentiment toward sunscreen, misinformation and promotional bias are common in highly engaged TikTok videos, and user engagement is often unrelated to accuracy or educational quality. Dermatologists and public health experts must proactively engage on social platforms to counter misinformation and promote reliable skin health information.
    Keywords:  GQS; Global Quality Scale; SPF; TikTok; dermatology; health information; misinformation; social media; sun protection factor; sun safety; sunscreen
    DOI:  https://doi.org/10.2196/70010
  26. BMC Psychol. 2025 Dec 01.
       BACKGROUND: This study explores the relationships among Internet addiction, online health information-seeking behavior (OHISB), and cyberchondria.
    METHODS: The research was conducted in Yalova Province, Türkiye, with a sample of individuals aged 18 and over living in that province. Data were collected from participants using an online survey.
    RESULTS: The findings demonstrated that Internet addiction had a positive effect on OHISB (β = 0.557). It was also revealed that Internet addiction (β = 0.270) and OHISB (β = 0.442) had a positive effect on cyberchondria. Furthermore, OHISB had a mediating role in the effect of Internet addiction on cyberchondria (β = 0.246).
    CONCLUSIONS: This study highlights the existence of significant relationships among Internet addiction, OHISB, and cyberchondria.
    Keywords:  Cyberchondria; Internet addiction; Online health information-seeking behavior
    DOI:  https://doi.org/10.1186/s40359-025-03770-1
  27. Pediatr Dermatol. 2025 Nov 30.
      Social media are increasingly being used as a source of health information. We conducted an online, anonymous survey to learn how caregivers are interacting with social media and how this may impact their child's dermatologic care. There were 136 participants who started the survey and 97 who completed it (71.3% completion rate). The most common skin conditions participants sought information for were atopic dermatitis, 48% (47); acne, 40% (39); and dry skin care, 35% (34). Our results also found that participants of lower socioeconomic status use social media for skin care management more often (p < 0.01), highlighting the importance of providing reliable content on social media.
    Keywords:  atopic dermatitis; health literacy; online health information; social media
    DOI:  https://doi.org/10.1111/pde.70067
  28. JMIR Form Res. 2025 Dec 03. 9 e78397
       BACKGROUND: Internet health care plays a crucial role in addressing the challenge of distributing high-quality medical resources and promoting the optimal allocation of these resources and health equity in China. Online medical consultation (OMC) plays a more significant role than online health information seeking (OHIS). Currently, the proportion of Chinese patients using OMC is low. Therefore, it is essential to enhance patient engagement with OMC and fully leverage the role of internet health care in optimizing the allocation of medical resources.
    OBJECTIVE: This study aims to explore the correlation mechanisms of online medical community users' switching behaviors from OHIS to OMC.
    METHODS: This study is based on the knowledge-attitude-practice theory, which combines the social support theory and the health belief model to construct a research model of users' willingness to transition from OHIS to OMC. The study adopts a questionnaire survey and structural equation modeling method to conduct an empirical study.
    RESULTS: Gaining knowledge about information support has a significant positive impact on perceived susceptibility (β=.339, P<.001), perceived severity (β=.348, P<.001), and perceived benefits (β=.361, P<.001), and a significant negative impact on perceived barriers (β=-.285, P<.001). Gaining knowledge about emotional support positively affects perceived susceptibility (β=.220, P<.001) and perceived benefits (β=.149, P<.01) but does not significantly influence perceived severity (β=-.006, P>.05) or perceived barriers (β=.099, P>.05). Perceived susceptibility (β=.123, P<.05), perceived severity (β=.174, P<.001), and perceived benefits (β=.273, P<.001) positively influence patients' transition to online consultation behavior, whereas perceived barriers (β=-.112, P<.05) negatively impact this switch. In addition, we found that gaining knowledge about information support not only directly affects patients' switching to online consultations but also influences their uptake of OMC through perceived susceptibility (14.23%), perceived severity (13.17%), and perceived benefits (25.28%). In contrast, gaining knowledge about emotional support does not directly influence this behavioral switch; it operates only through perceived susceptibility (46.95%) and perceived benefits (52.90%).
    CONCLUSIONS: This study integrated the knowledge-attitude-practice framework, social support theory, and health belief model to uncover the internal logic of patients' behavioral transfers within online health communities. It confirmed the mediating role of the cognitive-emotional dual-drive pathway and health beliefs. The findings provide a scientific basis for the functional design of online health care platforms and for precise health knowledge dissemination strategies.
    Keywords:  health belief model; knowledge-attitude-practice; online health information seeking; online medical consultation; social support; switching behavior
    DOI:  https://doi.org/10.2196/78397
  29. J Health Commun. 2025 Dec 06. 1-12
      This study proposes a dual-motive model of health information seeking and avoidance, incorporating two distinctive motives for information behaviors - the accuracy and defense motives. In the proposed model, we identify the key antecedents to these two motives and explore political ideology as a potential moderator. In the context of COVID-19, an online survey was conducted with 638 respondents in South Korea. The results indicate that information insufficiency is linked to information seeking, whereas information overload and denial explain information avoidance to a greater extent. Trust in government and risk perception are negatively linked to information overload, reactance, and denial. Liberals' and conservatives' perceptions of risk and emotions differently activate information motives and behaviors. These findings provide theoretical and practical implications for health information management.
    Keywords:  COVID-19; health communication; information avoidance; information seeking
    DOI:  https://doi.org/10.1080/10810730.2025.2598820
  30. ScientificWorldJournal. 2025 ;2025 9846483
      [This corrects the article DOI: 10.1155/2024/6949281.].
    DOI:  https://doi.org/10.1155/tswj/9846483