bims-librar Biomed News
on Biomedical librarianship
Issue of 2026-02-22
33 papers selected by
Thomas Krichel, Open Library Society



  1. J Dent Educ. 2026 Feb 16.
       BACKGROUND: Library resources play a critical role in supporting academic learning and research, particularly in health sciences education. Despite increased availability of both physical and digital library materials, the extent to which dental students utilize these resources remains uncertain.
    AIM: To assess the knowledge, attitude, and practice (KAP) related to library resource usage among undergraduate and postgraduate dental students at a single dental institution.
    METHODS: A descriptive cross-sectional study was conducted among dental students using a structured, self-administered questionnaire. The survey assessed students' awareness of available library resources, their attitudes toward library use, and their actual practices regarding both physical and digital resource utilization. Data were analyzed using the Pearson chi-square test, with statistical significance set at p ≤ 0.05.
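    As a rough illustration of the test named above, a minimal Python sketch of a Pearson chi-square test on an invented student-level-by-usage contingency table (the counts are hypothetical, not the study's data):

      # Pearson chi-square test of independence, as in the study's analysis
      # plan. The 2x3 table of counts below is invented for illustration.
      from scipy.stats import chi2_contingency

      observed = [
          [60, 110, 51],  # undergraduates: frequent / occasional / rare users
          [20, 18, 5],    # postgraduates:  frequent / occasional / rare users
      ]
      chi2, p, dof, expected = chi2_contingency(observed)
      print(f"chi2={chi2:.2f}, dof={dof}, p={p:.4f}")  # significant if p <= 0.05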
    RESULTS: The study included 300 participants, of whom 221 were undergraduates and 43 were postgraduates. While most participants acknowledged the importance of library resources for academic success, only a moderate proportion reported frequent use of these resources. Postgraduate students demonstrated higher awareness and usage of digital resources compared to undergraduates. Several barriers were identified, including lack of time, insufficient awareness of available resources, and limited digital literacy.
    CONCLUSION: Although dental students exhibit a positive attitude toward library use, there exists a gap between their knowledge and actual practices. Strengthening awareness programs, improving digital resource access, and integrating library training into the curriculum are recommended to enhance resource utilization and academic performance.
    Keywords:  academic libraries; dental students; digital resources; knowledge‐attitude‐practice (KAP); library usage
    DOI:  https://doi.org/10.1002/jdd.70167
  2. Med Ref Serv Q. 2026 Feb 16. 1-8
    In July 2025, TRC Healthcare (Therapeutic Research Center) launched a newly designed NatMed Pro database that offers a contemporary, streamlined page layout and faster load times. The new landing page is visually appealing due to its refreshed tab bar and graphics. This column will present a sample search in the improved search interface as well as explore some of the new and redesigned features that create a more intuitive user experience.
    Keywords:  Dietary supplements; integrative therapies; natural medicine; online database
    DOI:  https://doi.org/10.1080/02763869.2026.2629233
  3. Postgrad Med J. 2026 Feb 16. pii: qgag015. [Epub ahead of print]
       BACKGROUND: PubMed Central (PMC) is a freely accessible digital repository offering full-text biomedical literature with structured metadata. Despite its scale, its suitability as a resource for systematic reviews, particularly in medical research and education, remains underexplored.
    METHODS: We studied the proportion of systematic reviews using PMC compared to other resources. Additionally, we examined the number of results retrieved by search strategies in PMC compared to PubMed in three random samples of subject topics (10 of 28 of the general and 25 of 413 of the extensive contents of the "Goldman's Cecil Medicine, 26th Edition" textbook, and 25 of 502 of the essential medicines listed in the "World Health Organization Model List of Essential Medicines").
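    The yield comparison can be sketched against NCBI's public E-utilities, which report hit counts for the same query in both databases; the search term below is a placeholder, since the abstract does not reproduce the study's actual strategies:

      # Compare hit counts for one query in PubMed vs. PubMed Central using
      # the NCBI ESearch endpoint. The term is illustrative only.
      import requests

      ESEARCH = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"

      def hit_count(db: str, term: str) -> int:
          params = {"db": db, "term": term, "retmode": "json", "retmax": 0}
          resp = requests.get(ESEARCH, params=params, timeout=30)
          return int(resp.json()["esearchresult"]["count"])

      term = "gestational diabetes"
      print("PubMed:", hit_count("pubmed", term))
      print("PMC:", hit_count("pmc", term))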
    RESULTS: About 0.5% of 407 242 systematic reviews archived in PubMed included PMC in the title and/or abstract, a considerably lower proportion compared to other resources (36%, 33.9%, 30.4%, 18%, 14.1%, and 5.8% for PubMed, Embase, Cochrane Library, Web of Science, Scopus, and Google Scholar, respectively). Even though PMC includes a considerably smaller number of articles than PubMed (11 vs. 39 million), the yield from PMC was higher compared to PubMed in the studied random samples [in 9/10 (90%), 21/25 (84%), and 25/25 (100%) of subject topics].
    DISCUSSION: PMC is rarely utilized for systematic reviews. Although the number of articles retrieved from PMC was higher than from PubMed, further studies should evaluate the comparative relevance of their yield, as PMC may return fewer articles specific to the research topic under study because it indexes full-text articles.
    Key messages: What is already known on this topic: PubMed Central (PMC) is a free resource of full-text articles in biomedical fields. What this study adds: Our study shows that PMC is rarely used as a resource for full-text articles for systematic reviews in biomedical research and education. In all three random samples of clinical medicine subjects that we studied, the yield of articles was higher in PMC than in PubMed. How this study might affect research, practice, or policy: Future studies should focus on the relevance (specificity) of the yield of articles from PMC searches on specific research topics in systematic reviews.
    Keywords:  PubMed; PubMed Central; database; resource; systematic review
    DOI:  https://doi.org/10.1093/postmj/qgag015
  4. Cochrane Evid Synth Methods. 2026 Mar;4(2): e70074
       Objective: PubReMiner is a text-mining tool that analyses a seed set of citations to assess word frequency in titles, abstracts, and Medical Subject Headings (MeSH). This study aimed to determine the sensitivity and precision of search strategies developed using the PubReMiner tool compared to conventional search strategies developed by a librarian at our institution.
    Methods: Twelve consecutive reviews conducted at our center were included from September 2023 to January 2025. These reviews included various types of evidence synthesis, including rapid reviews and systematic reviews, covering a variety of topics. One librarian developed a comprehensive search strategy, which included a conventional MEDLINE search for each review. Separately, two librarians independently developed MEDLINE search strategies using PubReMiner-generated word frequency tables (PubReMiner 1 and PubReMiner 2). All search strategies were constructed by experienced librarians using predefined work instructions. Primary outcomes were sensitivity and precision. Secondary outcomes included the number needed to read, the number of unique references retrieved, and the time taken to construct each strategy.
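    The primary and secondary outcomes reduce to simple set arithmetic over retrieved records and the review's included studies; a minimal sketch with invented figures:

      # Sensitivity, precision, and number needed to read (NNR) for a search
      # strategy, relative to a review's included studies. Numbers invented.
      def search_metrics(retrieved: set, included: set) -> dict:
          hits = retrieved & included               # included studies the search found
          precision = len(hits) / len(retrieved)    # fraction of retrieved that are relevant
          return {
              "sensitivity": len(hits) / len(included),
              "precision": precision,
              "nnr": 1 / precision if precision else float("inf"),
          }

      retrieved = set(range(1, 2001))         # 2,000 records retrieved
      included = set(range(1, 32)) | {5000}   # 32 included studies, 1 missed
      print(search_metrics(retrieved, included))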
    Results: Sensitivity of PubReMiner strategies was generally lower than that of conventional strategies; however, in one review, PubReMiner achieved a higher sensitivity (83.87%) than the conventional strategy (58.06%). Only the sensitivity outcome showed a statistically significant difference between search methods (Friedman test p = 0.0065). No statistically significant difference in precision between the searches was identified. PubReMiner strategies were typically faster to construct but yielded inconsistent performance across reviews and between librarians.
    Conclusion: While PubReMiner offers efficiency advantages, its inconsistent performance in retrieving relevant studies suggests that it should not replace conventional search strategies. The study illustrates the value of multi-review SWARs (studies within a review) in producing evidence that informs evidence synthesis practices.
    Keywords:  SWAR; information retrieval; study within a review; systematic search methods; text‐mining
    DOI:  https://doi.org/10.1002/cesm.70074
  5. Med Ref Serv Q. 2026 Feb 16. 1-23
      This study demonstrates how health sciences librarians can use citation analysis, COUNTER statistics, and interlibrary loan data to quantitatively evaluate journal collections across multiple health sciences programs. Unlike previous studies focusing on single professions, this research analyzed an entire health sciences college collectively. Results showed 90.5-100% of journals used by faculty for publication and citation were available in the library's catalog, validating the collection's value. These findings provide a framework for collection development, resource promotion, and budget justification. The methodology is particularly valuable for librarians managing collections serving multiple programs within health sciences institutions.
    Keywords:  Bibliometric study; citation analysis; collection development; health sciences; journals; nursing; occupational therapy; physical therapy; public health
    DOI:  https://doi.org/10.1080/02763869.2026.2619784
  6. J Med Internet Res. 2026 Feb 18;28:e78836
       BACKGROUND: Patients frequently search for health information online and value physician support in evaluating and interpreting their findings, yet many hesitate to share their online searches with their physicians. This hesitation hinders shared decision-making and compromises patient care. While extensive research has examined patients' online health information-seeking behaviors, little has focused on patients' disclosure of this information to their physicians during consultations.
    OBJECTIVE: Guided by the Health Empowerment Model and the Linguistic Model of Patient Participation in Care, this study aims to (1) identify distinct patient profiles based on eHealth literacy and psychological health empowerment levels, (2) examine how these patient profiles differ in online health information seeking and disclosure to physicians, and (3) investigate whether patient-centered communication (PCC) promotes information disclosure and whether this effect varies by patient profile.
    METHODS: This cross-sectional study surveyed 2001 Chinese participants recruited through convenience sampling. Patient profiles were identified using k-means cluster analysis with standardized z scores of eHealth literacy and psychological health empowerment. Differences between profiles in information behaviors were examined using 1-way Welch ANOVA, chi-square tests, and pairwise comparisons. Regression analyses examined the association between PCC and disclosure of online health information. Moderation analyses using the Hayes PROCESS macro assessed whether this association varied across patient profiles.
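    A minimal sketch of the profiling step, assuming only what the abstract states (z-standardized scores, k-means, four clusters); the data below are synthetic:

      # k-means patient profiling on z-standardized eHealth literacy and
      # psychological health empowerment scores. Synthetic data, k = 4.
      import numpy as np
      from sklearn.cluster import KMeans
      from sklearn.preprocessing import StandardScaler

      rng = np.random.default_rng(0)
      X = rng.normal(loc=[3.5, 3.2], scale=[0.6, 0.7], size=(2001, 2))

      Xz = StandardScaler().fit_transform(X)  # standardized z scores
      km = KMeans(n_clusters=4, n_init=10, random_state=0).fit(Xz)

      for label in range(4):
          members = X[km.labels_ == label]
          print(label, len(members), members.mean(axis=0).round(2))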
    RESULTS: Four distinct patient profiles were identified: effective self-managers (996/2001, 49.8%), moderate-needs dependent patients (408/2001, 20.4%), high-needs patients (68/2001, 3.4%), and dangerous self-managers (529/2001, 26.4%). Profiles differed significantly in information-seeking intentions (F(3,289)=62.09; P<.001; η²=0.12) and disclosure intentions (F(3,299.41)=66.08; P<.001; η²=0.09). "Effective self-managers" showed the highest seeking (mean 4.01, 95% CI 3.96-4.06) and disclosure intentions (mean 3.43, 95% CI 3.36-3.50), while "high-needs patients" showed the lowest intentions for both behaviors. Actual information-seeking rates also differed significantly across profiles (χ²(3)=103.4; P<.001), with "effective self-managers" having the highest rate (800/996, 80.3%) and "high-needs patients" the lowest (25/68, 36.8%). Among seekers, disclosure rates varied significantly (χ²(3)=23.1; P<.001), with "high-needs patients" showing the highest disclosure (16/25, 64%) despite having the lowest seeking rate. PCC was positively associated with actual information disclosure behavior (odds ratio 1.26, 95% CI 1.04-1.53; P=.02), with no significant moderation by patient profiles (χ²(3)=1.7; P=.64).
    CONCLUSIONS: This study extends existing literature from information-seeking behavior to patients' disclosure of online findings to physicians. Unlike prior research that examined eHealth literacy and psychological health empowerment separately, this study integrated these constructs to identify meaningful patient profiles with distinct information behavior patterns. PCC facilitates disclosure regardless of patient profile. For practice, physicians should adopt a PCC that acknowledges patients' online research efforts, promoting safer information use and stronger patient-physician relationships.
    Keywords:  empowerment; health information; literacy; misinformation; patient-centered communication; patient-provider communication
    DOI:  https://doi.org/10.2196/78836
  7. Sci Rep. 2026 Feb 15.
    The main objective of this research is to determine the impact of digitalization on the organizational culture of libraries, which ultimately affects performance. The study draws on the Technology-Organization-Environment (TOE) framework, extended to include organizational digital culture as a means of improving library services and performance. The research population comprised chief/in-charge librarians from universities across Pakistan. A total of 318 questionnaires were returned, of which 40 were discarded as incomplete due to inconsistent or missing responses; the final analysis was based on 278 responses. The data were analyzed using Partial Least Squares Structural Equation Modeling (PLS-SEM) in SmartPLS version 4 to test the modified TOE research model. The results indicate that organizations can enhance their digital culture by integrating technological, organizational, and environmental factors into their practices and services. The study confirms that technological, organizational, and environmental factors have a positive impact on the adoption of digital culture. It further shows that factors such as compatibility and regulatory support positively influence top-management strategies for adopting digital culture, whereas relative advantage shows no impact on those strategies. Digital culture adoption also improved the performance, services, and satisfaction of information management organizations. The study contributes to the field of information science by demonstrating the significance of digital culture for executives, researchers, and policy makers.
    Keywords:  Digital culture; Digital technologies; Information services; Librarians; Libraries; Performance; TOE
    DOI:  https://doi.org/10.1038/s41598-026-39685-z
  8. Cureus. 2026 Jan;18(1): e101413
    Background: Clear, effective communication is fundamental to orthopaedic practice, particularly when securing informed consent. Escalating NHS workforce and time constraints necessitate tools that streamline, yet enhance, patient-clinician dialogue. By analysing understandability, readability, and complication profile inclusion, this study aims to determine the feasibility of large language model (LLM)-assisted correspondence to support equitable, patient-centred consent and decision-making.
    Methods: Six frequently performed orthopaedic operations were chosen. Standardised, clinic-friendly prompts were fed to four LLMs (OpenAI o1, DeepSeek, Gemini, and Copilot), each producing two letters per procedure. An identical prompt was provided to two clinicians to produce letters for the same operations, serving as a human benchmark. Understandability (Patient Education Materials Assessment Tool for Printable Materials (PEMAT-P)), readability (Flesch-Kincaid readability tests, Gunning Fog Index, and Simple Measure of Gobbledygook (SMOG) indices), and gold-standard complication inclusion were recorded.
    Results: PEMAT-P understandability scores were as follows: OpenAI o1 0.72 (±0.07), DeepSeek 0.81 (±0.09), Copilot 0.81 (±0.08), and Gemini 0.83 (±0.05); human letters scored 0.72 (±0.03). All LLMs produced text at a seventh- to eighth-grade level (Flesch-Kincaid 6.850-8.517), markedly simpler than human letters (10.6 ± 0.94). OpenAI o1's outputs were easiest to read on the Gunning Fog and SMOG scales (8.8833 ± 0.5702 and 9.9833 ± 0.4569), whereas clinician letters were harder (14.1333 ± 1.1 and 13.3333 ± 0.55). OpenAI o1 achieved the greatest complication profile compliance (0.923 ± 0.104, P < 0.001), followed by Gemini (0.860 ± 0.079).
    Conclusion: LLMs can outperform traditional clinician correspondence in readability and understandability while incorporating gold-standard complication profiles into clinic letters. Embedding optimised LLM workflows within outpatient practice could markedly reduce administrative burden, minimise transcription delays, and empower patients to make better-informed, shared decisions. Future research must refine LLM search capability, evaluate cost-effectiveness, ensure ethical and medico-legal oversight, integrate outputs with electronic health records, and establish rigorously validated pathways for safe clinical deployment.
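    Since several items in this issue lean on the same readability indices, a minimal sketch of the standard formulas; the vowel-group syllable counter is a crude stand-in for the tokenizers real tools use:

      # Flesch-Kincaid Grade Level, Flesch Reading Ease, Gunning Fog, and
      # SMOG from sentence/word/syllable counts. Syllable counting here is
      # a rough heuristic, not the published tools' implementation.
      import math, re

      def syllables(word: str) -> int:
          return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

      def readability(text: str) -> dict:
          sentences = max(1, len(re.findall(r"[.!?]+", text)))
          words = re.findall(r"[A-Za-z]+", text)
          w, syl = len(words), sum(syllables(x) for x in words)
          poly = sum(1 for x in words if syllables(x) >= 3)  # "complex" words
          return {
              "FKGL": 0.39 * w / sentences + 11.8 * syl / w - 15.59,
              "FRES": 206.835 - 1.015 * w / sentences - 84.6 * syl / w,
              "Fog": 0.4 * (w / sentences + 100 * poly / w),
              "SMOG": 1.043 * math.sqrt(poly * 30 / sentences) + 3.1291,
          }

      print(readability("The incision is closed with sutures. Recovery takes six weeks."))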
    Keywords:  artificial intelligence in medicine; clinic letters; large language model; orthopaedic clinic; patient communication; readability analysis; surgical consent
    DOI:  https://doi.org/10.7759/cureus.101413
  9. J Obes. 2026;2026:2376530
       Background: The global obesity epidemic challenges health systems, driving people to seek metabolic and bariatric surgery (MBS), especially laparoscopic sleeve gastrectomy (LSG). Many MBS centers have limited resources for patient education, creating knowledge gaps that lead patients to search online. AI chatbots, such as ChatGPT, can provide reliable medical information, though concerns about accuracy and completeness remain.
    Methods: The study involved four fellowship-trained minimally invasive surgeons (MISs), nine fellows (MIFs), and two general practitioners (GPs) in the MBS multidisciplinary team from March 1, 2024, to March 30, 2024. Seven AI chatbots (ChatGPT 3.5 and 4, Bard, Bing, Claude, Llama, and Perplexity) were selected based on their public availability on December 1, 2023. Forty patient questions regarding LSG were sourced from social media, MBS organizations, and online forums. Experts and chatbots answered these questions, with their responses evaluated for accuracy and comprehensiveness on a 5-point scale. Statistical analyses compared the groups' performance.
    Results: Chatbots demonstrated a higher overall performance score (2.55 ± 0.95) compared to the expert group (1.92 ± 1.32, p < 0.001). Among chatbots, ChatGPT-4 achieved the highest performance (2.94 ± 0.24), while Llama had the lowest (2.15 ± 1.23). Expert group scores were highest for MISs (2.36 ± 1.09), followed by GPs (1.90 ± 1.36) and MIFs (1.75 ± 1.36). The readability of chatbot responses was assessed using Flesch-Kincaid scores, revealing that most responses required reading levels between the 11th grade and college level. Furthermore, chatbots exhibited fair reliability and reproducibility in response consistency, with ChatGPT-4 showing the highest test-retest reliability.
    Conclusion: AI chatbots generated accurate and comprehensive answers to common bariatric patient questions, suggesting promise as a scalable aid for patient education. However, readability often exceeds recommended levels, performance varies by model, occasional inaccuracies occur, and medicolegal considerations remain unresolved. Accordingly, chatbots should complement clinician counseling, and future work should improve readability and reliability and evaluate real-world safety and impact.
    Keywords:  AI chatbots; ChatGPT; bariatric surgery; healthcare technology; laparoscopic sleeve gastrectomy; patient education
    DOI:  https://doi.org/10.1155/jobe/2376530
  10. Front Public Health. 2026;14:1760871
       Background: Gestational diabetes mellitus (GDM) is increasingly prevalent worldwide and is associated with substantial short- and long-term risks for mothers and offspring, making high-quality, accessible health information essential. At the same time, artificial intelligence (AI) chatbots based on large language models are being widely used for health queries, yet their accuracy, reliability and readability in the context of GDM remain unclear.
    Methods: We first evaluated six AI chatbots (ChatGPT-5, ChatGPT-4o, DeepSeek-V3.2, DeepSeek-R1, Gemini 2.5 Pro and Claude Sonnet 4.5) using 200 single-best-answer multiple-choice questions (MCQs) on GDM drawn from MedQA, MedMCQA and the Chinese National Medical Examination item bank, covering four domains: epidemiology and risk factors, clinical manifestations and diagnosis, maternal and neonatal outcomes, and management and treatment. Each item was posed three times to every model under a standardized prompting protocol, and accuracy was defined as the proportion of correctly answered questions. For public-facing information, we identified 15 core GDM education questions using Google Trends and expert review, and queried four chatbots (ChatGPT-5, DeepSeek-V3.2, Claude Sonnet 4.5 and Gemini 2.5 Pro). Two obstetricians independently assessed reliability using DISCERN, EQIP, GQS and JAMA benchmarks, and readability was quantified using ARI, CL, FKGL, FRES, GFI and SMOG indices.
    Results: Overall MCQ accuracy differed significantly across the six chatbots (p < 0.0001), with ChatGPT-5 achieving the highest mean accuracy (92.17%) and DeepSeek-V3.2 and Gemini 2.5 Pro performing comparably well, while ChatGPT-4o, DeepSeek-R1 and Claude Sonnet 4.5 scored lower. Newer model generations (ChatGPT-5 vs. ChatGPT-4o; DeepSeek-V3.2 vs. DeepSeek-R1) consistently outperformed their predecessors across all four domains. Among the four models evaluated on public-education questions, ChatGPT-5 achieved the highest reliability scores (DISCERN 42.53 ± 7.20; EQIP 71.67 ± 6.17), whereas Claude Sonnet 4.5, DeepSeek-V3.2 and Gemini 2.5 Pro scored lower. JAMA scores were uniformly low (0-0.07/4), reflecting poor transparency. All models produced text above the recommended sixth-grade reading level; ChatGPT-5 showed the most favorable readability profile (for example, FKGL 7.43 ± 2.42, FRES 62.47 ± 13.51) but still did not meet guideline targets.
    Conclusion: Contemporary AI chatbots can generate generally accurate and moderately reliable GDM-related information, with newer model generations showing clear gains in diagnostic validity. However, limited transparency and systematically high reading levels indicate that these tools are not yet suitable as stand-alone resources for GDM patient education and should be used as adjuncts to clinician counseling and professionally curated materials.
    Keywords:  artificial intelligence; gestational diabetes mellitus; large language models; patient education; readability
    DOI:  https://doi.org/10.3389/fpubh.2026.1760871
  11. Am J Surg. 2026 Feb 08;254:116859. pii: S0002-9610(26)00042-5. [Epub ahead of print]
       INTRODUCTION: Patients increasingly use generative artificial intelligence assistants (chatbots) for medical information. We examined surgeon-perceived accuracy, reliability, and readability of chatbot responses to adrenal nodule queries.
    METHODS: Six commonly asked adrenal nodule questions were input into five chatbots. Blinded answers were reviewed by 10 endocrine surgeons for correctness and reliability (6-point Likert scale) and content structure (3-point Likert scale). One-way ANOVAs and Tukey adjusted post hoc analyses tested differences. Inter-rater reliability across surgeon ratings was assessed by two-way intraclass correlation coefficient. Reading grade levels were assessed using Lexile Text Analyzer (MetaMetrics, Inc).
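    A minimal sketch of that pipeline (omnibus ANOVA, Tukey-adjusted post hoc tests, and a two-way ICC across raters) on synthetic ratings; the chatbot names and all scores are invented:

      # One-way ANOVA + Tukey HSD across chatbots, and a two-way ICC for
      # inter-rater agreement. Ratings are synthetic, on a 1-6 scale.
      import numpy as np
      import pandas as pd
      import pingouin as pg
      from scipy.stats import f_oneway
      from statsmodels.stats.multicomp import pairwise_tukeyhsd

      rng = np.random.default_rng(1)
      rows = []
      for bot, mu in [("Perplexity", 5.0), ("ChatGPT", 4.4), ("Gemini", 3.7)]:
          for q in range(6):                 # 6 questions per chatbot
              for rater in range(10):        # 10 surgeon raters
                  score = float(np.clip(rng.normal(mu, 0.7), 1, 6))
                  rows.append({"bot": bot, "answer": f"{bot}-Q{q}",
                               "rater": rater, "score": score})
      df = pd.DataFrame(rows)

      groups = [g["score"].to_numpy() for _, g in df.groupby("bot")]
      print(f_oneway(*groups))                          # omnibus test
      print(pairwise_tukeyhsd(df["score"], df["bot"]))  # adjusted pairwise

      icc = pg.intraclass_corr(data=df, targets="answer",
                               raters="rater", ratings="score")
      print(icc[icc["Type"] == "ICC2"])  # two-way random effects, single rater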
    RESULTS: Correctness, reliability, and jargon use differed significantly (p ≤ 0.05). Perplexity scored highest for correctness, reliability, and thoroughness but used the most jargon. Gemini scored lowest on all domains, but used the least jargon. Mean Lexile reading level was 8th-10th grade.
    CONCLUSION: Chatbot responses about adrenal nodules vary significantly. More accurate, reliable, and thorough answers contained excess jargon, but all responses exceeded recommended patient education levels.
    Keywords:  Adrenal nodule; Artificial intelligence; Chatbot; Patient education
    DOI:  https://doi.org/10.1016/j.amjsurg.2026.116859
  12. JMIR Public Health Surveill. 2026 Feb 18;12:e79720
    BACKGROUND: The growing use of artificial intelligence (AI) chatbots for seeking health-related information is concerning, as they were not originally developed for delivering medical guidance. The quality of AI chatbots' responses relies heavily on their training data and is often limited in medical contexts because the models lack specific training on the medical literature. Findings on the quality of health-related AI chatbot responses are mixed: some studies showed quality surpassing physicians' responses, while others revealed occasional major errors and low readability. This study addresses a critical gap by examining the performance of various AI chatbots in a complex, misinformation-rich environment.
    OBJECTIVE: This study examined AI chatbots' responses to human papillomavirus (HPV)-related questions by analyzing structure, linguistic features, information accuracy and currency, and vaccination stance.
    METHODS: We conducted a qualitative content analysis following the approach outlined by Schreier to examine 4 selected AI chatbots' (ChatGPT 4, Claude 3.7 Sonnet, DeepSeek V3, and Docus [General AI Doctor]) responses to HPV vaccine questions. These questions, simulated by young adults, were adapted from items on the Vaccine Conspiracy Beliefs Scale and Google Trends. The selection criteria for AI chatbots included popularity, accessibility, countries of origin, response update methods, and intended use. Two researchers, simulating a 22-year-old man or woman, collected 8 conversations between February 22 and 28, 2025. We used a deductive approach to develop initial code groups, then an inductive approach to generate codes. The responses were analyzed based on a comprehensive codebook, with codes examining response structure, linguistic features, information accuracy and currency, and vaccination stance. We also assessed readability using the Flesch-Kincaid Grade Level and Reading Ease Score.
    RESULTS: All AI chatbots cited evidence-based sources from reputable health organizations. We found no fabricated information or inaccuracies in numerical data. For complex questions, all AI chatbots appropriately deferred to health care professionals' suggestions. All AI chatbots maintained a neutral or provaccine stance, corresponding with scientific consensus. The mean and range of response lengths varied [word count; ChatGPT: 436.4 (218-954); Claude: 188.0 (138-255); DeepSeek: 510.0 (325-735); and Docus: 159.4 (61-200)], as did readability [Flesch-Kincaid Grade Level; ChatGPT: 10.7 (6.0-14.9); Claude: 13.2 (7.7-17.8); DeepSeek: 11.3 (7.0-14.7); and Docus: 12.2 (8.9-15.5); and Flesch-Kincaid Reading Ease Score; ChatGPT: 46.8 (25.4-72.2); Claude: 32.5 (6.3-67.3); DeepSeek: 43.7 (22.8-67.4); and Docus: 40.5 (19.6-58.2)]. ChatGPT and Claude offered personalized responses, while DeepSeek and Docus lacked this. Occasionally, some responses included broken or irrelevant links and medical jargon.
    CONCLUSIONS: Amidst an online environment saturated with misinformation, AI chatbots have the potential to serve as an alternative to conventional online platforms (websites and social media) as a source of accurate HPV-related information. Improvements in readability, personalization, and link accuracy are still needed. Furthermore, we recommend that users treat AI chatbots as complements to, not replacements for, health care professionals' guidance in clinical settings.
    Keywords:  artificial intelligence; health communication; large language models; papillomavirus vaccines; qualitative research
    DOI:  https://doi.org/10.2196/79720
  13. Patient Educ Couns. 2026 Feb 14;147:109546. pii: S0738-3991(26)00079-0. [Epub ahead of print]
       OBJECTIVES: This study assessed the understandability, actionability, and overall quality of Artificial Intelligence (AI)-generated responses to frequently posed questions related to chronic kidney disease (CKD), compared to expert information.
    METHODS: A quantitative analysis comparing the quality of AI-generated responses and webpages containing information for patients with CKD was performed. Frequently searched keywords for CKD were entered into ChatGPT 3.5, Copilot, and Gemini. Online patient education materials on CKD created by experts were also collected via Google searches. The Japanese version of the Patient Education Materials Assessment Tool was used to evaluate the understandability and actionability of the information.
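    The PEMAT's published scoring rule is a simple percentage over applicable items; a minimal sketch (the item ratings below are invented):

      # PEMAT-style score: items rated 1 (agree), 0 (disagree), or None
      # (not applicable); score = agreed / applicable * 100. Ratings invented.
      def pemat_score(ratings):
          applicable = [r for r in ratings if r is not None]
          return 100 * sum(applicable) / len(applicable)

      understandability_items = [1, 1, 0, 1, None, 0, 1, 1, 0, 1, None, 1]
      print(round(pemat_score(understandability_items), 1))  # -> 70.0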
    RESULTS: A total of 180 AI responses and 88 items of expert information were included. There were no significant differences between AI responses and expert information in terms of understandability (mean±SD: AI vs expert, 67.2%±14.0% vs 63.4%±15.9%, P = 0.06). For actionability, expert information scored higher than AI responses (mean±SD: AI vs expert, 27.9%±16.2% vs 37.1%±27.6%, P < 0.01). The AI responses excelled in simplicity of writing, clarity of purpose, and use of numbers. However, AI responses lacked logical flow and specificity in instructions for action. Gemini outperformed ChatGPT and Copilot in both understandability and actionability (71.6% vs 63.7% and 66.2%, P = 0.01; 38.3% vs 20.7% and 24.7%, P < 0.01).
    CONCLUSION: The AI responses showed comparable understandability to expert information, but outperformed expert information in terms of text readability. However, challenges have arisen regarding the actionability, flow, and comprehensiveness of AI responses.
    PRACTICAL IMPLICATIONS: Although AI tools can simplify medical terminology, healthcare professionals should enhance AI-generated CKD information to ensure actionability when disseminating it to patients.
    Keywords:  Artificial intelligence; Chronic kidney disease; Health communication; Health information; Patient education
    DOI:  https://doi.org/10.1016/j.pec.2026.109546
  14. Skin Res Technol. 2026 Feb;32(2): e70331
       BACKGROUND: Artificial intelligence, including large language models (LLMs) such as GPT-4, can generate responses to clinical queries using predictive algorithms trained on large online datasets. Current literature lacks a comprehensive assessment of the medical quality and accuracy of dermatologic GPT-4-generated outputs.
    METHODS: A standardized query was used to ask GPT-4 models (Copilot and ChatGPT-4) to generate summaries and treatment recommendations for 33 dermatologic conditions, which were then compared to corresponding sections of UpToDate (UTD) excerpts. DISCERN scores were calculated for each source by two authors (AN and PV). Concordance between GPT-4-generated treatments and UTD was evaluated by a certified dermatologist. Word counts and Flesch-Kincaid reading scores were generated, and paired t-tests and one-way and weighted ANOVAs were conducted, in R.
    RESULTS: The DISCERN instrument classified UTD content as being of "fair" medical quality (mean [SD], 3.08 [0.34]), while both ChatGPT-4 and Copilot produced content of "poor" medical quality (mean [SD], 2.28 [0.22] and 2.31 [0.35], respectively). ChatGPT-4's treatment recommendations demonstrated 33.5 percentage points greater average concordance with UTD treatment recommendations (mean [SD], 64.89% [29.29%]) than Copilot (mean [SD], 31.38% [31.08%]; 95% CI, 22.3%-44.7%; p < 0.001).
    CONCLUSIONS: Overall, GPT-4 models produced dermatological content with few harmful recommendations. However, GPT-4-generated content performed poorly on the DISCERN instrument, and validation of LLM-generated responses remains challenging. Results suggest LLM parameters and query structures may be optimizable for dermatologic applications. If implemented alongside the professional judgement of certified dermatologists, future LLMs may serve as time-saving dermatologic tools, enhancing patient care.
    Keywords:  AI; AI in dermatology; ChatGPT; artificial intelligence
    DOI:  https://doi.org/10.1111/srt.70331
  15. Knee Surg Sports Traumatol Arthrosc. 2026 Feb 16.
       PURPOSE: This study aimed to evaluate and compare the performance of the Chat Generative Pre-Trained Transformer (ChatGPT) and DeepSeek artificial intelligence (AI) models for patient information on shoulder instability.
    METHODS: Sixteen frequently asked questions related to shoulder instability were posed to both AI models. The models' responses were evaluated for content quality using the Journal of the American Medical Association (JAMA) benchmark criteria, DISCERN, and 4-point Likert scales. In addition, the readability of the responses was analysed using the Flesch Reading Ease Score (FRES) and the Flesch-Kincaid Grade Level (FKGL).
    RESULTS: None of the models met the JAMA criteria. In the DISCERN scoring, DeepSeek (52.81) scored significantly higher than ChatGPT (48.5) (p = 0.001). While there was no significant difference in the accuracy, clarity, and consistency criteria between the two models in the 4-point Likert evaluation (p > 0.05), DeepSeek scored significantly higher than ChatGPT in the completeness criterion (p = 0.001). In terms of readability, ChatGPT had an average FKGL value of 7.78 and an FRES score of 52.44. The DeepSeek model had an FKGL value of 9.90 and an FRES score of 41.87. There was a statistically significant difference in the readability between the two models (FKGL, p = 0.016; FRES, p = 0.015).
    CONCLUSION: Both AI models provided generally accurate and clinically relevant information on shoulder instability patient education despite limitations in transparency and source attribution. The results showed that DeepSeek scored significantly higher in DISCERN and the completeness criterion of the 4-point Likert scale, while there was no significant difference in accuracy, clarity, and consistency. ChatGPT demonstrated better readability. These findings suggest that AI models have the potential to be tools for patient information on shoulder instability, with each model having different strengths.
    LEVEL OF EVIDENCE: Level V.
    Keywords:  ChatGPT; DeepSeek; artificial intelligence; shoulder instability
    DOI:  https://doi.org/10.1002/ksa.70335
  16. J Am Coll Health. 2026 Feb 17. 1-5
    The widespread use of digital technologies may spread health information disorders (HIDs), which can have adverse health outcomes. College students are at risk of HIDs because they spend considerable time online and often rely on readily accessible information for health-related decision-making. They readily accept visually attractive but misleading information and may share inaccurate content with their communities. Moreover, inadequate health literacy makes students vulnerable to HIDs. Therefore, fighting HIDs among college students is critical to protecting their well-being. Educational resources about the mindful and responsible use of digital technologies to access online health information can be a helpful strategy. Promoting digital literacy programs through discussion groups and digital skills training can also be beneficial. Additionally, educational institutions need to establish partnerships with healthcare organizations to support students. Finally, multidisciplinary stakeholders should work together to ensure equitable engagement of student communities to combat HIDs.
    Keywords:  College students; health information disorder; health information hygiene
    DOI:  https://doi.org/10.1080/07448481.2026.2630052
  17. Psychiatr Serv. 2026 Feb 18. appips20250542
    Literacy is a social determinant of health that can affect perinatal help seeking and maternal and neonatal outcomes. This study aimed to evaluate the readability of online information concerning perinatal mental health and psychotropic medication safety. Online materials were identified through Google searches that mirrored a typical patient's experience; readability was assessed via the Flesch-Kincaid Grade Level. On average, general and perinatal mental health information was written at or above a 12th-grade level, higher than the nationally recommended reading level. Information regarding medication safety during pregnancy required even higher levels of literacy. Patients may benefit from perinatal mental health education materials tailored to their literacy level.
    Keywords:  Patient Education; Patient Needs; Pregnancy and Mental Illness; Psychoeducation; Women
    DOI:  https://doi.org/10.1176/appi.ps.20250542
  18. Cureus. 2026 Jan;18(1): e101617
       BACKGROUND: Pediatric septic arthritis is a time-sensitive orthopedic emergency requiring prompt recognition and treatment to avoid serious morbidity. Families often seek information online, yet prior studies show patient education materials (PEMs) in orthopedics frequently exceed recommended readability standards (≤6th-grade level).
    PURPOSE: This study aims to evaluate the readability of online PEMs on pediatric septic arthritis from top-ranked US pediatric orthopedic hospitals and assess alignment with readability guidelines.
    METHODS: In July 2025, websites of the top 25 US pediatric orthopedic hospitals were searched for PEMs on "septic arthritis" in children. Hospitals were included if they hosted a dedicated PEM ≥100 words. Text was extracted, cleaned of non-narrative elements, and analyzed with eight readability metrics: Flesch-Kincaid Grade Level (FKGL), Gunning Fog Index, Simple Measure of Gobbledygook (SMOG), Coleman-Liau Index, Automated Readability Index (ARI), Flesch Reading Ease Score (FRES), Ford, Caylor, Sticht (FORCAST) formula, and Dale-Chall Readability Score. Descriptive statistics were summarized, and FKGL was correlated with hospital rank using Spearman's test.
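    The rank-correlation step, which recurs in several studies in this issue, is a one-liner; the twelve (rank, FKGL) pairs below are invented:

      # Spearman rank correlation between hospital rank and FKGL.
      from scipy.stats import spearmanr

      hospital_rank = [1, 2, 4, 5, 7, 9, 11, 14, 17, 19, 22, 25]
      fkgl = [12.1, 11.4, 10.9, 11.8, 10.2, 9.7,
              10.8, 9.9, 10.4, 9.1, 10.0, 9.5]
      rho, p = spearmanr(hospital_rank, fkgl)
      print(f"rho={rho:.2f}, p={p:.3f}")  # negative rho: top-ranked pages read harder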
    RESULTS: Of 25 hospitals, 12 (48%) hosted qualifying PEMs. The mean readability was grade 10.6, above the recommended sixth-grade level; none achieved ≤6. Seven institutional PEMs were written at an average reading level significantly above the eighth-grade reading level (p < 0.01). Five PEMs (41.7%) were written at or below the eighth-grade level, largely from identical third-party content. Mean FRES was 51, reflecting "somewhat challenging" readability. Higher-ranked hospitals trended toward worse FKGL scores, but correlation was nonsignificant (ρ = -0.32; p = 0.31).
    CONCLUSIONS: Nearly half of the top hospitals lacked PEMs on pediatric septic arthritis, and available content largely exceeded recommended readability. Adoption of plain-language guidelines, external audits, or artificial intelligence (AI)-assisted editing may enhance accessibility and equity.
    Keywords:  health literacy; orthopedic surgery; patient education materials; pediatric septic arthritis; readability
    DOI:  https://doi.org/10.7759/cureus.101617
  19. BMC Public Health. 2026 Feb 21.
       BACKGROUND: Short video platforms are increasingly used for health information in China, yet the quality of user generated content on altitude sickness remains unexamined. This study evaluated the quality, reliability, and content characteristics of altitude sickness videos on Douyin and Bilibili.
    METHODS: A cross-sectional design was adopted. The Chinese search term for "altitude sickness" was used to search the Chinese short video platforms Douyin and Bilibili, and the top 100 videos recommended by each platform were collected. Data on video publication, content, and characteristics were extracted. The Journal of the American Medical Association benchmark criteria (JAMA), modified DISCERN (mDISCERN), and Global Quality Score (GQS) tools were used to assess video reliability and quality. Statistical analysis involved descriptive and comparative methods to evaluate video characteristics and quality, as well as correlation analysis to examine their relationships. Analyses were conducted using SPSS 26.0; statistical significance was set at P < 0.05.
    RESULTS: A total of 161 videos (85 from Douyin, 76 from Bilibili) were ultimately included. The overall mean scores for the videos were 2.85 ± 0.75 for JAMA, 2.20 ± 0.79 for mDISCERN, and 2.57 ± 0.88 for GQS. The GQS score of videos from Bilibili was significantly higher than that of videos from Douyin (P < 0.05). Videos published by professional institutions/individuals accounted for only 22.36% of the sample, but their content quality was significantly superior to that from non-professional institutions/individuals (P < 0.05). Correlation analysis showed that user interaction data (likes, comments) were negatively correlated with quality scores.
    CONCLUSION: The overall quality of altitude sickness videos on major Chinese short video platforms is suboptimal. Content from professional sources is significantly more reliable but remains a minority. Platform interaction metrics (such as likes and comments) do not constitute a reliable proxy for judging the scientific soundness of the content. The public should prioritize professionally sourced information, and platforms should enhance the visibility of evidence-based content.
    Keywords:  Altitude sickness; Content analysis; Health communication; Information quality; Social media
    DOI:  https://doi.org/10.1186/s12889-026-26717-6
  20. Harm Reduct J. 2026 Feb 16.
      
    Keywords:  Consumer health information; Drugs; Expert evaluation; Forums; Harm reduction; Health information quality; Natural language processing; Online platforms; Psychoactive substances; Topic modelling
    DOI:  https://doi.org/10.1186/s12954-026-01424-y
  21. J Natl Med Assoc. 2026 Feb 19. pii: S0027-9684(26)00014-3. [Epub ahead of print]
       PURPOSE: To evaluate the readability and accountability (reliability, quality, and credibility) of online patient education materials for common eye conditions in English, Portuguese, and Spanish.
    DESIGN: Cross-sectional content analysis.
    SUBJECTS: A total of 192 websites across seven eye diseases were analyzed in three languages: English (n = 63), Portuguese (n = 67), and Spanish (n = 62).
    METHODS: First-page Google search websites for macular degeneration, cataract, diabetic retinopathy, glaucoma, conjunctivitis, uveitis, and dry eye were assessed for readability using Flesch-Kincaid Grade Level (FKGL), Gunning Fog Index (GFI), and Flesch Reading Ease (FRE). Accountability was evaluated using JAMA and DISCERN benchmarks. Websites were also categorized by source type into: academic, national, private, or crowdsourced. Statistical analyses were done using MANOVA, ANOVA, and Tukey's HSD post-hoc tests.
    MAIN OUTCOME MEASURES: Readability (FKGL, GFI, FRE) and accountability (JAMA, DISCERN) indices.
    RESULTS: English-language websites had significantly more accessible readability scores (mean FKGL 8.1) than Spanish (16.6) and Portuguese (16.0) counterparts (p < 0.001). Portuguese websites had the lowest JAMA scores (1.2) and were predominantly authored by private entities. Spanish-language sites showed mixed results, outperforming English in some accountability metrics. Crowdsourced pages like Wikipedia had the highest accountability but the poorest readability. Language was a significant predictor of readability and accountability in multivariate analysis (p < 0.001).
    CONCLUSION: Online ophthalmology patient education materials in Portuguese and Spanish demonstrate significantly poorer readability and, in some cases, lower accountability compared to English-language resources, posing a significant barrier for patients with limited English proficiency. Future efforts should focus on developing standardized, multilingual patient education materials that balance readability with content reliability, potentially incorporating tools such as large language models (LLMs), which have shown potential for improving the accessibility of non-English health information.
    Keywords:  Information; Internet; Language; Online; Ophthalmology; Portuguese; Spanish
    DOI:  https://doi.org/10.1016/j.jnma.2026.01.005
  22. Midwifery. 2026 Feb 01;156:104729. pii: S0266-6138(26)00033-1. [Epub ahead of print]
    BACKGROUND: Nausea and vomiting of pregnancy (NVP) is one of the most common pregnancy complications. Most women use the internet to search for information about pregnancy complications, which raises concerns about encountering misinformation online.
    OBJECTIVE: This study assessed which recommendations regarding NVP women encounter in YouTube videos, and the extent to which these recommendations are evidence-based.
    DESIGN AND METHOD: This was a two-phase study. First, we conducted an inductive content analysis of the 45 most frequently watched full-length English YouTube videos on the topic of NVP targeted at pregnant women to obtain a list of recommendations to deal with NVP. Second, a literature review was conducted to evaluate the evidence supporting the recommendations identified in the videos.
    RESULTS: We identified 85 unique recommendations, which could be sorted into six categories: pharmacological interventions, alternative and herbal medicine, dietary suggestions, supplements, lifestyle changes, and other recommendations. Of the 85 recommendations, fewer than 10% were evidence-based (vitamin B6, ginger, and several medications), and 5% were potentially unsafe. The effectiveness of almost half of the recommendations had received limited research attention, and more than one-third were entirely unstudied.
    CONCLUSION: Women who seek NVP relief on YouTube are exposed to a wide variety of recommendations to reduce their symptoms. However, only a few of these are evidence-based, and some may even be dangerous. Overall, there is a lack of research on effective non-pharmacological interventions for NVP relief. This highlights the need for improved guidance and dissemination of evidence-based interventions for NVP online.
    Keywords:  Evidence-based; Nausea and vomiting in pregnancy; Online health information seeking; Review; YouTube
    DOI:  https://doi.org/10.1016/j.midw.2026.104729
  23. BMC Med Educ. 2026 Feb 16.
      
    Keywords:  Digital learning; Exercise education; Parkinson’s disease; Patient education; YouTube
    DOI:  https://doi.org/10.1186/s12909-026-08825-4
  24. Front Med (Lausanne). 2026;13:1762313
       Background: Congenital nasolacrimal duct obstruction is a common ocular condition in early infancy and may lead to neonatal dacryocystitis or severe infection if not treated promptly. Short-video platforms such as TikTok are used by young parents to obtain health information, but the quality of related videos remains unclear. This study evaluated the quality of TikTok videos on congenital nasolacrimal duct obstruction and its association with uploader type and user engagement.
    Methods: We conducted a cross-sectional review of TikTok videos retrieved with predefined keywords. We included 108 videos, classified by uploader type, and extracted characteristics, engagement metrics, and coverage of six content domains. Two attending ophthalmologists independently rated each video using DISCERN, the Global Quality Score, and the Patient Education Materials Assessment Tool for Audiovisual Materials. Group differences and Spearman correlations were analyzed.
    Results: Among 108 videos, 62 (57.4%) were uploaded by medical professionals and 18 (16.7%) by non-profit organizations, with a median duration of 48.5 s and median numbers of likes, comments, favorites, and shares of 68, 8, 13, and 24, respectively. Videos uploaded by non-profit organizations and medical professionals achieved substantially higher DISCERN scores (about 59.22 and 50.00 vs. 37.50 and 19.40), Global Quality Scores (median 4.5 and 4.0 vs. 3.0 and 1.0), and Patient Education Materials Assessment Tool for Audiovisual Materials understandability (median 92.31% and 84.62% vs. 73.08% and 53.85%) and actionability (both 75.00% vs. 66.67% and 50.00%) than those uploaded by non-medical individuals and for-profit organizations (all P < 0.001). Spearman analysis found no significant correlations between overall quality scores and engagement, but duration correlated weakly with some selected quality indicators (r = 0.17-0.21, P < 0.05).
    Conclusion: TikTok videos on congenital nasolacrimal duct obstruction show marked heterogeneity in content completeness and educational quality, largely determined by uploader type. Non-profit organizations and medical professionals produce more reliable, understandable, and actionable videos, but these are not consistently more popular than lower-quality content. Clinicians and institutions should develop guideline-concordant short videos and direct parents toward trustworthy channels, while platforms should consider mechanisms to highlight professionally verified pediatric eye-health information.
    Keywords:  TikTok; congenital nasolacrimal duct obstruction; health education; health information quality; neonatal dacryocystitis; social media
    DOI:  https://doi.org/10.3389/fmed.2026.1762313
  25. PEC Innov. 2026 Jun;8:100458
       Objective: This study aimed to compare pro- and anti-fluoride Japanese YouTube videos on understandability, actionability, flow, reliability, and engagement.
    Methods: Eighty-four videos found via a keyword search (pro = 49, anti = 18, other = 17) were analyzed. Quality was assessed using three validated tools: the Patient Education Materials Assessment Tool for Audiovisual Materials (PEMAT-A/V), the Global Quality Score (GQS), and the modified DISCERN (mDISCERN). Engagement was measured as view rate.
    Results: Pro-fluoride videos scored higher on reliability (mDISCERN 2.6 ± 1.0 vs 1.3 ± 0.6) and overall quality (GQS 2.5 ± 1.4 vs 1.0 ± 0.9; both p < 0.001). No group differences emerged for understandability (58% vs 53%; p = 0.23) or actionability (60% vs 53%; p = 0.33). In the anti-fluoride group, higher view rates correlated positively with understandability and GQS (ρ ≈ 0.53; p ≈ 0.03); no such correlation was found for pro-fluoride videos. Only 27% of the videos met the PEMAT-A/V understandability threshold.
    Conclusion: Reliable, expert-made pro-fluoride videos attract modest audiences, whereas anti-fluoride videos can achieve a wide reach when they are easy to follow and well-structured. However, scientific accuracy alone cannot guarantee audience reach or engagement. Oral health authorities should design algorithm-sensitive, evidence-based videos that are clear and actionable.
    Innovation: This is the first study to integrate the PEMAT-A/V, GQS, and mDISCERN with YouTube analytics for Japanese fluoride content, thereby providing a data-driven framework for algorithm-aware oral-health messaging.
    Keywords:  Fluoride; Health communication; Health information; Misinformation; YouTube
    DOI:  https://doi.org/10.1016/j.pecinn.2026.100458
  26. Res Dev Disabil. 2026 Feb 16;170:105254. pii: S0891-4222(26)00049-1. [Epub ahead of print]
    YouTube is a major source of health information for families seeking guidance on autism spectrum disorder (ASD), yet the reliability and educational value of treatment-related content remain uncertain. This cross-sectional study evaluated the quality, reliability, and credibility of ASD treatment videos on YouTube, providing a snapshot of the platform as of July 2023. A structured search yielded 114 eligible English-language videos. Two trained evaluators independently assessed each video using validated instruments: the DISCERN questionnaire and the Journal of the American Medical Association (JAMA) benchmark criteria for reliability, and the Global Quality Scale (GQS) for overall educational quality. Inter-rater reliability was acceptable to excellent across all tools (ICC = 0.516-0.801), permitting the use of combined scores. Overall, video quality was predominantly low to moderate. DISCERN scores indicated that only 14.0% of videos were "Excellent," while 66.7% fell within the Poor-to-Fair range. Similarly, only 24.6% of videos were rated High quality on the GQS. JAMA scores were the lowest overall, with 71.1% of content failing to meet basic standards of authorship, attribution, disclosure, or currency. Professionally produced content, particularly academic and specialist videos, consistently outperformed family-, patient-, and other non-health-related sources across all measures. Video duration demonstrated positive associations with quality and reliability, whereas higher comment counts were negatively correlated with all scoring systems. Treatment category alone did not predict quality; instead, uploader identity and information structure were the primary determinants. These findings highlight significant variability and persistent gaps in the quality of ASD treatment information on YouTube. Increased clinician involvement, stronger visibility for evidence-based content, and targeted digital health literacy efforts are needed to support families in navigating online ASD resources.
    Keywords:  Autism spectrum disorder; DISCERN; Global Quality Scale; JAMA; Misinformation; Treatment; YouTube
    DOI:  https://doi.org/10.1016/j.ridd.2026.105254
  27. Digit Health. 2026 Jan-Dec;12:20552076261418849
       Background: Short videos on platforms such as Douyin (the Chinese counterpart of TikTok) and RedNote have rapidly expanded, including growing palliative care-related content. This study evaluated the content and quality of such videos in China.
    Methods: A cross-sectional analysis of the top 100 palliative care videos on Douyin and RedNote (February 2025) was conducted. Baseline characteristics, content features, and quality indicators were assessed. Spearman correlation examined factors associated with video quality and engagement.
    Results: Videos on Douyin demonstrated higher popularity (all p < 0.001 except for comment counts) and quality (all p < 0.01 except for the modified DISCERN scale and the Patient Education Materials Assessment Tool for Audiovisual Content-Actionability) than those on RedNote. Both platforms exhibited generally moderate content quality (median scores of 3, 3, 53.8%, 25%, and 12 for the Global Quality Scale, modified DISCERN scale, Patient Education Materials Assessment Tool for Audiovisual Content-Understandability and -Actionability, and Video Information and Quality Index, respectively). Video quality was related to both the video interactivity index (all p < 0.05 except for comment count) and the uploader's follower count (all p < 0.01). It was also influenced by the uploader's identity type and the video's purpose (all p < 0.05 except for the Patient Education Materials Assessment Tool for Audiovisual Content-Actionability).
    Conclusion: The quality of palliative care-related short videos on Douyin and RedNote is moderate, and there is considerable room for improvement in terms of reliability and accuracy. Platforms should prioritize the recruitment and certification of qualified palliative care professionals, systematically improve content quality, and encourage the contribution of user-generated content from individuals with firsthand palliative care experience.
    Patient or Public Contribution: Publicly shared content from registered social media users, including self-media creators, patients, and family members, was analyzed to evaluate video quality and characteristics.
    Keywords:  RedNote; TikTok; palliative care; social media; video quality
    DOI:  https://doi.org/10.1177/20552076261418849
  28. Front Digit Health. 2025;7:1591347
       Background: Each year, influenza vaccines play a vital role in preventing millions of illnesses and reducing flu-related healthcare visits. Short video platforms (including Douyin, BiliBili, and Xiaohongshu) are powerful vehicles for information sharing and are saturated with videos about children's influenza vaccines. Nevertheless, the quality of these videos remains undetermined.
    Purpose and objectives: This study aims to assess the quality and reliability of videos addressing children's influenza vaccines on three short video platforms: Douyin, BiliBili, and Xiaohongshu.
    Methods: Using a cross-sectional survey design, this study examined three short video platforms (all mainland China versions). In February 2025, the research team searched Douyin, BiliBili, and Xiaohongshu for the keyword "children's influenza vaccine," selecting 300 videos (100 per platform) for analysis. We extracted basic video information, coded the content, and identified each video's source. Two independent reviewers then evaluated video quality using the Journal of the American Medical Association (JAMA) benchmarks, the modified DISCERN (mDISCERN) criteria, and the Global Quality Score (GQS).
    Results: A detailed analysis of 300 videos revealed that on Douyin and Xiaohongshu, most videos were created by professionals and lay users, whereas on BiliBili, most were uploaded by non-profit organizations. Douyin stood out in user engagement, as its videos received significantly higher numbers of likes, comments, favorites, and shares than those on BiliBili and Xiaohongshu (p < 0.001). Furthermore, we observed a strong positive correlation between overall quality scores and comment volume (Spearman ρ = 0.90, p < 0.001); correlations with likes (ρ = 0.77) and favorites (ρ = 0.73) were moderate but still significant (p < 0.001). Score differences also emerged based on source type (p < 0.001). Videos published by health professionals were rated highest, ordinary-user videos received lower ratings, and those from news agencies and non-profit organizations fell in between. In terms of quality ratings, there were no statistically significant differences among the GQS, JAMA, and mDISCERN scoring systems.
    Conclusion: These findings suggest that Douyin, BiliBili, and Xiaohongshu offer moderately rated scientific content regarding children's influenza vaccines. Viewers should exercise caution when watching related videos on these platforms. Moving forward, both the platforms and content creators must strive to improve video quality and reliability to boost vaccination rates. These efforts have important implications for clinical practice, offering new perspectives for health education interventions and better promoting public awareness of vaccination's significance-ultimately contributing positively to public health.
    Keywords:  child; influenza vaccine; information quality; short videos; social media
    DOI:  https://doi.org/10.3389/fdgth.2025.1591347
  29. Subst Use Misuse. 2026 Feb 16. 1-9
       OBJECTIVES: Oral nicotine pouches such as Zyn have gained popularity, with marketing increasingly leveraging social media influencers. This study aimed to analyze the content and engagement metrics of TikTok videos tagged with #Zyn across different influencer types.
    METHODS: A standardized inductive coding approach was used to analyze 130 TikTok videos tagged #Zyn, collected on February 16, 2024. Videos were categorized by influencer type: macro (100,000-1,000,000 followers), micro (1,000-100,000 followers), and nano (<1,000 followers). Content was evaluated for quality using the Global Quality Scale (GQS) and the modified DISCERN scale. Three coders independently coded the videos, and intercoder reliability was assessed using Cohen's kappa, achieving a value of 0.89. Themes analyzed included post origin, health messages, sponsorship and promotion, and personal stories or product insights.
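    Pairwise intercoder agreement of the kind reported here can be checked directly (the study used three coders; Cohen's kappa as computed below covers one pair); the codes are invented:

      # Cohen's kappa between two coders over categorical video codes.
      from sklearn.metrics import cohen_kappa_score

      coder_a = ["promo", "health", "promo", "meme", "health", "promo", "meme"]
      coder_b = ["promo", "health", "promo", "meme", "promo", "promo", "meme"]
      print(round(cohen_kappa_score(coder_a, coder_b), 2))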
    RESULTS: Macro influencers posted longer, higher quality videos and more frequently promoted Zyn as a safer alternative to other tobacco products. Micro-influencers demonstrated the highest engagement and were the only group offering promotional incentives. Nano influencers had the lowest quality scores but the highest use of memes. Health warnings were most frequent in micro and nano influencer content. Adverse health effects were mentioned across all influencer types, with micro-influencers reporting the widest variety.
    CONCLUSION: This study suggests that macro influencers can be utilized to deliver accurate health information about nicotine products, while micro and nano influencers can be strategically engaged to boost audience interaction and communicate health risks, thereby addressing a wider audience and potentially reducing the appeal and misuse of Zyn products.
    Keywords:  Tik Tok; Zyn; Zynfluencers; oral nicotine pouch
    DOI:  https://doi.org/10.1080/10826084.2026.2624783
  30. Eur Burn J. 2026 Feb 06;7(1):9. [Epub ahead of print]
       BACKGROUND: Pathological scarring (PS) following surgical procedures, burns, or trauma poses significant clinical, psychological, and socio-economic challenges. Despite the high prevalence of PS, reliable information resources are limited, often leading individuals to depend on unvalidated online sources. To address this gap, we developed MyScarSpecialist.com, an evidence-based website providing comprehensive information on scar types, characteristics, and treatment options. This study aimed to optimize the website through co-creation with patients and clinicians.
    METHODS: Semi-structured focus group meetings were conducted with patients and carers; sessions were recorded, transcribed, and analyzed using thematic analysis.
    RESULTS: From the three focus group meetings with 15 patients with scars and 3 carers, four key themes emerged: (1) information sources: the role of professionals, peers, and digital media in information sharing; (2) desired information: from scar typing to treatment outcomes to psychosocial impact; (3) website design: audience preferences on content layering, information load, and image positioning; and (4) readability: optimizing content for comprehension. Participants highlighted the need for enhanced peer support and for resources addressing the psychological impact of scarring.
    CONCLUSIONS: These findings provide comprehensive insights for optimizing medical educational websites, ensuring inclusivity, accessibility, and empowerment for patients through co-designed strategies.
    Keywords:  co-creation; health literacy; patient and public involvement; patient education; patient-centered care
    DOI:  https://doi.org/10.3390/ebj7010009
  31. Front Public Health. 2026 ;14 1777890
      
    Keywords:  digital technology; healthcare; information management; online health information; patient education
    DOI:  https://doi.org/10.3389/fpubh.2026.1777890