bims-librar Biomed News
on Biomedical librarianship
Issue of 2026-03-15
thirty-one papers selected by
Thomas Krichel, Open Library Society



  1. Eur J Hum Genet. 2026 Mar 07.
      In the era of rapidly accumulating genomic data, largely driven by the broad use of whole-genome sequencing (WGS) in clinical settings, interpreting lesser-known genes with varied phenotypes remains challenging. PubMatcher is a new tool that simplifies bibliographic research for multiple genes at once and grants quick and easy access to relevant gene information. It helps users efficiently identify potential genotype-phenotype associations using PubMed complemented by additional data. By significantly reducing analysis time, PubMatcher supports the interpretation of novel or under-documented genes. Freely available for academic and non-commercial use, PubMatcher is a user-friendly and efficient solution for researchers, clinical scientists and clinical geneticists working on pan-genomics analyses.
    DOI:  https://doi.org/10.1038/s41431-026-02068-z
  2. Open Res Eur. 2025 ;5 335
       Background: The European Research Infrastructure for Heritage Science (E-RIHS), recently granted European Research Infrastructure Consortium (ERIC) legal status, aims to advance research by facilitating access to cutting-edge scientific services and tools in the domain of heritage science. One of the major challenges and achievements during its implementation phase (2022-2024, G.A. 101079148) was the creation of the Catalogue of Services (CoS), a digital platform that helps users find, request, and manage access to both physical and digital services offered by E-RIHS partners.
    Method: This paper introduces the concept, design, and development of the E-RIHS CoS, emphasising how it follows FAIR (Findable, Accessible, Interoperable, Reusable) and Open Science principles. Built with a strong focus on real research needs, the platform features a flexible and scalable architecture. It includes tools like semantic search, automated workflows, and customized dashboards based on user roles. The paper also places the CoS in the broader context of similar platforms from other research infrastructures and points out its novel features, such as a recommendation engine, multilingual support, and advanced data analytics.
    Results and Conclusions: The E-RIHS CoS is now online, providing a single point of access to E-RIHS ERIC services and making it easy to find and select the most appropriate scientific services for users' research questions. It is a solid and forward-thinking digital tool designed to support high-quality research, foster collaboration, and make heritage science more inclusive and accessible.
    Keywords:  Catalogue of services; Data Management; E-RIHS; FAIR; Heritage Science; Open Science; Research Infrastructure
    DOI:  https://doi.org/10.12688/openreseurope.20798.2
  3. J Nurs Scholarsh. 2026 Mar;58(2): e70076
       INTRODUCTION: Systematic reviews (SRs) require comprehensive, reproducible searches, yet developing search strategies is resource-intensive and demands specialized expertise. Generative AI offers potential to streamline this process, but empirical evaluations of GAI-assisted SR searching remain scarce. The objectives of this study were to demonstrate a step-by-step process for developing a custom ChatGPT-based chatbot to support SR search strategy development and to evaluate its performance.
    DESIGN: A cross-sectional evaluation study.
    METHODS: We used ChatGPT-4.0 to create a chatbot designed to mimic a medical librarian, generating PICO-informed searches. Its knowledge base was augmented with two methodological references. After pilot testing, we refined its instructions. For evaluation, we randomly sampled 50 Cochrane SRs published in 2024. Standardized P-I-O prompts produced database-ready queries for PubMed and Embase. The primary outcome was per-review success rate, summarized by median and inter-quartile range. A sensitivity analysis was conducted.
    RESULTS: Pilot testing achieved a retrieval rate of 41/49 (83.7%). In the main sample (1169 studies; median 13.5 studies per SR), the chatbot identified a median of 67.4% of included studies (IQR: 43.1%-88.4%). When limited to indexed studies (n = 1114), retrieval rose to 72.0% (IQR: 46.0%-92.5%). Lower performance was observed when outcomes were absent from the abstracts or interventions had many lexical variants.
    CONCLUSIONS: A GAI-based chatbot can rapidly generate SR searches (~67%-72% identification), serving as a useful starting point but not a replacement for expert-led approaches. Integration of librarian expertise, structured prompts, and controlled vocabularies may improve performance. Further benchmarking and transparent reporting are needed to guide adoption.
    Keywords:  database searching; generative artificial intelligence; large language model; systematic review
    DOI:  https://doi.org/10.1111/jnu.70076
  4. BMJ Open. 2026 Mar 10. 16(3): e099887
       OBJECTIVES: To examine which information sources medical specialists use to answer clinical questions in daily practice and to describe the relative frequency of use for each source.
    DESIGN: Systematic review with narrative synthesis and meta-analysis.
    DATA SOURCES: Academic Search Premier, APA PsycINFO, CINAHL, Emcare, Cochrane Library, Web of Science, Embase and PubMed were searched for relevant studies published from 2000 to 1 June 2025.
    ELIGIBILITY CRITERIA: We included peer-reviewed English-language studies reporting on the frequency of information source usage by medical specialists when addressing clinical questions. Studies reporting usage on a continuous (0-100%) scale were eligible for meta-analysis.
    DATA EXTRACTION AND SYNTHESIS: Two reviewers independently screened studies. Data were extracted by one reviewer and checked by a second. Study quality was assessed using the Quality Assessment with Diverse Studies (QuADS) tool. A narrative synthesis was conducted for studies that were not eligible for quantitative pooling to summarise patterns in information-seeking behaviour and reported barriers. A random-effects meta-analysis was performed for studies reporting continuous usage percentages and assessing at least four information sources. Sensitivity analyses were conducted using a leave-one-out approach. Potential publication bias was explored descriptively using funnel plots.
    RESULTS: 25 studies were included, of which 6 (with 8641 participants) were eligible for meta-analysis. The narrative synthesis of non-pooled studies showed a consistent reliance on standalone information sources and identified barriers to the use of aggregated sources. In the meta-analysis, digital databases such as PubMed were the most frequently used information source (74%, 95% CI 63% to 85%), followed by textbooks (71%, 95% CI 57% to 85%) and consultation with colleagues (43%, 95% CI 15% to 71%). Systematically aggregated sources, including clinical practice guidelines (38%, 95% CI 27% to 49%) and point-of-care websites (49%, 95% CI 17% to 81%), were used less frequently. Sensitivity analyses indicated that pooled estimates were generally robust, although results should be interpreted cautiously given methodological variability across studies.
    CONCLUSIONS: Medical specialists predominantly rely on standalone information sources when addressing clinical questions, while systematically aggregated and interpreted sources such as clinical practice guidelines and point-of-care tools are used less frequently. These findings highlight the need to better understand and address barriers to the use of aggregated information sources in clinical practice.
    PROSPERO REGISTRATION NUMBER: CRD42022267431.
    Keywords:  Evidence-Based Medicine; Information source management; clinical information; digital resources; medical specialists; systematic review
    DOI:  https://doi.org/10.1136/bmjopen-2025-099887
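    The pooled usage percentages in the entry above come from a random-effects meta-analysis. A minimal sketch of the standard DerSimonian-Laird estimator that underlies such pooling is shown below; this is illustrative only (not the authors' code), uses a simple 1.96 normal approximation for the 95% CI, and omits refinements a published analysis may apply.

    ```python
    import math

    def dersimonian_laird(effects, variances):
        """DerSimonian-Laird random-effects pooled estimate and 95% CI.

        effects: per-study effect sizes (e.g. usage proportions)
        variances: per-study sampling variances (assumes >= 2 studies)
        """
        w = [1.0 / v for v in variances]              # fixed-effect weights
        fixed = sum(wi * e for wi, e in zip(w, effects)) / sum(w)
        # Cochran's Q and the between-study variance tau^2
        q = sum(wi * (e - fixed) ** 2 for wi, e in zip(w, effects))
        df = len(effects) - 1
        c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
        tau2 = max(0.0, (q - df) / c)
        # re-weight with tau^2 added to each study's variance
        w_star = [1.0 / (v + tau2) for v in variances]
        pooled = sum(wi * e for wi, e in zip(w_star, effects)) / sum(w_star)
        se = math.sqrt(1.0 / sum(w_star))
        return pooled, (pooled - 1.96 * se, pooled + 1.96 * se)
    ```

    Pooling identical proportions returns that proportion with zero estimated between-study variance; heterogeneous inputs widen the interval via tau^2.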
  5. Glycobiology. 2026 Mar 09. pii: cwag015. [Epub ahead of print]
      To facilitate access to literature text relevant to the data in GlyCosmos, we have developed a collection of annotated literature resources using the agile annotation method supported by the PubAnnotation system. As a proof of concept, we compiled two dictionaries for glycan motifs and epitopes, plus six additional dictionaries for relevant biological entities, covering organisms, phenotypes, diseases, and anatomical locations. Next, we collected all the PubMed abstracts from 15 selected journals, and annotated them based on these eight dictionaries. This resulted in 279,368 annotation instances made to 15,463 abstracts, meaning that we were able to automatically pull glycan motif and epitope annotations related to diseases, taxonomy, etc. from over 15,000 abstracts. All the annotations were converted into Resource Description Framework (RDF) statements to support flexible querying. For users who are not familiar with RDF, we also developed a Web interface in GlyCosmos to visualize the location of the text in publications as well as query templates to personalize queries for specific terms. Pilot searches and analyses suggest that these resources are useful for navigation of relevant contexts of biomedical associations relevant to glycobiology.
    Keywords:  annotation; bioinformatics; databases; glycans; literature mining
    DOI:  https://doi.org/10.1093/glycob/cwag015
  6. Front Public Health. 2026 ;14 1777577
       Introduction: Large language models (LLMs) are increasingly used by the public to obtain health information, yet the relationship between content quality and readability in LLM-generated patient education remains unclear.
    Methods: We benchmarked five LLMs (Doubao, DeepSeek, Wenxin Yiyan, Tongyi Qianwen, and GPT-5) using an identical set of 20 Mandarin Chinese skin-cancer FAQs (100 total outputs). Quality was assessed using c-PEMAT-P and the Global Quality Scale (GQS), and readability was assessed using seven indices (ARI, FRES, GFOG, FKGL, CL, SMOG, and LW). Group differences and correlations were evaluated with appropriate statistical tests.
    Results: Models showed comparable understandability/actionability (c-PEMAT-P), while overall quality (GQS) differed, with GPT-5 scoring highest. Readability varied substantially by both model and content category, and no single model performed best across all readability metrics. Correlation analyses indicated that quality and readability were largely decoupled.
    Discussion: High-quality outputs do not necessarily have high readability. Optimizing AI-generated skin-cancer education requires multi-faceted strategies that jointly consider model choice and content topic.
    Keywords:  digital public health communication; health information quality (C-PEMAT, GQS); large language models (LLMs); readability assessment; skin cancer education
    DOI:  https://doi.org/10.3389/fpubh.2026.1777577
  7. Front Public Health. 2026 ;14 1760872
       Objective: Large language models (LLMs), a core technology of generative artificial intelligence (AI), are increasingly used in health education and promotion. Although they may expand access to medical information, concerns remain regarding the reliability and readability of AI-generated content for the public. This study evaluated the reliability and readability of answers generated by five LLMs to common questions about perinatal depression. The primary aims were to determine (1) the reliability of LLM responses to frequently asked questions about perinatal depression and (2) whether the readability of the generated content aligns with public health literacy levels.
    Methods: Twenty-seven frequently asked questions were derived from Google Trends and patient-facing resources from the American College of Obstetricians and Gynecologists (ACOG). Each question was submitted to ChatGPT-5, Gemini-2.5, Microsoft Copilot, Grok4, and DeepSeek. Two obstetricians independently rated responses using five validated instruments (DISCERN, EQIP, JAMA, GQS, and HONCODE), and inter-rater agreement was quantified using the intraclass correlation coefficient (ICC). Readability was assessed using six indices: ARI, GFI, CLI, OLWF, LWGLF, and FRF. Differences among models were analyzed using the Friedman test.
    Results: Inter-rater agreement was high across the 27 perinatal depression questions; ICC values ranged from 0.729 to 0.847. Significant between-model differences emerged for DISCERN, EQIP, and HONCODE (all p < 0.001). No overall differences were found for JAMA and GQS. Grok4 scored highest on DISCERN at 60.33 ± 5.48. DeepSeek scored highest on EQIP at 53.04 ± 4.91. Copilot scored highest on HONCODE at 9.26 ± 1.85. These results highlight distinct strengths in quality constructs across instruments. Readability posed a common limitation. All models exceeded the NIH-recommended sixth-grade level on grade-based indices (for example, ARI ranged from 13.49 ± 2.92 to 15.81 ± 3.25). Similarly, OLWF scores fell well below the sixth-grade benchmark of 94 (ranging from 61.44 ± 6.80 to 72.96 ± 10.39, where higher scores denote easier reading). Most models produced empathetic and informative content. However, they fell short in fully addressing clinical safety standards.
    Conclusion: Most LLMs demonstrated moderate to high reliability when responding to perinatal depression questions, supporting their potential as supplementary sources of health information. However, readability levels above recommended benchmarks suggest that current outputs may remain challenging for individuals with lower health literacy. While LLMs improve information accessibility, further improvements in readability, source attribution, and ethical transparency are needed to maximize public benefit and support equitable health communication. Future work should focus on defining and standardizing safety behaviors in high-risk mental health contexts to enable reliable clinical deployment.
    Keywords:  generative artificial intelligence; health information quality; large language models; perinatal depression; postpartum depression; readability
    DOI:  https://doi.org/10.3389/fpubh.2026.1760872
  8. Cent European J Urol. 2026 ;79(1): 1-8
       Introduction: Upper tract urothelial carcinoma (UTUC) is associated with poor survival outcomes. Therefore, providing reliable information about UTUC is crucial. Recently, chatbots powered by large language models have become a widely used information source. Our aim was to evaluate and compare responses generated by ChatGPT-4o and DeepSeek-R1 to patient-important questions regarding UTUC.
    Material and methods: A set of 43 questions assigned to four categories (general information, symptoms and diagnosis, treatment, prognosis) was curated. Each question was entered into DeepSeek-R1 and ChatGPT-4o. Answers were rated by two urologists using a scale from 1 (completely incorrect) to 4 (fully correct). The median score was calculated for each question. Median scores ≥3 were considered accurate. The repeatability of responses was evaluated using cosine similarity. The number of words in responses was counted.
    Results: The median scores for DeepSeek-R1 and ChatGPT-4o were both 3.5. There was no statistically significant difference between the scores assigned to the two chatbots for all questions (p = 0.35), nor for any particular category. DeepSeek-R1 and ChatGPT-4o provided satisfactory answers for 93% and 91% of the evaluated questions, respectively. No potentially dangerous information was found. Both models consistently generated responses with moderate-high similarity (cosine similarity >0.5), except in one query. Finally, DeepSeek-R1 provided significantly longer answers than ChatGPT-4o (p < 0.001).
    Conclusions: Both DeepSeek-R1 and ChatGPT-4o predominantly provide satisfactory responses to patient-important questions about UTUC. Artificial intelligence chatbots demonstrate potential as the first-line information sources for patients but struggle with highly specialized inquiries and thus cannot replace expert medical advice.
    Keywords:  AI; ChatGPT; DeepSeek; artificial intelligence; upper tract urothelial carcinoma
    DOI:  https://doi.org/10.5173/ceju.2025.0238
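    The repeatability check in the entry above relies on cosine similarity between repeated chatbot responses. The paper does not specify its vectorization, so the bag-of-words term-frequency representation below is an assumption; it is a minimal sketch, not the authors' pipeline.

    ```python
    from collections import Counter
    import math

    def cosine_similarity(text_a: str, text_b: str) -> float:
        """Cosine similarity between two texts using term-frequency vectors
        (lowercased, whitespace-tokenized; a deliberate simplification)."""
        va, vb = Counter(text_a.lower().split()), Counter(text_b.lower().split())
        dot = sum(va[w] * vb[w] for w in set(va) & set(vb))
        norm_a = math.sqrt(sum(c * c for c in va.values()))
        norm_b = math.sqrt(sum(c * c for c in vb.values()))
        if norm_a == 0 or norm_b == 0:
            return 0.0
        return dot / (norm_a * norm_b)
    ```

    Identical responses score 1.0 and responses sharing no vocabulary score 0.0, so the paper's >0.5 threshold marks moderate lexical overlap between regenerated answers.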
  9. PEC Innov. 2026 Jun;8 100462
       Objective: We conducted an analysis of online information on noninvasive prenatal testing (NIPT) provided by Japanese medical institutions recognized by the certification system (certified institutions) and other institutions (non-certified institutions).
    Methods: We identified institutional websites via Google Japan searches using three keywords related to prenatal testing (N = 37). Comprehensiveness was assessed using domestic and international guidelines. Quality was measured using the DISCERN instrument, and readability was evaluated using jReadability.
    Results: Among certified institutions, the mean comprehensiveness score was 7.36 out of 20, and the mean DISCERN score was 42.6 out of 80, categorized as "fair," although nearly half were rated "poor" or "very poor." Most websites required "lower advanced" reading proficiency. Websites from non-certified institutions showed higher comprehensiveness and readability than certified institutions.
    Conclusion: NIPT-related information from certified institutions in Japan is often insufficient in quality. By contrast, non-certified institutions, although ethically problematic, may provide more comprehensive and higher-quality information. Certified institutions should improve the quality and clarity of their web-based communication to support patients' decision making.
    Innovation: This is the first study to assess NIPT-related websites of Japanese medical institutions. Critical informational gaps were identified, highlighting the need for trustworthy and readable online resources.
    Keywords:  DISCERN; Genetic counseling; Health information; Internet; Non-invasive prenatal testing; Readability; Shared decision making
    DOI:  https://doi.org/10.1016/j.pecinn.2026.100462
  10. Muscle Nerve. 2026 Mar 07.
       INTRODUCTION/AIMS: Myasthenia gravis (MG) is a chronic autoimmune neuromuscular disorder that requires complex treatment decisions and sustained disease self-management. Health literacy is essential for patient understanding of MG, yet online patient education materials (PEMs) have not been systematically studied. This study aimed to evaluate the readability of MG-related PEMs, assess the inclusion of key topics, and identify characteristics associated with more accessible and comprehensive resources.
    METHODS: We conducted a cross-sectional analysis of MG-related PEMs identified through Google and Bing, evaluating readability using the Flesch Reading Ease Score (FRES), Flesch-Kincaid Grade Level (FKGL), and Simple Measure of Gobbledygook (SMOG). Content analysis was conducted for inclusion of key MG topics. PEMs were categorized by organization type, target population age, and geographic location. Statistical tests included Wilcoxon signed-rank, Kruskal-Wallis, Mann-Whitney U, and Fisher's exact test.
    RESULTS: All 149 PEMs exceeded the recommended 6th-grade reading level (median FRES 41.8, FKGL 11.4, SMOG 12.9; p < 0.001). Community organization PEMs were more readable than academic PEMs (median FKGL 10.4 vs. 11.6; p = 0.02, median FRES 44.5 vs. 39.4; p = 0.01). Frequently included topics were MG definition (94%), symptoms (93%), and immunotherapy (85%), while medications to avoid (30%), myasthenic crisis (62%), and mental health (10%) were the least included. Readability varied by topic, with immunotherapy and thymectomy sections being the most complex.
    DISCUSSION: MG PEMs are written above recommended readability levels, posing barriers to comprehension. Improving readability and addressing gaps in critical topics, such as medication safety, myasthenic crisis, and mental health could enhance patient understanding and support informed decision-making.
    Keywords:  content analysis; digital health literacy; myasthenia gravis; patient education; readability
    DOI:  https://doi.org/10.1002/mus.70185
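    The FKGL metric reported in the entry above is a fixed formula over word, sentence, and syllable counts: 0.39 × (words/sentences) + 11.8 × (syllables/words) − 15.59. A rough sketch follows; the vowel-group syllable heuristic is a simplification (published tools such as those used in the study count syllables more carefully), so treat outputs as approximate.

    ```python
    import re

    def count_syllables(word: str) -> int:
        """Rough syllable count: runs of vowels, minus a silent trailing 'e'."""
        word = word.lower()
        n = len(re.findall(r"[aeiouy]+", word))
        if word.endswith("e") and n > 1:
            n -= 1
        return max(n, 1)

    def fkgl(text: str) -> float:
        """Flesch-Kincaid Grade Level for a non-empty passage."""
        sentences = max(len(re.findall(r"[.!?]+", text)), 1)
        words = re.findall(r"[A-Za-z]+", text)
        syllables = sum(count_syllables(w) for w in words)
        return 0.39 * (len(words) / sentences) + 11.8 * (syllables / len(words)) - 15.59
    ```

    Short sentences of one-syllable words land near (or below) grade 0, while the long, polysyllabic prose typical of patient education materials drives the score toward the 11-13 range the study reports.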
  11. BMC Musculoskelet Disord. 2026 Mar 07.
      
    Keywords:  Consumer health information; Evidence-based health information; Informed decision-making; Online health information; Osteoporosis; Quality
    DOI:  https://doi.org/10.1186/s12891-026-09711-2
  12. Cancer Control. 2026 Jan-Dec;33: 10732748261434669
       Introduction: Little is known about the quality of Arabic health information available online, particularly regarding information on head and neck cancer. This study evaluated the quality of web-based Arabic information on head and neck cancer and its treatment.
    Methods: Eleven Arabic keywords were used to search for the content using Google (Google.com). The quality of written material was assessed using the Patient Education Materials Assessment Tool (PEMAT)-P and four JAMA Quality Benchmarks (authorship, attributions, disclosure, and currency), whereas the quality of audiovisual (AV) material was assessed using the PEMAT-A/V tool.
    Results: A total of 315 websites were analyzed, of which 17 and 298 contained AV and written materials, respectively. Among these, 198 sites were sourced from scientific outlets, such as Medscape, whereas 117 originated from layperson platforms, such as news articles and blogs. The mean PEMAT scores for understandability and actionability were 61% (± 16.03) and 17% (± 24.89), respectively. The average overall PEMAT score was as low as 49% (± 15.55). Additionally, the included websites did not meet the JAMA benchmarks, with an average overall rating of 1.11 (± 0.94) out of a total score of 4. None of the websites achieved all benchmarks, and 23 (7%) achieved three.
    Conclusion: Notably poor-quality Arabic information on head and neck cancers online was observed, with questionable actionability, understandability, and veracity of sources for the presented information. Clinicians and healthcare policymakers should consider creating accessible, evidence-based, up-to-date web content on head and neck cancers tailored to patients' needs and wishes.
    Keywords:  consumer health information; head and neck diseases; health literacy; patient education; value-based health care
    DOI:  https://doi.org/10.1177/10732748261434669
  13. PLoS One. 2026 ;21(3): e0343573
       INTRODUCTION: Metabolic dysfunction-associated steatotic liver disease (MASLD)/non-alcoholic fatty liver disease (NAFLD) represents a significant public health concern. Social media (SoMe) increasingly influences health perceptions in lower-middle-income countries, with one-third of Sri Lanka's population using SoMe for health information. Assessing MASLD content quality on SoMe is therefore important.
    AIMS & METHODS: This cross-sectional study assessed accuracy, completeness, and quality of MASLD content across Facebook, YouTube, TikTok, Instagram, and X in Sinhala, English, and Tamil from Sri Lanka (January 2005-December 2024). Board-certified gastroenterologists independently reviewed posts using standardised scales for accuracy (0-3), completeness (0-5), and global quality score (GQS) (0-5). Posts were categorised by source profile and content type, with user interactions analysed.
    RESULTS: Analysis included 289 posts: 158 (54.7%) YouTube, 101 (34.9%) Facebook, 14 (4.8%) TikTok, 11 (3.8%) X, 5 (1.7%) Instagram. Languages: 214 (74.0%) Sinhala, 54 (18.7%) Tamil, 21 (7.3%) English. Content sources: undisclosed identity (36.0%), non-healthcare persons (26.0%), healthcare professionals (22.1%), alternative healthcare professionals (14.2%), healthcare institutions (1.7%). Health promotion (61.9%) was the predominant content type. Mean accuracy was 1.78/3 (59.3%), with healthcare professionals scoring highest (2.35/3, 78.5%) versus others (51.0-55.1%; p < 0.001). Completeness averaged 2.1/5 (42%), with English content scoring higher than Sinhala and Tamil. GQS averaged 2.4/5 (48.4%). 82% of posts were classified as "Rotten" (<60% score for each metric). Facebook and YouTube showed significantly higher completeness and GQS (p < 0.05). User engagement metrics showed no correlation with content quality.
    CONCLUSION: Most SoMe content originated from non-healthcare sources. Healthcare professionals delivered the most accurate content. Facebook and YouTube showed relatively higher content quality scores, though comparisons are limited by the small number of posts from other platforms. Overall quality remained suboptimal across platforms, with 82% failing adequate standards. User engagement did not correlate with quality. These findings highlight the need for improved quality control and health literacy initiatives for MASLD information on SoMe platforms.
    DOI:  https://doi.org/10.1371/journal.pone.0343573
  14. Neurochirurgie. 2026 Mar 06. pii: S0028-3770(26)00029-9. [Epub ahead of print]72(2): 101795
       INTRODUCTION: YouTube is increasingly used for neurosurgical learning; however, the educational quality, transparency, and reliability of neurosurgery-related content-and whether these features differ by video source-remain unclear.
    OBJECTIVE: To synthesize published evaluations of neurosurgical YouTube videos and meta-analyze standardized quality/reliability scores, exploring source- and time-related differences and reporting gaps in validation and procedural completeness.
    METHODS: Following PRISMA, we searched PubMed, Scopus, Embase, and Web of Science (2017-2024) for studies assessing neurosurgical YouTube videos using standardized tools (DISCERN/mDISCERN, JAMA Benchmark, Global Quality Score [GQS]). Data were pooled using random-effects meta-analysis with Hartung-Knapp adjustment; scores were also transformed to the Proportion of Maximum Possible (POMP, 0-100).
    RESULTS: Sixteen studies (12-1,233 videos each) were included. On native scales, pooled means were: DISCERN per-item 3.06/5, DISCERN total 30.1/80, JAMA 2.41/4, and GQS 3.04/5. Harmonized POMP point estimates (0-100) were: DISCERN 39.8, JAMA 60.3, and GQS 51.0. Heterogeneity was substantial (I2 > 95%) except for DISCERN total (I2 = 0%). Subgroup analyses suggested higher scores for institutional versus non-institutional sources, although meta-regression did not confirm significance. Validation and procedural completeness were infrequently reported.
    CONCLUSION: Neurosurgical YouTube content shows moderate-to-low educational quality with substantial inconsistency. Institutional sources may perform better, but gaps in transparency, validation, structure, and procedural completeness are common. Standardized production criteria and curated peer-reviewed repositories may improve safe integration into neurosurgical education.
    CLINICAL TRIAL NUMBER: Not applicable.
    Keywords:  DISCERN; GQS; JAMA benchmark; Neurosurgical education; YouTube
    DOI:  https://doi.org/10.1016/j.neuchi.2026.101795
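    The POMP harmonization in the entry above is a linear rescaling of each instrument's score onto a common 0-100 range. The sketch below assumes the conventional scale bounds (JAMA 0-4, GQS 1-5); under that assumption it approximately reproduces two of the harmonized point estimates reported.

    ```python
    def pomp(score: float, scale_min: float, scale_max: float) -> float:
        """Proportion of Maximum Possible: linearly rescale a score to 0-100."""
        return 100.0 * (score - scale_min) / (scale_max - scale_min)

    # Assumed bounds: JAMA benchmark 0-4, GQS 1-5.
    # pomp(2.41, 0, 4) -> 60.25, close to the harmonized JAMA estimate of 60.3
    # pomp(3.04, 1, 5) -> 51.0, matching the harmonized GQS estimate
    ```

    Because POMP depends on the assumed minimum and maximum, a scale anchored at 1 (like GQS) rescales differently from one anchored at 0 (like JAMA), which is why the transformation must be applied per instrument.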
  15. Brachytherapy. 2026 Mar 12. pii: S1538-4721(26)00002-4. [Epub ahead of print]
       PURPOSE: Patients diagnosed with cancer turn to social media to learn about diagnoses and treatments, but there are concerns of bias and misinformation. Information about brachytherapy on social media has not been evaluated for overall quality. The purpose of this paper is to review YouTube videos on brachytherapy and to analyze their content.
    METHODS AND MATERIALS: YouTube was queried on June 30, 2025 using the keywords prostate brachytherapy, cervical brachytherapy, endometrial brachytherapy, vaginal brachytherapy, skin brachytherapy, breast brachytherapy, HDR brachytherapy, and LDR brachytherapy. Videos were ordered by relevance, and the top five videos meeting the length criterion (<16 min) were independently analyzed by a radiation oncology attending (brachytherapy expert), a radiation oncology resident, and an undergraduate student. Discordant answers were reviewed by the radiation oncology attending and resident.
    RESULTS: Forty videos across the eight keywords were reviewed, with an average of 18,356 views (range: 36-203,131), 216 likes (range: 0-1900), and 15 comments (range: 0-123). There was fair agreement between the reviewers on bias, Fleiss' kappa κ = 0.378 (95% CI, 0.197-0.559), p < 0.001; however, the radiation oncology attending and resident detected more misinformation and bias in the prostate, skin, HDR, and LDR videos than the undergraduate reviewer, Cochran's Q test χ2(2) = 12.29, p = 0.002 and χ2(2) = 20.93, p < 0.001, respectively.
    CONCLUSION: Prostate and skin brachytherapy videos have a greater frequency of misinformation and/or bias than gynecologic brachytherapy videos, which patients may not readily detect. Increased efforts to create complete, accurate, and unbiased content for brachytherapy patients are warranted.
    Keywords:  Brachytherapy; Cervical; Endometrial; HDR; LDR; Prostate
    DOI:  https://doi.org/10.1016/j.brachy.2026.01.003
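    Fleiss' kappa, used in the entry above to quantify agreement among the three reviewers, compares observed per-item agreement against the agreement expected by chance from the overall category proportions. A compact sketch (assuming an equal number of raters per item, as in the study's three-reviewer design):

    ```python
    def fleiss_kappa(ratings):
        """Fleiss' kappa for a table where rows are items, columns are
        categories, and each cell counts the raters choosing that category.
        Assumes the same number of raters for every item."""
        n_items = len(ratings)
        n_raters = sum(ratings[0])
        totals = [sum(col) for col in zip(*ratings)]   # per-category totals
        p_j = [t / (n_items * n_raters) for t in totals]
        # observed agreement for each item
        p_i = [
            (sum(c * c for c in row) - n_raters) / (n_raters * (n_raters - 1))
            for row in ratings
        ]
        p_bar = sum(p_i) / n_items                     # mean observed agreement
        p_e = sum(p * p for p in p_j)                  # chance agreement
        return (p_bar - p_e) / (1 - p_e)
    ```

    Perfect unanimity gives kappa = 1.0, and values around 0.2-0.4 (like the study's 0.378) are conventionally read as "fair" agreement.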
  16. Int Wound J. 2026 Mar;23(3): e70885
      Exosome therapy has emerged in recent years as a promising acellular approach for tissue regeneration and wound healing. Initial preclinical investigations have demonstrated accelerated fibroblast proliferation, improved angiogenesis and reduced scar formation. Patients are more likely to learn about these treatments from digital platforms than from peer-reviewed scientific publications. This study assessed the credibility and instructional merit of exosome therapy content on YouTube, a prominent source of online health information. A cross-sectional study was performed on the first 50 English-language videos retrieved using the keyword 'exosome therapy' (15 July 2025). The videos were independently assessed by two plastic surgeons and a dermatologist using three validated scoring systems: the Journal of the American Medical Association (JAMA) benchmark criteria for transparency and reliability, the Global Quality Score (GQS) for overall educational quality, and the modified DISCERN tool for content reliability and balance. The median scores obtained were consistently low (DISCERN: 2.67; GQS: 2.67; JAMA: 2.00). However, videos created by doctors scored significantly higher than videos published by patients, companies and YouTubers (p < 0.05). A strong positive correlation was identified between DISCERN and GQS (ρ = 0.95, p < 0.001). Despite substantial experimental evidence for the efficacy of exosome therapy in acute and chronic wounds, none of the videos addressed its recognised wound healing applications; instead, the content focused primarily on cosmetic enhancement, anti-ageing interventions and beauty-related applications. This discrepancy between scientific advances in regenerative medicine, dermatology and plastic surgery and the accessibility of online educational resources highlights the need for professional health organisations to provide accessible, evidence-based materials that accurately demonstrate the therapeutic potential of exosomes in wound healing.
    Keywords:  YouTube; dermatology; digital health information; exosome therapy; plastic surgery; regenerative medicine; wound healing
    DOI:  https://doi.org/10.1111/iwj.70885
  17. Digit Health. 2026 Jan-Dec;12: 20552076261431598
       Objectives: TikTok and Bilibili have gradually become important platforms for the public to access health information. This study aims to evaluate the quality and reliability of myasthenia gravis (MG)-related videos on these platforms.
    Methods: This study collected the top 150 MG-related videos from both platforms. General characteristics, uploader identity, and engagement metrics were extracted. The Global Quality Score (GQS) and modified DISCERN (mDISCERN) were used to evaluate video quality and reliability. Mann-Whitney U-test and Kruskal-Wallis H-test were used for inter-group comparisons, and Spearman's rank correlation analysis was performed to assess correlations.
    Results: A total of 225 videos were included in this study. The content of the videos predominantly focused on symptoms (63.6%) and treatment (49.3%), while diagnosis (25.3%) and prevention (13.3%) were less represented. The median GQS for TikTok was 3 (2-3), and the median mDISCERN was 2 (2-3). For Bilibili, the median GQS was 2 (1-3), and the median mDISCERN was 2 (2-2). Videos uploaded by specialists were of higher quality and reliability compared to those uploaded by individual users (P < 0.05). Engagement metrics showed weak correlations with GQS and mDISCERN (P < 0.05).
    Conclusion: MG-related videos on both platforms have incomplete content structures, with low quality and reliability. Videos uploaded by specialists were of higher quality and reliability than those uploaded by individual users. Engagement metrics showed weak correlations with video quality and reliability. Platforms should strengthen content moderation and professional involvement to improve the quality of digital health education.
    Keywords:  Bilibili; Myasthenia gravis; TikTok; health communication; short videos
    DOI:  https://doi.org/10.1177/20552076261431598
  18. Digit Health. 2026 Jan-Dec;12: 20552076261429670
       Background: Chimeric antigen receptor T-cell (CAR-T) therapy represents a transformative advancement in cancer treatment. While public interest in CAR-T has surged, particularly through short-video platforms like TikTok and Bilibili in China, concerns remain regarding the reliability and quality of health information disseminated through such media.
    Objective: This study aimed to systematically evaluate the content quality, scientific integrity, and user engagement of CAR-T-related videos on Bilibili and TikTok, and to assess whether high traffic equates to high information quality.
    Methods: A total of 200 Chinese-language videos (100 per platform) were identified using the keyword "CAR-T." Videos were evaluated using three scoring tools: the DISCERN instrument for reliability, the Global Quality Score (GQS), and a novel CAR-T-specific checklist assessing 12 core domains. Content characteristics, source types, and engagement metrics (likes, comments, shares, and saves) were also extracted and compared across platforms and content types.
    Results: TikTok videos demonstrated significantly higher user engagement but poorer structure and lower DISCERN scores than Bilibili (P < 0.001). Videos posted by medical professionals were more common on TikTok (56%) and had higher engagement, but not necessarily higher quality. Bilibili, dominated by academic sources, produced longer videos with more complete and structured information. Correlation analysis revealed strong consistency among quality scoring tools but weak associations between quality and engagement metrics, suggesting a "high popularity-low quality" paradox.
    Conclusion: CAR-T-related content on Chinese short-video platforms is characterized by a disconnect between popularity and information quality. Effective science communication strategies and platform-level interventions are needed to mitigate misinformation risks and improve the dissemination of high-quality medical content.
    Keywords:  Bilibili; CAR-T therapy; DISCERN; Global Quality Score; TikTok; content quality; health communication; short-video platforms
    DOI:  https://doi.org/10.1177/20552076261429670
  19. Digit Health. 2026 Jan-Dec;12: 20552076261431854
       Objective: Keloids are benign fibrous dermal tumors that typically result in excessive scar tissue formation, affecting appearance and potentially causing discomfort. As social media platforms like TikTok become important sources of health information, the number of keloid-related videos has increased. However, the quality and reliability of these videos remain unclear. This study aims to evaluate the content, quality, and reliability of keloid-related videos on TikTok.
    Methods: A total of 85 keloid-related videos on TikTok were analyzed. Video characteristics, uploader types, and content themes were extracted. The Global Quality Score and modified DISCERN tool were used to assess video quality and reliability. Correlation analysis was conducted between video metrics and quality scores.
    Results: Videos were generally short (median: 48 s) with high engagement (median likes: 166, saves: 44). Common topics included treatment (87.06%), clinical manifestations (55.29%), and diagnosis (51.76%), while prevention, precipitating factors, and recurrence were less frequently discussed. Videos uploaded by healthcare professionals had significantly higher quality than those from individual users. Positive correlations were found among engagement metrics (likes, comments, saves), but no correlation was observed between engagement and video quality.
    Conclusions: While keloid-related TikTok videos show high engagement, their overall quality and reliability are low. Increasing healthcare professional involvement and improving platform content regulation are essential to enhance the educational value of health information.
    Keywords:  Keloids; TikTok; health information; healthcare professionals; video quality
    DOI:  https://doi.org/10.1177/20552076261431854
  20. Clin Ophthalmol. 2026 ;20 575279
       Purpose: Patients increasingly seek information about medical treatment options from the internet. This study evaluated the quality and accuracy of YouTube and Facebook videos on glaucoma treatment options.
    Methods: A comprehensive search of "glaucoma" and "eye pressure" combined with "treatment" or "cure" was performed. YouTube videos with at least 25,000 views and 25 views per day and Facebook videos with at least 1000 total views were included. Videos were excluded if they were not in English or not about humans. The quality of videos was evaluated by two independent reviewers using a modified Currency, Relevance, Authority, Accuracy, and Purpose (CRAAP) metric. Videos were categorized as educational, testimonial, or advertisement.
    Results: A total of 74 YouTube videos and 19 Facebook videos were included. Of the YouTube videos, 89.7% were educational, 5.5% testimonials, and 4.8% advertisements. Of the Facebook videos, 65.8% were educational, 21.1% testimonials, and 13.2% advertisements. Inter-rater reliability, assessed with kappa statistics, was acceptable. Fifteen percent of YouTube videos and eighteen percent of Facebook videos were graded as containing misinformation or misleading information. Audio and video quality scores were similar between categories. Educational videos had higher accuracy and comprehensiveness scores. Seventy-four percent of YouTube videos and 66% of Facebook videos addressed what glaucoma is, 65% of YouTube videos and 47% of Facebook videos discussed the course of untreated disease, 64% of YouTube videos and 34% of Facebook videos discussed the goals of treatment, and only 17% of YouTube videos and 0% of Facebook videos discussed the risks of the proposed treatment options.
    Conclusion: Patients are increasingly using YouTube and Facebook for medical information. This study found that many videos lack useful information and some provide information that may be detrimental. Physicians should be aware of this risk and educate patients appropriately.
    Keywords:  Facebook; YouTube; glaucoma treatment; patient education
    DOI:  https://doi.org/10.2147/OPTH.S575279
  21. Digit Health. 2026 Jan-Dec;12: 20552076261431461
       Background: Glaucoma is a leading cause of irreversible blindness, with a rising global prevalence, driving an increased public search for health information online. Online video platforms, such as Douyin and Bilibili, have become key channels for health communication. However, the quality and reliability of glaucoma-related content on these platforms remain unclear, raising concerns regarding potential misinformation.
    Methods: On 22 October 2025, the top 100 Chinese-language videos about glaucoma on Douyin and Bilibili were systematically collected. Video metadata and engagement metrics were recorded. Quality and reliability were assessed using the Global Quality Score (GQS), the modified DISCERN instrument (mDISCERN), the Journal of the American Medical Association (JAMA) benchmark criteria, and the Patient Education Materials Assessment Tool for Audiovisual Content (PEMAT-A/V). Statistical analysis, including Spearman correlation, was used to examine the relationships between video variables and quality scores.
    Results: Douyin videos exhibited significantly higher user interaction (likes, comments, shares, and saves) than Bilibili videos. However, Bilibili videos demonstrated significantly higher median scores for GQS and PEMAT actionability. Videos from professional sources, particularly institutions, and those focusing on disease prevention or using expert monologues/visual aids consistently showed superior quality and reliability across all assessment tools. Spearman correlation revealed that longer video duration was positively correlated with higher GQS, mDISCERN, and PEMAT understandability scores, whereas the number of comments was negatively correlated with these scores.
    Conclusions: The overall quality and reliability of glaucoma-related online videos from Douyin and Bilibili were suboptimal. Content from nonprofessional sources was problematic. These findings highlight the need for public vigilance when consuming health information on such platforms, and underscore the importance of encouraging greater involvement from healthcare professionals in creating accurate, high-quality educational content.
    Keywords:  Bilibili; Douyin; GQS; Glaucoma; JAMA benchmark criteria; PEMAT-A/V; modified DISCERN score
    DOI:  https://doi.org/10.1177/20552076261431461
  22. Medicine (Baltimore). 2026 Mar 13. 105(11): e48063
      Uterine fibroids, the most common monoclonal benign tumors of the uterine smooth muscle, show an increasing incidence with age. Social media platforms such as TikTok, YouTube, and Bilibili are increasingly important channels for disseminating health information, yet the quality and reliability of uterine fibroid-related content on these platforms are often unsatisfactory. This study systematically evaluated the quality and content characteristics of uterine fibroid-related videos on 3 major short-video platforms - TikTok, YouTube, and Bilibili - using validated assessment tools. A total of 300 videos (100 per platform) uploaded between 2020 and 2025 were included. Video quality was assessed using the Journal of the American Medical Association (JAMA) benchmark criteria, the modified DISCERN instrument, and the Global Quality Score. Content features, uploader identity, presentation format, and engagement metrics were also analyzed. Statistical analyses included nonparametric tests and Spearman correlation. Bilibili consistently outperformed TikTok and YouTube on all quality metrics (JAMA benchmark criteria, DISCERN, Global Quality Score), though overall video quality across platforms was moderate. Video duration was positively correlated with quality scores (ρ ≈ 0.33 for DISCERN). No significant associations were found between engagement metrics (likes/comments) and professional quality ratings. Key content features such as animated demonstrations, source attribution, and inclusion of recent research advances were significantly associated with higher quality. Overall, content across all platforms exhibits deficiencies, although uterine fibroid videos on Bilibili show relatively fewer shortcomings, and notable quality disparities persist among platforms. Content creators should prioritize videos of 2 to 10 minutes with evidence-based features to improve reliability.
Viewers are advised to focus on content depth and source credibility rather than superficial engagement metrics. Enhanced platform regulation and public awareness are urgently needed.
    Keywords:  health education; medical information; social media; uterine fibroids; video quality
    DOI:  https://doi.org/10.1097/MD.0000000000048063
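Several entries above (e.g., 20, 21, and 22) test whether engagement metrics track professional quality ratings using Spearman rank correlation. For readers who want to reproduce that kind of check on their own data, here is a minimal, dependency-free Python sketch; the per-video likes and DISCERN scores below are invented illustration data, not figures from any of the studies.

```python
def _ranks(values):
    """Average 1-based ranks, with tied values sharing the mean rank."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        # extend j over a run of tied values
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # mean of 1-based positions i+1 .. j+1
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def spearman_rho(x, y):
    """Spearman's rho = Pearson correlation of the two rank vectors."""
    rx, ry = _ranks(x), _ranks(y)
    n = len(rx)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    vx = sum((a - mx) ** 2 for a in rx)
    vy = sum((b - my) ** 2 for b in ry)
    return cov / (vx * vy) ** 0.5

# Hypothetical per-video data: likes vs. DISCERN quality score
likes = [120, 4500, 90, 30000, 250, 780]
discern = [3.0, 2.0, 4.0, 1.5, 3.5, 2.5]
rho = spearman_rho(likes, discern)  # ≈ -0.94: a "high popularity-low quality" toy pattern
```

With no ties this matches the classic formula ρ = 1 - 6Σd²/(n(n² - 1)); the averaged ranks handle ties the same way as scipy.stats.spearmanr, so results can be cross-checked against that library if it is available.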
  23. BMC Public Health. 2026 Mar 09.
      
    Keywords:  Bilibili; HPV; Information Quality; Patient Education; Public Education; Social Media; TikTok
    DOI:  https://doi.org/10.1186/s12889-026-26915-2
  24. Digit Health. 2026 Jan-Dec;12: 20552076261430231
       Objective: To evaluate the informational quality and user engagement of acne-related videos on Bilibili and TikTok, and examine associations with uploader characteristics, disease-related topics, presentation formats, and factors linked to high-quality content.
    Methods: A cross-sectional analysis was conducted on 272 videos (122 from Bilibili, 150 from TikTok) retrieved in May 2025. Video characteristics, uploader types, disease-related topics, and presentation formats were recorded. Quality was assessed using the Journal of the American Medical Association benchmark criteria (JAMA), modified DISCERN instrument (mDISCERN), Global Quality Scale (GQS), and Video Information and Quality Index (VIQI). Engagement metrics (likes, comments, collections, shares) were analyzed. Correlation and predictive modeling were applied to examine associations between quality and engagement.
    Results: Bilibili videos were longer (median 409 s vs 51 s; P < 0.001) and scored higher on VIQI (12.05 ± 2.94 vs 10.90 ± 1.97; P = 0.002). TikTok videos were more often uploaded by verified (68.10% vs 33.33%) and professional accounts (65.52% vs 25.64%), achieved higher JAMA (1.28 ± 0.45 vs 0.92 ± 0.98; P < 0.001) and mDISCERN scores (2.10 ± 0.52 vs 2.02 ± 0.93; P = 0.009), and demonstrated higher daily engagement. High-quality content was primarily produced by verified and professional uploaders, particularly in anatomy/physiology topics and doctor monologues. Official media, epidemiology, and television programs/documentaries achieved the greatest engagement. VIQI and GQS were strongly correlated (ρ = 0.755). VIQI (area under the curve [AUC] 0.922) and collections (AUC 0.901) were the strongest discriminators of high-quality content.
    Conclusions: Acne-related videos on Bilibili and TikTok were generally of suboptimal quality. Bilibili favored coherence and accuracy, while TikTok favored transparency and engagement. Quality assessments outperformed engagement metrics in identifying high-quality content. These findings highlight the need to improve credentialing and promote engaging, evidence-based formats to enhance the reliability and impact of dermatologic information on short-video platforms.
    Keywords:  Acne vulgaris; predictive validity; short-video platforms; user engagement; video quality
    DOI:  https://doi.org/10.1177/20552076261430231
  25. Front Public Health. 2026 ;14 1764220
       Background: Health science popularization short videos have become one of the main sources of acquiring disease-related information. However, the quality of such videos on popular short video platforms varies considerably. This study aims to evaluate the quality of the health science popularization short videos about cerebrovascular diseases on two popular short video platforms (TikTok and Kuaishou) in China.
    Methods: Using a Python web crawler, short videos related to cerebrovascular diseases posted between December 10, 2023 and December 10, 2024 were collected from TikTok and Kuaishou in China. Ultimately, 915 valid videos were included. Two clinical experts independently evaluated the quality of the included videos using the GQS, mDISCERN, and PEMAT-A/V. The median (IQR) was used to describe the features of the short videos, and the Kruskal-Wallis test was used to evaluate differences between groups. Correlation analysis and a Random Forest regression model were applied to investigate the relationship between video features and quality scores.
    Results: Health science popularization short videos related to cerebrovascular diseases on the TikTok platform received significantly more likes, favorites, comments, and shares (p < 0.001). The videos on TikTok had a median mDISCERN score of 3, a median GQS of 3, a median Understandability score of 65.38%, and a median Actionability score of 50%, all significantly higher than those on Kuaishou. There were significant positive correlations between video duration and the mDISCERN score (r = 0.219, p < 0.001), GQS (r = 0.495, p < 0.001), Understandability score (r = 0.282, p < 0.001), and Actionability score (r = 0.361, p < 0.001). The four Random Forest regression models for video quality scores demonstrated favorable fitting performance, with R² values ranging from 0.862 to 0.903.
    Conclusion: Health science popularization short videos related to cerebrovascular diseases on TikTok and Kuaishou showed moderate quality, and the videos on TikTok were of better quality than those on Kuaishou. Video duration was a key determinant of video quality.
    Keywords:  Kuaishou; TikTok; cerebrovascular diseases; health science popularization; short videos
    DOI:  https://doi.org/10.3389/fpubh.2026.1764220
  26. Front Public Health. 2026 ;14 1709429
       Background: Social media platforms like TikTok significantly influence health behaviors, yet the quality of scar management content remains under-evaluated. This study analyzes the quality, reliability, and actionability of scar management information on TikTok and examines the relationship between content quality and user engagement.
    Methods: A cross-sectional analysis of the 100 most-liked scar management videos was conducted. Two independent raters evaluated videos using mDISCERN, JAMA benchmark criteria, PEMAT-A/V, and the Global Quality Score (GQS). Creators were categorized as healthcare professionals (HCPs), content creators, or general users.
    Results: Healthcare professionals produced higher-quality content (GQS: 3.45 vs. 2.15 for content creators; p < 0.001) with significantly better reliability and actionability. However, an "engagement paradox" was observed: lower-quality videos from non-professionals garnered significantly higher engagement (likes, shares) than evidence-based professional content. Misinformation was present in 46.2% of content creator videos.
    Conclusion: A structural disconnect exists on TikTok where accurate medical advice is overshadowed by algorithmically favored, visually stimulating, but often misleading content. Addressing this public health risk requires platform-level algorithmic adjustments and enhanced digital strategies from medical professionals to compete in the attention economy.
    Keywords:  TikTok; health information quality; media; scar management; social media
    DOI:  https://doi.org/10.3389/fpubh.2026.1709429
  27. Orthop J Sports Med. 2026 Mar;14(3): 23259671261419114
     Background: Shoulder pain and injury are among the most prevalent musculoskeletal presentations in primary care. With the rise of consumer health information on TikTok, it is important to determine whether the information produced by content creators can supplement shoulder rehabilitation and injury prevention.
    Hypothesis: It was hypothesized that content creators with professional degrees and extensive knowledge within the realm of shoulder injuries would yield valuable and accurate health information.
    Study Design: Cross-sectional study.
    Methods: On June 18, 2025, #ShoulderInjury was used as the search term in the TikTok search engine. A total of 9286 videos appeared in the initial search. The authors applied an inclusion criterion of at least 100 likes. Exclusion criteria removed irrelevant, non-English, and duplicate videos, leaving 209 eligible videos for further analysis. These were evaluated using the DISCERN questionnaire, an instrument for assessing consumer health information on a 1 to 5 scale. Two independent raters scored the videos, and interrater reliability was calculated using weighted Cohen kappa.
    Results: The 209 analyzed videos garnered 1,408,268 likes and 12,536 comments, with a mean DISCERN score of 2.61. Physicians' videos (n = 41) had the highest mean score (3.52), significantly outperforming nonprofessionals (2.18), physical therapists (2.87), and other professionals (2.79) in critical DISCERN areas (P < .001). Educational content yielded the highest mean score (3.29), whereas personal story videos had the lowest (1.89). Weighted Cohen kappa showed very good agreement for physician videos (κ = 0.82), moderate for physical therapists (κ = 0.59), good for nonprofessionals (κ = 0.79), and fair for other professionals (κ = 0.40).
    Conclusion: This study highlights the potential of TikTok as an effective educational tool when used by qualified professionals. Professionally produced content consistently scored higher on the DISCERN scale. Although the findings are promising, it is important to note limitations, like potential biases in DISCERN scoring due to nonblinded raters, the influence of TikTok's algorithm, and the exclusion of videos with <100 likes. Future research should explore social media's role in medical education and assess how to optimize content delivery and engagement.
    Keywords:  physical therapy; rotator cuff; shoulder; shoulder instability
    DOI:  https://doi.org/10.1177/23259671261419114
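Entry 27 reports interrater agreement as weighted Cohen kappa on 1-to-5 DISCERN ratings. A small self-contained sketch of the computation follows; the ratings are hypothetical illustration data, and linear weighting is one common choice (the abstract does not state which weighting scheme the authors used).

```python
def weighted_kappa(r1, r2, categories=5):
    """Linear-weighted Cohen's kappa for two raters on a 1..categories scale."""
    n = len(r1)
    # observed joint distribution of (rater1, rater2) ratings
    obs = [[0.0] * categories for _ in range(categories)]
    for a, b in zip(r1, r2):
        obs[a - 1][b - 1] += 1 / n
    p1 = [sum(row) for row in obs]  # rater-1 marginals
    p2 = [sum(obs[i][j] for i in range(categories)) for j in range(categories)]
    # linear disagreement weights: w_ij = |i - j| / (k - 1)
    w = [[abs(i - j) / (categories - 1) for j in range(categories)]
         for i in range(categories)]
    d_obs = sum(w[i][j] * obs[i][j]
                for i in range(categories) for j in range(categories))
    d_exp = sum(w[i][j] * p1[i] * p2[j]
                for i in range(categories) for j in range(categories))
    return 1 - d_obs / d_exp

# Hypothetical DISCERN ratings from two reviewers for eight videos
rater1 = [3, 2, 4, 5, 1, 3, 2, 4]
rater2 = [3, 2, 5, 5, 2, 3, 2, 3]
kappa = weighted_kappa(rater1, rater2)  # ≈ 0.72 for this toy data
```

On the Altman-style interpretation used in entry 27, a value around 0.72 would fall in the "good agreement" band (0.61-0.80).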
  28. J Thorac Dis. 2026 Feb 28. 18(2): 138
       Background: The widespread use of chest computed tomography (CT) has substantially increased the detection of ground-glass nodules (GGNs). This often causes significant patient anxiety. While most GGNs are slow-growing, misinformation or incomplete guidance on social media can worsen "scan anxiety". This may lead to demands for unnecessary overtreatment or result in poor adherence to surveillance protocols. This study evaluated the content, quality, and reliability of GGN-related short videos on TikTok and Bilibili to determine their utility for patient education.
    Methods: We searched both platforms using the keyword "ground-glass nodules" ("GGNs") between September 30 and October 8, 2025. We analyzed the top 130 videos per platform. We classified uploaders as professionals (surgeons, radiologists, internists including traditional Chinese medicine physicians) or patients. Content was coded for etiology, imaging, diagnosis, treatment, and follow-up. Video quality and reliability were assessed using the Global Quality Score (GQS, 1-5) and modified DISCERN (mDISCERN). Two physicians rated all videos independently, with adjudication by a senior clinician. Nonparametric tests and Spearman correlations were applied (two-sided P<0.05).
    Results: A total of 237 videos were included (TikTok, n=125; Bilibili, n=112). Content analysis revealed significant information gaps: while 92.83% of videos discussed treatment options (often emphasizing surgery), only 16.88% explained GGN etiology, and systematic guidance on risk stratification was frequently lacking. Professionally produced videos (surgeons/radiologists) scored significantly higher than patient-generated content. Although Bilibili had higher median GQS scores (3.00 vs. 2.00, P<0.001) than TikTok, the overall reliability (mDISCERN) across both platforms was modest, with no significant difference. Engagement metrics (likes/shares) did not correlate with medical quality.
    Conclusions: Current short-video algorithms prioritize engagement over clinical accuracy, resulting in fragmented health information that may distort patients' risk perception of GGNs. While professionals produce higher-quality content, the overall reliability remains suboptimal. Clinicians must be aware of these online information deficits to proactively address patient anxiety and correct misconceptions during consultations, ensuring adherence to evidence-based surveillance pathways.
    Keywords:  Ground-glass nodules (GGNs); quality assessment; short videos; social media
    DOI:  https://doi.org/10.21037/jtd-2025-aw-2252
  29. Front Public Health. 2026 ;14 1750738
       Objective: This study aims to evaluate the quality, reliability, and content characteristics of semaglutide-related short videos on three major Chinese short-video platforms (Bilibili, TikTok, and Rednote), with a focus on exploring differences in the presentation of side effects across different platforms and uploader types. Rather than examining specific clinical indications or dosing regimens, this study focuses on how semaglutide-related information is presented to the general public in real-world short-video contexts. It seeks to clarify the impact of platform ecology and creator identity on drug information dissemination, thereby providing a basis for optimizing the quality of online health information and conducting targeted public health communication.
    Methods: A cross-sectional content analysis was adopted. Semaglutide-related videos were retrieved from three major Chinese short-video platforms using the Chinese keyword "司美格鲁肽" (the standard Mandarin term for "semaglutide"). This exact search term was used to simulate real-world user behavior and ensure reproducibility across platforms. Because most videos did not specify approved indications (type 2 diabetes vs. obesity) or dosage levels, analyses were conducted without stratification by clinical dose or indication. After rigorous screening, eligible videos were assessed for quality and reliability using the modified DISCERN scale and Global Quality Score (GQS). Video content was categorized into five thematic dimensions, and mentions of nine predefined side effects were systematically recorded. A dual-perspective analytical approach, comparing the full sample and a side effect-focused subsample, was employed to examine platform-based differences in adverse event portrayal.
    Results: A total of 607 semaglutide-related videos were included (Bilibili: 309, TikTok: 153, Rednote: 145). There were significant differences in content quality (mDISCERN and GQS scores), video duration, and user engagement metrics across platforms (p < 0.05). Content distribution also differed significantly among platforms (p < 0.05): Bilibili focused on Policy News and Events, Personal Experience Sharing, and Commercial Promotion; Rednote centered on Medication Education and Medical Advice/Medication Guidance; TikTok mainly focused on Policy News and Events. Regarding side effects, the distribution of gastrointestinal and hepatobiliary side effects differed significantly among the three platforms (p < 0.05), with TikTok having the highest mention rate. These side effect narratives were typically presented without reference to specific dosing levels or approved indications. The dual-perspective analysis further revealed systematic differences in side effect portrayal, with TikTok giving greater visibility to adverse-event narratives. In terms of content quality, the three platforms showed a gradient of Rednote > TikTok > Bilibili (p < 0.001).
    Conclusion: The quality and reliability of semaglutide-related short videos are significantly influenced by platform ecology. Publicly accessible drug information on short-video platforms often lacks clear differentiation between approved indications and dosage regimens, contributing to generalized interpretations of drug risks. The drug information accessible to the public exhibits structural biases, with side effect narratives demonstrating a clear "platformization" pattern. The dual-perspective analysis revealed that inter-platform differences in the portrayal of specific adverse events (e.g., gastrointestinal and hepatobiliary effects) were more discernible in the side effect-focused subsample. Future interventions should be tailored to platform-specific features to enhance the completeness and scientific rigor of online pharmaceutical information.
    Keywords:  content characteristics; creator identity; information quality; semaglutide; short-video platforms; side effects
    DOI:  https://doi.org/10.3389/fpubh.2026.1750738
  30. Int J Dermatol. 2026 Mar 09.
       BACKGROUND: Vitiligo can cause substantial psychosocial distress, including stigma, depression, and social exclusion. Cultural factors shape disease perception and information needs. Online search queries provide real-time indicators of unmet public interests. We investigated language- and culture-associated differences in vitiligo-related online information seeking in Germany to inform culturally competent dermatologic care.
    METHODS: We conducted a retrospective observational study using anonymized data from the Google Ads Keyword Planner (October 2019-May 2023). Search terms related to vitiligo were analyzed in six languages (German, Turkish, Arabic, English, Russian, Polish), representing the most commonly spoken languages in Germany. Keywords were thematically categorized and comparatively analyzed across languages.
    RESULTS: A total of 7.8 million vitiligo-related search queries were identified, with the majority in German. "General information" was the most frequently searched category, except in Arabic, where "treatment options" ranked highest. Treatment queries in Turkish and Arabic more frequently mentioned alternative therapies and home remedies than specific evidence-based treatments. Notable cross-language differences emerged in searches for camouflage, depigmentation, faith-related coping, and psychosocial burden.
    CONCLUSIONS: Online search behavior reveals both shared and language-specific interests about vitiligo. Observed language patterns are hypothesis-generating and may reflect differences in information access, healthcare navigation, and language barriers rather than inherent cultural preferences. Language-specific patterns highlight the need for culturally sensitive communication, multilingual patient education, and inclusive care models in dermatology to promote health equity.
    Keywords:  Germany; cultural health disparities; digital epidemiology; health information needs; multilingual; online search behavior; vitiligo
    DOI:  https://doi.org/10.1111/ijd.70385