bims-librar: Biomed News on Biomedical librarianship
Issue of 2025-06-29
Eighteen papers selected by Thomas Krichel, Open Library Society



  1. Healthcare (Basel). 2025 Jun 10. 13(12): 1385. [Epub ahead of print]
       BACKGROUND: Mistrust in professional health information may undermine population health by reducing engagement in preventive care and contributing to poorer health outcomes. Although sleep quality is a sensitive indicator of both psychosocial stress and health behavior, little is known about how mistrust influences sleep at the population level, and whether preventive health behavior mediates this relationship.
    METHODS: A weighted cross-sectional analysis of a representative adult sample (n = 2090) from South Tyrol, Italy was conducted. Survey data included mistrust toward professional health information (Mistrust Index), five preventive health behaviors (Health Behavior Checklist, HBC), and sleep quality (Brief Pittsburgh Sleep Quality Index, B-PSQI). Associations between mistrust, behavior, and sleep were examined using multivariable linear regression, robust regression (Huber's M-estimator), and nonparametric correlation.
    RESULTS: Sociodemographic characteristics were not significantly associated with mistrust when weighted data were applied. Higher mistrust was associated with poorer sleep quality (β = 0.09, p = 0.003). Preventive health behaviors varied significantly across mistrust levels, with high-mistrust individuals less likely to report regular engagement (all p < 0.01). Regression analyses confirmed that mistrust was independently associated with poorer sleep quality, while preventive behaviors showed no significant relationship with sleep.
    CONCLUSIONS: Mistrust in professional health information is independently associated with poorer sleep quality and lower engagement in preventive behaviors. However, preventive behavior does not appear to mediate this relationship. These findings highlight mistrust as a direct and potentially modifiable risk factor for sleep disturbance at the population level.
    Keywords:  health information; mistrust; preventive health behavior; public health; sleep quality
    DOI:  https://doi.org/10.3390/healthcare13121385
  2. JBI Evid Synth. 2025 Jun 27.
       OBJECTIVE: The purpose of this scoping review was to identify validated geographic search filters and report on their development and performance measures.
    INTRODUCTION: The number of scientific publications has considerably increased. Measures to limit the search and screening efforts can be helpful to increase the efficiency of preparing systematic reviews. Search filters are useful tools for reviewers to identify reports with a common characteristic in bibliographic databases. Geographic search filters limit literature search results to a specific geographic characteristic (eg, a country or region). Searching the literature using geographic filters can be useful to find evidence about health care practices in distinct geographic regions; provide an overview of cultural, epidemiological, or health economics aspects; or to indicate inequalities in health care in a certain region. Our aim was to identify validated geographic search filters and report on their development and performance measures.
    INCLUSION CRITERIA: We included reports on validated geographic search filters aiming to identify research from or about defined geographic features (eg, countries/regions or groups of them) with no restriction regarding the time frame and language of publication.
    METHODS: This review was conducted in accordance with JBI methodology for scoping reviews and its methods were pre-specified in an a priori protocol. We searched PubMed, Embase (Elsevier), The InterTASC Information Specialists' Sub-Group (ISSG) Search Filter resource, and Google Scholar. The study selection process was independently conducted by 2 reviewers, encompassing both abstract and full-text screening. The data extraction included basic characteristics of the geographic search filter (eg, country/region, database), methods used to develop and validate the search filters, and their performance measures. The extracted data are tabulated and summarized narratively.
    RESULTS: Our literature search yielded 907 hits. We included 9 reports that addressed 6 search filters for a broad range of geographic regions, including Spain, the African continent, the United Kingdom, the United States, OECD countries as a group, as well as publications in high-ranking nursing journals from countries where German is spoken. The methods used for developing geographic search filters were heterogeneous. Gold standard sets were created by database searching (n = 3; 50%) and relative recall (n = 3; 50%). Only 3 filters were created using objective methods and 2 underwent internal validation. The sensitivity of the search filters ranged from 73% to nearly 100%.
    CONCLUSION: The findings show that validated geographic search filters are not widely available. The identified search filters may serve as methodological outlines for the development of search filters for other countries or geographic regions. The calculation methods for specificity were different, which made a comparison difficult. Further efforts to standardize the methods for developing and validating these filters, as well as reporting, are important to increase their reliability and comparability.
    REVIEW REGISTRATION: Open Science Framework osf.io/5czhs.
    Keywords:  database searching; evidence synthesis; geographic search filter; information retrieval
    DOI:  https://doi.org/10.11124/JBIES-24-00395
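The sensitivity figures reported in this review are simple set arithmetic over a gold-standard set of known relevant records; a minimal sketch in Python (the record IDs are hypothetical illustrations, not from the review):

```python
# Hypothetical record IDs: a gold-standard set of known relevant records
# and the set retrieved by a candidate geographic search filter.
gold_standard = {"pmid1", "pmid2", "pmid3", "pmid4"}
filter_hits = {"pmid1", "pmid2", "pmid3", "pmid9"}

true_positives = gold_standard & filter_hits
sensitivity = len(true_positives) / len(gold_standard)  # share of relevant records retrieved
precision = len(true_positives) / len(filter_hits)      # share of retrieved records that are relevant

print(f"sensitivity={sensitivity:.0%} precision={precision:.0%}")  # → sensitivity=75% precision=75%
```

Specificity, by contrast, depends on how the set of irrelevant records is delimited, which is one reason the review found the filters' specificity calculations hard to compare.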
  3. JMIR Res Protoc. 2025 Jun 27. 14: e67910
       BACKGROUND: Miscommunication in health care is a major source of poor health outcomes, complaints about health care professionals, and poor patient satisfaction. Recordings from real-life consultations provide valuable data for communication research and education. Additionally, recordings from simulation-based education of health care students can provide valuable data for health care education research.
    OBJECTIVE: The Digital Library is a data repository supporting high-quality health care communication research. This is the single-source citation for all projects that use the Digital Library in Australia.
    METHODS: This protocol outlines the logistics and consent process for recording and safely storing the recordings of health care consultations and simulation-based education. The processes are outlined for primary health care settings and health care educational settings as well as for health care narratives from consumers. The repository will be used to answer research questions about health care communication and provide a valuable resource for health care education.
    RESULTS: Data collection for the Digital Library commenced in 2023 and is ongoing at the time of submission of this protocol. The Digital Library has been approved by Monash University's Human Research Ethics Committee.
    CONCLUSIONS: The Digital Library will provide a national resource for the study of health care communication in community settings, general practice, and other environments. The health care narratives may be a valuable resource for sharing the patient perspective when living with different conditions. The research that uses this repository will be shared through regular academic channels as well as the community-based dissemination strategies of the National Centre for Healthy Ageing.
    INTERNATIONAL REGISTERED REPORT IDENTIFIER (IRRID): DERR1-10.2196/67910.
    Keywords:  community health care; health care communication; health care education; medical education; patient-physician relationship
    DOI:  https://doi.org/10.2196/67910
  4. J Open Source Softw. 2025 Apr 08. 10(108): 5336
    While different text mining approaches - including the use of Artificial Intelligence (AI) and other machine-based methods - continue to expand at a rapid pace, the tools used by researchers to create the labeled datasets required for training, modeling, and evaluation remain rudimentary. Labeled datasets contain the target attributes the machine is going to learn; for example, training an algorithm to distinguish between images of cars and trucks would generally require a set of images with a quantitative description of the underlying features of each vehicle type. Development of labeled textual data that can be used to build natural language machine learning models for scientific literature is not currently integrated into the existing manual workflows used by domain experts. Published literature is rich with important information, such as different types of embedded text, plots, and tables, all of which can be used as inputs to train ML/natural language processing (NLP) models when extracted and prepared in machine-readable formats. Currently, both normalized data extraction of use to domain experts and extraction to support development of ML/NLP models are labor-intensive and cumbersome manual processes. Automatic extraction is further complicated because formats such as PDFs are optimized for layout and human readability, not machine readability. The PDF (Portable Document Format) Entity Annotation Tool (PEAT) was developed with the goal of allowing users to annotate publications within their current print format, while also allowing those annotations to be captured in a machine-readable format. One of the main issues with traditional annotation tools is that they require transforming the PDF into plain text to facilitate the annotation process. While doing so lessens the technical challenges of annotating data, the user loses all structure and provenance that was inherent in the underlying PDF.
Textual data extraction from PDFs can also be an error-prone process; challenges include identifying sequential blocks of text and handling a multitude of document formats (multiple columns, font encodings, etc.). As a result of these challenges, using existing tools for development of NLP/ML models directly from PDFs is difficult because the generated outputs are not interoperable. We created a system that allows annotations to be completed on the original PDF document structure, with no plain text extraction. The result is an application that allows for easier and more accurate annotations. In addition, by including a feature that lets the user easily create a schema, we have developed a system that can be used to annotate text for different domain-centric schemas of relevance to subject matter experts. Different knowledge domains require distinct schemas and annotation tags to support machine learning.
    DOI:  https://doi.org/10.21105/joss.05336
  5. Dent J (Basel). 2025 Jun 18. 13(6): 271. [Epub ahead of print]
    Background/Objectives: Large Language Models (LLMs) are artificial intelligence (AI) systems with the capacity to process vast amounts of text and generate human-like language, offering the potential for improved information retrieval in healthcare. This study aimed to assess and compare the evidence-based potential of answers provided by four LLMs to common clinical questions concerning the management and treatment of periodontal furcation defects. Methods: Four LLMs (ChatGPT 4.0, Google Gemini, Google Gemini Advanced, and Microsoft Copilot) were used to answer ten clinical questions related to periodontal furcation defects. The LLM-generated responses were compared against a "gold standard" derived from the European Federation of Periodontology (EFP) S3 guidelines and recent systematic reviews. Two board-certified periodontists independently evaluated the answers for comprehensiveness, scientific accuracy, clarity, and relevance using a predefined rubric and a scoring system of 0-10. Results: The study found variability in LLM performance across the evaluation criteria. Google Gemini Advanced generally achieved the highest average scores, particularly in comprehensiveness and clarity, while Google Gemini and Microsoft Copilot tended to score lower, especially in relevance. However, the Kruskal-Wallis test revealed no statistically significant differences in the overall average scores among the LLMs. Evaluator agreement and intra-evaluator reliability were high. Conclusions: While LLMs demonstrate the potential to answer clinical questions related to furcation defect management, their performance varies, showing differing degrees of comprehensiveness, scientific accuracy, clarity, and relevance. Dental professionals should be aware of LLMs' capabilities and limitations when seeking clinical information.
    Keywords:  ChatGPT; Google Gemini; Microsoft Copilot; artificial intelligence; furcation; periodontics
    DOI:  https://doi.org/10.3390/dj13060271
  6. Cult Health Sex. 2025 Jun 23. 1-13
    Recent assessments of ChatGPT in relation to a variety of pregnancy-related questions have shown mixed results. Rapidly evolving rules and regulations in the USA have led to a confusing abortion landscape, making up-to-date and evidence-based abortion information essential to those considering an abortion. The purpose of this study was to evaluate ChatGPT as a source of information for commonly asked medication and procedural abortion questions by performing a qualitative analysis. We queried ChatGPT-3.5 on ten fact-based abortion questions and ten clinical scenario abortion questions. Query responses were graded by three complex family planning physicians as 'acceptable' or 'unacceptable' and 'complete' or 'incomplete'. The responses were then compared to evidence-based research published by the American College of Obstetricians and Gynaecologists (ACOG) and the Society of Family Planning (SFP), PubMed-indexed evidence, and physician clinical experience. In our assessment, a grade of acceptable was given to 65% of responses; however, a grade of complete was given to only 8% of responses. In general, responses to fact-based questions were more accurate than those to clinical scenario questions. Our analysis suggested that ChatGPT can regurgitate facts found online, but it still lacks the ability to provide understanding and context in clinical scenarios that clinicians are better equipped to navigate.
    Keywords:  Abortion; ChatGPT; abortion policy; artificial intelligence; USA
    DOI:  https://doi.org/10.1080/13691058.2025.2517289
  7. Yonsei Med J. 2025 Jul;66(7): 405-411
       PURPOSE: Large language models (LLMs) have shown potential in medicine, transforming patient education, clinical decision support, and medical research. However, the effectiveness of LLMs in providing accurate medical information, particularly in non-English languages, remains underexplored. This study aimed to compare the quality of responses generated by ChatGPT and Naver's CLOVA X to cancer-related questions posed in Korean.
    MATERIALS AND METHODS: The study involved selecting cancer-related questions from the National Cancer Institute and Korean National Cancer Information Center websites. Responses were generated using ChatGPT and CLOVA X, and three oncologists assessed their quality using the Global Quality Score (GQS). The readability of the responses generated by ChatGPT and CLOVA X was calculated using KReaD, an artificial intelligence-based tool designed to objectively assess the complexity of Korean texts and reader comprehension.
    RESULTS: The Wilcoxon test for the GQS scores of answers from ChatGPT and CLOVA X showed no statistically significant difference in quality between the two LLMs (p>0.05). The chi-square statistic for the variables "Good rating" and "Poor rating" showed no significant difference in the quality of responses between the two LLMs (p>0.05). KReaD scores were higher for CLOVA X than for ChatGPT (p=0.036). The categorical data analysis for the variables "Easy to read" and "Hard to read" revealed no significant difference (p>0.05).
    CONCLUSION: Both ChatGPT and CLOVA X answered Korean-language cancer-related questions with no significant difference in overall quality.
    Keywords:  Korean language; Large language model; cancer; patients
    DOI:  https://doi.org/10.3349/ymj.2024.0200
  8. J Cutan Med Surg. 2025 Jun 24. 12034754251347626
      
    Keywords:  health literacy; online resources; patient education; readability; skin of colour; urticaria
    DOI:  https://doi.org/10.1177/12034754251347626
  9. Trauma Surg Acute Care Open. 2025; 10(2): e001665
       Background: Firearm-related injuries are a preventable public health epidemic and the leading cause of pediatric death in America. Online injury prevention resources (OIPRs) offer potential for educating the public on firearm safety. National public health organizations recommend a sixth-grade reading level for these resources. We hypothesize that OIPRs for firearm safety may not meet this standard and are inconsistent in content.
    Methods: We analyzed firearm injury OIPRs from three sources: verified trauma centers (TCs), national health organizations, and gun violence prevention advocacy groups. We assessed readability using reading time, Flesch-Kincaid grade level, and Flesch reading ease. We also assessed whether OIPRs included child safety, safe handling, and safe storage of firearms.
    Results: Among 587 TCs, 105 had publicly accessible OIPRs. After removing duplicates, we analyzed 53 unique hospital OIPRs, 25 from national organizations, and 8 from advocacy groups. The mean reading time of hospital-based OIPRs was 2 min and 49 s, compared with 5 min and 30 s for advocacy organizations. The average Flesch-Kincaid Grade Level was 8.2 for hospital OIPRs, 8.4 for national organizations, and 9.7 for advocacy groups. Only 21% of hospital and 22% of national OIPRs met the sixth-grade level; none of the advocacy groups met this standard. Content related to child safety appeared in 79% of hospital-based OIPRs, compared with 44% of national organizations and none of the advocacy groups. Only 21% of TCs and no advocacy groups provided information on safe handling practices.
    Conclusion: Few OIPRs meet recommended readability guidelines and often fail to address key topics such as child safety or safe handling of firearms. This gap in accessible educational information highlights the need for standardized resources to reduce firearm injury. Future research should aim to improve these resources to ensure usability and effective outreach to our communities.
    Keywords:  Accident Prevention; Firearms; Health literacy; patient education
    DOI:  https://doi.org/10.1136/tsaco-2024-001665
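The readability metrics used in this study (and several others in this issue) are fixed formulas over word, sentence, and syllable counts; a minimal sketch of both, with hypothetical counts for illustration:

```python
def flesch_kincaid_grade(words, sentences, syllables):
    # Flesch-Kincaid Grade Level: 0.39*(words/sentence) + 11.8*(syllables/word) - 15.59
    return 0.39 * (words / sentences) + 11.8 * (syllables / words) - 15.59

def flesch_reading_ease(words, sentences, syllables):
    # Flesch Reading Ease: 206.835 - 1.015*(words/sentence) - 84.6*(syllables/word)
    # Higher scores mean easier text.
    return 206.835 - 1.015 * (words / sentences) - 84.6 * (syllables / words)

# Hypothetical counts for a short safety leaflet
grade = flesch_kincaid_grade(words=300, sentences=25, syllables=420)
ease = flesch_reading_ease(words=300, sentences=25, syllables=420)
print(round(grade, 1), round(ease, 1))  # → 5.6 76.2
```

A text scoring at grade 5.6 would meet the sixth-grade recommendation cited above; most OIPRs in the study scored at grade 8 or higher.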
  10. PLoS One. 2025; 20(6): e0325709
     BACKGROUND: Individuals with Ehlers-Danlos Syndromes (EDS) and Generalized Hypermobility Spectrum Disorder (G-HSD) experience musculoskeletal joint instability, cardiopulmonary manifestations, and functional limitations, and online exercise resources are commonly utilized by this population. This study characterizes and assesses the content, quality, and readability of websites addressing exercise training for individuals with EDS/G-HSD.
    METHODS: The first 350 English websites returned by Google searches for "Ehlers-Danlos Syndrome and exercise" and "Ehlers-Danlos Syndrome and physical activity" were screened, targeting educational/instructional sites on exercise training for adults with EDS/G-HSD. Content was assessed using scientific consensus criteria; quality using the Modified DISCERN, Global Quality Scale (GQS), and Patient Education Materials Assessment Tool (PEMAT); and readability using the Flesch-Kincaid Grade Level (FKGL) and Flesch Reading Ease Score (FRES).
    RESULTS: 78/350 unique websites were included, most from industry organizations (37%) and personal commentary (24%). The mean content score was moderate (13.8 ± 4.4 out of 25). The content most discussed included short- and long-term benefits of muscle strength, resistance training, and generalized exercise safety considerations. Median modified DISCERN and GQS scores were 4/5 (IQR 3-4) and 3/5 (IQR 2.3-4), respectively. Mean PEMAT understandability and actionability scores were 85% ± 12% and 69% ± 23%, respectively. Average FKGL was 11.0 ± 2.7 and FRES was 43.6 ± 7.2. Moderate-to-strong Spearman correlations were observed between total content scores and GQS (rho = 0.76) and DISCERN (rho = 0.52), p < 0.001 for both.
    CONCLUSION: Website content varied, most addressing general safety recommendations and multiple training modalities. While quality was moderate-to-good, future resources should focus on simplified language, actionable guidance, and visual aids. Incorporating practical examples of daily activities, injury prevention strategies, broader benefits like cardiovascular health, and psychological support can empower safe and confident exercise training.
    DOI:  https://doi.org/10.1371/journal.pone.0325709
  11. Birth Defects Res. 2025 Jul;117(7): e2500
       BACKGROUND: Pregnant and lactating individuals frequently rely on online sources for vaccine information. However, the readability, credibility, and accuracy of such content vary widely, potentially influencing vaccine hesitancy. This study evaluates the accessibility and reliability of online vaccine information across different digital platforms.
    METHODS: A cross-sectional content analysis was conducted on vaccine-related content published between 2018 and 2022. Data were collected from official health websites (e.g., WHO, CDC), social media (Twitter, Facebook), blogs, and parenting forums. Readability was assessed using the Flesch-Kincaid (FK) and SMOG indices, while credibility was evaluated using the DISCERN tool and HONcode certification. Accuracy was determined by comparing claims against scientific evidence from authoritative health organizations. Statistical analyses, including one-way ANOVA and chi-square tests, were performed to examine readability differences and misinformation prevalence across platforms.
    RESULTS: Official health websites had the highest readability complexity (average FK grade level: 11.8 ± 1.2), while social media content was the most accessible (average FK grade level: 7.8 ± 1.0). However, social media also exhibited the highest misinformation prevalence (38%), whereas official sources maintained near-perfect accuracy (98% compliance with scientific evidence). Blogs and forums demonstrated moderate readability (FK grade level: 9.5 ± 1.4 and 8.7 ± 1.1, respectively) but varied in credibility (DISCERN scores: 40-50/80). Thematic analysis revealed dominant misinformation trends, including fear-based narratives (52% of misinformation cases) and scientific distortions (29%). Accessibility barriers were also identified, with only 10% of sources providing multilingual content, and disparities in digital health resources were observed between high- and low-income regions.
    CONCLUSION: This study highlights the trade-off between readability and credibility in online vaccine information. While official sources provide reliable content, their complexity may hinder comprehension. Addressing accessibility gaps through plain-language communication and misinformation mitigation strategies is crucial for improving digital health literacy and supporting informed maternal vaccine decision-making.
    Keywords:  credibility evaluation; maternal health literacy; online vaccine information; readability assessment; vaccine hesitancy
    DOI:  https://doi.org/10.1002/bdr2.2500
  12. Mycopathologia. 2025 Jun 23. 190(4): 57
       BACKGROUND: Aspergillosis, a fungal disease caused by the genus Aspergillus, can lead to various clinical manifestations, especially in immunocompromised individuals. YouTube serves as a major source of health information, but the quality and reliability of its content vary. We evaluated the quality, reliability, and educational value of uploaded YouTube videos on aspergillosis.
    METHODS: On August 20th, 2024, YouTube videos on aspergillosis were selected based on the number of views and analyzed using the Global Quality Score (GQS), Journal of the American Medical Association (JAMA) Benchmark Criteria, and the Modified DISCERN Questionnaire. Videos were categorized by creator (Doctors, Medical tutors, Patients, Others). Descriptive statistics, the Mann-Whitney U test, and Spearman correlation were used to assess video quality and its association with video parameters.
    RESULTS: We included 50 videos, which generally exhibited high content quality with a median GQS of 4.00 (IQR = 0.50), good information quality with a JAMA median score of 3.00 (IQR = 0.75), and moderate reliability with mDISCERN median score of 3.40 (IQR = 1.00). Significant positive correlations were found between video duration and GQS (r = 0.592, p < 0.001), JAMA (r = 0.308, p = 0.031), and mDISCERN (r = 0.667, p < 0.001). High-quality videos had significantly longer durations, with a median of 12.97 min (IQR = 16.57) compared to low-to-medium-quality videos with 2.00 min (IQR = 5.57) (p < 0.001).
    CONCLUSION: There is a significant variability in the quality of videos on aspergillosis on YouTube. While longer videos tend to offer more reliable and comprehensive information, relying on popularity metrics alone may lead to misinformation. There is a need for critical evaluation of online information on this important fungal disease and the promotion of high-quality content to enhance public understanding and health outcomes.
    Keywords:  Aspergillosis; Quality; Reliability; YouTube
    DOI:  https://doi.org/10.1007/s11046-025-00967-1
  13. Int J Rheum Dis. 2025 Jun;28(6): e70341
       BACKGROUND: Systemic lupus erythematosus (SLE) is a complex autoimmune disease, and the quality of health information shared on social media is critical for patient education. The aim of this study was to evaluate the content and quality of SLE-related videos on TikTok and Bilibili platforms.
    METHODS: We retrieved the first 200 Chinese SLE-related videos sorted by default ranking on TikTok and Bilibili. Irrelevant, duplicated, very recent (< 7 days), or pre-2020 videos were excluded. Publisher type (e.g., Doctor, Personal User) was recorded. Video quality was independently assessed using DISCERN, Global Quality Scale (GQS), and JAMA benchmarks; content completeness across six dimensions (e.g., symptoms, management) was evaluated.
    RESULTS: We analyzed 265 videos, revealing that 76.3% of TikTok videos were by doctors (median duration 61 s), whereas 57.8% of Bilibili videos were physician-generated (median duration 297 s). Doctor-produced videos had significantly higher quality; TikTok's average DISCERN score was 35.22, lower than Bilibili's 40.09. Bilibili videos exhibited significantly higher DISCERN scores across the dimensions of clarity and reliability compared to TikTok (p < 0.05). However, GQS and JAMA scores were similar between the two platforms (p > 0.05). Engagement metrics such as likes and retweets correlated positively with DISCERN and GQS scores on TikTok, while video length showed a positive correlation with scores on both platforms.
    CONCLUSION: Bilibili videos provide clearer and more reliable SLE information than TikTok. Establishing better content standards and increasing collaboration with healthcare professionals are crucial to improving the quality of health information on social media.
    Keywords:  Bilibili; TikTok; social media; systemic lupus erythematosus; video
    DOI:  https://doi.org/10.1111/1756-185X.70341
  14. Contemp Clin Trials Commun. 2025 Aug; 46: 101505
       Background: Little is known regarding information seeking by participants in randomized clinical trials (RCTs) of behavioral interventions. The current study explored the prevalence of information seeking, whether information seeking varied by participant demographic characteristics, and whether information seeking affected participants' study knowledge or trial-related behavior.
    Methods: Adults who were currently or recently enrolled in a behavioral RCT completed an online survey. Respondents were asked retrospectively about their trial participation history, information seeking behavior before and after trial enrollment, and how any information found impacted their trial experience.
    Results: Respondents (N = 92) predominantly identified as women (70.7%) and White (62.0%), had an average (mean ± SD) age of 45.1 ± 12.4 years, and were enrolled in trials with a range of foci, from weight loss (38%) to smoking cessation (31.5%) and mental health (22.8%). Overall, 37% searched for trial information at least once, with 28.3% searching before enrollment and 17.4% after enrollment, most commonly via internet search engines. Participants searched for details regarding trial length, study conditions, expected findings, and compensation. Searching for information was not associated with experience during trial consent. Searching prior to enrollment generally increased likelihood of enrollment, whereas searching after enrollment was reported to have limited to no effect on trial behavior.
    Discussion: A sizable minority of trial participants search for trial information from outside sources, which may support increased enrollment. Investigators should consider how information shared online (e.g., via protocol papers or ClinicalTrials.gov) describes study hypotheses and intervention techniques to avoid potential bias due to participant demand characteristics or intervention diffusion.
    Keywords:  Access to information; Clinical trial; Information seeking behavior; Scholarly communication
    DOI:  https://doi.org/10.1016/j.conctc.2025.101505
  15. JMIR Form Res. 2025 Jun 23. 9: e55670
       Background: Urinary incontinence (UI) is a series of clinical episodes featuring involuntary urine leakage. UI affects people in terms of their physical, emotional, and cognitive functioning, and the negative perceptions and impact on patients are not fully understood. In addition, the true demand for the treatment of UI and related issues is yet to be revealed.
    Objective: The aim of this study is to examine the online search trend, user demand, and encyclopedia content quality related to UI on a national and regional scale on Baidu search, the major search engine in Mainland China.
    Methods: The Baidu Index was queried using UI-related terms for the period from January 2011 to August 2023. The search volume for each term was recorded to analyze the search trend and demographic distributions. For user interest, the demand graph data and trend data were collected and analyzed.
    Results: Three search topics were identified with the 18 available UI search keywords. The total Baidu search index for all UI topics was 11,472,745. The annual percent changes (APCs) for the topic Complaint were 1.7% (P<.05) from 2011-2021 and -7.9% (P<.05) from 2021-2023, and the average annual percent change (AAPC) was 0.1% (P<.05). For the topic Inquiry, the APCs were 16% (P<.05) from 2011 to 2016, -27.0% from 2016 to 2019, and 21.2% (P<.05) from 2019 to 2023, with an AAPC of 4.8%. Regarding the topic of Treatment, the APC was 20.3% from 2011-2018 (P<.05), -36.9% from 2018-2021 (P>.05), and 2.2% from 2021-2023, with a -0.4% overall AAPC. The age distribution of the population of each UI search topic inquiry shows that the search inquiries for each topic were mainly made by the population aged 30 to 39 years. People from the eastern part of China made up around 30% of each search query.
    Conclusions: Web-based searching for UI topics has been continuous and traceable since January 2011. Different categorized themes within the UI topic highlight specific demands from various populations, necessitating tailored responses. Although online platforms can offer answers, medical professionals' involvement is crucial to avoid misdiagnosis and delayed treatment.
    Keywords:  Baidu; Baidu encyclopedia; Baidu index; infodemiology; patients' concern; public interest; urinary incontinence
    DOI:  https://doi.org/10.2196/55670
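The APC and AAPC statistics quoted above are conventionally derived from a log-linear regression of the series on time; a minimal sketch under that standard definition (the search-volume series is hypothetical, not the study's data):

```python
import math

def apc(rates):
    """Annual percent change (APC): log-linear least-squares slope, back-transformed."""
    years = list(range(len(rates)))
    logs = [math.log(r) for r in rates]
    n = len(rates)
    ybar, lbar = sum(years) / n, sum(logs) / n
    slope = (sum((y - ybar) * (l - lbar) for y, l in zip(years, logs))
             / sum((y - ybar) ** 2 for y in years))
    return (math.exp(slope) - 1) * 100

def aapc(slopes_and_lengths):
    """Average annual percent change (AAPC): segment slopes (on the log scale)
    weighted by segment length, then back-transformed."""
    total = sum(length for _, length in slopes_and_lengths)
    weighted = sum(slope * length for slope, length in slopes_and_lengths) / total
    return (math.exp(weighted) - 1) * 100

# Hypothetical search-volume series growing ~10% per year
series = [100 * 1.10 ** t for t in range(5)]
print(round(apc(series), 1))  # → 10.0
```

Back-transforming through the exponential is what keeps APCs multiplicative, so a segment's APC of -7.9% means the series shrank by that factor each year within the segment.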
  16. Bioinform Adv. 2025; 5(1): vbaf096
       Motivation: Pre-trained Language Models (PLMs) have achieved remarkable performance across various natural language processing tasks. However, they encounter challenges in biomedical named entity recognition (NER), such as high computational costs and the need for complex fine-tuning. These limitations hinder the efficient recognition of biological entities, especially within specialized corpora. To address these issues, we introduce GRU-SCANET (Gated Recurrent Unit-based Sinusoidal Capture Network), a novel architecture that directly models the relationship between input tokens and entity classes. Our approach offers a computationally efficient alternative for extracting biological entities by capturing contextual dependencies within biomedical texts.
    Results: GRU-SCANET combines positional encoding, bidirectional GRUs (BiGRUs), an attention-based encoder, and a conditional random field (CRF) decoder to achieve high precision in entity labeling. This design effectively mitigates the challenges posed by unbalanced data across multiple corpora. Our model consistently outperforms leading benchmarks, achieving better performance than BioBERT (8/8 evaluations), PubMedBERT (5/5 evaluations), and previous state-of-the-art (SOTA) models (8/8 evaluations), including Bern2 (5/5 evaluations). These results highlight the strength of our approach in capturing token-entity relationships more effectively than existing methods, advancing the state of biomedical NER.
    Availability and implementation: https://github.com/ANR-DIG-AI/GRU-SCANET.
    DOI:  https://doi.org/10.1093/bioadv/vbaf096
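The "sinusoidal" positional encoding named in the GRU-SCANET architecture is, by its description, the standard Transformer-style encoding; a minimal sketch of that encoding (the dimensions are hypothetical, not taken from the paper):

```python
import math

def sinusoidal_positional_encoding(seq_len, d_model):
    """Transformer-style sinusoidal positional encoding:
    sin on even dimensions, cos on odd dimensions, with
    geometrically spaced wavelengths from 2*pi to 10000*2*pi."""
    pe = []
    for pos in range(seq_len):
        row = []
        for i in range(d_model):
            angle = pos / (10000 ** ((2 * (i // 2)) / d_model))
            row.append(math.sin(angle) if i % 2 == 0 else math.cos(angle))
        pe.append(row)
    return pe

pe = sinusoidal_positional_encoding(seq_len=8, d_model=16)
print(len(pe), len(pe[0]))  # → 8 16
```

Each token's encoding row is added to (or concatenated with) its embedding before the BiGRU layers, giving the attention-based encoder a fixed, parameter-free signal of token order.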