bims-librar Biomed News
on Biomedical librarianship
Issue of 2024‒02‒18
twenty-one papers selected by
Thomas Krichel, Open Library Society



  1. Gerontologist. 2024 Feb 15. pii: gnae011. [Epub ahead of print]
      Chronological age is invariably used as a categorizing tool for spaces, collections, and programs in public libraries. Stemming from a larger project that seeks to bring attention to the ways in which public libraries engage with community-dwelling older adults, this paper explores older patrons' perspectives on the language (e.g. older adult, seniors, adult) assigned to older adults in library programs and which label best (or least) suits their sense of identity and, in turn, what language encourages or deters their engagement with library programs. Findings illustrate that age-based language describing older adult library programs is often at odds with patrons' perceptions of how library programming relevant to them ought to be labelled. Common to all participants was a clear dislike for the term "elderly". While most participants preferred "older adult" to "senior", others voiced no preference, as long as they felt heard and valued. Many participants linked the language used to describe library programs to being excluded from and treated differently from other library patrons. As such, the language used to group and describe different library populations directly shapes feelings of belonging (or exclusion) in library programs. Insights from this research contribute to our evolving understandings of the ways in which language connected to age can shape one's sense of identity. Results also serve to cultivate a more sensitive and critical approach to the question of age within library science, and, by extension, the experiences of older adults who frequent the library.
    Keywords:  Age Labels; Ageism; Institutional Issues; Library and Information Science; Organizational & Social Gerontology
    DOI:  https://doi.org/10.1093/geront/gnae011
  2. MethodsX. 2024 Jun;12 102601
      Evidence synthesis methodologies rely on bibliographic data. The process of searching and retrieving bibliographic data can be supported by using bibliographic APIs. This paper presents a collection of code that serves both as a recipe book and as a finished working example of how to interact with the Scopus and OpenAlex APIs for the purpose of supporting evidence synthesis. While the procedure and code base presented here were developed as part of an evidence synthesis project in the field of nutrient recovery from human excreta and domestic wastewater for reuse in agriculture, they should be useful more broadly for evidence syntheses or bibliographic analyses in other fields. • This paper presents a working example of how to interact with the Scopus and OpenAlex APIs. • The code base is written in SQL (MySQL) and Unix Shell (Bash). • The procedure was developed in a macOS environment but should be portable to other environments.
    Keywords:  API-CODEBASE; Bibliographic analysis; Bibliometric analysis; Citation map; Systematic map; Systematic review
    DOI:  https://doi.org/10.1016/j.mex.2024.102601
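    As an illustration of the kind of bibliographic API call the entry above describes, the following is a minimal Python sketch of retrieving records from the OpenAlex works endpoint; the search term and printed fields are illustrative assumptions, and the paper's own code base is written in SQL (MySQL) and Bash rather than Python.
      # Minimal sketch: fetch bibliographic records from the OpenAlex works API.
      # The search term and printed fields are illustrative; this is not the
      # paper's SQL/Bash code base.
      import requests

      BASE_URL = "https://api.openalex.org/works"

      def fetch_works(search_term, per_page=25):
          """Return one page of OpenAlex works matching a full-text search."""
          params = {"search": search_term, "per-page": per_page}
          response = requests.get(BASE_URL, params=params, timeout=30)
          response.raise_for_status()
          return response.json()["results"]

      if __name__ == "__main__":
          for work in fetch_works("nutrient recovery from wastewater"):
              print(work.get("publication_year"), "-", work.get("display_name"))
              print("  DOI:", work.get("doi"))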
  3. JBI Evid Synth. 2024 Feb 12.
      OBJECTIVE: The purpose of this scoping review is to identify validated geographic search filters and report their methodology and performance measures. INTRODUCTION: Data on specific geographic areas can be required for evidence synthesis topics, such as investigating regional inequalities in health care or answering context-specific epidemiological questions. Search filters are useful tools for reviewers aiming to identify publications with common characteristics in bibliographic databases. Geographic search filters limit the literature search results to a specific geographic feature (eg, a country or region).
    INCLUSION CRITERIA: We will include reports on validated geographic search filters that aim to identify research evidence about a defined geographic area (eg, a country/region or a group of countries/regions). Reports published in any language will be considered for inclusion.
    METHODS: This review will be conducted in accordance with JBI methodology for scoping reviews. The literature search will be conducted in PubMed and Embase (Elsevier). The InterTASC Information Specialists' Sub-Group (ISSG) Search Filter resource and Google Scholar will also be searched. Two researchers will independently screen the title, abstract, and full text of the search results. A third reviewer will be consulted in the event of any disagreements. The data extraction will include study characteristics, basic characteristics of the geographical search filter (eg, country/region), and the methods used to develop and validate the search filter. The extracted data will be summarized narratively and presented in a table.
    REVIEW REGISTRATION: Open Science Framework osf.io/5czhs.
    DOI:  https://doi.org/10.11124/JBIES-23-00445
  4. J Am Med Inform Assoc. 2024 Feb 16. pii: ocae015. [Epub ahead of print]
      OBJECTIVES: Question answering (QA) systems have the potential to improve the quality of clinical care by providing health professionals with the latest and most relevant evidence. However, QA systems have not been widely adopted. This systematic review aims to characterize current medical QA systems, assess their suitability for healthcare, and identify areas of improvement. MATERIALS AND METHODS: We searched PubMed, IEEE Xplore, ACM Digital Library, ACL Anthology, and forward and backward citations on February 7, 2023. We included peer-reviewed journal and conference papers describing the design and evaluation of biomedical QA systems. Two reviewers screened titles, abstracts, and full-text articles. We conducted a narrative synthesis and risk of bias assessment for each study. We assessed the utility of biomedical QA systems.
    RESULTS: We included 79 studies and identified themes, including question realism, answer reliability, answer utility, clinical specialism, systems, usability, and evaluation methods. Clinicians' questions used to train and evaluate QA systems were restricted to certain sources, types and complexity levels. No system communicated confidence levels in the answers or sources. Many studies suffered from high risks of bias and applicability concerns. Only 8 studies completely satisfied any criterion for clinical utility, and only 7 reported user evaluations. Most systems were built with limited input from clinicians.
    DISCUSSION: While machine learning methods have led to increased accuracy, most studies imperfectly reflected real-world healthcare information needs. Key research priorities include developing more realistic healthcare QA datasets and considering the reliability of answer sources, rather than merely focusing on accuracy.
    Keywords:  artificial intelligence; clinical decision support; evidence-based medicine; natural language processing; question answering
    DOI:  https://doi.org/10.1093/jamia/ocae015
  5. Interdiscip Sci. 2024 Feb 10.
      We report a combined manual annotation and deep-learning natural language processing study to achieve accurate entity extraction from hereditary disease-related biomedical literature. A total of 400 full articles were manually annotated based on published guidelines by experienced genetic interpreters at Beijing Genomics Institute (BGI). The performance of our manual annotations was assessed by comparing our re-annotated results with those publicly available. The overall Jaccard index was calculated to be 0.866 for the four entity types: gene, variant, disease, and species. Both a BERT-based large named entity recognition (NER) model and a DistilBERT-based simplified NER model were trained, validated, and tested. Because of the limited manually annotated corpus, the NER models were fine-tuned in two phases. The F1-scores of the BERT-based NER for gene, variant, disease and species are 97.28%, 93.52%, 92.54% and 95.76%, respectively, while those of the DistilBERT-based NER are 95.14%, 86.26%, 91.37% and 89.92%, respectively. Most importantly, the entity type of variant has been extracted by a large language model for the first time, and an F1-score comparable to that of the state-of-the-art variant extraction model tmVar has been achieved.
    Keywords:  Data mining; Genomics; Named entity recognition; Natural language processing
    DOI:  https://doi.org/10.1007/s12539-024-00605-2
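    A minimal sketch of named entity recognition with a Hugging Face transformers pipeline, as an illustration of the kind of BERT-based extraction evaluated in the entry above; the checkpoint "dslim/bert-base-NER" is a general-domain stand-in rather than the paper's fine-tuned biomedical models, and the example sentence is made up.
      # Minimal NER sketch with a transformers pipeline. The general-domain
      # checkpoint stands in for the paper's fine-tuned biomedical models,
      # which target gene, variant, disease, and species entities.
      from transformers import pipeline

      ner = pipeline("token-classification",
                     model="dslim/bert-base-NER",
                     aggregation_strategy="simple")

      text = "Pathogenic variants in BRCA1 are associated with hereditary breast cancer."
      for entity in ner(text):
          print(entity["entity_group"], entity["word"], round(float(entity["score"]), 3))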
  6. Hepatol Commun. 2024 Mar 01. pii: e0367. [Epub ahead of print]8(3):
      BACKGROUND: The study compared the readability, grade level, understandability, actionability, and accuracy of standard patient educational material against artificial intelligence chatbot-derived patient educational material regarding cirrhosis. METHODS: An identical standardized phrase was used to generate patient educational materials on cirrhosis from 4 large language model-derived chatbots (ChatGPT, DocsGPT, Google Bard, and Bing Chat), and the outputs were compared against a pre-existing human-derived educational material (Epic). Objective scores for readability and grade level were determined using the Flesch-Kincaid and Simple Measure of Gobbledygook scoring systems. Fourteen patients/caregivers and 8 transplant hepatologists were blinded and independently scored the materials on understandability and actionability and indicated whether they believed the material was human or artificial intelligence-generated. Understandability and actionability were determined using the Patient Education Materials Assessment Tool for Printable Materials. Transplant hepatologists also provided medical accuracy scores.
    RESULTS: Most educational materials scored similarly in readability and grade level but were above the desired sixth-grade reading level. All educational materials were deemed understandable by both groups, while only the human-derived educational material (Epic) was considered actionable by both groups. No significant difference in perceived actionability or understandability among the educational materials was identified. Both groups poorly identified which materials were human-derived versus artificial intelligence-derived.
    CONCLUSIONS: Chatbot-derived patient educational materials have comparable readability, grade level, understandability, and accuracy to human-derived materials. Readability, grade level, and actionability may be appropriate targets for improvement across educational materials on cirrhosis. Chatbot-derived patient educational materials show promise, and further studies should assess their usefulness in clinical practice.
    DOI:  https://doi.org/10.1097/HC9.0000000000000367
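    For reference, a small worked sketch of the two objective readability measures named in the entry above, using the commonly published Flesch-Kincaid Grade Level and Simple Measure of Gobbledygook (SMOG) formulas; the word, sentence, and syllable counts are made-up illustrative inputs, not the study's materials.
      # Worked sketch of the Flesch-Kincaid Grade Level and SMOG formulas.
      # The counts below are made-up illustrative inputs.
      import math

      def flesch_kincaid_grade(words, sentences, syllables):
          # FKGL = 0.39*(words/sentences) + 11.8*(syllables/words) - 15.59
          return 0.39 * (words / sentences) + 11.8 * (syllables / words) - 15.59

      def smog_grade(polysyllables, sentences):
          # SMOG = 1.0430*sqrt(polysyllables * 30/sentences) + 3.1291
          return 1.0430 * math.sqrt(polysyllables * 30 / sentences) + 3.1291

      # Example: a 250-word handout with 18 sentences, 380 syllables, and
      # 30 words of three or more syllables.
      print(round(flesch_kincaid_grade(250, 18, 380), 1))  # about 7.8
      print(round(smog_grade(30, 18), 1))                  # about 10.5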
  7. J Gastrointest Surg. 2024 Jan;pii: S1091-255X(23)00649-2. [Epub ahead of print]28(1): 64-69
      BACKGROUND: The internet is a common source of health information for patients. Interactive online artificial intelligence (AI) may be a more reliable source of health-related information than traditional search engines. This study aimed to assess the quality and perceived utility of chat-based AI responses related to 3 common gastrointestinal (GI) surgical procedures. METHODS: A survey of 24 questions covering general perioperative information on cholecystectomy, pancreaticoduodenectomy (PD), and colectomy was created. Each question was posed to Chat Generative Pre-trained Transformer (ChatGPT) in June 2023, and the generated responses were recorded. The quality and perceived utility of responses were independently and subjectively graded by expert respondents specific to each surgical field. Grades were classified as "poor," "fair," "good," "very good," or "excellent."
    RESULTS: Among the 45 respondents (general surgeon [n = 13], surgical oncologist [n = 18], colorectal surgeon [n = 13], and transplant surgeon [n = 1]), most practiced at an academic facility (95.6%). Respondents had been in practice for a mean of 12.3 years (general surgeon, 14.5 ± 7.2; surgical oncologist, 12.1 ± 8.2; colorectal surgeon, 10.2 ± 8.0) and performed a mean 53 index operations annually (cholecystectomy, 47 ± 28; PD, 28 ± 27; colectomy, 81 ± 44). Overall, the most commonly assigned quality grade was "fair" or "good" for most responses (n = 622/1080, 57.6%). Most of the 1080 total utility grades were "fair" (n = 279, 25.8%) or "good" (n = 344, 31.9%), whereas only 129 utility grades (11.9%) were "poor." Of note, ChatGPT responses related to cholecystectomy (45.3% ["very good"/"excellent"] vs 18.1% ["poor"/"fair"]) were deemed to be better quality than AI responses about PD (18.9% ["very good"/"excellent"] vs 46.9% ["poor"/"fair"]) or colectomy (31.4% ["very good"/"excellent"] vs 38.3% ["poor"/"fair"]). Overall, only 20.0% of the experts deemed ChatGPT to be an accurate source of information, whereas 15.6% of the experts found it unreliable. Moreover, 1 in 3 surgeons deemed ChatGPT responses as not likely to reduce patient-physician correspondence (31.1%) or not comparable to in-person surgeon responses (35.6%).
    CONCLUSIONS: Although a potential resource for patient education, ChatGPT responses to common GI perioperative questions were deemed to be of only modest quality and utility to patients. In addition, the relative quality of AI responses varied markedly on the basis of procedure type.
    Keywords:  Artificial intelligence; ChatGPT; Informational resource; Large language models; Surgical care
    DOI:  https://doi.org/10.1016/j.gassur.2023.11.019
  8. Bone Jt Open. 2024 Feb 15. 5(2): 139-146
      Aims: While internet search engines have been the primary information source for patients' questions, artificial intelligence large language models like ChatGPT are trending towards becoming the new primary source. The purpose of this study was to determine if ChatGPT can answer patient questions about total hip arthroplasty (THA) and total knee arthroplasty (TKA) with consistent accuracy, comprehensiveness, and easy readability. Methods: We posed the 20 most Google-searched questions about THA and TKA, plus ten additional postoperative questions, to ChatGPT. Each question was asked twice to evaluate for consistency in quality. Following each response, we responded with, "Please explain so it is easier to understand," to evaluate ChatGPT's ability to reduce response reading grade level, measured as Flesch-Kincaid Grade Level (FKGL). Five resident physicians rated the 120 responses on 1 to 5 accuracy and comprehensiveness scales. Additionally, they answered a "yes" or "no" question regarding acceptability. Mean scores were calculated for each question, and responses were deemed acceptable if ≥ four raters answered "yes."
    Results: The mean accuracy and comprehensiveness scores were 4.26 (95% confidence interval (CI) 4.19 to 4.33) and 3.79 (95% CI 3.69 to 3.89), respectively. Out of all the responses, 59.2% (71/120; 95% CI 50.0% to 67.7%) were acceptable. ChatGPT was consistent when asked the same question twice, giving no significant difference in accuracy (t = 0.821; p = 0.415), comprehensiveness (t = 1.387; p = 0.171), acceptability (χ2 = 1.832; p = 0.176), and FKGL (t = 0.264; p = 0.793). There was a significantly lower FKGL (t = 2.204; p = 0.029) for easier responses (11.14; 95% CI 10.57 to 11.71) than original responses (12.15; 95% CI 11.45 to 12.85).
    Conclusion: ChatGPT answered THA and TKA patient questions with accuracy comparable to previous reports of websites, with adequate comprehensiveness, but with limited acceptability as the sole information source. ChatGPT has potential for answering patient questions about THA and TKA, but needs improvement.
    DOI:  https://doi.org/10.1302/2633-1462.52.BJO-2023-0113.R1
  9. Neurosurgery. 2024 Feb 14.
      BACKGROUND AND OBJECTIVES: The Internet has become a primary source of health information, leading patients to seek answers online before consulting health care providers. This study aims to evaluate the implementation of Chat Generative Pre-Trained Transformer (ChatGPT) in neurosurgery by assessing the accuracy and helpfulness of artificial intelligence (AI)-generated responses to common postsurgical questions. METHODS: A list of 60 commonly asked questions regarding neurosurgical procedures was developed. ChatGPT-3.0, ChatGPT-3.5, and ChatGPT-4.0 responses to these questions were recorded and graded by numerous practitioners for accuracy and helpfulness. The understandability and actionability of the answers were assessed using the Patient Education Materials Assessment Tool. Readability analysis was conducted using established scales.
    RESULTS: A total of 1080 responses were evaluated, equally divided among ChatGPT-3.0, 3.5, and 4.0, each contributing 360 responses. The mean helpfulness score across the 3 subsections was 3.511 ± 0.647, while the accuracy score was 4.165 ± 0.567. The Patient Education Materials Assessment Tool analysis revealed that the AI-generated responses had higher actionability scores than understandability scores. This indicates that the answers provided practical guidance and recommendations that patients could apply effectively. On the other hand, the mean Flesch Reading Ease score was 33.5, suggesting that the readability level of the responses was relatively complex. The Raygor Readability Estimate scores fell within the graduate level, with an average at the 15th-grade level.
    CONCLUSION: The artificial intelligence chatbot's responses, although factually accurate, were not rated highly beneficial, with only marginal differences in perceived helpfulness and accuracy between ChatGPT-3.0 and ChatGPT-3.5 versions. Despite this, the responses from ChatGPT-4.0 showed a notable improvement in understandability, indicating enhanced readability over earlier versions.
    DOI:  https://doi.org/10.1227/neu.0000000000002856
  10. Phys Occup Ther Pediatr. 2024 Feb 15. 1-10
      AIMS: In addition to the popular search engines on the Internet, ChatGPT may provide accurate and reliable health information. The aim of this study was to examine whether ChatGPT's responses to families' frequently asked questions concerning cerebral palsy (CP) were reliable and useful. METHODS: Google Trends was used to find the most frequently searched keywords for CP. Five independent physiatrists assessed ChatGPT responses to 10 questions. Seven-point Likert-type scales were used to rate information reliability and usefulness based on whether the answer could be validated and was understandable.
    RESULTS: The median ratings for reliability of information for each question varied from 2 (very unsafe) to 5 (relatively very reliable). The median rating was 4 (reliable) for four questions. The median ratings for usefulness of information varied from 2 (very little useful) to 5 (moderately useful). The median rating was 4 (partly useful) for seven questions.
    CONCLUSION: Although ChatGPT appears promising as an additional tool for informing family members of individuals with CP about medical information, it should be emphasized that both consumers and health care providers should be aware of the limitations of artificial intelligence-generated information.
    Keywords:  Artificial intelligence; ChatGPT; cerebral palsy; health information; large language model
    DOI:  https://doi.org/10.1080/01942638.2024.2316178
  11. Int J Med Inform. 2024 Feb 11. pii: S1386-5056(24)00035-2. [Epub ahead of print]184 105372
      BACKGROUND: Spontaneous coronary artery dissection (SCAD) survivors often seek information online. However, the quality and content of websites for SCAD survivors are uncertain. This review aimed to systematically identify and appraise websites for SCAD survivors. METHODS: A systematic review approach was adapted for websites. A comprehensive search of SCAD key-phrases was performed using an internet search engine during January 2023. Websites targeting SCAD survivors were included. Websites were appraised for quality using the Quality Component Scoring System (QCSS) and the Health Related Website Evaluation Form (HRWEF), suitability using the Suitability Assessment Method (SAM), readability using a readability generator, and interactivity. Content was appraised using a tool based on SCAD international consensus literature. Raw scores from the tools were converted to percentages, then classified variably as excellent through to poor.
    RESULTS: A total of 50 websites were identified and included from 600 screened. Overall, content accuracy/scope (53.3 ± 23.3) and interactivity (67.1 ± 11.5) were poor, quality was fair (59.1 ± 22.3, QCSS) and average (83.1 ± 5.8, HRWEF), and suitability was adequate (54.9 ± 13.8, SAM). The mean readability grade was 11.6 (±2.3), far exceeding the recommended level of ≤ 8. By website type, survivor-affiliated and medically peer-reviewed health information websites scored highest. The appraisal tools had limitations, such as overlapping assessment of similar items and items made less relevant by the modern internet.
    CONCLUSION: Many websites are available for SCAD survivors, but they often have limited and/or inaccurate content and poor quality, are not tailored to the demographic, and are difficult to read. Appraisal tools for health websites require consolidation and further development.
    Keywords:  Appraisal; Content; Quality; Spontaneous Coronary Artery Dissection (SCAD); Suitability; Websites
    DOI:  https://doi.org/10.1016/j.ijmedinf.2024.105372
  12. BMJ Open. 2024 Feb 06. 14(2): e078552
      OBJECTIVES: Blunt chest trauma (BCT) is characterised by forceful and non-penetrative impact to the chest region. Increased access to the internet has led to online healthcare resources being used by the public to educate themselves about medical conditions. This study aimed to determine whether online resources for BCT are at an appropriate readability level and have an appropriate visual appearance for the public. DESIGN: We undertook (1) a narrative overview assessment of each website; (2) a visual assessment of the identified website material content using an adapted framework of predetermined key criteria based on the Centers for Medicare and Medicaid Services toolkit; and (3) a readability assessment using five readability scores and the Flesch reading ease score, calculated with Readable software.
    DATA SOURCES: Using a range of key search terms, we searched Google, Bing and Yahoo websites on 9 October 2023 for online resources about BCT.
    RESULTS: We identified and assessed 85 websites. The median visual assessment score for the identified websites was 22, with a range of -14 to 37. The median readability score generated was 9 (14-15 years), with a range of 4.9-15.8. There was a significant association between the visual assessment and readability scores with a tendency for websites with lower readability scores having higher scores for the visual assessment (Spearman's r=-0.485; p<0.01). The median score for Flesch reading ease was 63.9 (plain English) with a range of 21.1-85.3.
    CONCLUSIONS: Although the readability levels and visual appearance were acceptable for the public on many websites, many of the resources had much higher readability scores than the recommended level (8-10) and were visually poor. Better use of images would improve the appearance of websites further. Less medical terminology and shorter words and sentences would also allow the public to comprehend the contained information more easily.
    Keywords:  Health Literacy; TRAUMA MANAGEMENT; World Wide Web technology
    DOI:  https://doi.org/10.1136/bmjopen-2023-078552
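    A minimal sketch of the rank-correlation check reported in the entry above, using scipy; the paired readability and visual-assessment scores below are made-up illustrative values, not the study's data.
      # Minimal sketch: Spearman rank correlation between readability grade and
      # visual assessment scores. The paired values are made-up illustrative data.
      from scipy.stats import spearmanr

      readability = [4.9, 7.2, 8.5, 9.0, 10.1, 12.3, 13.7, 15.8]  # grade levels
      visual = [35, 30, 24, 22, 18, 5, -2, -14]                   # assessment scores

      rho, p_value = spearmanr(readability, visual)
      print(f"Spearman's r = {rho:.3f}, p = {p_value:.4f}")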
  13. Aesthetic Plast Surg. 2024 Feb 15.
      BACKGROUND: Eyelid ptosis is an underestimated pathology deeply affecting patients' quality of life. The internet has increasingly become the major source of information regarding health care, and patients often browse websites to acquire initial knowledge on a subject. However, there is a lack of data concerning the quality of available information on eyelid ptosis and its treatment. We systematically evaluated the quality of online information on eyelid ptosis by using the "Ensuring Quality Information for Patients" (EQIP) scale. MATERIALS AND METHODS: Google, Yahoo and Bing were searched for the keywords "Eyelid ptosis," "Eyelid ptosis surgery" and "Blepharoptosis." The first 50 hits were included, and the quality of information was evaluated with the expanded EQIP tool. Websites in English and intended for general non-medical public use were included. Irrelevant documents, videos, pictures, blogs and articles with no access were excluded.
    RESULTS: Out of 138 eligible websites, 79 (57.7%) addressed more than 20 EQIP items, with an overall median score of 20.2. Only 2% discussed procedure complication rates. The majority failed to disclose severe complications or quantify risks, and fewer than 18% clarified the potential need for additional treatments. Surgical procedure details were lacking, and there was insufficient information about pre-/postoperative precautions for patients. The quality of online information has not improved since the COVID-19 pandemic.
    CONCLUSIONS: This study highlights the urgent requirement for improved patient-oriented websites adhering to international standards for plastic and oculoplastic surgery. Healthcare providers should effectively guide their patients in finding trustworthy and reliable eyelid ptosis correction information.
    LEVEL OF EVIDENCE V: This journal requires that authors assign a level of evidence to each article. For a full description of these Evidence-Based Medicine ratings, please refer to the Table of Contents or the online Instructions to Authors www.springer.com/00266.
    Keywords:  Blepharoplasty; Blepharoptosis; EQIP scale; Eyelid ptosis surgery; Eyelids; Patient education as topic; Patient-reported outcome measures; Quality online information
    DOI:  https://doi.org/10.1007/s00266-024-03862-0
  14. Arch Ital Urol Androl. 2024 Feb 16.
      BACKGROUND: Social media are widely used information tools, including in the medical/health field. Unfortunately, the levels of misinformation on these platforms seem to be high, with medium-to-low quality of the proposed content, as evidenced by previous studies. YouTube is one of the most important platforms for audio/video content. It shows content to users through a recommendation algorithm system. MATERIALS AND METHODS: We classified into two cohorts the first results obtained by searching "bladder tumor treatment" on YouTube through two different user profiles: "Cohort A" from a not-logged-in session in incognito mode (46 videos enrolled) and "Cohort B" from a logged-in session with a physician profile (50 videos enrolled). The videos were evaluated using validated instruments such as DISCERN and PEMAT-AV. Furthermore, we used a Likert scale for the evaluation of levels of misinformation.
    RESULTS: Overall quality of information was moderate to poor (DISCERN 3) in 54% of cohort A and 24% of cohort B. Moreover, a high degree of misinformation (Likert score 3) was found in 52% of cohort A cases and 32% of cohort B.
    CONCLUSIONS: Levels of misinformation in both cohorts are positively correlated to the number of views per month. Globally, the levels of information quality, understandability and actionability are lower for the results obtained from searches performed with anonymous user profile (Cohort A).
    DOI:  https://doi.org/10.4081/aiua.2024.12179
  15. Int J Gynaecol Obstet. 2024 Feb 17.
      OBJECTIVES: Back pain during pregnancy is a common issue that impacts the quality of life for many women. YouTube has become an increasingly popular source of health information. Pregnant women often turn to YouTube for advice on managing back pain, but the quality of available videos is highly variable. This study aimed to assess the quality and comprehensiveness of YouTube videos related to back pain during pregnancy. METHODS: A YouTube search was conducted using the keyword "back pain in pregnancy", and the first 100 resulting videos were included in the study. After a thorough review and exclusion of ineligible videos, the final sample consisted of 71 videos. Various parameters such as the number of views, likes, viewer interaction, video age, uploaded source (healthcare or nonhealthcare), and video length were evaluated for all videos.
    RESULTS: Regarding the source of the videos, 44 (61.9%) were created by healthcare professionals, while 27 (38%) were created by nonprofessionals. Videos created by healthcare professionals had significantly higher scores in terms of DISCERN score, Journal of the American Medical Association (JAMA) score, and Global Quality Scale (GQS) (P < 0.001). Our findings indicate a statistically significant and strong positive correlation among the three scoring systems (P < 0.001).
    CONCLUSION: Videos created by healthcare professionals were generally of higher quality, but many videos were still rated as low-moderate quality. The majority of videos focused on self-care strategies, with fewer discussing other treatment options. Our findings highlight the need for improved quality and comprehensiveness of YouTube videos on back pain during pregnancy.
    Keywords:  JAMA; DISCERN; YouTube; back pain; pregnancy
    DOI:  https://doi.org/10.1002/ijgo.15419
  16. J Asthma. 2024 Feb 15. 1-13
      Background: YouTube has educational videos on inhalers. However, their content and quality are not adequately known. Objectives: This study investigated the quality and content of educational YouTube videos on inhalers. Methods: This descriptive study analyzed 178 YouTube videos on inhalers between May and July 2022. Two researchers independently evaluated the videos. The Global Quality Score (GQS), Journal of the American Medical Association (JAMA) Benchmark Criteria, and an Inhaler Application Checklist were used to assess the quality and content of the videos. Spearman's correlation, Kruskal-Wallis, Mann-Whitney U, and ANOVA tests, with post hoc Bonferroni analysis, were used for data analysis. Results: The videos had a mean GQS score of 3.70 ± 1.24 and a mean JAMA score of 2.22 ± 0.60. A negative correlation was found between the quality score of the videos and views, likes, comments, duration, and likes/views (respectively: r = -0.237, p < 0.005; r = -0.217, p < 0.003; r = -0.220, p < 0.005; r = -0.147, p < 0.005). The videos narrated by nurses and doctors had significantly higher mean JAMA and GQS scores than others (p = 0.001). The videos missed some procedural steps [gargling (29.1%), adding no more than five ml of medication and device cleaning (41.9%), and exhaling through the nose (37.5%)]. Videos uploaded by individuals missed significantly more procedural steps than those uploaded by professional organizations (p < 0.05). Conclusions: YouTube videos about inhaler techniques have a moderate level of quality. Videos uploaded by doctors and nurses as content narrators were of higher quality. The videos missed some procedural steps, and videos from individual uploaders missed more steps than those from professional organizations. Counseling should be provided to patients regarding the reliability of online information.
    Keywords:  YouTube; content; education; inhaler; patient; quality
    DOI:  https://doi.org/10.1080/02770903.2024.2319846
  17. J Health Commun. 2024 Feb 14. 1-9
      The objective of this study was to understand how youth search for mental health information online. Youth partners were engaged at the onset of the project and provided input throughout on the design, conduct and analysis. Individual, semi-structured interviews with Canadian youth with experience searching for mental health information online were conducted. Data collection and reflexive thematic analysis proceeded concurrently. Fourteen youth were interviewed. Four main themes related to how youth search online emerged: mind-set shapes the search process; external factors shape the search process; key attributes of helpful information; and cues affecting trustworthiness of online information. Findings can inform the development of youth-friendly online mental health information that is perceived as helpful and trustworthy by youth. Ensuring youth have access to quality online mental health information that matches how they search for it is critical to their mental health and development.
    DOI:  https://doi.org/10.1080/10810730.2024.2313990
  18. JMIR Public Health Surveill. 2024 Feb 14. 10 e54805
      BACKGROUND: The advent of the internet has changed the landscape of available nutrition information. However, little is known about people's information-seeking behavior toward healthy eating and its potential consequences. OBJECTIVE: We aimed to examine the prevalence and correlates of nutrition information seeking from various web-based and offline media sources.
    METHODS: This cross-sectional study included 5998 Japanese adults aged 20 to 79 years participating in a web-based questionnaire survey (February and March 2023). The dependent variable was the regular use of web-based and offline media as a reliable source of nutrition information. The main independent variables included health literacy, food literacy, and diet quality, which were assessed using validated tools, as well as sociodemographic factors (sex, age, education level, and nutrition- and health-related occupations).
    RESULTS: The top source of nutrition information was television (1973/5998, 32.89%), followed by web searches (1333/5998, 22.22%), websites of government and medical manufacturers (997/5998, 16.62%), newspapers (901/5998, 15.02%), books and magazines (697/5998, 11.62%), and video sites (eg, YouTube; 634/5998, 10.57%). Multivariable logistic regression showed that higher health literacy was associated with higher odds of using all the individual sources examined; odds ratios (ORs) for 1-point score increase ranged from 1.27 (95% CI 1.09-1.49) to 1.81 (95% CI 1.57-2.09). By contrast, food literacy was inversely associated with the use of television (OR 0.65, 95% CI 0.55-0.77), whereas it was positively associated with the use of websites of government and medical manufacturers (OR 1.98, 95% CI 1.62-2.44), books and magazines (OR 2.09, 95% CI 1.64-2.66), and video sites (OR 1.53, 95% CI 1.19-1.96). Furthermore, diet quality was positively associated with the use of newspapers (OR 1.02, 95% CI 1.01-1.03) and books and magazines (OR 1.03, 95% CI 1.02-1.04). Being female was associated with using television and books and magazines, whereas being male was associated with using websites of government and medical manufacturers, newspapers, and video sites. Age was positively associated with using newspapers and inversely associated with using websites of government and medical manufacturers and video sites. People with higher education were more likely to refer to websites of government and medical manufacturers and newspapers but less likely to use television and video sites. Dietitians were more likely to use websites of government and medical manufacturers and books and magazines than the general public but less likely to use television and video sites.
    CONCLUSIONS: We identified various web-based and offline media sources regularly used by Japanese adults when seeking nutrition information, and their correlates varied widely. A lack of positive associations between the use of the top 2 major sources (television and web searches) and food literacy or diet quality is highlighted. These findings provide useful insights into the potential for developing and disseminating evidence-based health promotion materials.
    Keywords:  Japan; diet; diet quality; food literacy; health literacy; information seeking; nutrition
    DOI:  https://doi.org/10.2196/54805
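    As a reminder of how odds ratios per 1-point score increase arise from a multivariable logistic regression like the one reported above, here is a minimal statsmodels sketch on simulated data; the variable names, effect sizes, and outcome are made-up illustrative assumptions, not the survey data.
      # Minimal sketch: odds ratios per 1-point increase are the exponentiated
      # coefficients of a logistic regression. The data below are simulated.
      import numpy as np
      import pandas as pd
      import statsmodels.formula.api as smf

      rng = np.random.default_rng(0)
      n = 500
      df = pd.DataFrame({
          "health_literacy": rng.normal(3.5, 1.0, n),
          "food_literacy": rng.normal(3.0, 1.0, n),
      })
      # Simulate "regularly uses a given source" with a positive health-literacy effect.
      logit = -2.0 + 0.5 * df["health_literacy"] + 0.1 * df["food_literacy"]
      df["uses_source"] = rng.binomial(1, 1 / (1 + np.exp(-logit)))

      model = smf.logit("uses_source ~ health_literacy + food_literacy", data=df).fit(disp=False)
      print(np.exp(model.params))      # odds ratios per 1-point score increase
      print(np.exp(model.conf_int()))  # 95% confidence intervals for the ORs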
  19. Bioinformatics. 2024 Feb 10. pii: btae075. [Epub ahead of print]
      MOTIVATION: While large language models (LLMs) have been successfully applied to various tasks, they still face challenges with hallucinations. Augmenting LLMs with domain-specific tools such as database utilities can facilitate easier and more precise access to specialized knowledge. In this paper, we present GeneGPT, a novel method for teaching LLMs to use the Web APIs of the National Center for Biotechnology Information (NCBI) for answering genomics questions. Specifically, we prompt Codex to solve the GeneTuring tests with NCBI Web APIs by in-context learning and an augmented decoding algorithm that can detect and execute API calls. RESULTS: Experimental results show that GeneGPT achieves state-of-the-art performance on eight tasks in the GeneTuring benchmark with an average score of 0.83, largely surpassing retrieval-augmented LLMs such as the new Bing (0.44), biomedical LLMs such as BioMedLM (0.08) and BioGPT (0.04), as well as GPT-3 (0.16) and ChatGPT (0.12). Our further analyses suggest that: First, API demonstrations have good cross-task generalizability and are more useful than documentation for in-context learning; Second, GeneGPT can generalize to longer chains of API calls and answer multi-hop questions in GeneHop, a novel dataset introduced in this work; Finally, different types of errors are enriched in different tasks, providing valuable insights for future improvements.
    AVAILABILITY: The GeneGPT code and data are publicly available at https://github.com/ncbi/GeneGPT.
    DOI:  https://doi.org/10.1093/bioinformatics/btae075
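    A minimal sketch of the kind of NCBI Web API call that GeneGPT learns to issue, using the public E-utilities esearch endpoint; the database, query fields, and gene symbol are illustrative assumptions, and this is not the GeneGPT prompting code itself (which is available at the repository above).
      # Minimal sketch of an NCBI E-utilities call: look up NCBI Gene IDs for a
      # human gene symbol via esearch. Query fields and symbol are illustrative.
      import requests

      EUTILS = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils"

      def gene_ids_for_symbol(symbol, organism="Homo sapiens"):
          """Return NCBI Gene IDs matching a gene symbol."""
          params = {
              "db": "gene",
              "term": f"{symbol}[sym] AND {organism}[orgn]",
              "retmode": "json",
          }
          response = requests.get(f"{EUTILS}/esearch.fcgi", params=params, timeout=30)
          response.raise_for_status()
          return response.json()["esearchresult"]["idlist"]

      if __name__ == "__main__":
          print(gene_ids_for_symbol("BRCA1"))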
  20. Geriatr Nurs. 2024 Feb 14. pii: S0197-4572(24)00025-9. [Epub ahead of print]56 204-211
      BACKGROUND: Older adults are becoming more accepting of and interested in using digital technologies, but difficulties and barriers remain for accessing reliable health-related information. The purpose of this focused pilot intervention study was to: (1) understand older adults' firsthand experiences and challenges while using smart tablets post-COVID-19 pandemic, and (2) gather suggestions for age-appropriate training materials, preference of training materials, and resources to access reliable online health information. METHODS: This focused pilot intervention study involved training older adults to use smart tablets, followed by focus groups with a convenience sample of 13 older adults (65-85 years old; 91.6% female) about their experiences of using smart tablets.
    RESULTS: Thematic analysis revealed three themes: tablets are convenient for accessing online information, and older adults reported technical and security concerns as well as emotional and cognitive challenges regarding the use of smart tablets. Older adults also requested one-on-one support, assistance, and topic-specific learning for future training sessions.
    CONCLUSIONS: Future studies should focus on providing detailed, clear instructions at an acceptable pace for older adults.
    Keywords:  Community dwelling older adults; Focus study groups; Reliable health information; Smart tablets training intervention
    DOI:  https://doi.org/10.1016/j.gerinurse.2024.02.010