bims-librar Biomed News
on Biomedical librarianship
Issue of 2024–04–21
thirteen papers selected by
Thomas Krichel, Open Library Society



  1. Health Info Libr J. 2024 Apr 14.
      Community health workers are responsible for finding, processing, and transferring health information to communities with limited access to health-related resources, including farmworkers. This paper is the culmination of an undergraduate student research project exploring the learning processes and preferences of farmworker-serving community health workers in the USA. The project was designed for students from farmworker or agricultural backgrounds at two North Carolina universities and was supported by a North Carolina Department of Health and Human Services workforce development grant. Semi-structured interviews were conducted, in person and virtually, with a convenience sample of 17 current and former community health workers. Thematic analysis of the interview data identified a preference for a combination of learning styles, with visual and hands-on learning the most preferred. Community health workers also identified the importance of learning preferences in relation to their responsibilities as health educators. This study provides librarians, along with public health and medical professionals, with useful information about learning preferences to inform the creation of new and varied learning materials for community health workers.
    Keywords:  health disparities; health education; health information needs; health professionals; health services research; information dissemination; information skills; learning; library outreach; patient education
    DOI:  https://doi.org/10.1111/hir.12528
  2. J Econ Behav Organ. 2024 Apr;220: 675-690
      Online health information seeking behavior (e-HISB) is becoming increasingly common, a trend that accelerated during the COVID-19 pandemic, when individuals relied heavily on the Internet to stay informed and were exposed to a wider array of health information. Although e-HISB has become a global trend, very few empirical investigations have analyzed its potential influence on healthcare access and individuals' health status. In this paper, we try to fill this gap. We use data from the second SHARE Corona Survey, supplemented with data from the previous 8th wave of SHARE, and estimate a recursive model of e-HISB, healthcare access, and individuals' health status that accounts for individuals' unobserved heterogeneity. Our findings suggest that e-HISB can empower individuals to better understand health concerns, facilitating improved management of health conditions. However, e-HISB can also trigger a chain reaction: navigating vast amounts of online health information can heighten fear and anxiety, and this increased anxiety may lead to higher utilization of medical services, adversely affecting individuals' perceptions of their health.
    Keywords:  Health information seeking behavior; Health status; Healthcare access
    DOI:  https://doi.org/10.1016/j.jebo.2024.02.032
  3. Ginekol Pol. 2024 Apr 18.
       OBJECTIVES: The use of internet-based search engines for health information is very popular and common. The Internet has become an important source of health information and has a considerable impact on patients' decision-making processes. Pregnant women's knowledge about childbirth comes from health professionals and from personal experiences described by friends or family members, and there is growing interest in the digital sources pregnant women use. The aim of this study was to analyse queries regarding natural childbirth and cesarean section in the Google search engine.
    MATERIAL AND METHODS: In this descriptive infodemiology study, the "AlsoAsked" tool was used to analyze data appearing in Google search results. The "AlsoAsked" search was conducted on April 19, 2023, using the Polish-language search phrases "natural childbirth" and "cesarean section". Questions typed into the Google search engine were ranked according to popularity (search volume) and thematic connections and then discussed.
    RESULTS: The most frequently asked questions were related to the course and duration of labor as well as the preparation for labor and cesarean section (CS). Comparison between a natural labour and CS in the context of safety and pain received a great deal of attention.
    CONCLUSIONS: The most popular questions regarding CS were related to elective CS and the indications for it. Some questions concerned the connection between labor and the clinical state of the newborn.
    Keywords:  cesarean section; infodemiology; natural childbirth
    DOI:  https://doi.org/10.5603/gpl.97654
  4. World Neurosurg. 2024 Apr 16. pii: S1878-8750(24)00619-3. [Epub ahead of print]
      
    Keywords:  AANS; Flesch-Kincaid; Patient anxiety; grade level; reading ease
    DOI:  https://doi.org/10.1016/j.wneu.2024.04.056
  5. JMIR Cardio. 2024 Apr 19. 8: e53421
       BACKGROUND: Amyloidosis, a rare multisystem condition, often requires complex, multidisciplinary care. Its low prevalence underscores the importance of efforts to ensure the availability of high-quality patient education materials for better outcomes. ChatGPT (OpenAI) is a large language model powered by artificial intelligence that offers a potential avenue for disseminating accurate, reliable, and accessible educational resources for both patients and providers. Its user-friendly interface, engaging conversational responses, and the capability for users to ask follow-up questions make it a promising future tool in delivering accurate and tailored information to patients.
    OBJECTIVE: We performed a multidisciplinary assessment of the accuracy, reproducibility, and readability of ChatGPT in answering questions related to amyloidosis.
    METHODS: In total, 98 amyloidosis questions related to cardiology, gastroenterology, and neurology were curated from medical societies, institutions, and amyloidosis Facebook support groups and inputted into ChatGPT-3.5 and ChatGPT-4. Cardiology- and gastroenterology-related responses were independently graded by a board-certified cardiologist and gastroenterologist, respectively, who specialize in amyloidosis. These 2 reviewers (RG and DCK) also graded general questions for which disagreements were resolved with discussion. Neurology-related responses were graded by a board-certified neurologist (AAH) who specializes in amyloidosis. Reviewers used the following grading scale: (1) comprehensive, (2) correct but inadequate, (3) some correct and some incorrect, and (4) completely incorrect. Questions were stratified by categories for further analysis. Reproducibility was assessed by inputting each question twice into each model. The readability of ChatGPT-4 responses was also evaluated using the Textstat library in Python (Python Software Foundation) and the Textstat readability package in R software (R Foundation for Statistical Computing).
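    The readability scoring described here can be reproduced in a few lines of Python; a minimal sketch using the Textstat library mentioned above (the sample text is illustrative, not taken from the study):

        # pip install textstat
        import textstat

        response = ("Amyloidosis is a group of diseases in which misfolded "
                    "proteins build up in organs and impair their function.")

        # Flesch-Kincaid US grade level: higher means harder to read
        print(textstat.flesch_kincaid_grade(response))
        # Flesch reading ease: 0-100, higher means easier to read
        print(textstat.flesch_reading_ease(response))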
    RESULTS: ChatGPT-4 (n=98) provided 93 (95%) responses with accurate information, and 82 (84%) were comprehensive. ChatGPT-3.5 (n=83) provided 74 (89%) responses with accurate information, and 66 (79%) were comprehensive. When examined by question category, ChatGPT-4 and ChatGPT-3.5 provided 53 (95%) and 48 (86%) comprehensive responses, respectively, to "general questions" (n=56). When examined by subject, ChatGPT-4 and ChatGPT-3.5 performed best in response to cardiology questions (n=12), with both models producing 10 (83%) comprehensive responses. For gastroenterology (n=15), ChatGPT-4 received comprehensive grades for 9 (60%) responses and ChatGPT-3.5 for 8 (53%). Overall, 96 of 98 (98%) responses for ChatGPT-4 and 73 of 83 (88%) for ChatGPT-3.5 were reproducible. The readability of ChatGPT-4's responses ranged from 10th to beyond graduate US grade levels, with an average of 15.5 (SD 1.9).
    CONCLUSIONS: Large language models are a promising tool for accurate and reliable health information for patients living with amyloidosis. However, ChatGPT's responses exceeded the American Medical Association's recommended fifth- to sixth-grade reading level. Future studies focusing on improving response accuracy and readability are warranted. Prior to widespread implementation, the technology's limitations and ethical implications must be further explored to ensure patient safety and equitable implementation.
    Keywords:  ChatGPT; Facebook; accessibility; accuracy; amyloidosis; amyloidosis-related; artificial intelligence; assessment; cardiologist; cardiology; dissemination; educational resources; gastroenterologist; gastroenterology; institution; institutions; large language model; large language models; medical society; multidisciplinary care; neurologist; neurology; patient education; reliability; reproducibility
    DOI:  https://doi.org/10.2196/53421
  6. Angle Orthod. 2024 May 01. 94(3): 273-279
       OBJECTIVES: To assess the quality and accuracy of information contained within the websites of providers of marketed orthodontic products.
    MATERIALS AND METHODS: Twenty-one websites of orthodontic appliance and adjunct (product) providers were identified. The website content was assessed via two validated quality-of-information instruments (DISCERN and the Journal of the American Medical Association [JAMA] benchmarks) and an accuracy-of-information instrument. Website content was qualitatively analyzed for themes and subthemes.
    RESULTS: More than half (n = 11; 52.3%) of the assessed websites contained clinician testimonials. The mean (SD) DISCERN score was 33.14 (5.44). No website recorded the minimum of three JAMA benchmarks required to indicate reliability. The most common content themes related to quality-of-life impact and treatment duration. Just 8% of the statements within the websites were objectively true. The Pearson correlation coefficient indicated that the DISCERN scores were correlated with the accuracy-of-information scores (r = .83; P < .001).
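    The reported association is a standard Pearson correlation; a minimal sketch of this check with SciPy (the paired scores below are hypothetical, not the study's data):

        # Correlate DISCERN scores with accuracy-of-information scores
        from scipy.stats import pearsonr

        discern = [28, 31, 33, 35, 40, 27, 38]    # hypothetical website scores
        accuracy = [3, 4, 5, 6, 8, 3, 7]          # hypothetical accuracy scores

        r, p = pearsonr(discern, accuracy)
        print(f"r = {r:.2f}, p = {p:.3f}")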
    CONCLUSIONS: The quality and accuracy of information contained within the websites of the providers of marketed orthodontic products was poor. The combined use of DISCERN and the accuracy-of-information instrument may help overcome the shortcomings of each. Clinicians should check the accuracy of information on orthodontic product provider websites before adding links to those websites on their own sites.
    Keywords:  Advertising; Ethics; Internet; Marketing; Orthodontics; Quality of information
    DOI:  https://doi.org/10.2319/100423-672.1
  7. Digit Health. 2024 Jan-Dec;10: 20552076241248109
       Introduction: Autosomal dominant polycystic kidney disease (ADPKD) is the most common inherited kidney disease in adults. As a social media platform, YouTube has tremendous potential to both support and hinder public health efforts. The aim of this study was to assess the reliability and quality of the most viewed English-language YouTube videos on ADPKD.
    Methods: A YouTube search was conducted on 3 August 2023 using the keyword "ADPKD disease", and the top 200 videos were analyzed for relevance. Videos in the "Short" category, as well as videos that were duplicates, were not in English, lacked audio or visual content, or contained advertisements, were excluded. Two reviewers divided the 159 included videos into groups based on their source and content.
    Results: Of the 159 videos, 106 (66.7%) gave general information about the disease, 58 (36.5%) discussed medical treatment, 11 (6.9%) discussed surgical treatment, 30 (18.9%) included patient images and radiological images, and eight (5%) discussed the genetic and pathological features of the disease. Additionally, 16 (10.1%) videos fell into the "other" category. According to the Journal of the American Medical Association (JAMA), Quality Criteria for Consumer Health Information, and Global Quality Scale scoring systems, videos uploaded by health associations and foundations received the highest scores (3 (1-4), 54 (28-70), and 4 (1-5), respectively).
    Conclusion: Academic institutions and other official health organizations such as Health Associations/Foundations need to use YouTube more effectively to disseminate accurate, reliable and useful health-related information to society.
    Keywords:  Autosomal dominant polycystic kidney disease; YouTube; internet; scoring; video
    DOI:  https://doi.org/10.1177/20552076241248109
  8. Heliyon. 2024 Apr 15. 10(7): e29020
       Purpose: This study aimed to systematically evaluate the quality of content and information in videos related to gestational diabetes mellitus on Chinese social media platforms.
    Methods: Videos on three platforms (TikTok, Bilibili, and Weibo) were searched with the Chinese keyword "gestational diabetes mellitus", and the first 50 videos in the comprehensive ranking on each platform were included for subsequent analysis. Characteristic information about each video was collected, such as duration, number of days online, and numbers of likes, comments, and shares. DISCERN, the JAMA (Journal of the American Medical Association) Benchmark Criteria, and the GQS (Global Quality Score) were used to assess the quality of all videos. Finally, correlation analyses were performed among video features, video sources, DISCERN scores, and JAMA scores.
    Results: Ultimately, 135 videos were included in this study. The mean DISCERN total score was 31.84 ± 7.85, the mean JAMA score was 2.33 ± 0.72, and the mean GQS was 2.00 ± 0.40. Most of the videos (52.6%) were uploaded by independent medical professionals, and these videos had the shortest duration and time online (P < 0.001). Video source was associated with the numbers of "likes", "comments", and "shares" and with JAMA scores (P < 0.001), but not with DISCERN scores. Generally, the videos on TikTok, which had the shortest durations, received the most "likes", "comments", and "shares", but the overall quality of the videos on Weibo was higher.
    Conclusion: Although the majority of the videos were uploaded by independent medical professionals, the overall quality appeared to be poor. Therefore, more efforts and actions should be taken to improve the quality of videos related to gestational diabetes mellitus.
    Keywords:  Gestational diabetes mellitus; Health information; Medical education; Quality; Social media platform
    DOI:  https://doi.org/10.1016/j.heliyon.2024.e29020
  9. PeerJ. 2024; 12: e17215
       Background: Inflammatory back pain is a chronic condition with localized pain, particularly in the axial spine and sacroiliac joints, that is associated with morning stiffness and improves with exercise. YouTube is the second most frequently used social media platform for accessing health information. This study sought to investigate the quality and reliability of YouTube videos on inflammatory back pain (IBP).
    Methods: The study was designed as cross-sectional. A search was conducted on October 19, 2023, using the term "inflammatory back pain," and the first 100 videos that met the inclusion criteria were selected. Videos were included if they were in English, had audiovisual content, had a duration >30 s, were not duplicates, and had primary content related to IBP. Video parameters such as the number of likes, number of views, duration, and content category were assessed. The videos were assessed for reliability using the Journal of the American Medical Association (JAMA) Benchmark criteria and the DISCERN tool, and for quality using the Global Quality Score (GQS). Continuous variables were checked for normality of distribution using the Shapiro-Wilk and Kolmogorov-Smirnov tests. The Kruskal-Wallis and Mann-Whitney U tests were used to analyze continuous data, depending on the number of groups, and categorical data were analyzed using Pearson's chi-square test.
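    A rough sketch of this statistical pipeline in Python with SciPy (the group labels and GQS scores are hypothetical stand-ins, not study data):

        from scipy import stats

        # Hypothetical GQS scores for videos from two source groups
        gqs_medical = [4, 3, 5, 4, 4, 3, 5, 4]
        gqs_other = [2, 3, 2, 1, 3, 2, 2, 3]

        # Check normality first; if rejected, use nonparametric tests
        print(stats.shapiro(gqs_medical))
        # Two groups: Mann-Whitney U; three or more: Kruskal-Wallis
        print(stats.mannwhitneyu(gqs_medical, gqs_other))
        print(stats.kruskal(gqs_medical, gqs_other))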
    Results: Reliability assessment based on JAMA scores showed 21% of the videos to have high reliability. Quality assessment based on GQS results showed 19% of the videos to have high quality. JAMA, DISCERN, and GQS scores differed significantly by source of video (p < 0.001, < 0.001, and = 0.002, respectively). Video duration had a moderate positive correlation with scores from the GQS (r = 0.418, p < 0.001), JAMA (r = 0.484, p < 0.001), and modified DISCERN (r = 0.418, p < 0.001).
    Conclusion: The results of the present study showed that YouTube offers videos of low reliability and low quality on inflammatory back pain. Health authorities have a responsibility to protect public health and should take proactive steps regarding health information shared on social media platforms.
    Keywords:  Inflammatory back pain; Internet; YouTube; E-learning
    DOI:  https://doi.org/10.7717/peerj.17215
  10. Int J Pediatr Otorhinolaryngol. 2024 Apr 16. 180: 111955. pii: S0165-5876(24)00109-5. [Epub ahead of print]
       PURPOSE: Online resources are increasingly utilised by patients to guide their clinical decision making, as an alternative or supplement to the traditional clinician-patient relationship. YouTube, one of the most popular websites globally, is an online repository of user- and community-generated videos. We undertook a study to examine the quality of information presented in YouTube videos related to tonsillectomy.
    METHODS: We completed a systematic search of YouTube in May 2023 and identified 88 videos for inclusion in our study. Included videos were published in English, focussed on tonsillectomy and tonsillectomy recovery, and were greater than 2 min in length. We recorded video quality metrics, and two authors independently analysed the quality of information using three validated quality assessment tools described in the literature: the modified DISCERN, the Global Quality Score, and the JAMA Benchmark Criteria.
    RESULTS: The overall quality of the information was low, with mean quality scores of Modified DISCERN (1.8 ± 1.3), GQS (2.6 ± 1.2), and JAMA Benchmark Criteria (1.6 ± 0.7). Information published by medical sources, including medical professionals, healthcare organisations, and medical education channels, scored significantly higher than information from non-medical sources across all quality measures and was of moderate overall quality and usefulness: Modified DISCERN (2.5 ± 1.1 vs 0.8 ± 0.9, z = -6.0, p < 0.001), GQS (3.2 ± 1.0 vs 1.7 ± 0.9, z = -5.7, p < 0.001), and JAMA (1.9 ± 0.8 vs 1.1 ± 0.3, z = -5.2, p < 0.001). Videos published during or after 2018 scored higher on Modified DISCERN (z = -3.2, p = 0.001) but not on GQS or JAMA. Video quality metrics such as total view count, likes, comments, and channel subscriber count did not correlate with higher video quality. However, amongst videos published by authoritative medical sources, total view count correlated positively with Modified DISCERN quality scores (p = 0.037).
    CONCLUSION: The overall quality and usefulness of YouTube videos on tonsillectomy are low, but information published by authoritative medical sources scores significantly higher. Clinicians should be mindful of patients' increasing use of online information sources such as YouTube when counselling patients. Further research within the medical community is needed to create engaging, high-quality content that provides guidance for patients.
    Keywords:  DISCERN score; Health information; Patient information; Social media; YouTube
    DOI:  https://doi.org/10.1016/j.ijporl.2024.111955
  11. Health Promot Perspect. 2024 Mar;14(1): 61-69
       Background: This study investigated the online information-seeking behaviours of breast cancer patients at Jordan University Hospital, focusing on their dissatisfaction with available online health resources and its impact on their well-being and anxiety levels.
    Methods: Employing descriptive phenomenology and convenience sampling, we conducted five Skype-based focus groups, each with 4-6 breast cancer survivors, from March to July 2020. Data analysis was performed using NVivo, following Braun and Clarke's inductive thematic analysis framework.
    Results: The thematic analysis revealed critical insights into survivors' interactions with online cancer resources, identifying key subthemes such as the quality of online information, cyberchondriasis, health literacy and search strategies, the distress caused by counterproductive searches, and the tendency to avoid internet searches.
    Conclusion: The study underscores the challenges breast cancer survivors face in accessing online health information, especially in Arabic. It highlights the need to improve the quality and accessibility of these resources. Enhancing the cultural relevance of online materials and educating patients on effective information evaluation are crucial. These measures can significantly boost health literacy, mitigate anxiety, and provide better support for breast cancer survivors.
    Keywords:  Anxiety; Breast neoplasms; Cancer survivors; Health communication; Information seeking behaviour; Internet; Patient education as topic; Psychosocial factors
    DOI:  https://doi.org/10.34172/hpp.42682
  12. PLoS One. 2024; 19(4): e0300755
       INTRODUCTION: Coronary artery disease (CAD) has a high mortality rate worldwide, and continuous health behavior practice and careful management are required owing to risks such as rapid changes in symptoms and emergency hospitalization. The utilization of health-related information is an important factor for long-term disease management in patients with CAD. For this purpose, an understanding of health information-seeking behavior is needed first.
    METHODS: This study analyzed data from the 2021 Korea Medical Panel Survey; logistic regression analysis was conducted to identify the factors influencing the health information-seeking behavior of patients with CAD.
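    A minimal sketch of this kind of analysis, recovering odds ratios from the fitted logistic regression coefficients (the variables and data below are hypothetical stand-ins for the survey items):

        import numpy as np
        import statsmodels.api as sm

        rng = np.random.default_rng(0)
        literacy = rng.integers(0, 2, 200)             # 1 = sufficient health literacy
        prob = 0.3 + 0.4 * literacy                    # seeking probability by group
        seeking = (rng.random(200) < prob).astype(int) # 1 = seeks health information

        X = sm.add_constant(literacy)                  # intercept + predictor
        fit = sm.Logit(seeking, X).fit(disp=0)

        print(np.exp(fit.params))                      # odds ratios
        print(np.exp(fit.conf_int()))                  # 95% CIs for the odds ratios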
    RESULTS: The health information-seeking behavior of patients with CAD differed according to demographic characteristics, and differences in preferred information use were confirmed. Insufficient health literacy was identified as a major reason why patients with CAD did not engage in health information-seeking behavior (OR, 0.17; 95% CI, 0.09-0.33; p < 0.001).
    CONCLUSION: This study suggests that to improve health information-seeking behaviors, the application of education and intervention programs to increase the level of health literacy is necessary.
    DOI:  https://doi.org/10.1371/journal.pone.0300755
  13. J Med Internet Res. 2024 Apr 17. 26: e56655
       BACKGROUND: Although patients have easy access to their electronic health records and laboratory test result data through patient portals, laboratory test results are often confusing and hard to understand. Many patients turn to web-based forums or question-and-answer (Q&A) sites to seek advice from their peers. The quality of answers from social Q&A sites on health-related questions varies significantly, and not all responses are accurate or reliable. Large language models (LLMs) such as ChatGPT have opened a promising avenue for patients to have their questions answered.
    OBJECTIVE: We aimed to assess the feasibility of using LLMs to generate relevant, accurate, helpful, and unharmful responses to laboratory test-related questions asked by patients and identify potential issues that can be mitigated using augmentation approaches.
    METHODS: We collected laboratory test result-related Q&A data from Yahoo! Answers and selected 53 Q&A pairs for this study. Using the LangChain framework and ChatGPT web portal, we generated responses to the 53 questions from 5 LLMs: GPT-4, GPT-3.5, LLaMA 2, MedAlpaca, and ORCA_mini. We assessed the similarity of their answers using standard Q&A similarity-based evaluation metrics, including Recall-Oriented Understudy for Gisting Evaluation, Bilingual Evaluation Understudy, Metric for Evaluation of Translation With Explicit Ordering, and Bidirectional Encoder Representations from Transformers Score. We used an LLM-based evaluator to judge whether a target model had higher quality in terms of relevance, correctness, helpfulness, and safety than the baseline model. We performed a manual evaluation with medical experts for all the responses to 7 selected questions on the same 4 aspects.
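    A minimal sketch of this style of similarity scoring with the Hugging Face evaluate library (the reference and candidate answers are invented; the study's exact pipeline is not reproduced here):

        # pip install evaluate rouge_score nltk bert_score
        import evaluate

        reference = ["A mildly elevated TSH with a normal T4 can suggest "
                     "subclinical hypothyroidism."]
        candidate = ["High TSH and normal T4 levels often indicate "
                     "subclinical hypothyroidism."]

        # n-gram overlap metrics: ROUGE, BLEU, METEOR
        for name in ["rouge", "bleu", "meteor"]:
            metric = evaluate.load(name)
            print(name, metric.compute(predictions=candidate, references=reference))

        # Embedding-based similarity: BERTScore
        bertscore = evaluate.load("bertscore")
        print(bertscore.compute(predictions=candidate, references=reference, lang="en"))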
    RESULTS: Regarding the similarity of the responses from the 4 LLMs, with the GPT-4 output used as the reference answer, the responses from GPT-3.5 were the most similar, followed by those from LLaMA 2, ORCA_mini, and MedAlpaca. Human answers from the Yahoo data scored the lowest and were thus the least similar to the GPT-4-generated answers. Both the win-rate evaluation and the medical expert evaluation showed that GPT-4's responses achieved better scores than all the other LLM responses and the human responses on all 4 aspects (relevance, correctness, helpfulness, and safety). However, LLM responses occasionally suffered from a lack of interpretation in the patient's medical context, incorrect statements, and a lack of references.
    CONCLUSIONS: By evaluating LLM-generated responses to patients' laboratory test result-related questions, we found that, compared with the other 4 LLMs and the human answers from a Q&A website, GPT-4's responses were more accurate, helpful, relevant, and safe. However, there were cases in which GPT-4's responses were inaccurate and not individualized. We identified a number of ways to improve the quality of LLM responses, including prompt engineering, prompt augmentation, retrieval-augmented generation, and response evaluation.
    Keywords:  ChatGPT; generative AI; generative artificial intelligence; laboratory test results; large language models; natural language processing; patient education
    DOI:  https://doi.org/10.2196/56655