bims-librar Biomed News
on Biomedical librarianship
Issue of 2023‒11‒05
29 papers selected by
Thomas Krichel, Open Library Society



  1. J Clin Epidemiol. 2023 Oct 30. pii: S0895-4356(23)00277-9. [Epub ahead of print]
      Performing a systematic search of a methods topic (e.g., "risk of bias", "subgroup analysis") in biomedical databases such as MEDLINE or Embase can be challenging. In this commentary, we address common search-related challenges, including inconsistent terminology for methods and suboptimal indexing. We suggest that reviewers addressing methods topics, compared to clinical topics, may start with specific, methods-oriented journals; invest extra time to scrutinize index terms and identify alternative terms; try citation search and machine learning assisted screening; and anticipate lower sensitivity and precision.
    Keywords:  Literature search; Methods; Methods review; Search terms; Statistics; Systematic review
    DOI:  https://doi.org/10.1016/j.jclinepi.2023.10.017
  2. Med Ref Serv Q. 2023 Oct-Dec;42(4): 346-351
      An electronic table of contents (eToC) program was implemented by a medical librarian more than 11 years ago at a pediatric hospital (now a clinical and academic health system) with the goal of saving healthcare providers time and assisting them in staying current on the literature in their specific disciplines and/or general medicine. The eToC program remains a highly popular service, with more than 180 clinicians participating. This paper describes the implementation and maintenance of the program.
    Keywords:  Health science librarians; health science libraries; hospital libraries; medical librarians; table of contents
    DOI:  https://doi.org/10.1080/02763869.2023.2260721
  3. Med Ref Serv Q. 2023 Oct-Dec;42(4): 352-369
      This study reports on a 2022 survey of pediatric hospital librarians in the U.S. and Canada to assess the status of staffing, resources, and services in their libraries. The report compares the data against the MLA Hospital Library Caucus Standards (2022) and the Canadian Hospital Library Association Standards (2020). The report also provides a comparison of the libraries' rankings using the Regional U.S. News & World Report Best Children's Hospitals and Magnet status. This approach is intended to determine how librarians and library services at hospitals that are recognized by the above programs differ from those that are not recognized.
    Keywords:  Accreditation; hospital librarianship; library services; pediatric hospital libraries; standards
    DOI:  https://doi.org/10.1080/02763869.2023.2258054
  4. Med Ref Serv Q. 2023 Oct-Dec;42(4): 381-386
      The article explores "prompt engineer" as a professional title extending beyond the field of generative AI development, comparing certain tasks, such as constructing search queries, to the work of librarians. Librarians can work with AI models alongside traditional literature databases, while recognizing the distinct nature of these information resources. Careful consideration should be given to the specific skills worth acquiring to improve work efficiency, as well as to development trends in generative AI and library science.
    Keywords:  Generative AI; large language models; literature databases; literature search; prompt engineering
    DOI:  https://doi.org/10.1080/02763869.2023.2250680
  5. J Surg Res. 2023 Oct 30. pii: S0022-4804(23)00505-X. [Epub ahead of print] 294: 220-227
      INTRODUCTION: Clinical publications use mortality as a hard end point. It is unknown how many patient deaths are under-reported in institutional databases. The objective of this study was to query mortality in our patient cohort from our data warehouse and compare these deaths to those identified in different databases.
    METHODS: We passed the first/last name and date of birth of 134 patients through online mortality search engines (Find a Grave Index, US Cemetery and Funeral Home Collection, etc.) to assess their ability to capture patient deaths and compared them to deaths recorded in our institutional data warehouse.
    RESULTS: Our institutional data warehouse found approximately one-third of the total patient mortalities. After the Social Security Death Index, we found that the Find a Grave Index captured the most mortalities missed by the institutional data warehouse. These results highlight the advantages of incorporating readily available search engines into institutional data warehouses for the accurate collection of patient mortalities, particularly those that occur outside of index operative admission.
    CONCLUSIONS: The incorporation of the mortality search engines significantly augmented the capture of patient deaths. Our approach may be useful for tailored patient outreach and reporting mortalities with institutional data.
    Keywords:  Database; EMR; Mortality; NDI; SSDI
    DOI:  https://doi.org/10.1016/j.jss.2023.09.065
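The multi-source matching described in entry 5 can be sketched as a simple record-linkage step. The toy Python example below (names, dates, and source contents are all hypothetical, not the study's data) unions death records keyed on name and date of birth, then counts what the warehouse alone would have missed.

```python
# Toy record-linkage sketch (hypothetical data, not the study's cohort):
# union death records from several sources keyed on (first, last, DOB).
from datetime import date

def death_key(first, last, dob):
    """Normalize a patient identity into a comparable key."""
    return (first.strip().lower(), last.strip().lower(), dob)

# Stand-ins for the institutional warehouse and two external indexes.
warehouse = {death_key("Ana", "Diaz", date(1950, 3, 2))}
find_a_grave = {death_key("Ana", "Diaz", date(1950, 3, 2)),
                death_key("Bo", "Lee", date(1948, 7, 9))}
ssdi = {death_key("Cy", "Ng", date(1962, 1, 5))}

combined = warehouse | find_a_grave | ssdi      # all known deaths
missed_by_warehouse = combined - warehouse      # captured only externally
print(len(combined), len(missed_by_warehouse))  # → 3 2
```

Real linkage would need fuzzier matching (nicknames, transposed dates), but the set-union idea is the core of augmenting a warehouse with external indexes.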
  6. Med Ref Serv Q. 2023 Oct-Dec;42(4): 370-377
      For more than 25 years, the National Resource Center on Domestic Violence has operated VAWnet, a freely available, online network focused on violence against women and other forms of gender-based violence. This column will provide an overview of the resources available from VAWnet, including a sample search that demonstrates how to access the resources available within as well as a discussion of how to effectively browse the thousands of materials available on VAWnet that provide life-saving information on gender-based violence and related issues.
    Keywords:  Domestic violence; gender-based violence; online database; review
    DOI:  https://doi.org/10.1080/02763869.2023.2248817
  7. Health Info Libr J. 2023 Oct 30.
      BACKGROUND: The COVID-19 pandemic has triggered a significant increase in academic research in the realm of social sciences. As such, there is an increasing need for the scientific community to adopt effective and efficient methods to examine the potential role and contribution of social sciences in the fight against COVID-19.
    OBJECTIVES: This study aims to identify the key topics and explore publishing trends in social science research pertaining to COVID-19 via automated literature analysis.
    METHODS: The automated literature analysis employs keyword analysis and a topic modelling technique, specifically Latent Dirichlet Allocation, to highlight the most relevant research terms, overarching research themes and research trends within the realm of social science research on COVID-19.
    RESULTS: The focus of research and topics were derived from 9733 full-text academic papers. The bulk of social science research on COVID-19 centres on the following themes: 'Clinical Treatment', 'Epidemic Crisis', 'Mental Influence', 'Impact on Students', 'Lockdown Influence' and 'Impact on Children'.
    CONCLUSION: This study adds to our understanding of key topics in social science research on COVID-19. The automated literature analysis presented is particularly useful for librarians and information specialists keen to explore the role and contributions of social science topics in the context of pandemics.
    Keywords:  Artificial Intelligence (AI); bibliometrics; pandemic; review, literature; social sciences
    DOI:  https://doi.org/10.1111/hir.12508
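For readers curious about the topic-modelling step in entry 7, here is a minimal collapsed-Gibbs LDA sketch in pure Python. It illustrates the technique only; the study's actual pipeline, corpus, and parameters are not given in the abstract, and the toy documents below are invented.

```python
# Minimal collapsed-Gibbs LDA sketch (illustration of the technique only).
import random
from collections import defaultdict

def lda_top_words(docs, n_topics, iters=200, alpha=0.1, beta=0.01, seed=0):
    rng = random.Random(seed)
    V = len({w for doc in docs for w in doc})            # vocabulary size
    z = [[rng.randrange(n_topics) for _ in doc] for doc in docs]
    ndk = [[0] * n_topics for _ in docs]                 # doc-topic counts
    nkw = [defaultdict(int) for _ in range(n_topics)]    # topic-word counts
    nk = [0] * n_topics                                  # topic totals
    for d, doc in enumerate(docs):
        for i, w in enumerate(doc):
            k = z[d][i]; ndk[d][k] += 1; nkw[k][w] += 1; nk[k] += 1
    for _ in range(iters):                               # Gibbs sweeps
        for d, doc in enumerate(docs):
            for i, w in enumerate(doc):
                k = z[d][i]
                ndk[d][k] -= 1; nkw[k][w] -= 1; nk[k] -= 1
                weights = [(ndk[d][t] + alpha) * (nkw[t][w] + beta) / (nk[t] + V * beta)
                           for t in range(n_topics)]
                k = rng.choices(range(n_topics), weights=weights)[0]
                z[d][i] = k
                ndk[d][k] += 1; nkw[k][w] += 1; nk[k] += 1
    # report the most frequent word per inferred topic
    return [max(nkw[t], key=nkw[t].get) if nk[t] else None for t in range(n_topics)]

# Two invented "documents" with clearly separable themes.
docs = [["lockdown", "school", "students"] * 5,
        ["virus", "treatment", "clinical"] * 5]
print(lda_top_words(docs, n_topics=2))
```

Production analyses use tuned library implementations with many more iterations and hyperparameter optimization; the sketch only shows the sampling loop at the heart of the method.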
  8. Med Ref Serv Q. 2023 Oct-Dec;42(4): 315-329
      Consumers increasingly search for health information online but can become frustrated in their efforts. Here, public libraries can play an important role as trusted sources. A random sample of 200 U.S. public libraries was used to identify the availability of online consumer health information (CHI) and related characteristics. We found that 110 libraries provided online CHI. The average site provided 28 sources and required two clicks to reach the information. About a third of libraries collaborated by sharing sources or linking to existing content. Collaboration may provide a way to expand the availability and quality of online CHI on public library websites.
    Keywords:  CHI; consumer health information; online; public libraries; website
    DOI:  https://doi.org/10.1080/02763869.2023.2261792
  9. J Med Internet Res. 2023 Oct 30. 25: e49324
      BACKGROUND: As advancements in artificial intelligence (AI) continue, large language models (LLMs) have emerged as promising tools for generating medical information. Their rapid adaptation and potential benefits in health care require rigorous assessment in terms of the quality, accuracy, and safety of the generated information across diverse medical specialties.
    OBJECTIVE: This study aimed to evaluate the performance of 4 prominent LLMs, namely, Claude-instant-v1.0, GPT-3.5-Turbo, Command-xlarge-nightly, and Bloomz, in generating medical content spanning the clinical specialties of ophthalmology, orthopedics, and dermatology.
    METHODS: Three domain-specific physicians evaluated the AI-generated therapeutic recommendations for a diverse set of 60 diseases. The evaluation criteria involved the mDISCERN score, correctness, and potential harmfulness of the recommendations. ANOVA and pairwise t tests were used to explore discrepancies in content quality and safety across models and specialties. Additionally, using the capabilities of OpenAI's most advanced model, GPT-4, an automated evaluation of each model's responses to the diseases was performed using the same criteria and compared to the physicians' assessments through Pearson correlation analysis.
    RESULTS: Claude-instant-v1.0 emerged with the highest mean mDISCERN score (3.35, 95% CI 3.23-3.46). In contrast, Bloomz lagged with the lowest score (1.07, 95% CI 1.03-1.10). Our analysis revealed significant differences among the models in terms of quality (P<.001). Evaluating their reliability, the models displayed strong contrasts in their falseness ratings, with variations both across models (P<.001) and specialties (P<.001). Distinct error patterns emerged, such as confusing diagnoses; providing vague, ambiguous advice; or omitting critical treatments, such as antibiotics for infectious diseases. Regarding potential harm, GPT-3.5-Turbo was found to be the safest, with the lowest harmfulness rating. All models lagged in detailing the risks associated with treatment procedures, explaining the effects of therapies on quality of life, and offering additional sources of information. Pearson correlation analysis underscored a substantial alignment between physician assessments and GPT-4's evaluations across all established criteria (P<.01).
    CONCLUSIONS: This study, while comprehensive, was limited by the involvement of a select number of specialties and physician evaluators. The straightforward prompting strategy ("How to treat…") and the assessment benchmarks, initially conceptualized for human-authored content, might have potential gaps in capturing the nuances of AI-driven information. The LLMs evaluated showed a notable capability in generating valuable medical content; however, evident lapses in content quality and potential harm signal the need for further refinements. Given the dynamic landscape of LLMs, this study's findings emphasize the need for regular and methodical assessments, oversight, and fine-tuning of these AI tools to ensure they produce consistently trustworthy and clinically safe medical advice. Notably, the introduction of an auto-evaluation mechanism using GPT-4, as detailed in this study, provides a scalable, transferable method for domain-agnostic evaluations, extending beyond therapy recommendation assessments.
    Keywords:  ChatGPT; LLM; accuracy; artificial intelligence; chatbot; chatbots; dermatology; health information; large language models; medical advice; medical information; ophthalmology; orthopedic; orthopedics; quality; recommendation; recommendations; reliability; reliable; safety; therapy
    DOI:  https://doi.org/10.2196/49324
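The physician-versus-GPT-4 comparison in entry 9 rests on Pearson correlation. A small, self-contained sketch (the scores below are invented, not the study's ratings) shows the computation.

```python
# Pearson correlation sketch (hypothetical scores, not the study's data).
from math import sqrt

def pearson_r(xs, ys):
    """Pearson's r: covariance divided by the product of standard deviations."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

physician = [3.0, 2.5, 4.0, 1.5, 3.5, 2.0]   # hypothetical human mDISCERN-style scores
gpt4      = [3.2, 2.4, 3.8, 1.7, 3.6, 2.1]   # hypothetical automated scores
print(round(pearson_r(physician, gpt4), 3))
```

A value near 1 indicates the automated rater tracks the humans closely, which is the alignment the study reports.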
  10. Cureus. 2023 Sep;15(9): e46213
      BACKGROUND: Due to their ability to mimic human responses, anthropomorphic entities such as ChatGPT have a higher likelihood of gaining people's trust. This study aimed to evaluate the quality of information generated by ChatGPT-4, as an artificial intelligence (AI) chatbot, on periodontal disease (PD) using the DISCERN instrument.
    METHODS: Using Google Bard, the topics related to PD that had the highest search volume according to Google Trends were identified. An interactive dialogue was created by placing the topics in the standard question pattern. As a patient with PD, detailed information was requested from ChatGPT-4 regarding the relevant topics. The 'regenerate response' feature was not employed; as each topic was entered as a new question prompt, the initial response generated by ChatGPT-4 was used. The response to each question was independently assessed and rated by two experienced raters using the DISCERN instrument.
    RESULTS: Based on the total DISCERN scores, the quality of the responses generated by ChatGPT-4 was 'good', except for two responses that rater-2 scored as 'fair'. The 'treatment choices' section also received significantly lower scores from both raters than the other sections. In both the weighted kappa and Krippendorff alpha measures, the strength of agreement varied from 'substantial' to 'almost perfect', and the correlation between values was statistically significant.
    CONCLUSION: Despite some limitations in providing complete treatment choice information according to the DISCERN instrument, it is considered valuable for PD patients seeking information, as it consistently offered accurate guidance in the majority of responses.
    Keywords:  artificial intelligence; chatbot; health information management; oral health; periodontal disease
    DOI:  https://doi.org/10.7759/cureus.46213
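Weighted kappa, used in entry 10 to quantify inter-rater agreement, can be computed directly. This sketch implements quadratic-weighted Cohen's kappa for two raters on an ordinal scale; the ratings are hypothetical, not the study's DISCERN scores.

```python
# Quadratic-weighted Cohen's kappa sketch (hypothetical ratings).
def weighted_kappa(a, b, categories):
    """kappa = 1 - (observed weighted disagreement / expected weighted disagreement)."""
    n = len(categories)
    idx = {c: i for i, c in enumerate(categories)}
    # Quadratic disagreement weights: 0 on the diagonal, 1 at maximum distance.
    w = [[((i - j) / (n - 1)) ** 2 for j in range(n)] for i in range(n)]
    obs = [[0.0] * n for _ in range(n)]
    for x, y in zip(a, b):
        obs[idx[x]][idx[y]] += 1
    total = len(a)
    pa = [sum(obs[i][j] for j in range(n)) / total for i in range(n)]  # rater A marginals
    pb = [sum(obs[i][j] for i in range(n)) / total for j in range(n)]  # rater B marginals
    num = sum(w[i][j] * obs[i][j] / total for i in range(n) for j in range(n))
    den = sum(w[i][j] * pa[i] * pb[j] for i in range(n) for j in range(n))
    return 1 - num / den

# Two hypothetical raters scoring eight items on a 1-5 ordinal scale.
r1 = [5, 4, 4, 3, 5, 2, 4, 3]
r2 = [5, 4, 3, 3, 5, 2, 4, 4]
print(round(weighted_kappa(r1, r2, [1, 2, 3, 4, 5]), 3))  # → 0.867
```

Quadratic weights penalize a 5-versus-1 disagreement far more than a 4-versus-3 one, which is why weighted kappa suits ordinal instruments like DISCERN better than unweighted agreement.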
  11. Aesthetic Plast Surg. 2023 Oct 30.
      BACKGROUND: Breast implant-associated anaplastic large cell lymphoma (BIA-ALCL) is a rare complication associated with the use of breast implants. Breast implant illness (BII) is another potentially concerning issue related to breast implants. This study aims to assess the quality of ChatGPT as a potential source of patient education by comparing the answers to frequently asked questions on BIA-ALCL and BII provided by ChatGPT and Google.
    METHODS: The Google and ChatGPT answers to the 10 most frequently asked questions on the search terms "breast implant associated anaplastic large cell lymphoma" and "breast implant illness" were recorded. Five blinded breast plastic surgeons were then asked to grade the quality of the answers according to the Global Quality Score (GQS). A Wilcoxon paired t-test was performed to evaluate the difference in GQS ratings for Google and ChatGPT answers. The sources provided by Google and ChatGPT were also categorized and assessed.
    RESULTS: In a comparison of answers provided by Google and ChatGPT on BIA-ALCL and BII, ChatGPT significantly outperformed Google. For BIA-ALCL, Google's average score was 2.72 ± 1.44, whereas ChatGPT scored an average of 4.18 ± 1.04 (p < 0.01). For BII, Google's average score was 2.66 ± 1.24, while ChatGPT scored an average of 4.28 ± 0.97 (p < 0.01). The superiority of ChatGPT's responses was attributed to their comprehensive nature and recognition of existing knowledge gaps. However, some of ChatGPT's answers had inaccessible sources.
    CONCLUSION: ChatGPT outperforms Google in providing high-quality answers to commonly asked questions on BIA-ALCL and BII, highlighting the potential of AI technologies in patient education.
    LEVEL OF EVIDENCE: Level III, comparative study.
    Keywords:  Artificial intelligence; Breast; Breast implant associated anaplastic large cell lymphoma; ChatGPT; Google; Implant; Implant illness; Patient education
    DOI:  https://doi.org/10.1007/s00266-023-03713-4
  12. Urol Pract. 2023 Nov 01. doi: 10.1097/UPJ.0000000000000490
      INTRODUCTION: ChatGPT is an artificial intelligence (AI) platform available to patients seeking medical advice. Traditionally, urology patients consulted official provider-created materials, particularly those of the Urology Care Foundation (UCF). Today, men increasingly go online due to the rising costs of healthcare and the stigma surrounding sexual health. Online health information is largely inaccessible to laypersons as it exceeds the recommended American sixth-to-eighth-grade reading level. We conducted a comparative assessment of patient education materials generated by ChatGPT versus UCF regarding men's health conditions.
    METHODS: All 6 UCF men's health resources were identified. ChatGPT responses were generated using patient questions obtained from UCF. Adjusted ChatGPT (ChatGPT-a) responses were generated by prompting "Explain it to me like I am in sixth grade." Textual analysis was performed using sentence, word, syllable, and complex word counts. Six validated formulae were used for readability analysis. Two physicians independently scored responses for accuracy, comprehensiveness, and understandability. Statistical analysis involved the Wilcoxon matched-pairs test.
    RESULTS: ChatGPT responses were longer and more complex. Both UCF and ChatGPT failed official readability standards, although ChatGPT performed significantly worse across all 6 topics (all P < .001). Conversely, ChatGPT-a readability typically surpassed UCF, even meeting the recommended level for 2 topics. Qualitatively, UCF and ChatGPT had comparable accuracy, although ChatGPT had better comprehensiveness and worse understandability.
    CONCLUSION: When comparing readability, ChatGPT-generated education is less accessible than provider-written content, although neither meets the recommended level. Our analysis indicates that specific AI prompts can simplify educational materials to meet national standards and accommodate individual literacy.
    Keywords:  ChatGPT; artificial intelligence; health literacy; men’s health; patient education
    DOI:  https://doi.org/10.1097/UPJ.0000000000000490
  13. Eur Arch Otorhinolaryngol. 2023 Nov 02.
      PURPOSE: To perform the first head-to-head comparative evaluation of patient education material for obstructive sleep apnoea generated by two artificial intelligence chatbots, ChatGPT and its primary rival Google Bard.
    METHODS: Fifty frequently asked questions on obstructive sleep apnoea in English were extracted from the patient information webpages of four major sleep organizations and categorized as input prompts. ChatGPT and Google Bard responses were selected and independently rated using the Patient Education Materials Assessment Tool-Printable (PEMAT-P) Auto-Scoring Form by two otolaryngologists, with a Fellowship of the Royal College of Surgeons (FRCS) and a special interest in sleep medicine and surgery. Responses were subjectively screened for any incorrect or dangerous information as a secondary outcome. The Flesch-Kincaid Calculator was used to evaluate the readability of responses for both ChatGPT and Google Bard.
    RESULTS: A total of 46 questions were curated and categorized into three domains: condition (n = 14), investigation (n = 9) and treatment (n = 23). Understandability scores for ChatGPT versus Google Bard on the various domains were as follows: condition 90.86% vs. 76.32% (p < 0.001); investigation 89.94% vs. 71.67% (p < 0.001); treatment 90.78% vs. 73.74% (p < 0.001). Actionability scores for ChatGPT versus Google Bard on the various domains were as follows: condition 77.14% vs. 51.43% (p < 0.001); investigation 72.22% vs. 54.44% (p = 0.05); treatment 73.04% vs. 54.78% (p = 0.002). The mean Flesch-Kincaid Grade Level for ChatGPT was 9.0 and for Google Bard was 5.9. No incorrect or dangerous information was identified in any of the generated responses from either ChatGPT or Google Bard.
    CONCLUSION: Evaluation of ChatGPT and Google Bard patient education material for OSA indicates the former to offer superior information across several domains.
    Keywords:  Artificial intelligence; ChatGPT; Google Bard; Large language models; Obstructive sleep apnoea; Patient education material
    DOI:  https://doi.org/10.1007/s00405-023-08319-9
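Several entries in this issue (13, 17, 19, 20) rely on the Flesch-Kincaid Grade Level. A rough sketch of the formula follows; the naive vowel-group syllable counter is only an approximation of the validated calculators these studies used, and the sample sentences are invented.

```python
# Rough Flesch-Kincaid Grade Level sketch (approximate syllable counting;
# not the validated tools used in the studies above).
import re

def count_syllables(word):
    """Approximate: count vowel groups; drop one for a silent trailing 'e'."""
    word = word.lower()
    n = len(re.findall(r"[aeiouy]+", word))
    if word.endswith("e") and n > 1:
        n -= 1
    return max(n, 1)

def fk_grade(text):
    """FKGL = 0.39*(words/sentences) + 11.8*(syllables/words) - 15.59."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return 0.39 * len(words) / len(sentences) + 11.8 * syllables / len(words) - 15.59

simple = "Sleep apnoea stops your breathing at night. A doctor can test you for it."
dense = ("Obstructive sleep apnoea is characterised by recurrent pharyngeal "
         "collapse producing intermittent nocturnal hypoxaemia and fragmentation.")
print(round(fk_grade(simple), 1), round(fk_grade(dense), 1))
```

The short-sentence, short-word passage scores far below the clinical-register one, which is exactly the gap these studies measure against the recommended sixth-grade level.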
  14. Cureus. 2023 Sep;15(9): e45986
      The Academic Life in Emergency Medicine (ALiEM) Approved Instructional Resources (AIR) Series was created in 2014 to address the Free Open Access Medical Education (FOAM) movement's decentralized nature and lack of inherent peer review. The AIR series provides a topic-based, curated list of online educational content vetted by academic emergency medicine (EM) faculty that meets individualized interactive instruction criteria for EM trainees. Relevant FOAM resources were identified from the top 50 FOAM websites using the Social Media Index and then scored by EM faculty using a validated instrument to identify the highest quality posts related to a topic. This article reviews FOAM resources pertaining to EM procedures that were labeled as an "Approved Instructional Resource" or "Honorable Mention" using the AIR series methodology.
    Keywords:  emergency medicine; free open access medical education; lumbar puncture (lp); medical education; procedures; regional anesthesia; regional nerve blocks; wound closure; laryngoscopy
    DOI:  https://doi.org/10.7759/cureus.45986
  15. Med Ref Serv Q. 2023 Oct-Dec;42(4): 330-345
      Librarians can participate in the innovative field of graphic medicine by developing a collection of this genre. To assess the appropriateness of a graphic medicine collection in a university health science library, this study assessed knowledge of and usage of graphic medicine materials, as well as the materials' perceived utility and effectiveness. Given that responses suggested that graphic medicine resources can be useful to educational and clinical initiatives, it is reasonable for health science libraries to collect in this area. Further research in a practical setting can help illuminate the true effectiveness of graphic medicine materials in these realms.
    Keywords:  Collection development; graphic medicine; health sciences; librarianship
    DOI:  https://doi.org/10.1080/02763869.2023.2260674
  16. Cureus. 2023 Sep;15(9): e45984
      Introduction: Brain arteriovenous malformations (AVMs) are vascular deformities created by improper connections between arteries and veins, most commonly in the brain and spinal cord. The management is complex and patient-dependent; further understanding of patient education activities is imperative. Internet access has become more ubiquitous, allowing patients to utilize a large database of medical information online. Using Google Trends (GT) (Google LLC, Mountain View, CA, USA), one can see the public interest in a particular topic over time. Further, when presented with numerous search results, patients may not be able to identify the highest-yielding resources, making objective measures of information quality and readability imperative.
    Methods: A GT analysis was conducted for "hereditary hemorrhagic telangiectasia," "cerebral aneurysm," and "arteriovenous malformation". These relative search volumes (RSV) were compared with the 2017 to 2019 annual USA AVM diagnosis quantity for correlation. These RSVs were also compared with the 2017 to 2019 annual USA deaths due to cerebral hemorrhagic conditions. One search was conducted for "brain arteriovenous malformation". Since most users looking for health information online use only the first page of sources, the quality and readability analyses were limited to the first page of results on Google search. Five quality tools and six readability formulas were used.
    Results: Pearson's correlation coefficients showed positive correlations between USA AVM RSVs and annual AVM deaths per capita from 2017 to 2019 (R2=0.932). The AVM annual diagnosis quantity and AVM RSVs showed a strong positive correlation as well (R2=0.998). Hereditary hemorrhagic telangiectasia and cerebral aneurysms had strong positive correlations between their RSVs and their corresponding annual diagnoses in the 2017 to 2019 time period (R2=0.982, R2=0.709). One-way ANOVA, for USA's 2004 to 2021 AVM RSVs and 2004 to 2019 deaths per capita, displayed no month-specific statistically significant repeating pattern (all p>0.483). The DISCERN tool had four websites that qualified as "poor" and five as "good." The average score for the tool was "good." The Journal of the American Medical Association (JAMA) benchmark scores were very low on average, as four websites achieved zero points. There was a wide variance in the currency, relevance, authority, accuracy, and purpose (CRAAP) scores, indicating an inconsistent level of webpage reliability across results. The patient education materials assessment tool (PEMAT) understandability (86.6%) showed much higher scores than the PEMAT actionability (54.6%). No readability score averaged at or below the American Medical Association (AMA)-recommended sixth-grade reading level.
    Conclusion: These GT correlations may be due to patients and families with new diagnoses researching those same conditions online. The seasonality results reflect that no prior research has detected seasonality for AVM diagnosis or presentation. The quality study showed a wide variance in website ethics, treatment information quality, website/author qualifications, and actionable next steps regarding AVMs. Overall, this study showed that patients are routinely attempting to access information regarding these intracranial conditions, but the information available, specifically regarding AVMs, is not routinely reliable and the reading level required to understand them is too high.
    Keywords:  arteriovenous malformations; cerebral aneurysms; google trends; hereditary hemorrhagic telangiectasias; quality of information; readability; seasonality
    DOI:  https://doi.org/10.7759/cureus.45984
  17. Cureus. 2023 Sep;15(9): e46263
      Background: A dental implant is one of the most commonly used treatments to replace missing teeth. A reasonable number of implant cases necessitate using a bone graft before or at the time of implant placement. This study aims to evaluate the quality and readability of online patient-centered information about implant bone grafts.
    Methodology: This cross-sectional study used the Google, Yahoo, and Bing search engines. The keywords were entered to screen 900 websites. The DISCERN, Journal of the American Medical Association (JAMA), and Health on the Net (HON) code tools evaluated the included websites for quality. The Flesch reading-ease score (FRES), Flesch-Kincaid grade level, and simple measure of gobbledygook tests measured readability. Statistical analysis was done using SPSS version 25 (IBM Corp., Armonk, NY, USA).
    Results: A total of 161 websites were included; 65 (40.4%) of the included websites belonged to a university or medical center. Only five (3.1%) websites were exclusively related to dental implant treatments. DISCERN showed moderate quality for 82 (50.9%) websites. There was a statistical difference between commercial and non-profit organization websites. In the JAMA evaluation, currency was the criterion most commonly achieved, in 67 (41.6%) websites. For the HON code, four (2.5%) websites were certified. Based on FRES, the most common readability category was "fair difficult," accounting for 64 (39.8%) websites, followed by "standard" in 56 (34.8%) websites.
    Conclusions: The study findings suggest that English-language patient-centered information about implant bone grafts is challenging to comprehend and of low quality. Hence, there is a need to establish websites that provide trustworthy, high-quality information on implant bone grafts.
    Keywords:  dental implant; discern; jama benchmark; patient education; ridge augmentation; web-based knowledge
    DOI:  https://doi.org/10.7759/cureus.46263
  18. J Curr Glaucoma Pract. 2023 Jul-Sep;17(3): 141-148
      Aim: In this study, we analyze the content quality and characteristics of the most viewed search results on various internet platforms related to lifestyle measures for patients with glaucoma.
    Materials and methods: In this internet-based cross-sectional study, we used the keyword combinations "glaucoma" "lifestyle" and "glaucoma" "exercise" on the most popular internet platforms: Google, Facebook, YouTube, and Reddit. The top 30 Google search results for each of the keyword combinations were identified. We also assessed the first 30 videos on YouTube and Facebook Watch, the first 30 Reddit posts and the first 30 Google images for each of the keyword combinations. The quality of content from the platforms was evaluated by three independent reviewers using the well-established Sandvik score, Health on Net (HON) code, and risk score for different uploaders. The quality of content regarding lifestyle measures in glaucoma uploaded by healthcare professionals (HCPs) was further evaluated.
    Results: The established criteria resulted in 48 websites from the Google search engine, 22 videos from YouTube, 37 posts from Reddit, and 28 videos from Facebook Watch, which were included in the final analysis. The mean Sandvik scores were 11.14 ± 1.8 (Google webpages), 10.4 ± 2.19 (YouTube videos), 10.54 ± 2.21 (Facebook Watch), and 4.24 ± 1.18 (Reddit). The mean risk scores were 0.22 ± 0.68 (YouTube videos), 0.18 ± 0.47 (Facebook Watch), and 0.11 ± 0.31 (Reddit). The mean HON code scores were 5.45 ± 1.62 (YouTube), 6.55 ± 1.44 (Google webpages), 5.29 ± 1.04 (Facebook Watch), and 8.27 ± 3.05 (Reddit). The content uploaded by HCPs was primarily from ophthalmologists and had significantly (p < 0.05) higher content quality scores. The majority of the content recommended aerobic exercise as a lifestyle measure in patients with glaucoma as an adjuvant to medical and surgical management.
    Conclusion: The majority of the content regarding lifestyle measures in glaucoma was uploaded by HCPs and had medically accurate and well-referenced information, especially on Google and YouTube.
    Clinical significance: Primary care physicians and ophthalmologists can reliably use social media content to guide recently diagnosed patients about the requisite lifestyle measures.
    How to cite this article: Chahal R, Jindal A, Parmar UPS, et al. Lifestyle Measures for Glaucoma Patients: An Objective Social Media Content Analysis. J Curr Glaucoma Pract 2023;17(3):141-148.
    Keywords:  Glaucoma; Lifestyle changes; Patient education; Social media
    DOI:  https://doi.org/10.5005/jp-journals-10078-1412
  19. Eur Arch Otorhinolaryngol. 2023 Nov 01.
      PURPOSE: Several therapeutic options are usually discussed for otosclerosis management. Patients seek medical advice from an ENT specialist but are also increasingly using the internet for medical issues. This study intends to assess readability and quality of websites with information on otosclerosis.
    MATERIALS AND METHODS: This is a cross-sectional study performed in a tertiary care centre. The results of the first two pages of a Google search with the keyword "otosclerosis" were reviewed by two independent investigators. Readability was assessed with the Flesch-Kincaid Grade Level (FKGL), Flesch Reading Ease Score (FRES) and Gunning Fog Index. For quality and reliability assessment, the 16-item DISCERN instrument was used. Spearman's coefficient was used for correlations, and multivariate analyses of variance were used to assess differences. Inter-rater agreement was evaluated with concordance correlation coefficient.
    RESULTS: 18 websites were included. Two websites (11.0%) were authored by academic institutions, 5/18 (28%) by government agencies, 6/18 (33%) by professional organisations and 5/18 (28%) were medical information websites. The mean DISCERN score of the 18 websites was 40.8 ± 6.7/80 (range 28.7-51.7), corresponding to "fair" quality. The mean FRES score was 43.27 ± 10.6, and the mean FKGL was 11.43 ± 2.30, corresponding to "difficult to read". The mean Gunning Fog index was 12.90 ± 2.19 (range 9.81-18.20), corresponding to a "college freshman" level.
    CONCLUSIONS: This study shows that internet information on otosclerosis has overall low readability, while its quality is heterogeneous and varies from "poor" to "good". Efforts should be made to improve the readability of otosclerosis websites.
    Keywords:  Comprehension; Health literacy; Internet use; Otosclerosis; Readability; Stapes surgery
    DOI:  https://doi.org/10.1007/s00405-023-08311-3
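The readability instruments cited throughout these items (FRES, FKGL, Gunning Fog) are simple surface-statistic formulas over sentence length and syllable counts. As a rough illustration only, here is a minimal Python sketch of the three formulas; the vowel-group syllable heuristic is an assumption for brevity (real tools such as textstat use dictionary-backed syllable counts):

```python
import math
import re


def count_syllables(word: str) -> int:
    # Crude heuristic: count runs of vowels; real tools use pronunciation
    # dictionaries, so treat these scores as approximations only.
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))


def readability(text: str) -> dict:
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z]+", text)
    syllables = sum(count_syllables(w) for w in words)
    complex_words = sum(1 for w in words if count_syllables(w) >= 3)
    wps = len(words) / len(sentences)   # words per sentence
    spw = syllables / len(words)        # syllables per word
    return {
        # Flesch Reading Ease: higher = easier (80-90 is "easy")
        "FRES": 206.835 - 1.015 * wps - 84.6 * spw,
        # Flesch-Kincaid Grade Level: approximate US school grade
        "FKGL": 0.39 * wps + 11.8 * spw - 15.59,
        # Gunning Fog: years of schooling needed to follow the text
        "Fog": 0.4 * (wps + 100 * complex_words / len(words)),
    }
```

On short, plain-language sentences the sketch yields a high FRES and low grade levels; dense clinical prose scores far worse, which is the pattern the studies above report.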
  20. Am J Ophthalmol. 2023 Oct 31. pii: S0002-9394(23)00454-3. [Epub ahead of print]
      PURPOSE: This study aims to evaluate the readability and quality of internet-based health information on sickle cell retinopathy.
    DESIGN: The study is a retrospective cross-sectional website analysis.
    METHODS: To simulate a patient's online search, the terms "sickle cell retinopathy" and "sickle cell disease in the eye" were entered into the top three search engines (Google, Bing and Yahoo). The first 20 results of each search were retrieved and screened for analysis. The DISCERN questionnaire, the Journal of the American Medical Association (JAMA) standards and the Health on the Net (HON) criteria were used to evaluate the quality of the information. The Flesch-Kincaid Grade Level (FKGL), the Flesch Reading Ease (FRES) and the Automated Readability Index (ARI) were used to assess the readability of each website.
    RESULTS: Out of 16 online sources, 12 (75%) scored moderately on the DISCERN tool. The mean DISCERN score was 40.91 (SD 10.39; maximum possible, 80). None of the sites met all the JAMA benchmarks and only three (18.75%) of the websites had HONcode certification. All the websites had scores above the target American Medical Association grade level of six on both the FKGL and ARI. The mean FRES was 57.76 (±4.61), below the recommended FRES of 80-90.
    CONCLUSION: There is limited online information available on sickle cell retinopathy. Most included websites were fairly difficult to read and of substandard quality. The quality and readability of internet-based patient-focused information on sickle cell retinopathy need to be improved.
    Keywords:  ARI; DISCERN; FKGL; FRES; HONcode; JAMA; Patient information; education; internet; quality; readability; sickle cell retinopathy
    DOI:  https://doi.org/10.1016/j.ajo.2023.10.023
  21. Medicine (Baltimore). 2023 Oct 27. 102(43): e35543
      This study aimed to examine the readability, reliability, quality, and content of patient education materials (PEMs) on the Internet about Helicobacter pylori (H pylori). A search was conducted on March 14, 2023, using the keyword "H pylori" in the Google search engine. The readability of the PEMs was assessed using the Flesch Reading Ease Score (FRES), Flesch-Kincaid Grade Level (FKGL), Simple Measure of Gobbledygook (SMOG), and Gunning Fog readability formulas. The reliability and quality of the websites were determined using the Journal of the American Medical Association (JAMA) score, the Health On the Net Foundation Code of Conduct (HONcode), the Global Quality Score (GQS), and the DISCERN score. A total of 93 patient education websites were included in the study. In the readability analysis of the PEMs, the mean FRES was 49.73 (47.46-52.00) (difficult), the mean FKGL and SMOG were 9.69 (9.26-10.12) and 9.28 (8.96-9.61) years, respectively, and the mean Gunning Fog score was 12.47 (12.03-12.91) (very difficult). Most of the evaluated patient educational materials were on commercial websites (n = 50, 53.8%). It was found that 16.1% of the websites were of high quality according to the GQS, 30.1% were HONcode certified, and 23.7% of the websites were highly reliable according to JAMA scores. There was no statistically significant difference between website typologies and readability (P > .05). However, there was a statistically significant difference between website typologies and quality and reliability scores (P < .005). Compared with the sixth-grade level recommended by the American Medical Association and the National Institutes of Health, the reading level required by H pylori-related internet-based PEMs is quite high. Moreover, the reliability and quality of the PEMs were moderate to poor. PEMs on issues threatening public health should be prepared with attention to readability recommendations.
    DOI:  https://doi.org/10.1097/MD.0000000000035543
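Items 20 and 21 add two further formulas: the Automated Readability Index (ARI), which avoids syllable counting entirely, and SMOG, which counts polysyllabic words. A minimal sketch of both, again using a vowel-group heuristic as a stand-in for real syllable counting (the published SMOG formula also assumes a 30-sentence sample, hence the scaling factor):

```python
import math
import re


def automated_readability_index(text: str) -> float:
    # ARI uses characters per word and words per sentence; no syllables needed.
    words = re.findall(r"[A-Za-z0-9]+", text)
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    chars = sum(len(w) for w in words)
    return 4.71 * (chars / len(words)) + 0.5 * (len(words) / len(sentences)) - 21.43


def smog_index(text: str) -> float:
    # SMOG counts polysyllabic words (3+ syllables), approximated here by
    # vowel-group runs; the 30/sentences factor normalises to the standard
    # 30-sentence sample the formula was calibrated on.
    words = re.findall(r"[A-Za-z]+", text)
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    poly = sum(1 for w in words if len(re.findall(r"[aeiouy]+", w.lower())) >= 3)
    return 1.0430 * math.sqrt(poly * (30 / len(sentences))) + 3.1291
```

Both indices map their output onto US school grades, which is why the studies above compare results against the sixth-grade target recommended by the American Medical Association.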
  22. J Vasc Surg Venous Lymphat Disord. 2023 Oct 26. pii: S2213-333X(23)00395-5. [Epub ahead of print] 101695
      BACKGROUND: The internet is an increasingly popular source of information regarding health-related issues. The aim of this study is to apply appropriate evaluation tools to assess the evidence available online about Inferior Vena Cava (IVC) filters, with a focus on quality and readability.
    METHODS: A search was performed during December 2022 using three popular search engines, namely Google, Yahoo, and Bing. Websites were categorised as academic, physician, commercial, hospital, or unspecified according to their content. Information quality was determined using the JAMA criteria, the DISCERN scoring tool, and the presence of a HONcode seal. Readability was established using the Flesch Reading Ease Score (FRES) and Flesch-Kincaid Grade Level (FKGL). Statistical significance was accepted as p < 0.05.
    RESULTS: In total, 110 websites were included in our study. The majority of websites were categorised as commercial (25%), followed by hospital (24%), academic (21%), unspecified (16%), and physician (14%). Average scores for all websites using JAMA and DISCERN were 1.93 ± 1.19 (median 1.5; range 0-4) and 45.20 ± 12.58 (median 45.5; range 21-75) respectively. The highest JAMA mean score of 3.07 ± 1.16 was allocated to physician websites, and the highest DISCERN mean score of 52.85 ± 12.66 was allocated to hospital websites. The HONcode seal appeared on two of the selected websites. Physician, hospital and unspecified websites had a significantly higher mean JAMA score than academic and commercial websites (all with p < 0.001). Hospital websites had a significantly higher mean DISCERN score than academic (p = 0.007), commercial (p < 0.001) and unspecified websites (p = 0.017). Readability evaluation generated a mean FRES score of 51.57 ± 12.04, which represented a 10th-12th grade reading level, and a mean FKGL score of 8.20 ± 1.70, which represented an 8th-10th grade reading level. Only twelve sources were found to meet the ≤6th grade target reading level. No significant correlation was found between overall DISCERN score and overall FRES score.
    CONCLUSION: The study results demonstrate that the quality of online information about IVC filters is suboptimal, and academic and commercial websites in particular must enhance their content quality regarding the use of IVC filters. Considering the discontinuation of the HONcode as a standardised quality assessment marker, it is recommended that a similar certification tool be developed and implemented for the accreditation of patient information online.
    Keywords:  DISCERN; Inferior Vena Cava filter; Interventional Radiology; JAMA; Venous Thromboembolism; internet information
    DOI:  https://doi.org/10.1016/j.jvsv.2023.101695
  23. JMIR Perioper Med. 2023 Nov 02. 6 e47714
      BACKGROUND: More than 300 million patients undergo surgical procedures requiring anesthesia worldwide annually. There are 2 standard-of-care general anesthesia administration options: inhaled volatile anesthesia (INVA) and total intravenous anesthesia (TIVA). There is limited evidence comparing these methods and their impact on patient experiences and outcomes. Patients often seek this information from sources such as the internet. However, the majority of websites on anesthesia-related topics are not comprehensive, up to date, or fully accurate. The quality and availability of web-based patient information about INVA and TIVA have not been sufficiently examined.
    OBJECTIVE: This study aimed to (1) assess information on the internet about INVA and TIVA for availability, readability, accuracy, and quality and (2) identify high-quality websites that can be recommended to patients to assist in their anesthesia information-seeking and decision-making.
    METHODS: Web-based searches were conducted using Google from April 2022 to November 2022. Websites were coded using a coding instrument developed based on the International Patient Decision Aids Standards criteria and adapted to be appropriate for assessing websites describing INVA and TIVA. Readability was calculated with the Flesch-Kincaid (F-K) grade level and the Simple Measure of Gobbledygook (SMOG) readability formula.
    RESULTS: A total of 67 websites containing 201 individual web pages were included for coding and analysis. Most of the websites provided a basic definition of general anesthesia (unconsciousness, n=57, 85%; analgesia, n=47, 70%). Around half of the websites described common side effects of general anesthesia, while fewer described the rare but serious adverse events, such as intraoperative awareness (n=31, 46%), allergic reactions or anaphylaxis (n=29, 43%), and malignant hyperthermia (n=18, 27%). Of the 67 websites, the median F-K grade level was 11.3 (IQR 9.5-12.8) and the median SMOG score was 13.5 (IQR 12.2-14.4), both far above the American Medical Association (AMA) recommended reading level of sixth grade. A total of 51 (76%) websites distinguished INVA versus TIVA as general anesthesia options. A total of 12 of the 51 (24%) websites explicitly stated that there is a decision to be considered about receiving INVA versus TIVA for general anesthesia. Only 10 (20%) websites made any direct comparisons between INVA and TIVA, discussing their positive and negative features. A total of 12 (24%) websites addressed the concept of shared decision-making in planning anesthesia care, but none specifically asked patients to think about which features of INVA and TIVA matter the most to them.
    CONCLUSIONS: While the majority of websites described INVA and TIVA, few provided comparisons. There is a need for high-quality patient education and decision support about the choice of INVA versus TIVA to provide accurate and more comprehensive information in a format conducive to patient understanding.
    Keywords:  anesthesia; anesthesiologist; anesthesiology; decision-making; general anesthesia; information; inhaled volatile anesthesia; internet; patient education; shared decision-making; surgery; total intravenous anesthesia; web-based
    DOI:  https://doi.org/10.2196/47714
  24. Eurasian J Med. 2023 Oct;55(3): 208-212
      OBJECTIVE: The aims of this survey study were to evaluate the contribution of YouTube to nerve-block learning/education and the advantages and disadvantages of YouTube.
    MATERIALS AND METHODS: A total of 24 questions were selected for the survey by consensus of the authors. Information in the form of web data was obtained through an electronic data form that was distributed via WhatsApp to known email addresses and phone numbers of 300 practitioners (anesthesia residents, anesthesiologists, and academicians). The first section included 5 questions collecting demographic data, and the second part encompassed 19 questions about the YouTube nerve block videos.
    RESULTS: Among the participants, 232 practitioners (86.9%) performed peripheral nerve blocks, and only 35 practitioners (13.1%) had no experience of nerve blocks and used YouTube videos for educational purposes. According to our results, YouTube videos frequently improved performance. In addition, YouTube improved the training of practitioners in terms of the type of block procedure, identifying anatomical landmarks, target structures such as nerves and blood vessels, needle visualization, needle depth, and patient position.
    CONCLUSION: YouTube contributes to the performance of regional anesthesia and to learning at all academic levels. It should not be forgotten that such videos are not peer reviewed by professionals in the relevant field.
    DOI:  https://doi.org/10.5152/eurasianjmed.2023.23075
  25. Phlebology. 2023 Oct 30. 2683555231209401
      OBJECTIVE: YouTube® has gained popularity as an unofficial educational resource for surgical trainees, but the quality and educational value of its content remain to be evaluated. The aim of this study is to analyze the current YouTube® content on surgical techniques of venous thrombolysis or thrombectomy for lower extremity DVT (LEDVT).
    METHODS: A search was performed on YouTube® using 13 search terms in August 2022 on a clear-cached browser. Open-access videos focusing on the surgical techniques of venous thrombolysis or thrombectomy for LEDVT were included. Quality and educational value were assessed and graded based on metrics for accountability (4 items), content (13 items), and production (9 items).
    RESULTS: Out of 138 videos regarding LEDVT oriented towards medical professionals, only 14 met inclusion criteria. Videos ran for a median of 3.4 min (range 0.37-35.6 min) with a median of 941 views (range 106-54624). Videos scored a median of 5.5 (range 1.0-8.0) out of 11 for content, a median of 2.0 out of 6.0 (range 0.0-2.0) for accountability, and a median of 5.5 out of 9.0 (range 3.0-9.0) for production.
    CONCLUSION: Few YouTube® videos focus on the technical aspects of DVT thrombolysis/thrombectomy, and they vary significantly in content with overall poor accountability and production quality.
    Keywords:  Deep vein thrombosis; YouTube®; education; internet web resources; thrombectomy; thrombolysis
    DOI:  https://doi.org/10.1177/02683555231209401
  26. Plast Surg (Oakv). 2023 Nov;31(4): 371-376
      Background: YouTube is currently the most popular online platform and is increasingly being utilized by patients as a resource on aesthetic surgery. Yet, its content is largely unregulated and this may result in dissemination of unreliable and inaccurate information. The objective of this study was to evaluate the quality and reliability of YouTube liposuction content available to potential patients. Methods: YouTube was screened using the keywords: "liposuction," "lipoplasty," and "body sculpting." The top 50 results for each term were screened for relevance. Videos which met the inclusion criteria were scored using the Global Quality Score (GQS) for educational value and the Journal of the American Medical Association (JAMA) criteria for video reliability. Educational value, reliability, video views, likes, dislikes, duration and publishing date were compared between authorship groups, high/low reliability, and high/low educational value. Results: A total of 150 videos were screened, of which 89 videos met the inclusion criteria. Overall, the videos had low reliability (mean JAMA score = 2.78, SD = 1.15) and low educational value (mean GQS score = 3.55, SD = 1.31). Videos uploaded by physicians accounted for 83.1% of included videos and had a higher mean educational value and reliability score than those by patients. Video views, likes, dislikes, comments, popularity, and length were significantly greater in videos with high reliability. Conclusions: To ensure liposuction-seeking patients are appropriately educated and informed, surgeons and their patients may benefit from an analysis of educational quality and reliability of such online content. Surgeons may wish to discuss online sources of information with patients.
    Keywords:  YouTube; educational quality; liposuction; patient education; reliability
    DOI:  https://doi.org/10.1177/22925503211064382
  27. J Med Internet Res. 2023 10 30. 25 e47595
      BACKGROUND: Generation Z (born 1995-2010) members are digital natives who use technology and the internet more frequently than any previous generation to learn about their health. They are increasingly moving away from conventional methods of seeking health information as technology advances quickly and becomes more widely available, resulting in a more digitalized health care system. Like all groups, Generation Z has specific health care requirements and preferences, and their use of technology influences how they look for health information. However, they have often been overlooked in scholarly research.
    OBJECTIVE: First, we aimed to identify the information-seeking preferences of older individuals and Generation Z (those between the ages of 18 and 26 years); second, we aimed to predict the effects of digital health literacy and health empowerment in both groups. We also aimed to identify factors that impact how both groups engage in digital health and remain in control of their own health.
    METHODS: The Health Information National Trends Survey was adapted for use in 2022. We analyzed 1862 valid data points from a survey of Chinese respondents to address the research gap. Descriptive analysis, 2-tailed t tests, and multiple linear regression were applied to the results.
    RESULTS: Compared with previous generations, Generation Z respondents (995/1862, 53.44%) were more likely to use the internet to find out about health-related topics, whereas earlier generations relied more on traditional media and interpersonal contact. Web-based information-seeking behavior was predicted by digital health literacy (Generation Z: β=.192, P<.001; older population: β=.337, P<.001). Meanwhile, only seeking health information from physicians positively predicted health empowerment (Generation Z: β=.070, P=.002; older population: β=.089, P<.001). Despite more frequent use of the internet to learn about their health, Generation Z showed lower levels of health empowerment and less desire to look for health information overall.
    CONCLUSIONS: This study examined and compared the health information-seeking behaviors of Generation Z and older individuals to improve their digital health literacy and health empowerment. The 2 groups demonstrated distinct preferences regarding their choice of information sources. Health empowerment and digital health literacy were both significantly related to information-seeking behaviors.
    Keywords:  Generation Z; digital health literacy; digitally savvy; health empowerment; health information seeking
    DOI:  https://doi.org/10.2196/47595