bims-librar Biomed News
on Biomedical librarianship
Issue of 2024-07-28
23 papers selected by
Thomas Krichel, Open Library Society



  1. Med Ref Serv Q. 2024 Jul-Sep;43(3): 262-267
      Medical and health science librarians' median salaries have increased over the last forty years; however, inflation-adjusted salaries are lower than in 2008. Utilizing data from the Medical Library Association's salary surveys from 1983 to 2023, this column explores median salary changes over time by discussing the median salary's performance against inflation and how the 2008 recession and the 2020 COVID-19 pandemic impacted salaries. From 2017 to 2023, the median salary increased by 18%, but after adjusting for inflation, the median salary decreased by almost 6%. The findings have serious implications for recruitment and retention in medical and health sciences librarianship.
    Keywords:  Compensation; health science librarians; librarianship; medical librarians; salaries
    DOI:  https://doi.org/10.1080/02763869.2024.2371753
  2. Med Ref Serv Q. 2024 Jul-Sep;43(3): 217-233
      Public libraries serve as sources of health information, and partnerships between public and academic health sciences libraries may improve a community's access to and understanding of health information. Librarians at a medical school in Kentucky conducted interviews with public librarians to better understand their experiences with health information with the goal of informing future outreach to public libraries. All participants reported receiving requests for health-related information at least occasionally. Most participants used books to answer health questions, although a wide range of electronic resources were also used. Implications for academic health sciences librarians are discussed.
    Keywords:  Academic libraries; health information; outreach; public libraries
    DOI:  https://doi.org/10.1080/02763869.2024.2370755
  3. Health Info Libr J. 2024 Jul 21.
      The International Perspectives and Initiatives Regular Feature seeks to expand the Health Information and Libraries Journal's global coverage of health library and knowledge practice. The current focus of this Regular Feature is how health library and knowledge services are responding to technological advances.
    Keywords:  information and communication technologies (ICTs); librarians, international; library and information sector
    DOI:  https://doi.org/10.1111/hir.12546
  4. Med Ref Serv Q. 2024 Jul-Sep;43(3): 203-216
      Librarians' involvement in Evidence-Based Medical Practice (EBMP) has been widely reported from the Global North. This cross-sectional study used a survey to investigate how African medical librarians are integrated into EBMP. The respondents comprised medical librarians from 12 African countries. Findings revealed that African medical librarians are mostly involved in EBMP activities related to resource use, management, and evidence dissemination. The leading EBMP tools reportedly used or promoted by the librarians include UpToDate and the Cochrane Library, while the leading challenges encountered in supporting EBMP relate to skill deficiencies, poor funding, and poor internet connectivity.
    Keywords:  Africa; Evidence-Based Medical Practice; Evidence-Based Medicine; digital tools; health information; human resources for health; medical librarians; patient care
    DOI:  https://doi.org/10.1080/02763869.2024.2370756
  5. Med Ref Serv Q. 2024 Jul-Sep;43(3): 268-276
      The Centers for Disease Control and Prevention (CDC) Science Clips is an online weekly bibliographical digest showcasing over 46,000 scientific articles and publications from 2009 to the present. The digest is curated by the Stephen B. Thacker CDC Library to bring awareness to relevant, high-quality public health literature. This overview describes how users can access and navigate the database and evaluates its usability and relevance to public health.
    Keywords:  Bibliographical digest; Centers for Disease Control and Prevention (CDC); online database; public health; review; scientific publications
    DOI:  https://doi.org/10.1080/02763869.2024.2369469
  6. Med Ref Serv Q. 2024 Jul-Sep;43(3): 243-261
      Health sciences librarians often lack knowledge of the motivations behind faculty publishing behavior. This study establishes some understanding of their choices through interviews with academic health sciences faculty members. Knowledge of the concepts of open access was lacking, as was an understanding of the differences between open access and predatory publishing. Faculty held varied opinions on publication without robust peer review, its ethical implications, manuscript quality, and trust in scientific publishing. Evidence from this study suggests that librarians must take an active role in shaping the future of scholarly communication through education, advocacy, and a commitment to moving science forward equitably and ethically.
    Keywords:  Faculty publishing choices; health sciences faculty; open access publishing; predatory publishing; publication ethics; scholarly publishing
    DOI:  https://doi.org/10.1080/02763869.2024.2373019
  7. Stud Health Technol Inform. 2024 Jul 24;315: 746-747
      Ovarian cancer (OvCa) patients face complex treatment decisions and often have difficulty searching for and integrating online health information to guide their treatment decision-making. The objective of this study was to explore the online health information preferences of OvCa patients and caregivers, examining the content, format, and function features they would prefer in a personalized recommender system. Qualitative data were collected through in-depth, face-to-face interviews with 18 OvCa patients and 2 caregivers (N = 20); audio recordings were transcribed verbatim and analyzed using a theory-driven thematic approach. Five themes were identified for content-related design, four for system functions, and one for format. These findings document user preferences so that OvCa-specific features can be tailored in a recommender system.
    Keywords:  COVID-19; Ovarian cancer; gynecological cancer; online health information seeking (OHIS)
    DOI:  https://doi.org/10.3233/SHTI240310
  8. Med Ref Serv Q. 2024 Jul-Sep;43(3): 234-242
      This article examines the development and implementation of a customized Python script utilizing the Elsevier Scopus and Clarivate Web of Science Journal Citation Reports Application Programming Interfaces (APIs). The aim was to streamline and expedite the labor-intensive process of collecting research metrics, which were traditionally compiled manually by librarians at the University of Miami Miller School of Medicine Louis Calder Memorial Library. The script significantly reduces the time and effort required to generate comprehensive reports on research productivity, thereby enabling more efficient resource allocation and aiding in faculty evaluations. (A minimal sketch of such an API call follows this entry.)
    Keywords:  API; Python; research impact; research metrics
    DOI:  https://doi.org/10.1080/02763869.2024.2371751
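      Editor's sketch (not from the article): a minimal illustration of the kind of request such a script issues against Elsevier's Scopus Search API. It assumes a developer API key from https://dev.elsevier.com; the author ID, field list, and key below are placeholders, and this is not the Calder Memorial Library's actual code.

      import requests

      SCOPUS_URL = "https://api.elsevier.com/content/search/scopus"
      API_KEY = "YOUR-ELSEVIER-API-KEY"  # placeholder credential

      def citation_counts_for_author(author_id: str) -> list[dict]:
          """Fetch title, cover date, and cited-by count for one Scopus author ID."""
          params = {
              "query": f"AU-ID({author_id})",  # Scopus author-ID query syntax
              "field": "dc:title,prism:coverDate,citedby-count",
              "count": 25,  # results per page
          }
          headers = {"X-ELS-APIKey": API_KEY, "Accept": "application/json"}
          resp = requests.get(SCOPUS_URL, params=params, headers=headers, timeout=30)
          resp.raise_for_status()
          entries = resp.json()["search-results"].get("entry", [])
          return [{"title": e.get("dc:title"),
                   "date": e.get("prism:coverDate"),
                   "cited_by": int(e.get("citedby-count", 0))} for e in entries]

      if __name__ == "__main__":
          for rec in citation_counts_for_author("7004212771"):  # placeholder author ID
              print(f"{rec['cited_by']:>5}  {rec['date']}  {rec['title']}")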
  9. Res Synth Methods. 2024 Jul 25.
      Searching multiple resources to locate eligible studies for research syntheses can return hundreds to thousands of duplicate references, which should be removed before screening for efficiency. Research investigating the performance of automated deduplication methods in reference managers and systematic review software can quickly become outdated as new versions and programs become available. This follow-up study examined the performance of the default deduplication algorithms in EndNote 20, EndNote online classic, ProQuest RefWorks, Deduklick, and Systematic Review Accelerator's new Deduplicator tool. On most accounts, systematic review software programs outperformed reference managers when deduplicating references. While cost and the need for institutional access may keep some researchers from utilizing certain automated methods, Systematic Review Accelerator's Deduplicator tool is free to use and demonstrated the highest accuracy and sensitivity, while also offering user mediation of detected duplicates to improve specificity. Researchers conducting syntheses should take automated deduplication performance, and methods for improving and optimizing its use, into consideration to help prevent the unintentional removal of eligible studies and the potential introduction of bias. Researchers should also be transparent about their deduplication process so that readers can critically appraise their synthesis methods, and to comply with the PRISMA-S extension for reporting literature searches in systematic reviews. (An illustrative deduplication sketch follows this entry.)
    Keywords:  duplicate references; reference managers; study design; synthesis methods; systematic review software
    DOI:  https://doi.org/10.1002/jrsm.1736
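      Editor's sketch: the core idea behind automated reference deduplication, for illustration only; tools such as Deduplicator and Deduklick use richer fuzzy matching than this exact-key approach, and the sample records are invented.

      import re

      def normalise(text: str) -> str:
          """Lower-case and strip punctuation so near-identical titles compare equal."""
          return re.sub(r"[^a-z0-9]", "", (text or "").lower())

      def dedupe(references: list[dict]) -> list[dict]:
          """Keep one record per DOI or, when no DOI, per (normalised title, year)."""
          seen = {}
          for ref in references:
              doi = (ref.get("doi") or "").strip().lower()
              key = ("doi", doi) if doi else ("ty", normalise(ref.get("title")), ref.get("year"))
              seen.setdefault(key, ref)  # first occurrence wins
          return list(seen.values())

      records = [
          {"title": "Salary trends in medical libraries", "year": 2024, "doi": "10.1/ABC"},
          {"title": "Salary Trends in Medical Libraries.", "year": 2024, "doi": "10.1/abc"},
      ]
      assert len(dedupe(records)) == 1  # the two variants collapse into one record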
  10. Minerva Cardiol Angiol. 2024 Jul 26.
      This tutorial provides a comprehensive guide to leveraging ChatGPT for systematic literature reviews, drawing on actual applications in cardiovascular research. Systematic reviews, while essential, are resource-intensive, and ChatGPT offers a potential way to streamline the process. The tutorial covers the entire review process, from preparation to finalization. In the preparation phase, ChatGPT assists in defining research questions and generating search strings (an example prompt follows this entry). During the screening phase, ChatGPT can efficiently screen titles and abstracts, processing multiple abstracts simultaneously. The tutorial also introduces an intermediate step of generating study summaries that leads to reliable data extraction tables. ChatGPT can likewise be prompted to assess risk of bias; using each appraisal tool's explanation document to generate an appropriate prompt is an efficient way to obtain reliable risk-of-bias assessments. However, users are cautioned about potential hallucinations in ChatGPT's outputs and the importance of manual validation. The tutorial emphasizes the need for vigilance, continuous refinement, and gaining experience with ChatGPT to ensure accurate and reliable results. The methods presented have been successfully tried in several projects, but they remain in nascent stages, with ample room for improvement and refinement.
    DOI:  https://doi.org/10.23736/S2724-5683.24.06568-2
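      Editor's sketch: an illustrative prompt for the search-string step described above. The wording and the example question are the editor's, not the tutorial's; any chat interface to an LLM would serve.

      question = ("In adults with heart failure, does SGLT2 inhibition "
                  "reduce hospitalisation compared with placebo?")
      prompt = (
          "You are assisting with a systematic review.\n"
          f"Research question: {question}\n"
          "Draft a PubMed search string combining MeSH terms and free-text "
          "synonyms with Boolean operators. Return only the search string."
      )
      print(prompt)  # paste into ChatGPT, then validate the returned string manually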
  11. J Am Med Inform Assoc. 2024 Jul 23. pii: ocae166. [Epub ahead of print]
       OBJECTIVE: This paper aims to address the challenges in abstract screening within systematic reviews (SR) by leveraging the zero-shot capabilities of large language models (LLMs).
    METHODS: We employ LLM to prioritize candidate studies by aligning abstracts with the selection criteria outlined in an SR protocol. Abstract screening was transformed into a novel question-answering (QA) framework, treating each selection criterion as a question addressed by LLM. The framework involves breaking down the selection criteria into multiple questions, properly prompting LLM to answer each question, scoring and re-ranking each answer, and combining the responses to make nuanced inclusion or exclusion decisions.
    RESULTS AND DISCUSSION: Large-scale validation was performed on the benchmark of CLEF eHealth 2019 Task 2: Technology-Assisted Reviews in Empirical Medicine. Focusing on GPT-3.5 as a case study, the proposed QA framework consistently exhibited a clear advantage over traditional information retrieval approaches and bespoke BERT-family models fine-tuned for prioritizing candidate studies (i.e., from BERT to PubMedBERT) across 31 datasets spanning 4 categories of SRs, underscoring its high potential for facilitating abstract screening. The experiments also showcased the viability of using selection criteria as a query for reference prioritization and of the framework with different LLMs.
    CONCLUSION: The investigation confirmed the value of leveraging selection criteria to improve the performance of automated abstract screening. LLMs demonstrated proficiency in prioritizing candidate studies using the proposed QA framework, and significant performance improvements were obtained by re-ranking answers using the semantic alignment between abstracts and selection criteria, further highlighting the pertinence of selection criteria to abstract screening. (A schematic of the QA framework follows this entry.)
    Keywords:  abstract screening; automated systematic review; large language model; question answering; zero-shot re-ranking
    DOI:  https://doi.org/10.1093/jamia/ocae166
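      Editor's sketch: a schematic of the question-answering framework described above, in which each selection criterion becomes a question put to an LLM, answers are scored, and combined scores rank the candidate studies. ask_llm is a stand-in for any chat-completion call, and the criteria are invented examples.

      def ask_llm(prompt: str) -> str:
          """Placeholder: send the prompt to an LLM and return its text reply."""
          raise NotImplementedError

      CRITERIA = [
          "Is the study a randomized controlled trial?",  # example criterion
          "Does the study enrol adult patients?",         # example criterion
      ]

      def score_abstract(abstract: str) -> float:
          """Average the per-criterion scores; higher means screen in earlier."""
          scores = []
          for question in CRITERIA:
              prompt = ("Answer strictly 'yes', 'no', or 'unclear'.\n"
                        f"Question: {question}\nAbstract: {abstract}")
              answer = ask_llm(prompt).strip().lower()
              scores.append({"yes": 1.0, "unclear": 0.5}.get(answer, 0.0))
          return sum(scores) / len(scores)

      def prioritise(abstracts: list[str]) -> list[str]:
          """Rank abstracts so likely-eligible studies surface first."""
          return sorted(abstracts, key=score_abstract, reverse=True)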
  12. Access Microbiol. 2024;6(6): 000790.v3. [Epub ahead of print]
      ChatGPT and Bard (now called Gemini), two conversational AI models developed by OpenAI and Google AI, respectively, have garnered considerable attention for their ability to engage in natural language conversations and perform various language-related tasks. While the versatility of these chatbots in generating text and simulating human-like conversations is undeniable, we wanted to evaluate their effectiveness in retrieving biological knowledge for curation and research purposes. To do so we asked each chatbot a series of questions and scored their answers based on their quality. Out of a maximal score of 24, ChatGPT scored 5 and Bard scored 13. The encountered issues included missing information, incorrect answers, and instances where responses combine accurate and inaccurate details. Notably, both tools tend to fabricate references to scientific papers, undermining their usability. In light of these findings, we recommend that biologists continue to rely on traditional sources while periodically assessing the reliability of ChatGPT and Bard. As ChatGPT aptly suggested, for specific and up-to-date scientific information, established scientific journals, databases, and subject-matter experts remain the preferred avenues for trustworthy data.
    Keywords:  biocuration; large language model
    DOI:  https://doi.org/10.1099/acmi.0.000790.v3
  13. Clin Genitourin Cancer. 2024 Jun 29;22(5): 102145. pii: S1558-7673(24)00116-2. [Epub ahead of print]
       AIM: To examine the reliability of ChatGPT in evaluating the quality of medical content of the most watched videos related to urological cancers on YouTube.
    MATERIAL AND METHODS: In March 2024, a playlist was created of the 20 most-watched YouTube videos for each type of urological cancer. The video texts were evaluated by ChatGPT and by a urology specialist using the DISCERN-5 and Global Quality Scale (GQS) questionnaires. The results were compared using the Kruskal-Wallis test.
    RESULTS: For the prostate, bladder, renal, and testicular cancer videos, the median (IQR) DISCERN-5 scores given by the human evaluator and ChatGPT were (Human: 4 [1], 3 [0], 3 [2], 3 [1], P = .11; ChatGPT: 3 [1.75], 3 [1], 3 [2], 3 [0], P = .4, respectively) and the GQS scores were (Human: 4 [1.75], 3 [0.75], 3.5 [2], 3.5 [1], P = .12; ChatGPT: 4 [1], 3 [0.75], 3 [1], 3.5 [1], P = .1, respectively), with no significant difference determined between the scores. The repeatability of the ChatGPT responses was similar across cancer types: 25% for prostate cancer, 30% for bladder cancer, 30% for renal cancer, and 35% for testicular cancer (P = .92). No statistically significant difference was determined between the median (IQR) DISCERN-5 and GQS scores given by humans and ChatGPT for the content of videos about prostate, bladder, renal, and testicular cancer (P > .05).
    CONCLUSION: Although ChatGPT is successful in evaluating the medical quality of video texts, the results should be evaluated with caution as the repeatability of the results is low.
    Keywords:  Artificial intelligence; DISCERN; Global quality score; Information sources; Urological malignancies
    DOI:  https://doi.org/10.1016/j.clgc.2024.102145
  14. Int J Environ Res Public Health. 2024 Jun 29;21(7): 857. [Epub ahead of print]
      The Internet is the second most used source of HIV information after healthcare professionals. The aim of this study was to assess the quality of Internet information about periodontitis in people living with HIV (PLWH). An Internet search was performed on 18 April 2024 using the search terms "Periodontitis", "Periodontal disease", and "Gum disease" in combination with "HIV" in the most popular search engines (Google™, Bing™, and YAHOO!®). The first 20 results for each search term in each engine were pooled for analysis. Quality was assessed by JAMA benchmarks. Readability was assessed using the Flesch reading ease score (FRES). Origin of the site, type of author, and information details were also recorded. The quality of Internet information about periodontitis in PLWH varied. The mean JAMA score was 2.81 (SD = 1.0). The websites were generally fairly difficult to read (mean FRES = 57.1, SD = 15.0). Most websites provided some advice about self-treatment of oral problems, accompanied by a strong recommendation to seek professional dental care. In conclusion, the available information on periodontitis in PLWH required advanced reading skills, and quality indicators were mostly absent. Healthcare professionals should therefore be actively involved in developing high-quality information resources and should direct patients to evidence-based materials on the Internet.
    Keywords:  HIV; internet information; periodontitis; quality; readability; websites
    DOI:  https://doi.org/10.3390/ijerph21070857
  15. JAMA Netw Open. 2024 Jul 01;7(7): e2422275
       Importance: The mainstream use of chatbots requires a thorough investigation of their readability and quality of information.
    Objective: To identify readability and information quality differences between the cancer-related responses of a free and a paywalled chatbot, and to explore whether more precise prompting can mitigate any observed differences.
    Design, Setting, and Participants: This cross-sectional study compared the readability and information quality of a chatbot's free vs paywalled responses to Google Trends' top 5 search queries associated with breast, lung, prostate, colorectal, and skin cancers from January 1, 2021, to January 1, 2023. Data were extracted from the search tracker, and responses were produced by free and paywalled ChatGPT. Data were analyzed from December 20, 2023, to January 15, 2024.
    Exposures: Free vs paywalled chatbot outputs with and without prompt: "Explain the following at a sixth grade reading level: [nonprompted input]."
    Main Outcomes and Measures: The primary outcome measured the readability of a chatbot's responses using Flesch Reading Ease scores (0 [graduate reading level] to 100 [easy fifth grade reading level]). Secondary outcomes included assessing consumer health information quality with the validated DISCERN instrument (overall score from 1 [low quality] to 5 [high quality]) for each response. Scores were compared between the 2 chatbot models with and without prompting.
    Results: This study evaluated 100 chatbot responses. Nonprompted free chatbot responses had lower readability (median [IQR] Flesch Reading Ease score, 52.60 [44.54-61.46]) than nonprompted paywalled chatbot responses (62.48 [54.83-68.40]) (P < .05). However, prompting the free chatbot to reword responses at a sixth grade reading level was associated with higher reading ease scores than the paywalled chatbot's nonprompted responses (median [IQR], 71.55 [68.20-78.99]) (P < .001). Prompting was associated with increased reading ease in both the free (median [IQR], 71.55 [68.20-78.99]; P < .001) and paywalled versions (median [IQR], 75.64 [70.53-81.12]; P < .001). There was no significant difference in overall DISCERN scores between the chatbot models, with or without prompting.
    Conclusions and Relevance: In this cross-sectional study, paying for the chatbot was found to provide easier-to-read responses, but prompting the free version of the chatbot was associated with increased response readability without changing information quality. Educating the public on how to prompt chatbots may help promote equitable access to health information.
    DOI:  https://doi.org/10.1001/jamanetworkopen.2024.22275
  16. Eur Urol Focus. 2024 Jul 23. pii: S2405-4569(24)00117-2. [Epub ahead of print]
     BACKGROUND AND OBJECTIVE: Readability of patient education materials is of utmost importance to ensure the understandability and dissemination of health care information in uro-oncology. We aimed to investigate the readability of the official patient education materials of the European Association of Urology (EAU) and the American Urological Association (AUA).
    METHODS: Patient education materials for prostate, bladder, kidney, testicular, penile, and urethral cancers were retrieved from the respective organizations. Readability was assessed via the WebFX online tool for the Flesch Reading Ease Score (FRES) and for reading grade levels by the Flesch Kincaid Grade Level (FKGL), Gunning Fog Score (GFS), Smog Index (SI), Coleman Liau Index (CLI), and Automated Readability Index (ARI). Layperson readability was defined as a FRES of ≥70, with the other readability indexes <7, according to European Union recommendations. This study assessed only objective readability and no other metrics such as understandability.
    KEY FINDINGS AND LIMITATIONS: Most patient education materials failed to meet the recommended threshold for laypersons. The mean readability for EAU patient education material was a FRES of 50.9 (standard error [SE]: 3.0), with FKGL, GFS, SI, CLI, and ARI all scoring ≥7. The mean readability for AUA patient material was a FRES of 64.0 (SE: 1.4), with FKGL, GFS, SI, and ARI all scoring ≥7. Only 13 out of 70 (18.6%) patient education material paragraphs met the readability requirements. The mean readability for bladder cancer patient education materials was the lowest, with a FRES of 36.7 (SE: 4.1).
    CONCLUSIONS AND CLINICAL IMPLICATIONS: Patient education materials from leading urological associations reveal readability levels beyond the recommended thresholds for laypersons and may not be understood easily by patients. There is a future need for more patient-friendly reading materials.
    PATIENT SUMMARY: This study checked whether health information about different cancers was easy to read. Most of it was too hard for patients to understand.
    Keywords:  Health information dissemination; Layperson; Patient education material; Urology
    DOI:  https://doi.org/10.1016/j.euf.2024.06.012
  17. Spec Care Dentist. 2024 Jul 23.
       INTRODUCTION: The use of the internet has surged significantly over the years. Patients and caregivers of patients with autism spectrum disorder (ASD) might consult the internet for oral health-related information. Hence, this study aimed to assess the quality and readability of online information available in the English language regarding oral health in ASD.
    METHODS: An online search using Google.com was conducted using the terms "Autism and dental care," "Autism and oral health," and "Autism and dentistry". The first 100 websites for each term were screened. Quality of information was assessed using the Patient Education Materials Assessment Tool for printed material (PEMAT-P) and the Journal of the American Medical Association (JAMA) benchmarks. A PEMAT score higher than 70% is considered acceptable for understandability and actionability. The JAMA benchmarks are authorship, attribution, disclosure, and currency. Readability was evaluated using the Flesch reading ease score and the Simple Measure of Gobbledygook (SMOG) readability formula.
    RESULTS: Out of the 300 screened websites, 66 were eventually included. The mean PEMAT understandability and actionability scores were 77.13% and 42.12%, respectively. Only 12.1% of the websites displayed all four JAMA benchmarks. The mean Flesch reading ease score corresponded to a 10th-12th grade reading level, and the mean SMOG score to a 10th grade level.
    CONCLUSION: While the understandability of the information was acceptable, the readability and actionability were too challenging for lay people. Health care professionals and organizations involved in patient education should place more efforts in promoting the quality of online information targeting patients with ASD.
    Keywords:  autism; oral health; quality; readability; web-based information
    DOI:  https://doi.org/10.1111/scd.13045
  18. Digit Health. 2024 Jan-Dec;10: 20552076241264390
     Background: Assessment of Arabic online patient-centered information is understudied. This study aims to assess the quality and readability of Arabic web-based information about dental extraction.
    Methods: The first 100 Arabic websites focusing on dental extraction were gathered using popular terms from Google, Bing, and Yahoo searches. These sites were organized and their quality was assessed using three key standards: the Journal of the American Medical Association (JAMA) benchmark criteria, the DISCERN instrument, and the inclusion of the Health on the Net Foundation Code of Conduct (HON code) seal. Additionally, the ease of reading of these websites was evaluated through various online readability indexes.
    Results: Out of 300 initially reviewed websites on dental extraction in Arabic, 80 met the eligibility criteria. Nonprofit organizations were most common (41.3%), followed by university/medical centers (36.3%), and commercial entities (21.3%). Government organizations were minimally represented (1.3%). All websites were medically oriented, with 60% offering Q&A sections. Quality assessment showed moderate scores on the DISCERN instrument, with no site reaching the highest score. JAMA benchmarks were poorly met, and none had the HON code seal. Readability was generally high, with most sites scoring favorably on readability scales.
    Conclusions: The rapidly evolving online information about dental extraction lacks readability and quality and can spread misinformation. Creators should focus on clear, unbiased content using simple language for better public understanding.
    Keywords:  Arabic knowledge; Quality; dental extraction; readability
    DOI:  https://doi.org/10.1177/20552076241264390
  19. J Surg Res. 2024 Jul 23;301: 540-546. pii: S0022-4804(24)00383-4. [Epub ahead of print]
       INTRODUCTION: Parathyroidectomy is recommended for severe secondary hyperparathyroidism (SHPT) due to end-stage kidney disease (ESKD), but surgery is underutilized. High quality and accessible online health information, recommended to be at a 6th-grade reading level, is vital to improve patient health literacy. This study evaluated available online resources for SHPT from ESKD based on information quality and readability.
    METHODS: Three search engines were queried using the terms "parathyroidectomy for secondary hyperparathyroidism," "parathyroidectomy kidney/renal failure," "parathyroidectomy dialysis patients," "should I have surgery for hyperparathyroidism due to kidney failure?," and "do I need surgery for hyperparathyroidism due to kidney failure if I do not have symptoms?" Websites were categorized by source and origin. Two independent reviewers determined information quality using the JAMA (0-4) and DISCERN (1-5) frameworks, and scores were averaged. Cohen's kappa evaluated inter-rater reliability. Readability was determined using the Flesch Kincaid Reading Ease, Flesch Kincaid Grade Level, and Simple Measure of Gobbledygook tools. Median readability scores were calculated, and the corresponding grade level was determined. The proportion of websites with reading difficulty above the 6th grade level was calculated.
    RESULTS: Thirty-one (86.1%) websites originated from the U.S., with most from hospital-associated (63.9%) and foundation/advocacy sources (30.6%). The mean JAMA and DISCERN scores for all websites were 1.3 ± 1.4 and 2.6 ± 0.7, respectively. Readability scores ranged from grade level 5 to college level, and most websites scored above the recommended 6th grade level.
    CONCLUSIONS: Patient-oriented websites tailoring SHPT from ESKD are at a reading level higher than recommended, and the quality of information is low. Efforts must be made to improve the accessibility and quality of information for all patients.
    Keywords:  Chronic kidney disease; End-stage kidney disease; Healthcare communication; Information accessibility; Online health information; Quality; Readability; Secondary hyperparathyroidism
    DOI:  https://doi.org/10.1016/j.jss.2024.07.004
  20. Am J Orthod Dentofacial Orthop. 2024 Jul 23. pii: S0889-5406(24)00231-2. [Epub ahead of print]
     INTRODUCTION: It is commonplace for patients to seek health information on the Internet. This scoping review aimed to collate and synthesize the evidence regarding the quality of Web-based orthodontic information.
    METHODS: A systematic search and screening process was conducted independently by 2 reviewers across 4 databases. The review was conducted in alignment with the PRISMA-ScR (Preferred Reporting Items for Systematic Reviews and Meta-Analyses Extension for Scoping Reviews) guidelines.
    RESULTS: Of 661 records identified, 30 publications satisfied the inclusion criteria. Reviewed studies included those related to the information contained within Web sites regarding dental and orthodontic practices, orthodontic interventions, appliances and auxiliaries, conditions commonly requiring orthodontic therapy, issues related to patient experience, and advice during orthodontic treatment. A total of 5 quality of information (QOI) instruments and 3 readability tools were employed, with the DISCERN instrument (University of Oxford, United Kingdom) and the Flesch Reading Ease Score tool being the most frequently used. Most studies determined that the QOI contained within the evaluated Web sites was poor and provided suboptimal information related to treatment risks and Web site reliability. Most studies indicated that the information was more difficult to read than recommended by guidelines.
    CONCLUSIONS: The QOI of orthodontic information available on Web sites was low to moderate, with the readability of content at a level that was considered challenging for many readers to understand. A recommendation for greater involvement of the dental and orthodontic specialty in Web site development was commonplace. Research is required for the development of validated tools that can determine the accuracy of information in addition to Web site reliability.
    DOI:  https://doi.org/10.1016/j.ajodo.2024.05.018
  21. Lung. 2024 Jul 26.
     OBJECTIVES: The readability of patient-facing information on the oral antibiotics detailed in the WHO all-oral short (6-month, 9-month) regimens has not been described to date. The aim of this study was therefore to examine (i) how readable patient-facing TB antibiotic information is compared to readability reference standards and (ii) whether there are differences in readability between high-incidence and low-incidence countries.
    METHODS: Ten antibiotics were investigated: bedaquiline, clofazimine, ethambutol, ethionamide, isoniazid, levofloxacin, linezolid, moxifloxacin, pretomanid, and pyrazinamide. TB antibiotic information sources were examined, consisting of 85 Patient Information Leaflets (PILs) and 40 antibiotic web resources. Of these 85 PILs, 72 were taken from the National Medicines Regulator of six countries (3 TB high-incidence [Rwanda, Malaysia, South Africa] + 3 TB low-incidence [UK, Ireland, Malta] countries). Readability data were grouped into three categories: (i) high TB-incidence countries (n = 33 information sources), (ii) low TB-incidence countries (n = 39 information sources), and (iii) web information (n = 53). Readability was calculated using Readable software to obtain four readability scores [(i) Flesch Reading Ease (FRE), (ii) Flesch-Kincaid Grade Level (FKGL), (iii) Gunning Fog Index, and (iv) SMOG Index], as well as two text metrics [words/sentence, syllables/word]. (The two Flesch formulas are sketched after this entry.)
    RESULTS: Mean readability scores of patient-facing TB antibiotic information were 47.4 ± 12.6 (SD) for FRE (target ≥ 60) and 9.2 ± 2.0 for FKGL (target ≤ 8.0). There was no significant difference in readability between low-incidence countries and web resources, but PILs from high-incidence countries showed significantly poorer readability than those from low-incidence countries (FRE: p = 0.0056; FKGL: p = 0.0095).
    CONCLUSIONS: Readability of TB antibiotic PILs is poor. Improving readability of PILs should be an important objective when preparing patient-facing written materials, thereby improving patient health/treatment literacy.
    Keywords:  Antibiotic resistance; Antibiotics; Readability; Treatment literacy; Tuberculosis
    DOI:  https://doi.org/10.1007/s00408-024-00732-z
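      Editor's sketch: the two Flesch metrics reported above, computed from the text statistics the study lists (words/sentence, syllables/word). The coefficients are the standard published ones; counting syllables reliably is the hard part in practice and is usually left to dedicated software (the study used Readable).

      def flesch_reading_ease(words_per_sentence: float, syllables_per_word: float) -> float:
          """FRE: higher is easier; the study's target was >= 60."""
          return 206.835 - 1.015 * words_per_sentence - 84.6 * syllables_per_word

      def flesch_kincaid_grade(words_per_sentence: float, syllables_per_word: float) -> float:
          """FKGL: a US school grade; the study's target was <= 8.0."""
          return 0.39 * words_per_sentence + 11.8 * syllables_per_word - 15.59

      # Example: ~18 words per sentence and ~1.6 syllables per word
      print(round(flesch_reading_ease(18, 1.6), 1))   # 53.2 -- "fairly difficult"
      print(round(flesch_kincaid_grade(18, 1.6), 1))  # 10.3 -- about 10th grade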
  22. Support Care Cancer. 2024 Jul 24;32(8): 540
     BACKGROUND: Breast cancer-related lymphedema in the upper limb remains one of the most distressing complications of breast cancer treatment. YouTube is considered a potential digital resource for population health and decision-making. However, access to inadequate information or misinformation could have undesirable impacts. This cross-sectional study aimed to evaluate the reliability, quality, and content of YouTube videos on lymphedema as an information source for Spanish-speaking breast cancer survivors.
    METHODS: A search of YouTube was conducted in January 2023 using the key words "breast cancer lymphedema" and "lymphedema arm breast cancer." Reliability and quality of the videos were evaluated using the DISCERN tool; content, source of production, number of likes, comments, views, duration, Video Power Index, like ratio, view ratio, and age on the platform were also recorded.
    RESULTS: Amongst the 300 Spanish language videos identified on YouTube, 35 were selected for analysis based on the inclusion and exclusion criteria. Of the 35 selected videos, 82.9% (n = 29) were developed by healthcare or academic professionals and 17.1% (n = 6) by others. Reliability (p < 0.017) and quality (p < 0.03) were higher in the videos made by professionals. The DISCERN total score (r = 0.476; p = 0.004), reliability (r = 0.472; p = 0.004), and quality (r = 0.469; p = 0.004) were positively correlated with the duration of the videos.
    CONCLUSIONS: Our findings provide a strong rationale for educating breast cancer survivors seeking lymphedema information to select videos made by healthcare or academic professionals. Standardised evaluation prior to video publication is needed to ensure that the end-users receive accurate and quality information from YouTube.
    Keywords:  Breast cancer; Internet; Lymphedema; Patient education; YouTube
    DOI:  https://doi.org/10.1007/s00520-024-08746-2