bims-librar Biomed News
on Biomedical librarianship
Issue of 2024‒07‒21
27 papers selected by
Thomas Krichel, Open Library Society



  1. Brain Behav. 2024 Jul;14(7): e3627
      PURPOSE: The lack of requisite library resources has an enormous effect on academic life in most universities. While previous studies have suggested that the lack of resources such as textbooks affects academic success, this study seeks to provide empirical evidence on the chain effect of the lack of recommended textbooks in universities.
    DESIGN/METHODOLOGY/APPROACH: The study uses a quantitative dataset from 636 students from five public universities in Ghana, collected using well-structured questionnaires. The study adopts exploratory factor analysis, confirmatory factor analysis, and partial least squares structural equation modeling (PLS-SEM) to analyze the measurement and structural models.
    FINDINGS: The study concludes that limited library resources (such as recommended textbooks) frustrate library users and eventually give rise to antisocial behaviors such as stealing, hiding, and eroding books (or pages).
    ORIGINALITY/VALUE: This study highlights the significance of providing adequate library resources. It also guides library managers, policymakers, and scholars to manage library resources effectively.
    Keywords:  aggressive behaviors; book hide and seek; book stealing; frustration; library resources
    DOI:  https://doi.org/10.1002/brb3.3627
  2. J Nurs Adm. 2024 Jul-Aug 01;54(7-8): 440-445
      Due to shifting priorities and unforeseen challenges, nurse leaders often lack sufficient time and resources to systematically review and appraise the available literature in search of the best evidence to guide decisions. A nurse-led rapid review service can produce accelerated knowledge synthesis and contextualized translation of evidence in a resource-efficient manner. This article describes a nurse-led rapid review service implemented at a large academic medical center and provides a reproducible process to guide other healthcare organizations in developing similar programs.
    DOI:  https://doi.org/10.1097/NNA.0000000000001454
  3. Front Med (Lausanne). 2024;11: 1434427
      
    Keywords:  collaboration; information retrieval; literature review methods; meta-research; regulatory science
    DOI:  https://doi.org/10.3389/fmed.2024.1434427
  4. J Int Bioethique Ethique Sci. 2024 ;35(2): 77-92
      Blockchain technology has proven to be a plausible, even miraculous foundation for selling, transferring, and tracking large integers. This article investigates the adoption of blockchain technologies in library services among students of Ahmadu Bello University, Zaria. Libraries need to become adept with blockchain technology to survive and to ensure timely and adequate provision of information services to their patrons. The paper therefore concludes by recommending that Nigerian library professionals create blockchain websites, blogs, and webinars to engage researchers, students, and information professionals, harnessing their contributions toward the development of a white paper (policy) and promoting emerging technologies for better service delivery, adding value to libraries and the communities they serve.
  5. J Clin Epidemiol. 2024 Jul 15. pii: S0895-4356(24)00222-1. [Epub ahead of print] 111466
      OBJECTIVE: The aim of this paper is to provide clinicians and authors of clinical guidelines or patient information with practical guidance on searching for and choosing systematic review(s) (SR[s]) and, where adequate, on making use of SR(s).
    STUDY DESIGN AND SETTING: At the German conference of the EBM-Network, a workshop on the topic was held to identify the most important areas where guidance for practice appears necessary. After the workshop, we established working groups. These included SR users with different backgrounds (e.g., information specialists, epidemiologists) and working areas. Each working group developed and agreed on a draft guidance based on their expert knowledge and experience. The results were presented to the entire group and finalized in an iterative process.
    RESULTS: We developed a practical guidance that answers questions that usually arise when choosing and using SR(s). 1: How to efficiently find high-quality SRs? 2: How to choose the most appropriate SR? 3: What to do if no SR of sufficient quality could be identified? In addition, we developed an algorithm that links these steps and accounts for their interaction. The resulting guidance is primarily directed at clinicians and developers of clinical practice guidelines or patient information resources.
    CONCLUSION: We suggest practical guidance for making the best use of SRs when answering a specific research question. The guidance may contribute to the efficient use of existing SRs. Potential benefits of using existing SRs should always be weighed against potential limitations.
    Keywords:  Systematic reviews; literature searching; redundant reviews; study selection
    DOI:  https://doi.org/10.1016/j.jclinepi.2024.111466
  6. Comput Struct Biotechnol J. 2024 Dec;23: 2661-2668
      Background: During the COVID-19 pandemic, a need to process large volumes of publications emerged. As the pandemic wound down, clinicians encountered a novel syndrome - Post-acute Sequelae of COVID-19 (PASC) - that affects over 10% of those who contract SARS-CoV-2 and presents a significant challenge in the medical field. The continuous influx of publications underscores the need for efficient tools for navigating the literature.
    Objectives: We aimed to develop an application that allows monitoring and categorizing COVID-19-related literature by building publication networks and medical subject headings (MeSH) maps to identify key publications and networks.
    Methods: We introduce CORACLE (COVID-19 liteRAture CompiLEr), an innovative web application designed to analyse COVID-19-related scientific articles and to identify research trends. CORACLE features three primary interfaces: The "Search" interface, which displays research trends and citation links; the "Citation Map" interface, allowing users to create tailored citation networks from PubMed Identifiers (PMIDs) to uncover common references among selected articles; and the "MeSH" interface, highlighting current MeSH trends and their associations. [A minimal illustration of such PMID-based reference linking appears after this entry.]
    Results: CORACLE leverages PubMed data to categorize literature on COVID-19 and PASC, aiding in the identification of relevant research publication hubs. Using lung function in PASC patients as a search example, we demonstrate how to identify and visualize the interactions between the relevant publications.
    Conclusion: CORACLE is an effective tool for the extraction and analysis of literature. Its functionalities, including the MeSH trends and customizable citation mapping, facilitate the discovery of emerging trends in COVID-19 and PASC research.
    Keywords:  COVID-19; Citation maps; Literature mining; MeSH maps
    DOI:  https://doi.org/10.1016/j.csbj.2024.06.018
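    Editor's illustration, not the CORACLE implementation: the idea of uncovering common references among a set of PMIDs can be sketched with NCBI's E-utilities elink endpoint and its pubmed_pubmed_refs link name. The seed PMIDs below are hypothetical placeholders.
```python
# Illustrative only: a toy "shared references" lookup in the spirit of a citation
# map, using NCBI E-utilities (elink). This is NOT the CORACLE code base.
import requests
from collections import Counter
from itertools import chain

EUTILS_ELINK = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/elink.fcgi"

def references_of(pmid: str) -> set:
    """Return PMIDs cited by the given article, where PubMed exposes that link."""
    params = {
        "dbfrom": "pubmed",
        "db": "pubmed",
        "linkname": "pubmed_pubmed_refs",  # reference list of the article
        "id": pmid,
        "retmode": "json",
    }
    data = requests.get(EUTILS_ELINK, params=params, timeout=30).json()
    refs = set()
    for linkset in data.get("linksets", []):
        for linksetdb in linkset.get("linksetdbs", []):
            refs.update(str(link) for link in linksetdb.get("links", []))
    return refs

def shared_references(pmids) -> Counter:
    """Count how many seed articles cite each reference; shared references rank first."""
    return Counter(chain.from_iterable(references_of(p) for p in pmids))

if __name__ == "__main__":
    seed_pmids = ["33301246", "34139198"]  # hypothetical placeholder PMIDs
    for ref_pmid, n_citing in shared_references(seed_pmids).most_common(10):
        print(ref_pmid, n_citing)
```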
  7. Clin Chem. 2024 Jul 16. pii: hvae093. [Epub ahead of print]
      BACKGROUND: The integration of ChatGPT, a large language model (LLM) developed by OpenAI, into healthcare has sparked significant interest due to its potential to enhance patient care and medical education. With the increasing trend of patients accessing laboratory results online, there is a pressing need to evaluate the effectiveness of ChatGPT in providing accurate laboratory medicine information. Our study evaluates ChatGPT's effectiveness in addressing patient questions in this area, comparing its performance with that of medical professionals on social media.
    METHODS: This study sourced patient questions and medical professional responses from Reddit and Quora, comparing them with responses generated by ChatGPT versions 3.5 and 4.0. Experienced laboratory medicine professionals evaluated the responses for quality and preference. Evaluation results were further analyzed using R software.
    RESULTS: The study analyzed 49 questions, with evaluators reviewing responses from both medical professionals and ChatGPT. ChatGPT's responses were preferred by 75.9% of evaluators and generally received higher ratings for quality. They were noted for their comprehensive and accurate information, whereas responses from medical professionals were valued for their conciseness. The interrater agreement was fair, indicating some subjectivity but a consistent preference for ChatGPT's detailed responses.
    CONCLUSIONS: ChatGPT demonstrates potential as an effective tool for addressing queries in laboratory medicine, often surpassing medical professionals in response quality. These results support the need for further research to confirm ChatGPT's utility and explore its integration into healthcare settings.
    DOI:  https://doi.org/10.1093/clinchem/hvae093
  8. J Oral Maxillofac Surg. 2024 Jul 02. pii: S0278-2391(24)00587-1. [Epub ahead of print]
      BACKGROUND: Artificial intelligence (AI) platforms such as Chat Generative Pre-Trained Transformer (ChatGPT) (OpenAI, San Francisco, California, USA) have the capacity to answer health-related questions. It remains unknown whether AI can be a patient-friendly and accurate resource regarding third molar extraction.
    PURPOSE: The purpose was to determine the accuracy and readability of AI responses to common patient questions regarding third molar extraction.
    STUDY DESIGN, SETTING, SAMPLE: This is a cross-sectional in silico assessment of the readability and soundness of a computer-generated report.
    INDEPENDENT VARIABLE: Not applicable.
    MAIN OUTCOME VARIABLES: Accuracy, or the ability to provide clinically correct and relevant information, was determined subjectively by 2 reviewers using a 5-point Likert scale, and objectively by comparing responses to American Association of Oral and Maxillofacial Surgeons (AAOMS) clinical consensus papers. Readability, or how easy a piece of text is to read, was assessed using the Flesch-Kincaid Reading Ease (FKRE) and Flesch-Kincaid Grade Level (FKGL). Both assess readability based on the mean number of syllables per word and words per sentence (the standard formulas are reproduced after this entry). To be deemed "readable", the FKRE should be >60 and the FKGL should be <8.
    COVARIATES: Not applicable.
    ANALYSES: Descriptive statistics were used to analyze the findings of this study.
    RESULTS: AI-generated responses were above the recommended reading level for the average patient (FKRE: 52; FKGL: 10). The average Likert score was 4.36, suggesting that most responses were accurate with minor inaccuracies or missing information. AI correctly deferred to the provider in instances where no definitive answer exists. Of the responses that addressed content in AAOMS consensus papers, 18 of 19 closely aligned with them. No response provided citations or references.
    CONCLUSION AND RELEVANCE: AI was able to provide mostly accurate responses, and content was closely aligned with AAOMS guidelines. However, responses were too complex for the average third molar extraction patient, and were deficient in citations and references. It is important for providers to educate patients on the utility of AI, and to decide whether to recommend using it for information. Ultimately, the best resource for answers is from the practitioners themselves because the AI platform lacks clinical experience.
    DOI:  https://doi.org/10.1016/j.joms.2024.06.177
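    For reference, the standard Flesch formulas behind the two scores described above are both driven by average words per sentence and syllables per word:
```latex
% Flesch(-Kincaid) Reading Ease (the ">60 = readable" threshold above; higher = easier)
\mathrm{FKRE} = 206.835
  - 1.015\left(\frac{\text{total words}}{\text{total sentences}}\right)
  - 84.6\left(\frac{\text{total syllables}}{\text{total words}}\right)

% Flesch-Kincaid Grade Level (the "<8 = readable" threshold above; lower = easier)
\mathrm{FKGL} = 0.39\left(\frac{\text{total words}}{\text{total sentences}}\right)
  + 11.8\left(\frac{\text{total syllables}}{\text{total words}}\right) - 15.59
```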
  9. Surgery. 2024 Jul 18. pii: S0039-6060(24)00448-3. [Epub ahead of print]
      BACKGROUND: Breast cancer is the leading cause of cancer-specific mortality in Hispanic women in the United States. Given the complexity of treatment options, disparities in access to quality care, and increased rates of inadequate or marginal health literacy within this population, these patients face significant barriers to informed decision-making. We aimed to assess the health literacy of Spanish breast cancer surgery websites.
    METHODS: A web search using "cirugía de cancer de mama or seno" was performed to identify the top 20 websites in Spanish, divided on the basis of affiliation with academic centers or private institutions and by international/US region. Validated metrics were used to assess readability, understandability, actionability, and cultural sensitivity using the Simplified Measure of Gobbledygook in Spanish, the Patient Education Materials Assessment Tool for Understandability and Actionability, and the Cultural Sensitivity and Assessment Tool, respectively.
    RESULTS: Online materials in Spanish had a mean reading grade level of 10.9 (Simplified Measure of Gobbledygook in Spanish) for academic centers and 10.4 for private institutions. The average understandability score was significantly greater for academic centers at 77% compared with private institutions at 67% (P = .019). Actionability scores were low for both centers at 26% and 37%, respectively. The mean Cultural Sensitivity and Assessment Tool scores were 2.3 and 2.2, respectively.
    CONCLUSION: Current Spanish resources for breast cancer surgery are inadequate not only from a readability standpoint but also in their quality and cultural sensitivity. As the Latino population in the United States increases and online resources become more accessible, we must ensure that these resources cater to their target audience, bridging the health care access gap and empowering patients in decision-making.
    DOI:  https://doi.org/10.1016/j.surg.2024.06.025
  10. Laryngoscope Investig Otolaryngol. 2024 Aug;9(4): e1300
      Objective: Safe home tracheostomy care requires engagement and troubleshooting by patients, who may turn to online, AI-generated information sources. This study assessed the quality of ChatGPT responses to such queries.
    Methods: In this cross-sectional study, ChatGPT was prompted with 10 hypothetical tracheostomy care questions in three domains (complication management, self-care advice, and lifestyle adjustment). Responses were graded by four otolaryngologists for appropriateness, accuracy, and overall score. The readability of responses was evaluated using the Flesch Reading Ease (FRE) and Flesch-Kincaid Reading Grade Level (FKRGL). Descriptive statistics and ANOVA testing were performed with statistical significance set to p < .05.
    Results: Appropriateness and overall scores were rated on a 1-5 scale (5 highest) and accuracy on a 4-point scale (4 highest). Responses exhibited moderately high appropriateness (mean = 4.10, SD = 0.90), high accuracy (mean = 3.55, SD = 0.50), and moderately high overall scores (mean = 4.02, SD = 0.86). Scoring between response categories (self-care recommendations, complication recommendations, lifestyle adjustments, and special device considerations) revealed no significant differences. Suboptimal responses lacked nuance and contained incorrect information and recommendations. Readability was at the college and advanced levels for FRE (mean = 39.5, SD = 7.17) and FKRGL (mean = 13.1, SD = 1.47), higher than the sixth-grade level recommended for patient-targeted resources by the NIH.
    Conclusion: While ChatGPT-generated tracheostomy care responses may exhibit acceptable appropriateness, incomplete or misleading information may have dire clinical consequences. Further, inappropriately high reading levels may limit patient comprehension and accessibility. At this point in its technological infancy, AI-generated information should not be solely relied upon as a direct patient care resource.
    Keywords:  artificial intelligence; education; head and neck cancer; patient knowledge; tracheostomy
    DOI:  https://doi.org/10.1002/lio2.1300
  11. J Endourol. 2024 Jul 13.
      INTRODUCTION: Kidney stones are a common and morbid condition in the general population, with a rising incidence globally. Prior studies show substantial limitations of online sources of information regarding prevention and treatment. The objective of this study was to examine the quality of information about kidney stones from artificial intelligence (AI) chatbots.
    METHODS: The most common online searches about kidney stones from Google Trends and headers from the National Institute of Diabetes and Digestive and Kidney Diseases website were used as inputs to 4 AI chatbots (ChatGPT version 3.5, Perplexity, Chat Sonic, and Bing AI). Validated instruments were used to assess the quality (DISCERN instrument, from 1 [low] to 5 [high]), understandability, and actionability (PEMAT, from 0 to 100%; the conventional scoring rule is noted after this entry) of the chatbot outputs. In addition, we examined the reading level of the information and whether there was misinformation compared to guidelines (5-point Likert scale).
    RESULTS: AI chatbots generally provided high-quality consumer health information (median DISCERN 4 out of 5) and did not include misinformation (median 1 out of 5). Understandability was moderate (median 69.6%), and actionability was moderate to poor (median 40%). Responses were presented at an advanced reading level (11th grade; median Flesch-Kincaid score 11.3).
    CONCLUSIONS: AI chatbots provide generally accurate information on kidney stones and lack misinformation; however, it is not easily actionable and is presented above the recommended reading level for consumer health information.
    DOI:  https://doi.org/10.1089/end.2023.0484
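    For context, PEMAT understandability and actionability percentages such as those above are conventionally computed as the share of applicable checklist items rated "Agree":
```latex
% Each PEMAT item is rated Agree (1) or Disagree (0); non-applicable items are excluded.
\text{PEMAT score} = \frac{\text{number of items rated ``Agree''}}{\text{number of applicable items}} \times 100\%
```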
  12. Arthrosc Sports Med Rehabil. 2024 Jun;6(3): 100939
      Purpose: To replicate a patient's internet search to evaluate ChatGPT's appropriateness in answering common patient questions about anterior cruciate ligament reconstruction compared with a Google web search.
    Methods: A Google web search was performed by searching the term "anterior cruciate ligament reconstruction." The top 20 frequently asked questions and responses were recorded. The prompt "What are the 20 most popular patient questions related to 'anterior cruciate ligament reconstruction?'" was input into ChatGPT, and questions and responses were recorded. Questions were classified based on the Rothwell system, and responses were assessed via Flesch-Kincaid Grade Level, correctness, and completeness for both Google web search and ChatGPT.
    Results: Three of 20 (15%) questions were similar between Google web search and ChatGPT. The most common question types among the Google web search were value (8/20, 40%), fact (7/20, 35%), and policy (5/20, 25%). The most common question types amongst the ChatGPT search were fact (12/20, 60%), policy (6/20, 30%), and value (2/20, 10%). Mean Flesch-Kincaid Grade Level for Google web search responses was significantly lower (11.8 ± 3.8 vs 14.3 ± 2.2; P = .003) than for ChatGPT responses. The mean correctness for Google web search question answers was 1.47 ± 0.5, and mean completeness was 1.36 ± 0.5. Mean correctness for ChatGPT answers was 1.8 ± 0.4 and mean completeness was 1.9 ± 0.3, which were both significantly greater than Google web search answers (P = .03 and P = .0003).
    Conclusions: ChatGPT-4 generated more accurate and complete responses to common patient questions about anterior cruciate ligament reconstruction than Google's search engine.
    Clinical Relevance: The use of artificial intelligence such as ChatGPT is expanding. It is important to understand the quality of information as well as how the results of ChatGPT queries compare with those from Google web searches.
    DOI:  https://doi.org/10.1016/j.asmr.2024.100939
  13. Lasers Med Sci. 2024 Jul 17. 39(1): 183
      Just as tattoos continue to increase in popularity, many people with tattoos also seek removal, often due to career concerns. Prospective clients interested in laser tattoo removal may research the procedure online, as the internet increasingly becomes a resource for preliminary health information. However, it is important that online health information on the topic be of high quality and accessible to all patients. To assess this, we analyzed 77 websites from a Google search query using the terms "Laser tattoo removal patient Information" and "Laser tattoo removal patient Instructions". The websites were evaluated for readability, using multiple validated indices, and for comprehensiveness. We found that websites had a broad readability range, from elementary to college level, though most were above the recommended eighth-grade reading level. Fewer than half of the websites adequately discussed the increased risk of pigmentary complications in clients with skin of color or emphasized the importance of consulting with a board-certified dermatologist/plastic surgeon before the procedure. Over 90% of the websites noted that multiple laser treatments are likely needed for complete clearance of tattoos. The findings from our study underscore a significant gap in the accessibility and quality of online information for patients considering laser tattoo removal, particularly in addressing specific risks for patients with darker skin tones and the need to consult a board-certified physician before undergoing the procedure. It is important that online resources for laser tattoo removal be appropriately written to support better decision-making, expectations, and future satisfaction for potential clients interested in the procedure.
    Keywords:  Laser tattoo removal; Patient education; Patient safety; Readability; Skin of color
    DOI:  https://doi.org/10.1007/s10103-024-04110-2
  14. OTO Open. 2024 Jul-Sep;8(3): e137
      Objective: To evaluate the readability, understandability, actionability, and accuracy of online resources covering vestibular migraine (VM).
    Study Design: Cross-sectional descriptive study design.
    Setting: Digital collection of websites appearing on Google search.
    Methods: Google searches were conducted to identify common online resources for VM. We examined readability using the Flesch Reading Ease (FRE) and Flesch-Kincaid Grade Level scores, understandability and actionability using the Patient Education Materials Assessment Tool (PEMAT), and accuracy by comparing the website contents to the consensus definition of "probable vestibular migraine."
    Results: Eleven of the most popular websites were analyzed. Flesch-Kincaid Grade Level averaged at a 13th-grade level (range: 9th-18th). FRE scores averaged 35.5 (range: 9.1-54.4). No website had a readability grade level at the US Agency for Healthcare Research and Quality recommended 5th-grade level or an equivalent FRE score of 90 or greater. Understandability scores varied, ranging from 49% to 88% (mean 70%). Actionability scores varied more, ranging from 12% to 87% (mean 44%). There was substantial inter-rater agreement for both PEMAT understandability scoring (mean κ = 0.76, SD = 0.08) and actionability scoring (mean κ = 0.65, SD = 0.08) [the kappa statistic is recalled after this entry]. Three sites included all 3 "probable vestibular migraine" diagnostic criteria as worded in the consensus statement.
    Conclusion: The quality of online resources for VM is poor overall in terms of readability, actionability, and agreement with diagnostic criteria.
    Keywords:  Flesch Reading Ease (FRE); Flesch‐Kincaid Grade Level (FKGL); Internet; Patient Education Materials Assessment Tool (PEMAT); actionability; health literacy; patient education; readability; understandability; vestibular migraine
    DOI:  https://doi.org/10.1002/oto2.137
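    The κ values above are kappa coefficients of inter-rater agreement; in the familiar two-rater (Cohen) form, kappa corrects observed agreement for agreement expected by chance, with 0.61-0.80 conventionally read as "substantial" (Landis and Koch):
```latex
% p_o = observed proportion of agreement between raters
% p_e = proportion of agreement expected by chance
\kappa = \frac{p_o - p_e}{1 - p_e}
```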
  15. Surg Endosc. 2024 Jul 15.
      INTRODUCTION: Health literacy is the ability of individuals to use basic health information and services to make well-informed decisions. Low health literacy among surgical patients has been associated with nonadherence to preoperative and/or discharge instructions as well as poor comprehension of surgery. It likely poses a barrier to patients considering foregut surgery, which requires an understanding of different treatment options and specific diet instructions. The objective of this study was to assess and compare the readability of online patient education materials (PEM) for foregut surgery.
    METHODS: Using Google, the terms "anti-reflux surgery," "GERD surgery," and "foregut surgery" were searched, and a total of 30 webpages from universities and national organizations were selected. The readability of the text was assessed with seven instruments: Flesch Reading Ease formula (FRE), Gunning Fog (GF), Flesch-Kincaid Grade Level (FKGL), Coleman-Liau Index (CL), Simple Measure of Gobbledygook (SMOG), Automated Readability Index (ARI), and Linsear Write Formula (LWF); the SMOG formula is shown after this entry as an example. Mean readability scores were calculated with standard deviations. We performed a qualitative analysis gathering characteristics such as type of information (preoperative or postoperative), organization, use of multimedia, and inclusion of a version in another language.
    RESULTS: The overall average readability of the top PEMs for foregut surgery was at the 12th-grade level. Only one resource was at the recommended sixth-grade reading level. Nearly half of the PEMs included some form of multimedia.
    CONCLUSIONS: The American Medical Association and the National Institutes of Health have recommended that PEMs be written at the 5th- to 6th-grade level. The majority of online PEMs for foregut surgery are above the recommended reading level. This may be a barrier for patients seeking foregut surgery. Surgeons should be aware of potential gaps in their patients' understanding to help them make informed decisions and improve overall health outcomes.
    Keywords:  Foregut surgery; Health literacy; Patient education; Readability
    DOI:  https://doi.org/10.1007/s00464-024-11042-z
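    As an example of how the instruments listed above work, the SMOG grade is commonly computed from the count of polysyllabic (3+ syllable) words, normalized to a 30-sentence sample:
```latex
\mathrm{SMOG} = 1.0430\,\sqrt{\text{polysyllable count} \times \frac{30}{\text{sentence count}}} + 3.1291
```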
  16. Phys Sportsmed. 2024 Jul 17.
      OBJECTIVE: Developing softball pitchers are prone to injury due to the repetitive throwing motion. Many children and parents use the internet as a source of medical advice, but this information may not always be aligned with medical guidelines. The purpose of this study was to assess the medical advisability of injury prevention guidelines for developing softball pitchers on websites, using Google as the primary search engine.
    METHODS: The first 100 websites populated from a Google search using the term "softball youth pitching recommendations" were evaluated. Each website was categorized as discussing baseball, softball, or both, and as athletic, commercial, or educational. For every website, 16 recommendations described by the American Orthopaedic Society for Sports Medicine (AOSSM) Stop Sports Injuries softball injury prevention guidelines (Table 1) were scored as in agreement (+1), different guideline mentioned (0.5), no mention (0), or discordant (-1).
    RESULTS: Of the 98 qualifying websites, 57 advised only about softball, while 19 advised about both baseball and softball. Fifty websites had no mention of any recommendation outlined by AOSSM. Websites that were mostly in agreement with AOSSM were educational websites (mean score = 3.9, p = 0.02), websites discussing only softball (mean score = 2.0, p = 0.02), and the first 50 websites (mean score = 2.2, p = 0.04). The most common discordant guideline was differing opinions in pitch count (13 websites).
    CONCLUSION: The most common category in disagreement with AOSSM was differing pitch count guidelines, highlighting a need for websites to provide more consistent information using high-quality resources. Educational websites, websites discussing only softball, and the first 50 websites had the highest scores, indicating that these types of websites are most likely to contain medically advisable information. We recommend that users conduct targeted searches on reliable websites for softball pitching recommendations to maximize the validity of the information retrieved.
    Keywords:  Female; Injury; Pitch count; Pitching; Softball
    DOI:  https://doi.org/10.1080/00913847.2024.2381474
  17. Cureus. 2024 Jun;16(6): e62510
      AIM: The increasing prevalence of obesity has led to the popularity of bariatric surgery. Laparoscopic Roux-en-Y gastric bypass (LRYGB) is one of the most complex methods in bariatric surgery. The main steps of LRYGB were determined in the Delphi Consensus. This study investigated the instructiveness and reliability of YouTube videos about LRYGB based on the Delphi Consensus.
    METHODS: In February 2024, three different searches were done in the search bar of the YouTube platform with the terms "laparoscopic gastric bypass," "laparoscopic Roux-en-Y gastric bypass," and "laparoscopic RYGB". The first 50 videos in each search were evaluated. Animations, lectures, advertisements, non-English videos, and non-surgical videos (pre-surgery, post-surgery vlog, etc.) were excluded from the study. Delphi consensus steps were used to determine the reliability of the videos. The quality of the videos was measured using the Global Quality Scale (GQS) and the modified DISCERN test.
    RESULTS: Forty-five videos were included in the evaluation. While 14 (31.1%) of these videos were classified as reliable, 31 (68.8%) were not found reliable. In reliable videos, video description, high-definition (HD) resolution, GQS, and modified DISCERN scores were significantly higher (p = 0.023, 0.004, 0.017, and 0.025, respectively).
    CONCLUSION: Unreliable videos outnumbered reliable ones on the YouTube platform. We conclude that YouTube alone is insufficient for learning LRYGB.
    Keywords:  education; gastric bypass; lrygb; reliable; youtube®
    DOI:  https://doi.org/10.7759/cureus.62510
  18. Rev Neurol. 2024 Aug 01. 79(3): 77-88
      INTRODUCTION: The use of YouTube® has spread among patients with chronic diseases such as multiple sclerosis (MS). These patients consult the available videos to learn more about their disease in terms of diagnosis and making decisions about treatments, including rehabilitation. The aim of this study was to evaluate the content, educational value, and quality of MS neurorehabilitation videos on YouTube® using quantitative instruments.
    MATERIALS AND METHODS: A search was conducted on YouTube®. The first 30 videos that met the inclusion criteria were reviewed. The videos were classified according to the upload source and the content. All videos included in the review were assessed with the DISCERN questionnaire, the JAMA benchmark, the global quality scale (GQS), and the video information and quality index (VIQI).
    RESULTS: The mean scores were 28.3 (±9.33) for DISCERN, 2 (±0.81) for JAMA, 2.57 (±1.22) for GQS, and 11.73 (±4.06) for VIQI. The JAMA score differed significantly according to upload source (p = 0.002), video content (p = 0.023), and speaker (p = 0.002). The DISCERN, JAMA, GQS, and VIQI scores showed significant correlations with each other.
    CONCLUSIONS: The analyzed YouTube® videos about neurorehabilitation in people with MS were relatively old, with moderate duration and view counts, and of poor quality in terms of content and educational value. Our research showed statistically significant differences in the quality, transparency, and reliability of the information depending on the upload source, video content, and speaker.
    DOI:  https://doi.org/10.33588/rn.7903.2024091
  19. Spec Care Dentist. 2024 Jul 15.
      BACKGROUND: Caregivers seeking additional information about Presurgical Infant Orthopedics (PSIO) may turn to online sources, but the quality of information on platforms like YouTube is uncertain.
    AIM: To investigate the content and quality of PSIO videos on YouTube.
    DESIGN: YouTube videos were searched using keywords related to PSIO appliances. Videos that met the eligibility criteria (n = 52) were categorized as care provider or caregiver-based. Engagement metrics were analyzed and quality assessments were performed by two raters using the Global Quality Score (GQS), Video Information and Quality Index (VIQI), and Medical Quality Video Evaluation Tool (MQ-VET).
    RESULTS: Inter-rater and intra-rater correlations were high (r ≥0.9; p < 0.01), indicating excellent reliability. Strong correlations were observed between the GQS, VIQI, and MQ-VET scores (r: 0.86-0.91; p < 0.01). Mean GQS (2.7 ± 1.1), VIQI (13.0 ± 4.1), and MQ-VET (42.6 ± 12.4) scores indicated poor to moderate video quality. Most videos (73.1%) were in the care provider category and rated significantly higher (p < 0.05) in quality than the caregiver category for all three indices, but not for video engagement metrics.
    CONCLUSION: YouTube PSIO videos are not comprehensive and lack quality. Caregivers of infants undergoing PSIO should seek advice from care providers and not rely solely on YouTube videos.
    Keywords:  YouTube; cleft lip; cleft palate; eHealth; infant; orthopedic treatment; social media
    DOI:  https://doi.org/10.1111/scd.13041
  20. Int Ophthalmol. 2024 Jul 18. 44(1): 329
      PURPOSE: To evaluate the quality and reliability of YouTube videos as an educational resource about myopia.
    METHODS: The videos were identified by searching YouTube with the keywords 'myopia' and 'nearsightedness', using the website's default search settings. The number of views, likes, dislikes, view ratio, source of the upload, country of origin, video type, and described treatment techniques were assessed. Each video was evaluated using the DISCERN, Journal of the American Medical Association (JAMA), Ensuring Quality Information for Patients (EQIP), Health On the Net Code of Conduct Certification (HONcode), and the Global Quality Score (GQS) scales.
    RESULTS: A total of 112 videos were included. The classification of videos by source indicated that the top three contributors were health channels (30 videos [26.8%]), physicians (24 videos [21.4%]), and academic centers (19 videos [16.9%]). Most of these videos originated from the United States (74 videos [66.1%]) and focused on the pathophysiology (n = 89, 79.4%) and the treatment (n = 77, 68.7%) of myopia. Statistical comparisons among the groups of video sources showed no significant difference in the mean DISCERN score (p = 0.102). However, significant differences were noted in the JAMA (p = 0.011), GQS (p = 0.009), HONcode (p = 0.011), and EQIP (p = 0.002) scores.
    CONCLUSIONS: This study underscored the variability in the quality and reliability of YouTube videos related to myopia, with most content ranging from 'weak to moderate' quality based on the DISCERN and GQS scales, yet appearing to be 'excellent' according to the HONcode and EQIP scales. Videos uploaded by physicians generally exhibited higher standards, highlighting the importance of expert involvement in online health information dissemination. Given the potential risks of accessing incorrect medical data that can affect the decision-making processes of patients, caution should be exercised when using online content as a source of information.
    Keywords:  DISCERN score; Ensuring quality information for patients (EQIP) score; Global quality score (GQS); Journal of the american medical association (JAMA) score; YouTube
    DOI:  https://doi.org/10.1007/s10792-024-03250-2
  21. BMC Oral Health. 2024 Jul 15. 24(1): 798
      BACKGROUND: The aim of this study was to evaluate the content and quality of videos about bruxism treatments on YouTube, a platform frequently used by patients today to obtain information.
    METHODS: A YouTube search was performed using the keywords "bruxism treatment" and "teeth grinding treatment". The "sort by relevance" filter was used for both search terms, and the first 150 videos were saved. A total of 139 videos that met the study criteria were included in the study. Videos were classified as poor, moderate, or excellent based on a usefulness score that evaluated content quality. The modified DISCERN tool was also used to evaluate video quality. Additionally, videos were categorized according to the upload source, target audience, and video type. The types of treatments mentioned in the videos and the demographic data of the videos were recorded.
    RESULTS: According to the usefulness score, 59% of the videos were poor-quality, 36.7% were moderate-quality, and 4.3% were excellent-quality. Moderate-quality videos had a higher interaction index than excellent-quality videos (p = 0.039). The video duration of excellent-quality videos was longer than that of moderate- and poor-quality videos (p = 0.024, p = 0.002). Videos with poor-quality content had significantly lower DISCERN scores than videos with moderate- (p < 0.001) and excellent-quality content (p = 0.008). Additionally, there was a moderate positive correlation (r = 0.446) between DISCERN scores and content usefulness scores (p < 0.001), and only a weak positive correlation between DISCERN scores and video length (r = 0.359; p < 0.001). The videos uploaded by physiotherapists had significantly higher views per day and viewing rates than videos uploaded by medical doctors (p = 0.037), university-hospital-institutes (p = 0.024), and dentists (p = 0.006). The videos uploaded by physiotherapists also had notably higher numbers of likes and comments than videos uploaded by medical doctors (p = 0.023; p = 0.009, respectively), university-hospital-institutes (p = 0.003; p = 0.008, respectively), and dentists (p = 0.002; p = 0.002, respectively).
    CONCLUSIONS: Although the majority of videos on YouTube about bruxism treatments are produced by professionals, most of the videos contain limited information, which may lead patients to debate treatment methods. Health professionals should warn patients against this potentially misleading content and direct them to reliable sources.
    Keywords:  Bruxism treatment; Discern; Internet; Patient information; Youtube
    DOI:  https://doi.org/10.1186/s12903-024-04571-5
  22. Arthrosc Sports Med Rehabil. 2024 Jun;6(3): 100921
      Purpose: To assess the quality of YouTube videos for patient education on shoulder dislocation.
    Methods: A standard YouTube search was performed in March 2023 using the terms "shoulder dislocation," "dislocated shoulder," and "glenohumeral joint dislocation" to identify eligible videos. Multiple scoring systems, including DISCERN (a validated tool for analyzing the quality of health information in consumer-targeted videos), Journal of the American Medical Association (JAMA) Benchmark Criteria, and the Global Quality Score (GQS) were used to evaluate the videos. Video quality scores from various sources were compared using the Kruskal-Wallis test for initial analysis, followed by Dunn's post-hoc test with Bonferroni correction, and the strength of relationship between variables was assessed using Spearman's rank correlation coefficient.
    Results: A total of 162 eligible videos were identified. The mean video duration was 11.38 ± 3.01 minutes, the median number of views was 653. Median number of days since upload was 1,972, the median view rate was 0.343, and median number of likes was 66.12. Based on the DISCERN classification, a substantial proportion of videos were classified as insufficient quality, with 19.4% as "very insufficient" and 42.1% as "insufficient"; 24.1% were classified as "average" quality, whereas only 13.1% were classified as "good" and 1.2% were "excellent." Videos from academic and professional sources showed a significant positive correlation with DISCERN scores (rho: +0.784, P < .001) and greater scores on all 4 scoring systems compared to health information websites.
    Conclusions: This study reveals that the majority of YouTube videos on shoulder dislocation lack sufficient quality for patient education, with content quality significantly influenced by the source.
    Clinical Relevance: Examining the accuracy of information that patients encounter on YouTube is essential for health care providers to direct individuals toward more reliable sources of information.
    DOI:  https://doi.org/10.1016/j.asmr.2024.100921
  23. Surg Endosc. 2024 Jul 15.
      BACKGROUND: Many surgeons use online videos to learn. However, these videos vary in content, quality, and educational value. In the setting of recent work questioning the safety of robotic-assisted cholecystectomies, we aimed (1) to identify highly watched online videos of robotic-assisted cholecystectomies, (2) to determine whether these videos demonstrate suboptimal techniques, and (3) to compare videos based on platform.
    METHODS: Two authors searched YouTube and a members-only Facebook group to identify highly watched videos of robotic-assisted cholecystectomies. Three members of the Society of American Gastrointestinal and Endoscopic Surgeons Safe Cholecystectomy Task Force then reviewed videos in random order. These three members rated each video using Sanford and Strasberg's six-point criteria for critical view of safety (CVS) scoring and the Parkland grading scale for cholecystitis. We performed regression to determine any association between Parkland grade and CVS score. We also compared scores between the YouTube and Facebook videos using a t test.
    RESULTS: We identified 50 videos of robotic-assisted cholecystectomies, including 25 from YouTube and 25 from Facebook. Of the 50 videos, six demonstrated a top-down approach. The remaining 44 videos received a mean of 2.4 of 6 points for the CVS score (SD = 1.8). Overall, 4 of the 50 videos (8%) received a passing CVS score of 5 or 6. Videos received a mean of 2.4 of 5 points for the Parkland grade (SD = 0.9). Videos on YouTube had lower CVS scores than videos on Facebook (1.9 vs. 2.8, respectively), though this difference was not significant (p = 0.09). By regression, there was no association between Parkland grade and CVS score (p = 0.13).
    CONCLUSION: Publicly available and closed-group online videos of robotic-assisted cholecystectomy demonstrated inadequate dissection and may be of limited educational value. Future work should center on introducing measures to identify and feature videos with high-quality techniques most useful to surgeons.
    Keywords:  Cholecystectomy; Online education; Robotic surgery; Video-based education
    DOI:  https://doi.org/10.1007/s00464-024-11054-9
  24. Urology. 2024 Jul 11. pii: S0090-4295(24)00563-6. [Epub ahead of print]
      OBJECTIVES: To assess the reliability and quality of ThuLEP videos on YouTube as a source of public information.
    MATERIALS AND METHODS: In this study, a YouTube search with the keyword "ThuLEP" was performed on November 15, 2022, and 142 videos were listed according to relevance. Video features and source of upload were recorded. The quality of videos was evaluated using both the Journal of American Medical Association (JAMA) score and the Global Quality Score (GQS); the reliability of videos was evaluated using the 5-point modified DISCERN tool. Correlations between video features and these three scores were analyzed using the Spearman test.
    RESULTS: After exclusions, 77 videos were analyzed. The most common source of upload was urologists (54.5%), and videos containing only ThuLEP surgery made up the majority (74%). The median JAMA score, 5-point modified DISCERN score, and GQS were 2, 1, and 1, respectively. There were no statistically significant differences in these three scores according to the source of upload. All three scores were also analyzed separately by language, and no significant difference was found. There was a positive correlation between the video power index and the JAMA, GQS, and modified DISCERN scores.
    CONCLUSIONS: Despite the abundance of ThuLEP videos on YouTube, most of these videos are not targeted at the public, and the information provided may not be useful for patients. Information presented in these videos may be inaccurate and unreliable.
    DOI:  https://doi.org/10.1016/j.urology.2024.07.008
  25. Arthrosc Sports Med Rehabil. 2024 Jun;6(3): 100927
      Purpose: To evaluate the quality of meniscus-related TikTok videos to better understand their value for patient education.
    Methods: The term "meniscus" was used as the keyword for an extensive online search of video content on TikTok on November 14, 2023. The first 100 videos were used for analysis. The duration of the videos and the number of likes, shares, and views were recorded for each video. Furthermore, videos were categorized based on the source (health workers, private user), the type of subject (patient experience, physical therapy, anatomy, clinical examination, surgical technique, and injury mechanism), the type of content (patient experience/testimony, education, rehabilitation), and the presence of music or voice. The quality and reliability assessments of video contents were conducted using the DISCERN instrument, the Journal of the American Medical Association benchmark criteria, and the Global Quality Score.
    Results: Of the 100 videos included in this study, 62 (62%) were published by health workers and 38 (38%) by private users. Most of the information regarded patient experience (36, 36%), followed by physical therapy (32, 32%), anatomy (14, 14%), clinical examination (8, 8%), surgical technique (6, 6%), and injury mechanism (4, 4%). Video content reported patient experience in 39 (39%) videos, rehabilitation in 31 (31%) videos, and education in the remaining 30 (30%). The mean length of the videos was 39.12 ± 49.56 seconds. The mean number of views was 1,383,001.65 ± 5,291,822.28, whereas the mean numbers of comments, likes, and shares were 408.53 ± 1,976.90, 54,763.43 ± 211,823.44, and 873.70 ± 2,802.01, respectively. The mean DISCERN score, Journal of the American Medical Association benchmark criteria score, and Global Quality Score were 17.93 ± 5.07, 0.24 ± 0.47, and 1.15 ± 0.41, respectively.
    Conclusions: Meniscus-related videos on TikTok are widely viewed and shared but the overall educational value to patients is poor.
    Clinical Relevance: As patients increasingly use social media to learn about their conditions, it is important for orthopaedic health care professionals to understand the limitations of TikTok videos addressing the meniscus as potential sources of information for their patients.
    DOI:  https://doi.org/10.1016/j.asmr.2024.100927