bims-librar Biomed News
on Biomedical librarianship
Issue of 2024‒05‒26
twenty-one papers selected by
Thomas Krichel, Open Library Society



  1. F1000Res. 2024;13:134
      The enormous growth in interest in and use of generative artificial intelligence (AI) systems since the launch of ChatGPT in autumn 2022 has raised questions both about the legal status of AI outputs and about the use of protected works as training inputs. UK higher education institution (HEI) library copyright advice services will inevitably see an increase in questions around the use of works with AI as a result. Staff working in such library services are not lawyers and cannot offer legal advice to their academic researchers. Nonetheless, they must examine the issues raised, consider how they would advise in analogous situations involving copyright material, and offer opinion to researchers accordingly. While the legal questions remain to be answered definitively, copyright librarians can still offer advice on both open licences and the use of copyright material under permitted exceptions. We look here at how library services can address questions on copyright and open licences for generative AI for researchers in UK HEIs.
    Keywords:  AI; artificial intelligence; copyright; generative AI systems; open access; open licences
    DOI:  https://doi.org/10.12688/f1000research.143131.1
  2. JMIR Med Inform. 2024 May 14;12:e51187
      Background: A large language model is a type of artificial intelligence (AI) model that opens up great possibilities for health care practice, research, and education, although scholars have emphasized the need to proactively address the issue of unvalidated and inaccurate information regarding its use. One of the best-known large language models is ChatGPT (OpenAI). It is believed to be of great help to medical research, as it facilitates more efficient data set analysis, code generation, and literature review, allowing researchers to focus on experimental design as well as drug discovery and development.
    Objective: This study aims to explore the potential of ChatGPT as a real-time literature search tool for systematic reviews and clinical decision support systems, to enhance their efficiency and accuracy in health care settings.
    Methods: The search results of a published systematic review by human experts on the treatment of Peyronie disease were selected as a benchmark, and the literature search formula of the study was applied to ChatGPT and Microsoft Bing AI as a comparison to human researchers. Peyronie disease typically presents with discomfort, curvature, or deformity of the penis in association with palpable plaques and erectile dysfunction. To evaluate the quality of individual studies derived from AI answers, we created a structured rating system based on bibliographic information related to the publications. If the title existed, we classified the answers into 4 grades: A, B, C, and F. No grade was given for a fake title or no answer.
    Results: From ChatGPT, 7 (0.5%) of the 1287 identified studies were directly relevant, whereas Bing AI returned 19 (40%) relevant studies out of 48, compared with the human benchmark of 24 studies. In the qualitative evaluation, ChatGPT yielded 7 grade A, 18 grade B, 167 grade C, and 211 grade F studies, and Bing AI yielded 19 grade A and 28 grade C studies.
    Conclusions: This is the first study to compare AI with conventional human systematic review methods as a real-time literature collection tool for evidence-based medicine. The results suggest that the use of ChatGPT as a tool for real-time evidence generation is not yet accurate or feasible, so researchers should be cautious about using such AI. The limitations of this study of the generative pre-trained transformer model are that the range of search topics was not diverse and that the hallucination of generative AI was not prevented. Nevertheless, this study will serve as a standard for future studies by providing an index to verify the reliability and consistency of generative AI from a user's point of view. If the reliability and consistency of AI literature search services are verified, these technologies will greatly help medical research.
    Keywords:  ChatGPT; artificial intelligence; clinical decision support system; decision support; education; evidence-based medicine; language model; search engine; support; systematic review; tool; treatment
    DOI:  https://doi.org/10.2196/51187
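    The headline percentages above are simple ratios over the reported counts; a minimal Python sketch using only the numbers given in the abstract (the A-F grading rubric itself is not reproduced here):

      # Relevance rate (relevant / retrieved) plus recall against the
      # 24-study human benchmark.
      def rates(relevant: int, retrieved: int, benchmark: int) -> tuple[float, float]:
          return relevant / retrieved, relevant / benchmark

      chatgpt = rates(relevant=7, retrieved=1287, benchmark=24)
      bing_ai = rates(relevant=19, retrieved=48, benchmark=24)
      print(f"ChatGPT: {chatgpt[0]:.1%} relevant, {chatgpt[1]:.1%} of benchmark")
      print(f"Bing AI: {bing_ai[0]:.1%} relevant, {bing_ai[1]:.1%} of benchmark")
      # ChatGPT: 0.5% relevant, 29.2% of benchmark
      # Bing AI: 39.6% relevant, 79.2% of benchmark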
  3. J Med Internet Res. 2024 May 22;26:e53164
      BACKGROUND: Large language models (LLMs) have raised both interest and concern in the academic community. They offer the potential for automating literature search and synthesis for systematic reviews but raise concerns regarding their reliability, as the tendency to generate unsupported (hallucinated) content persists.
    OBJECTIVE: The aim of the study is to assess the performance of LLMs such as ChatGPT and Bard (subsequently rebranded Gemini) in producing references in the context of scientific writing.
    METHODS: The performance of ChatGPT and Bard in replicating the results of human-conducted systematic reviews was assessed. Using systematic reviews pertaining to shoulder rotator cuff pathology, these LLMs were tested by providing the same inclusion criteria and comparing the results with the original systematic review references, which served as gold standards. The study used 3 key performance metrics: recall, precision, and F1-score, alongside the hallucination rate. Papers were considered "hallucinated" if any 2 of the following items were wrong: title, first author, or year of publication.
    RESULTS: In total, 11 systematic reviews across 4 fields yielded 33 prompts to LLMs (3 LLMs×11 reviews), with 471 references analyzed. Precision rates for GPT-3.5, GPT-4, and Bard were 9.4% (13/139), 13.4% (16/119), and 0% (0/104) respectively (P<.001). Recall rates were 11.9% (13/109) for GPT-3.5 and 13.7% (15/109) for GPT-4, with Bard failing to retrieve any relevant papers (P<.001). Hallucination rates stood at 39.6% (55/139) for GPT-3.5, 28.6% (34/119) for GPT-4, and 91.4% (95/104) for Bard (P<.001). Further analysis of nonhallucinated papers retrieved by GPT models revealed significant differences in identifying various criteria, such as randomized studies, participant criteria, and intervention criteria. The study also noted the geographical and open-access biases in the papers retrieved by the LLMs.
    CONCLUSIONS: Given their current performance, it is not recommended for LLMs to be deployed as the primary or exclusive tool for conducting systematic reviews. Any references generated by such models warrant thorough validation by researchers. The high occurrence of hallucinations in LLMs highlights the necessity for refining their training and functionality before confidently using them for rigorous academic purposes.
    Keywords:  Bard; ChatGPT; artificial intelligence; hallucinated; human conducted; large language models; literature search; rotator cuff; systematic reviews
    DOI:  https://doi.org/10.2196/53164
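    These metrics are standard; a minimal Python sketch of the F1 computation and of the stated hallucination rule (a reference counts as hallucinated when at least 2 of title, first author, and year are wrong), with the reported GPT-3.5 counts plugged in (the authors' exact reference-matching procedure is not given in the abstract):

      def f1(precision: float, recall: float) -> float:
          return 2 * precision * recall / (precision + recall) if precision + recall else 0.0

      def is_hallucinated(candidate: dict, best_match: dict) -> bool:
          # Rule from Methods: hallucinated if any 2 of title / first author /
          # year of publication are wrong.
          wrong = sum(candidate[k] != best_match[k]
                      for k in ("title", "first_author", "year"))
          return wrong >= 2

      p, r = 13 / 139, 13 / 109   # GPT-3.5 precision and recall counts
      print(f"precision={p:.1%}, recall={r:.1%}, F1={f1(p, r):.1%}")
      # precision=9.4%, recall=11.9%, F1=10.5%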
  4. Cureus. 2024 Apr;16(4):e58618
      Objective This study aimed to assess the quality of online patient educational materials regarding posterior cruciate ligament (PCL) reconstruction. Methods We performed a search of the top 50 results on Google® (terms: "posterior cruciate ligament reconstruction," "PCL reconstruction," "posterior cruciate ligament surgery," and "PCL surgery") and subsequently filtered the results to rule out duplicated or inaccessible websites and those containing only videos (67 websites included). Readability was assessed using six formulas: Flesch-Kincaid Reading Ease (FRE), Flesch-Kincaid Grade Level (FKG), Gunning Fog Score (GF), Simple Measure of Gobbledygook (SMOG) Index, Coleman-Liau Index (CLI), and Automated Readability Index (ARI); quality was assessed using the JAMA benchmark criteria and by recording the presence of the HONcode seal. Results The mean FRE was 49.3 (SD 11.2) and the mean FKG level was 8.09. These results were confirmed by the other readability formulas (average: GF 8.9; SMOG Index 7.3; CLI 14.7; ARI 6.5). A HONcode seal was available for 7.4% of websites. The average JAMA score was 1.3. Conclusion The reading level of online patient materials concerning PCL reconstruction is too high for the average reader, requiring high comprehension skills. Practice implications Online medical information has been shown to influence patient healthcare decision processes. Patient-oriented educational materials should be clear and easy to understand.
    Keywords:  information quality; internet; patient education; posterior cruciate ligament (pcl) reconstruction; readability
    DOI:  https://doi.org/10.7759/cureus.58618
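    The two Flesch formulas named above are fixed linear combinations of word, sentence, and syllable counts. A rough, self-contained Python sketch; the vowel-group syllable counter is only an approximation of the calibrated tools such studies use:

      import re

      def _counts(text: str) -> tuple[int, int, int]:
          words = re.findall(r"[A-Za-z]+", text)
          sentences = max(1, len(re.findall(r"[.!?]+", text)))
          # Approximate syllables as vowel groups, minimum one per word.
          syllables = sum(max(1, len(re.findall(r"[aeiouy]+", w.lower())))
                          for w in words)
          return len(words), sentences, syllables

      def flesch_reading_ease(text: str) -> float:
          w, s, syl = _counts(text)
          return 206.835 - 1.015 * (w / s) - 84.6 * (syl / w)

      def flesch_kincaid_grade(text: str) -> float:
          w, s, syl = _counts(text)
          return 0.39 * (w / s) + 11.8 * (syl / w) - 15.59

      sample = "The posterior cruciate ligament stabilizes the knee."
      print(flesch_reading_ease(sample), flesch_kincaid_grade(sample))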
  5. Surgery. 2024 May 19. pii: S0039-6060(24)00238-1. [Epub ahead of print]
      BACKGROUND: In the United States, over 6 million people are affected by chronic wounds. Patients often rely on the Internet for treatment information; however, these educational materials typically exceed the average eighth-grade health literacy level. This study aimed to assess the readability and language accessibility of online patient education materials on wound care strategies.
    METHODS: A search was conducted on Google for articles related to wound care strategies. The first 12 unique websites from each search strategy were selected for further analysis. Readability was assessed using 11 tests, with mean scores calculated for each.
    RESULTS: A total of 66 articles pertaining to wound care strategies were retrieved from 43 websites. The articles had an average reading grade level of 13.5 ± 2.5 and an average reading age of 18.7 ± 2.5 years. Websites were categorized by source: academic (34.9%), reagent/biologic manufacturers (27.9%), wound care (18.6%), news media organizations (14%), and other (4.7%). The Flesch Reading Ease Score, graded from 0 for most difficult to 100 for least difficult, was highest for academic websites (44.2, P = .01) and lowest for news media websites (24.9, P = .01). Academic websites were available in more languages than all other website categories (P < .01).
    CONCLUSION: Online materials related to wound care strategies often exceed the National Institutes of Health's recommended eighth-grade reading level. This study emphasizes the need for healthcare providers to create more accessible educational materials to address the gap in health literacy and optimize patient care.
    DOI:  https://doi.org/10.1016/j.surg.2024.04.014
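    The summary statistics above reduce to a mean and standard deviation over per-article grade levels, with reading age derived from grade level. A Python sketch with hypothetical scores; the grade-plus-five conversion is a common rule of thumb consistent with the abstract's 13.5 grade / 18.7 years:

      from statistics import mean, stdev

      # Hypothetical per-article grade levels (each already averaged over the
      # study's 11 readability tests).
      grade_levels = [13.1, 15.8, 11.2, 14.0, 12.6]

      avg, sd = mean(grade_levels), stdev(grade_levels)
      print(f"grade level: {avg:.1f} +/- {sd:.1f}")
      print(f"approx. reading age: {avg + 5:.1f} years")  # grade + 5 rule of thumb
      print("exceeds 8th-grade target" if avg > 8 else "within 8th-grade target")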
  6. Cureus. 2024 Apr;16(4):e58603
      Background This cross-sectional study aimed to assess the readability of strabismus-related websites and the quality of their content. Methodology Websites on strabismus were evaluated using the Ateşman and Bezirci-Yılmaz readability formulas, which have been validated for Turkish-language texts. Texts were taken from the first 50 websites returned by a Google search for "strabismus treatment" and assessed for their Turkish reading level and information reliability; 41 of the first 50 websites were reviewed. Furthermore, two senior ophthalmologists independently scored the websites on the JAMA and DISCERN indexes and evaluated the credibility of their content. Results The Bezirci-Yılmaz readability index indicated that the websites were readable for individuals with an average education level of 10.5 ± 2.3 years. The websites scored an average of 55.2 ± 7.9 on the Ateşman readability formula, indicating that they were readable by students in the 11-12th grade. The websites had an average JAMA score of 0.8 ± 0.7 points and a DISCERN score of 34.2 ± 8.6 points, indicating low-quality content. Conclusions The reading level required by websites providing information about strabismus was significantly higher than Turkey's average educational level. Websites should not only be designed to be easy to read so that strabismus patients may learn about their condition but should also provide higher-quality strabismus content.
    Keywords:  discern; jama; readability; strabismus; website
    DOI:  https://doi.org/10.7759/cureus.58603
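    The Ateşman score is a Turkish adaptation of Flesch Reading Ease; a Python sketch using the commonly cited coefficients (Turkish syllable counts can be approximated by counting vowels; the Bezirci-Yılmaz index is omitted here):

      TURKISH_VOWELS = set("aeıioöuüAEIİOÖUÜ")

      def atesman(text: str) -> float:
          sentences = max(1, sum(text.count(c) for c in ".!?"))
          words = text.split()
          syllables = sum(sum(ch in TURKISH_VOWELS for ch in w) for w in words)
          x1 = syllables / len(words)   # mean syllables per word
          x2 = len(words) / sentences   # mean words per sentence
          return 198.825 - 40.175 * x1 - 2.610 * x2

      print(atesman("Şaşılık tedavisi erken yaşta daha başarılı olur."))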
  7. Cureus. 2024 Apr;16(4):e58559
      Introduction Sarcoidosis is an inflammatory disease characterized by the formation of noncaseating granulomas in multiple organ systems. The presentation can vary widely; although some patients with sarcoidosis can be asymptomatic, sarcoidosis can also present with symptomatic multiorgan system involvement. Considering the potential severity of the disease, patients need to be well-informed about sarcoidosis to better manage their health. This study aims to assess the readability levels of online resources about sarcoidosis. Methods We conducted a retrospective cross-sectional study. The term "sarcoidosis" was searched online using both Google and Bing to find websites written in English. Each website was categorized by type: academic, commercial, government, nonprofit, and physician. The readability scores for each website were calculated using six different readability tests: the Flesch-Kincaid reading ease (FKRE), Flesch-Kincaid grade level (FKGL), Gunning fog score (GFS), Simple Measure of Gobbledygook (SMOG), automated readability index (ARI), and Coleman-Liau index (CLI). FKRE gives a score that corresponds to the difficulty of the text, while the remaining tests give a score that corresponds to a grade level in terms of reading ability. A one-sample t-test was used to compare all test scores with the nationally recommended standard of a sixth-grade reading level. Our null hypothesis was that the readability scores of the websites searched would not differ statistically significantly from the sixth-grade reading level and that there would be no significant differences across website categories. ANOVA testing was used to evaluate differences between website categories. Results Thirty-four websites were analyzed. The average score on each of the six readability tests corresponded to text significantly harder to read than the nationally recommended sixth-grade reading level (p<0.001). None of the mean readability scores showed a statistically significant difference across the five website categories. Conclusions This is the first study, to our knowledge, to examine the readability of online English resources on sarcoidosis and calculate standardized readability scores for them. The findings imply that online English material on sarcoidosis is written above the reading levels recommended for patient health literacy, and the material needs to be simplified to be easier for patients to read.
    Keywords:  health literacy; internet; medical education; readability; sarcoidosis
    DOI:  https://doi.org/10.7759/cureus.58559
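    The comparison described above is a one-sample t-test of each formula's scores against the sixth-grade target; a minimal Python sketch with hypothetical grade-level scores:

      from scipy import stats

      # Hypothetical FKGL scores for a set of websites.
      fkgl_scores = [9.8, 11.2, 8.7, 12.4, 10.1, 9.3, 13.0, 10.8]

      t_stat, p_value = stats.ttest_1samp(fkgl_scores, popmean=6.0)
      print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
      # A small p with a mean above 6 means the material reads significantly
      # harder than the recommended sixth-grade level.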
  8. Indian J Psychiatry. 2024 Apr;66(4):352-359
      Background: Management of dementia involves a multidisciplinary approach that also requires active participation from family members and caregivers. Easy access to information about dementia care is therefore important, and the internet is an emerging source of such information.
    Aim: To perform a comparative assessment of patient-oriented online information on the treatment of dementia available on web pages in the English and Hindi languages.
    Methods: An observational study was conducted online through a general internet search engine (www.google.com). Web pages containing patient-oriented online information on the treatment of dementia in English and Hindi were reviewed to assess their content and quality, esthetics, and interactivity. Appropriate descriptive and inferential statistics were computed using the Statistical Package for the Social Sciences.
    Results: A total of 70 web pages met the eligibility criteria. Content quality assessed using the DISCERN score was significantly higher for English web pages compared to Hindi web pages (P < 0.01). About 72.4% (21/29) of English and only 9.8% (4/41) of Hindi web pages had a total DISCERN score of 40 or above, indicating good quality. For esthetics, the median score for English pages was significantly higher than for Hindi web pages (P < 0.01). The web pages with Health On Net (HON) certification had significantly better content quality.
    Conclusion: Our study revealed a scarcity of good-quality online information about dementia and its treatment, especially in the Hindi language. English-language websites showed better content quality than Hindi websites. The HON Code label might serve lay people as an indicator of better content quality in online resources on dementia treatment.
    Keywords:  DISCERN; Dementia; Internet; esthetics; interactivity
    DOI:  https://doi.org/10.4103/indianjpsychiatry.indianjpsychiatry_506_23
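    The good-quality proportions follow directly from the reported counts (total DISCERN score of 40 or above). A Python sketch; the chi-square test here is an illustrative choice, as the abstract does not name the test used:

      from scipy.stats import chi2_contingency

      english_good, english_total = 21, 29
      hindi_good, hindi_total = 4, 41

      table = [[english_good, english_total - english_good],
               [hindi_good, hindi_total - hindi_good]]
      chi2, p, _, _ = chi2_contingency(table)
      print(f"English: {english_good / english_total:.1%}, "
            f"Hindi: {hindi_good / hindi_total:.1%}, p = {p:.4g}")
      # English: 72.4%, Hindi: 9.8%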
  9. Can J Anaesth. 2024 May 21.
      PURPOSE: Guidelines recommend that health-related information for patients should be written at or below the sixth-grade level. We sought to evaluate the readability level and quality of online patient education materials regarding epidural and spinal anesthesia.
    METHODS: We evaluated webpages with content written specifically about either spinal or epidural anesthesia, identified using 11 relevant search terms, with seven commonly used readability formulas: Flesch-Kincaid Grade Level (FKGL), Gunning Fog Index (GFI), Coleman-Liau Index (CLI), Automated Readability Index (ARI), Simple Measure of Gobbledygook (SMOG), Flesch Reading Ease (FRE), and New Dale-Chall (NDC). Two evaluators assessed the quality of the reading materials using the Brief DISCERN tool.
    RESULTS: We analyzed 261 webpages. The mean (standard deviation) readability scores were: FKGL = 8.8 (1.9), GFI = 11.2 (2.2), CLI = 10.3 (1.9), ARI = 8.1 (2.2), SMOG = 11.6 (1.6), FRE = 55.7 (10.8), and NDC = 5.4 (1.0). The mean grade level was higher than the recommended sixth-grade level when calculated with six of the seven readability formulas. The average Brief DISCERN score was 16.0.
    CONCLUSION: Readability levels of online patient education materials pertaining to epidural and spinal anesthesia are higher than recommended. When we evaluated the quality of the information using a validated tool, the materials were found to be just below the threshold of what is considered good quality. Authors of educational materials should provide not only readable but also good-quality information to enhance patient understanding.
    Keywords:  epidural anesthesia; internet; patient education; readability; spinal anesthesia
    DOI:  https://doi.org/10.1007/s12630-024-02771-9
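    Of the formulas listed above, the Gunning Fog Index is among the simplest: 0.4 times the sum of the mean sentence length and the percentage of words with three or more syllables. A rough Python sketch, with syllables again approximated by vowel groups:

      import re

      def gunning_fog(text: str) -> float:
          words = re.findall(r"[A-Za-z]+", text)
          sentences = max(1, len(re.findall(r"[.!?]+", text)))
          complex_words = sum(
              1 for w in words if len(re.findall(r"[aeiouy]+", w.lower())) >= 3)
          return 0.4 * (len(words) / sentences + 100 * complex_words / len(words))

      print(gunning_fog("Epidural anesthesia numbs the lower body during surgery."))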
  10. Cureus. 2024 Apr;16(4):e58488
      Introduction The National Institutes of Health and the American Medical Association recommend that patient education materials (EMs) be at or below the sixth-grade reading level. The American Cancer Society, Leukemia & Lymphoma Society, and National Comprehensive Cancer Network provide accurate blood cancer EMs. Methods One hundred one (101) blood cancer EMs from the above organizations were assessed using the following: Flesch Reading Ease Formula (FREF), Flesch-Kincaid Grade Level (FKGL), Gunning Fog Index (GFI), Simple Measure of Gobbledygook Index (SMOG), and the Coleman-Liau Index (CLI). Results Only 3.96% of patient EMs scored at or below the seventh-grade reading level in all modalities. Healthcare professional education materials (HPEMs) averaged around the college to graduate level. For leukemia and lymphoma patient EMs, there were significant differences for FKGL vs. SMOG, FKGL vs. GFI, FKGL vs. CLI, SMOG vs. CLI, and GFI vs. CLI. For HPEMs, there were significant differences for FKGL vs. GFI and GFI vs. CLI. Conclusion The majority of patient EMs were above the seventh-grade reading level. A lack of easily readable patient EMs could lead to a poor understanding of disease and, thus, adverse health outcomes. Overall, patient EMs should not replace physician counseling. Physicians must close the gaps in patients' understanding throughout their cancer treatment.
    Keywords:  blood cancer; leukemia; lymphoma; patient education materials; physician education materials; readability
    DOI:  https://doi.org/10.7759/cureus.58488
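    The Coleman-Liau Index used above is character based, so it needs no syllable counting: CLI = 0.0588 L - 0.296 S - 15.8, where L is letters per 100 words and S is sentences per 100 words. A minimal Python sketch:

      import re

      def coleman_liau(text: str) -> float:
          words = re.findall(r"[A-Za-z]+", text)
          letters = sum(len(w) for w in words)
          sentences = max(1, len(re.findall(r"[.!?]+", text)))
          L = 100 * letters / len(words)      # letters per 100 words
          S = 100 * sentences / len(words)    # sentences per 100 words
          return 0.0588 * L - 0.296 * S - 15.8

      print(coleman_liau("Leukemia is a cancer of the blood and bone marrow."))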
  11. Saudi J Med Med Sci. 2024 Apr-Jun;12(2):188-193
      Background: Patients are increasingly using the internet to search for health-related information. However, the quality and readability of the information available on the internet need to be assessed. To date, no study has assessed the quality and readability of web-based Arabic health information on early childhood caries.
    Objectives: To evaluate the quality and readability of patient-oriented online Arabic health information regarding early childhood caries.
    Materials and Methods: For this infodemiological study, the Google and Yahoo search engines were searched using specific Arabic terms for early childhood caries, and the top 100 searches from both search engines were considered. Eligible websites were categorized in terms of affiliation as commercial, health portal, dental practice, professional, and journalism. The quality of the websites was assessed using the QUality Evaluation Scoring Tool (QUEST), and readability using the Gunning Fog index (GFI).
    Results: A total of 140 websites were included after applying the exclusion criteria, of which 50.7% were journalism websites. The majority of the websites (70%) had an overall low quality level, with a QUEST score <10. Websites retrieved from Google searches were of significantly higher quality than those from Yahoo (P < 0.0001). More than half (51.4%) of the websites had good readability, with a GFI score ≤8. A significantly higher proportion of journalism websites (62%) had poor readability compared with other affiliations (P = 0.0072).
    Conclusion: The web-based Arabic information regarding early childhood caries is currently of low quality and moderate readability level, thereby indicating a need for improving such patient-facing content.
    Keywords:  Arabic; caries; early childhood caries; health; health education; health information; internet; quality; web-based
    DOI:  https://doi.org/10.4103/sjmms.sjmms_443_23
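    A small Python sketch of the classification thresholds stated in the methods (QUEST score below 10 as low quality, GFI of 8 or less as good readability); the field names are illustrative:

      def classify(quest_score: float, gfi: float) -> dict:
          return {
              "quality": "low" if quest_score < 10 else "acceptable",
              "readability": "good" if gfi <= 8 else "poor",
          }

      print(classify(quest_score=7.5, gfi=6.9))
      # {'quality': 'low', 'readability': 'good'}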
  12. PLoS One. 2024;19(5):e0303308
      BACKGROUND: This study assesses the quality and readability of Arabic online information about orthodontic pain. With the increasing reliance on the internet for health information, especially among Arabic speakers, it is critical to ensure the accuracy and comprehensiveness of available content. Our methodology involved a systematic search using the Arabic term for "orthodontic pain" in Google, Bing, and Yahoo. This search yielded 193,856 results, from which 74 websites were selected based on predefined criteria, excluding duplicates, scientific papers, and non-Arabic content.
    MATERIALS AND METHODS: For quality assessment, we used the DISCERN instrument, the Journal of the American Medical Association (JAMA) benchmarks, and the Health on the Net (HON) code. Readability was evaluated using the Simplified Measure of Gobbledygook (SMOG), Flesch Reading Ease Score (FRES), and Flesch-Kincaid Grade Level (FKGL) scores.
    RESULTS: Results indicated that none of the websites received the HONcode seal. The DISCERN assessment showed median total scores of 14.96 (± 5.65), with low overall quality ratings. In JAMA benchmarks, currency was the most achieved aspect, observed in 45 websites (60.81%), but none met all four criteria simultaneously. Readability scores suggested that the content was generally understandable, with a median FKGL score of 6.98 and a median SMOG score of 3.98, indicating middle school-level readability.
    CONCLUSION: This study reveals a significant gap in the quality of Arabic online resources on orthodontic pain, highlighting the need for improved standards and reliability. Most websites failed to meet established quality criteria, underscoring the necessity for more accurate and trustworthy health information for Arabic-speaking patients.
    DOI:  https://doi.org/10.1371/journal.pone.0303308
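    The JAMA benchmarks referenced above score four criteria (authorship, attribution, disclosure, currency) as met or unmet. A minimal Python sketch of the tally, with a hypothetical site record:

      JAMA_CRITERIA = ("authorship", "attribution", "disclosure", "currency")

      def jama_score(site: dict) -> int:
          return sum(bool(site.get(c)) for c in JAMA_CRITERIA)

      # Hypothetical record; in the study, currency was the criterion most
      # often met, and no site met all four.
      site = {"authorship": False, "attribution": True,
              "disclosure": False, "currency": True}
      print(jama_score(site), "of 4 criteria met")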
  13. Surg Innov. 2024 May 24:15533506241256827.
      BACKGROUND: In the digital age, patients are increasingly turning to the Internet to seek medical information to aid in their decision-making process before undergoing medical treatments. Fluorescence imaging is an emerging technological tool that holds promise in enhancing intra-operative decision-making during surgical procedures. This study aims to evaluate the quality of patient information available online regarding fluorescence imaging in surgery and to assess whether it adequately supports informed decision-making.
    METHOD: The term "patient information on fluorescence imaging in surgery" was searched on Google. The websites that fulfilled the inclusion criteria were assessed using 2 scoring instruments: DISCERN was used to evaluate the reliability of consumer health information, and QUEST was used to assess authorship, tone, conflict of interest, and complementarity.
    RESULTS: Of the 50 websites identified in the initial search, 10 fulfilled the inclusion criteria. Only two of these websites had been updated in the last two years. The definition of fluorescence imaging was stated on only 50% of the websites. Although all websites mentioned the benefits of fluorescence imaging, none mentioned potential risks. Assessment by DISCERN rated 30% of the websites low and 70% moderate. With QUEST, the websites demonstrated an average score of 62.5%.
    CONCLUSION: This study highlights the importance of providing patients with accurate and balanced information about medical technologies and procedures they may undergo. Fluorescence imaging in surgery is a promising technology that can potentially improve surgical outcomes. However, patients need to be well-informed about its benefits and limitations in order to make informed decisions about their healthcare.
    Keywords:  biomedical engineering; general surgery; image guided surgery; surgical education; the business of surgery
    DOI:  https://doi.org/10.1177/15533506241256827
  14. Aesthetic Plast Surg. 2024 May 24.
      INTRODUCTION: Patients frequently turn to online information for decision-making factors about aesthetic procedures. The quality of online medical content is an essential supplement to clinical education. These resources assist patients in understanding the risks, benefits, and appropriateness of their desired procedure. This study examines the breadth and readability of online blepharoplasty information, elucidating its educational utility.
    METHODS: A depersonalized Google search was conducted using the Startpage Search Engine, investigating the key phrases "blepharoplasty decision making factors", "eye lift decision making factors", and "eyelid lift decision making factors". The first three pages of results for each search term, totaling 90 links, were screened. Data were extracted for various decision-making factors, subspecialty, gender, and readability.
    RESULTS: Twenty-six websites met inclusion criteria for analysis. Thirteen websites were plastic surgery based, five otolaryngology (ENT), five ophthalmology/oculoplastic, one oral-maxillofacial (OMFS), and two mixed-based practices. Most of the blepharoplasty webpages identified belonged to private practices and male surgeons. Half were from subspecialties other than plastic surgery. Thirteen common decision-making factors were identified. The factors most commonly addressed across all texts were recovery, followed by cosmetic and functional goals; the least discussed were genetic factors. Average readability exceeded the 12th-grade level. There were no significant differences in mean readability among subspecialties.
    CONCLUSION: This study examines the online blepharoplasty sphere among US-based practices providing clinical education to patients. No appreciable differences in decision-making factors or readability were found across gender and subspecialty, highlighting a consistency among surgeons. However, most websites fell short of readability standards, emphasizing a need for clearer information for patients.
    Keywords:  Blepharoplasty; Online patient education; Readability
    DOI:  https://doi.org/10.1007/s00266-024-04083-1
  15. Hand Surg Rehabil. 2024 May 21:101723. pii: S2468-1229(24)00114-2. [Epub ahead of print]
      INTRODUCTION: ChatGPT and its application in producing patient education materials for orthopedic hand disorders has not been extensively studied. This study evaluated the quality and readability of educational information pertaining to common hand surgeries from patient education websites and information produced by ChatGPT.
    METHODS: Patient education information for four hand surgeries (carpal tunnel release, trigger finger release, Dupuytren's contracture, and ganglion cyst surgery) was extracted from ChatGPT (at a scientific and a fourth-grade reading level), WebMD, and Mayo Clinic. In a blinded and randomized fashion, five fellowship-trained orthopedic hand surgeons evaluated the quality of the information using modified DISCERN criteria. Readability and reading grade level were assessed using the Flesch Reading Ease (FRE) and Flesch-Kincaid Grade Level (FKGL) equations.
    RESULTS: The Mayo Clinic website scored higher in terms of quality for carpal tunnel release information (p = 0.004). WebMD scored higher for Dupuytren's contracture release (p < 0.001), ganglion cyst surgery (p = 0.003), and overall quality (p < 0.001). ChatGPT - 4th Grade Reading Level, ChatGPT - Scientific Reading Level, WebMD, and Mayo Clinic written materials on average exceeded recommended reading grade levels (4th-6th grade) by at least four grade levels (10th, 14th, 13th, and 11th grade, respectively).
    CONCLUSIONS: ChatGPT provides inferior education materials compared to patient-friendly websites. When prompted to provide more easily read materials, ChatGPT generates less robust information compared to patient-friendly websites and does not adequately simplify the educational information. ChatGPT has potential to improve the quality and readability of patient education materials but currently, patient-friendly websites provide superior quality at similar reading comprehension levels.
    Keywords:  ChatGPT; Mayo Clinic; Patient education; Quality analysis; WebMD
    DOI:  https://doi.org/10.1016/j.hansur.2024.101723
  16. Cureus. 2024 Apr;16(4):e58710
      Palpitations refer to the sensation of rapid, fluttering, or pounding heartbeats in the chest, the determinants of which may range from hormonal changes to anxiety or arrhythmias. YouTube is one of the most widely used and trusted web-based platforms that people turn to in order to understand more about their health conditions. This study therefore aims to assess whether the quality of content about palpitations on the platform is reliable and sufficient. Seventy-one YouTube videos were analyzed using criteria such as date and time of upload, type of uploader, and type of content. The Global Quality Score (GQS) and modified DISCERN score were used to analyze the quality and reliability of the information provided. Microsoft Excel (Microsoft Corporation, Redmond, WA, US) was used for data analysis, and StataCorp's 2023 Stata Statistical Software (College Station, TX, US) was used for statistical analysis and visualization. Of the 71 videos analyzed, 90.14% were uploaded more than a year ago, 80.28% described the symptomatology in detail, and 81.69% accurately described the etiological factors. Hospitals and doctors were the most common uploaders, constituting 23% and 19% of the uploaded videos, respectively, and had high GQSs (median GQS = 4). The highest scores belonged to videos uploaded by patients suffering from the condition (median GQS = 5). Hospitals and news channels ranked highest on the reliability score (median DISCERN = 4 for both). Despite the variety of sources, the content on the platform includes promotional material and content gaps; YouTube should therefore be used critically and in conjunction with professional sources.
    Keywords:  cross sectional study; fast heartbeat; global quality score; observational study; palpitations; youtube
    DOI:  https://doi.org/10.7759/cureus.58710
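    The per-uploader medians reported above are a simple group-by aggregation; a Python sketch with hypothetical records, assuming pandas:

      import pandas as pd

      videos = pd.DataFrame({
          "uploader": ["hospital", "doctor", "patient", "hospital", "news", "doctor"],
          "gqs":      [4, 4, 5, 5, 3, 4],
          "discern":  [4, 3, 3, 4, 4, 3],
      })
      print(videos.groupby("uploader")[["gqs", "discern"]].median())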
  17. Dent Traumatol. 2024 May 25.
      BACKGROUND/AIM: Root resorption occurs for various reasons and can also be seen as a treatment complication in orthodontics. This study aimed to assess the reliability and quality of YouTube™ videos on root resorption and to assess whether the videos referred to orthodontic treatment and other stimulation factors.
    MATERIALS AND METHODS: YouTube was searched using the keyword 'root resorption', which is the most searched term on Google Trends. The first 200 videos identified using the default filter 'sort by relevance' were used. Information such as the source, type, duration, and number of likes was recorded. Videos were analyzed using a 23-point content scale related to root resorption and divided into groups (poor, moderate, and excellent) based on the Global Quality Score.
    RESULTS: A total of 95 videos were included in the study. Most were uploaded by dentists or dental clinics (n = 64, 67.4%). The mean number of days since upload was 1536 ± 1254, and the mean duration was 5 ± 4 min. The videos had a mean of 80 ± 515 likes and 7043 ± 35,382 views, and a mean viewing rate of 1131.71 ± 8736.83. The most discussed topic was radiographic signs of root resorption. While the highest content score among the videos was 21, the average score was only 4. The mean GQS was 2 ± 1. Grouping videos by GQS showed that 55 (57.9%) were poor, 38 (40%) were moderate, and 2 (2.1%) were excellent. There was a significant relationship between videos that mentioned orthodontics (n = 62; 65.3%) and higher GQS (p = .036), and there was a significant difference between GQS groups in total content scores (p < .001).
    CONCLUSIONS: YouTube videos related to root resorption lack sufficient information and clarity, and their quality needs to be improved. Oral health professionals should strive to produce higher-quality videos.
    Keywords:  YouTube; endodontics; orthodontic treatment; orthodontics; root resorption; social media
    DOI:  https://doi.org/10.1111/edt.12970
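    The abstract reports a mean viewing rate without defining it; YouTube quality studies commonly use views per day since upload (sometimes multiplied by 100). A Python sketch under that assumption, with hypothetical records:

      from statistics import mean, stdev

      videos = [
          {"views": 7043, "days_online": 1536},
          {"views": 1210, "days_online": 402},
          {"views": 95300, "days_online": 2210},
      ]
      rates = [v["views"] / v["days_online"] for v in videos]
      print(f"viewing rate: {mean(rates):.2f} +/- {stdev(rates):.2f} views/day")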
  18. SSM Popul Health. 2024 Jun;26:101677
      Background: Several pelvic area cancers exhibit high incidence rates, and their surgical treatment can result in adverse effects such as urinary and fecal incontinence, significantly impacting patients' quality of life. Post-surgery incontinence is a significant concern, with prevalence rates ranging from 25% to 45% for urinary incontinence and from 9% to 68% for fecal incontinence. Cancer survivors are increasingly turning to YouTube as a platform to connect with others, yet caution is warranted as misinformation is prevalent.
    Objective: This study aims to evaluate the information quality in YouTube videos about post-surgical incontinence after pelvic area cancer surgery.
    Methods: A YouTube search for "Incontinence after cancer surgery" yielded 108 videos, which were subsequently analyzed. To evaluate these videos, several quality assessment tools were utilized, including DISCERN, GQS, JAMA, PEMAT, and MQ-VET. Statistical analyses, such as descriptive statistics and intercorrelation tests, were employed to assess various video attributes, including characteristics, popularity, educational value, quality, and reliability. Artificial intelligence techniques such as PCA, t-SNE, and UMAP were also used for data analysis, and heat map and hierarchical clustering dendrogram techniques were used to validate the machine learning results.
    Results: The quality scales were highly correlated with one another (p < 0.01), and the artificial intelligence-based techniques produced clear clustering representations of the dataset samples, which were reinforced by the heat map and hierarchical clustering dendrogram.
    Conclusions: YouTube videos on "Incontinence after Cancer Surgery" present high quality across multiple scales. The usefulness of AI tools such as PCA, t-SNE, and UMAP for clustering large health datasets is also highlighted, as they improve data visualization, pattern recognition, and complex healthcare analysis.
    Keywords:  Cancer; DISCERN; Dendrogram; GQS; HeatMap; Incontinence; Information; JAMA; MQ-VET; PCA; PEMAT; Quality; Surgery; UMAP; Youtube; t-SNE
    DOI:  https://doi.org/10.1016/j.ssmph.2024.101677
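    The dimensionality-reduction step described above can be sketched with scikit-learn; the feature matrix below is random stand-in data shaped like the study's 108 videos, with one column per quality or popularity measure (UMAP would come from the separate umap-learn package):

      import numpy as np
      from sklearn.decomposition import PCA
      from sklearn.manifold import TSNE
      from sklearn.preprocessing import StandardScaler

      rng = np.random.default_rng(0)
      X = StandardScaler().fit_transform(rng.normal(size=(108, 7)))

      pca_2d = PCA(n_components=2).fit_transform(X)
      tsne_2d = TSNE(n_components=2, perplexity=30, random_state=0).fit_transform(X)
      print(pca_2d.shape, tsne_2d.shape)   # (108, 2) (108, 2)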
  19. Indian J Occup Environ Med. 2024 Jan-Mar;28(1):71-76
      Background: Education is very important in preventing occupational injuries and accidents, almost all of which are preventable. The aim of this study was to evaluate training videos on this subject on the YouTube platform.
    Methods: Six search terms related to occupational health and safety (OHS) were searched on May 31, 2021. After the application of exclusion criteria, a total of 176 videos were included in the final analysis, using the parameters of country of origin, source of the video, content, number of views, comments, likes, dislikes, and video duration. The Global Quality Scale (GQS) and modified DISCERN tools were used to evaluate the quality and reliability of the videos in this analytical cross-sectional study.
    Results: According to the GQS score, 111 (63.1%) videos were of low quality. Statistically significant differences were found between the low-, moderate-, and high-quality groups with respect to video length, likes, dislikes, comments, likes per day, dislikes per day, comments per day, video category, and the DISCERN scores (P < 0.05). The vast majority of videos contained low-quality information. A large number of the OHS videos were uploaded by independent users and originated from the USA.
    Conclusion: There is a clear need for professionals to play a more active role in uploading and sharing high-quality information on Internet platforms on the subject of OHS.
    Keywords:  Information; YouTube; occupational health; work
    DOI:  https://doi.org/10.4103/ijoem.ijoem_263_23
  20. Healthcare (Basel). 2024 May 17;12(10):1039.
      Nursing students can access massive amounts of online health information to support evidence-based practice during clinical placement and to bridge the theory-practice gap. The strategies they apply to evaluate such information warrant investigation. Online Think-Aloud sessions enabled 14 participants to verbalize their cognitive processes while navigating educational resources, including online journals and databases, and judging the reliability of sources; their information-seeking strategies informed the creation of a scoring system. Easy access and user convenience were clearly the instrumental factors in this behavior, which has troubling implications given the limited use of higher-quality resources (e.g., peer-reviewed academic journals). Challenges encountered during resource access included limited skills in critically evaluating information credibility and reliability, signaling a need for improved information literacy. Participants acknowledged the importance of evidence-based, high-quality information but faced numerous barriers, such as restricted access to professional and specialty databases and a lack of academic skills training. This paper develops and critiques a Performative Tool for assessing the health information-seeking process using an online Think-Aloud method, and explores the factors and strategies that contribute to accessing and using evidence-based health information in clinical practice, aiming to provide insight into information-seeking behaviors in online health contexts.
    Keywords:  clinical practice; nursing students; performance tool; seeking health information
    DOI:  https://doi.org/10.3390/healthcare12101039