bims-librar: Biomed News on Biomedical librarianship
Issue of 2024‒01‒14
twenty-one papers selected by
Thomas Krichel, Open Library Society



  1. Health Info Libr J. 2024 Jan 10.
      BACKGROUND: The emergence of the artificial intelligence chatbot ChatGPT in November 2022 has garnered substantial attention across diverse disciplines. Despite widespread adoption in various sectors, the exploration of its application in libraries, especially within the medical domain, remains limited.
    AIMS/OBJECTIVES: Many areas of interest, such as ChatGPT in medical libraries, remain unexplored; this review aims to synthesise what is currently known in order to identify gaps and stimulate further research.
    METHODS: Employing Cooper's integrative review method, this study involves a comprehensive analysis of existing literature on ChatGPT and its potential implementations within library contexts.
    RESULTS: A systematic literature search across various databases yielded 166 papers, of which 30 were excluded as irrelevant. After abstract reviews and methodological assessments, 136 articles were selected. The Critical Appraisal Skills Programme qualitative checklist further narrowed the selection to 29 papers, which form the basis of the present study. The literature analysis reveals diverse applications of ChatGPT in medical libraries, including aiding users in finding relevant medical information, answering queries, providing recommendations and facilitating access to resources. Potential challenges and ethical considerations associated with ChatGPT in this context are also highlighted.
    CONCLUSION: Positioned as a review, our study elucidates the applications of ChatGPT in medical libraries and discusses relevant considerations. The integration of ChatGPT into medical library services holds promise for enhancing information retrieval and user experience, benefiting library users and the broader medical community.
    Keywords:  AI in medical libraries; ChatGPT; ChatGPT in libraries; GPT; artificial intelligence (AI); review
    DOI:  https://doi.org/10.1111/hir.12518
  2. Med Educ Online. 2024 Dec 31. 29(1): 2302233
      When clinician-educators and medical education researchers use and discuss medical education research, they can advance innovation in medical education as well as improve its quality. To facilitate the use and discussion of medical education research, we created a prefatory visual representation of key medical education research topics and associated experts. We conducted one-on-one virtual interviews with medical education journal editorial board members to identify what they perceived as key medical education research topics, as well as who they associated, as experts, with each of the identified topics. We used content analysis to create categories representing key topics and noted occurrences of named experts. Twenty-one editorial board members, representing nine of the top medical education journals, participated. From the data we created a figure entitled "Medical Education Research Library". The library includes 13 research topics, with assessment as the most prevalent. It also notes recognized experts, including van der Vleuten, ten Cate, and Norman. The key medical education research topics identified and included in the library align with what others have identified as trends in the literature. Selected topics, including workplace-based learning, equity, diversity, and inclusion, physician wellbeing and burnout, and social accountability, are emerging. Once the figure is transformed into an open educational resource, clinician-educators and medical education researchers will be able to use and contribute to the functional library. Such continuous expansion will generate better awareness and recognition of diverse perspectives. The functional library will help to innovate and improve the quality of medical education through evidence-informed practices and scholarship.
    Keywords:  Evidence-informed practices; evidence-informed scholarship; medical education; medical education research; research use
    DOI:  https://doi.org/10.1080/10872981.2024.2302233
  3. Arthroscopy. 2023 Sep 30. pii: S0749-8063(23)00736-3. [Epub ahead of print]
      
    DOI:  https://doi.org/10.1016/j.arthro.2023.08.069
  4. Health Info Libr J. 2024 Jan 11.
      Reflections on the recent increase in the number of cross-sectional surveys received by the editorial team of the journal indicated that potential contributors might consider other research techniques, in addition to, or instead of a survey. In this article, Christine Urquhart discusses some different research designs, and different research methods that may help students and practitioners find useful answers to questions about professional practice beyond the standard survey. Researchers could consider research designs such as quasi-experimental techniques, controlled before-after studies, and interrupted time series. The basic principles of such methods are outlined and some examples cited. Other research techniques outlined include those that research subjects might find more interesting to do, such as conjoint analysis and vignettes.
    Keywords:  evaluation; library and information professionals; research design; students
    DOI:  https://doi.org/10.1111/hir.12520
  5. Digit Health. 2024 Jan-Dec. 10: 20552076231224603
      Introduction: Artificial intelligence has presented exponential growth in medicine. The ChatGPT language model has been highlighted as a possible source of patient information. This study evaluates the reliability and readability of ChatGPT-generated patient information on chronic diseases in Spanish.
    Methods: Questions frequently asked by patients on the internet about diabetes mellitus, heart failure, rheumatoid arthritis (RA), chronic kidney disease (CKD), and systemic lupus erythematosus (SLE) were submitted to ChatGPT. Reliability was assessed by rating responses as (1) comprehensive, (2) correct but inadequate, (3) some correct and some incorrect, or (4) completely incorrect, and classing them as "good" (1 and 2) or "bad" (3 and 4). Readability was evaluated with the adapted Flesch and Szigriszt formulas.
    Results: Overall, 71.67% of the answers were "good," with none rated "completely incorrect." Better reliability was observed for questions on diabetes and RA than for heart failure (p = 0.02). In terms of readability, responses were "moderately difficult" (54.73, interquartile range (IQR) 51.59-58.58), with better results for CKD (median 56.1, IQR 53.5-59.1) and RA (56.4, IQR 53.7-60.7) than for heart failure responses (median 50.6, IQR 46.3-53.8).
    Conclusion: Our study suggests that ChatGPT can be a reliable source of information in Spanish for patients with chronic diseases, although reliability varies among conditions; however, the readability of its answers needs to improve before it can be recommended as a useful tool for patients.
    Keywords:  Artificial intelligence; ChatGPT; chronic diseases; readability; reliability
    DOI:  https://doi.org/10.1177/20552076231224603
  6. J Am Dent Assoc. 2024 Jan 08. pii: S0002-8177(23)00681-5. [Epub ahead of print]
      BACKGROUND: ChatGPT (OpenAI) is a large language model. This model uses artificial intelligence and machine learning techniques to generate humanlike language and responses, even to complex questions. The authors aimed to assess the reliability of responses provided via ChatGPT and evaluate its trustworthiness as a means of obtaining information about third-molar surgery.
    METHODS: The authors assessed the 10 most frequently asked questions about mandibular third-molar extraction. A validated questionnaire (Chatbot Usability Questionnaire) was used, and 2 oral and maxillofacial surgeons compared the answers provided with the literature.
    RESULTS: Most of the responses (90.63%) provided via the ChatGPT platform were considered safe and accurate and followed what is stated in the English-language literature.
    CONCLUSIONS: The ChatGPT platform offers accurate and scientifically backed answers to inquiries about third-molar surgical extraction, making it a dependable and easy-to-use resource for both patients and the general public. However, the platform should provide references with the responses to validate the information.
    PRACTICAL IMPLICATIONS: Patients worldwide are exposed to reliable information sources. Oral surgeons and health care providers should always advise patients to be aware of the information source and that the ChatGPT platform offers a reliable solution.
    Keywords:  Artificial intelligence; oral surgery; third molar
    DOI:  https://doi.org/10.1016/j.adaj.2023.11.004
  7. Hum Reprod. 2024 Jan 10. pii: dead272. [Epub ahead of print]
      The internet is the primary source of infertility-related information for most people who are experiencing fertility issues. Although no longer shrouded in stigma, the privacy of interacting only with a computer provides a sense of safety when engaging with sensitive content and allows for diverse and geographically dispersed communities to connect and share their experiences. It also provides businesses with a virtual marketplace for their products. The introduction of ChatGPT, a conversational language model developed by OpenAI to understand and generate human-like text in response to user input, in November 2022, and other emerging generative artificial intelligence (AI) language models, has changed and will continue to change the way we interact with large volumes of digital information. When it comes to its application in health information seeking, specifically in relation to fertility in this case, is ChatGPT a friend or foe in helping people make well-informed decisions? Furthermore, if deemed useful, how can we ensure this technology supports fertility-related decision-making? After conducting a study into the quality of the information provided by ChatGPT to people seeking information on fertility, we explore the potential benefits and pitfalls of using generative AI as a tool to support decision-making.
    Keywords:  ChatGPT; decision support; generative artificial intelligence; infertility treatment; online information
    DOI:  https://doi.org/10.1093/humrep/dead272
  8. Cardiol Ther. 2024 Jan 09.
      INTRODUCTION: The advent of generative artificial intelligence (AI) dialogue platforms and large language models (LLMs) may help facilitate ongoing efforts to improve health literacy. Additionally, recent studies have highlighted inadequate health literacy among patients with cardiac disease. The aim of the present study was to ascertain whether two freely available generative AI dialogue platforms could rewrite online aortic stenosis (AS) patient education materials (PEMs) to meet recommended reading skill levels for the public.
    METHODS: Online PEMs were gathered from a professional cardiothoracic surgical society and academic institutions in the USA. PEMs were then inputted into two AI-powered LLMs, ChatGPT-3.5 and Bard, with the prompt "translate to 5th-grade reading level". Readability of PEMs before and after AI conversion was measured using the validated Flesch Reading Ease (FRE), Flesch-Kincaid Grade Level (FKGL), Simple Measure of Gobbledygook Index (SMOGI), and Gunning-Fog Index (GFI) scores.
    RESULTS: Overall, 21 PEMs on AS were gathered. Original readability measures indicated difficult readability at the 10th-12th grade reading level. ChatGPT-3.5 successfully improved readability across all four measures (p < 0.001) to the approximately 6th-7th grade reading level. Bard successfully improved readability across all measures (p < 0.001) except for SMOGI (p = 0.729) to the approximately 8th-9th grade level. Neither platform generated PEMs written below the recommended 6th-grade reading level. ChatGPT-3.5 demonstrated significantly more favorable post-conversion readability scores, percentage change in readability scores, and conversion time compared to Bard (all p < 0.001).
    CONCLUSION: AI dialogue platforms can enhance the readability of PEMs for patients with AS but may not fully meet recommended reading skill levels, highlighting potential tools to help strengthen cardiac health literacy in the future.
    Keywords:  Aortic stenosis; Artificial intelligence; ChatGPT; Chatbots; Health literacy; Heart valve disease; Large language models; Patient education material; Readability
    DOI:  https://doi.org/10.1007/s40119-023-00347-0
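    The FRE, FKGL, and GFI scores used in the study above (and in several other papers in this issue) are simple functions of sentence, word, and syllable counts. As a rough illustration of what these scores measure, here is a minimal Python sketch; the regex tokenization and vowel-group syllable counter are simplifying assumptions, so its numbers will only approximate those from validated tools:

    ```python
    import re

    def count_syllables(word):
        # Naive heuristic: count vowel groups ("stenosis" -> e/o/i -> 3).
        # Validated tools use pronunciation dictionaries instead.
        return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

    def readability(text):
        """Return (Flesch Reading Ease, Flesch-Kincaid Grade, Gunning Fog)."""
        sentences = max(1, len(re.findall(r"[.!?]+", text)))
        words = re.findall(r"[A-Za-z']+", text)
        n_words = max(1, len(words))
        syllables = [count_syllables(w) for w in words]
        wps = n_words / sentences                  # mean words per sentence
        spw = sum(syllables) / n_words             # mean syllables per word
        complex_frac = sum(s >= 3 for s in syllables) / n_words
        fre = 206.835 - 1.015 * wps - 84.6 * spw   # higher = easier to read
        fkgl = 0.39 * wps + 11.8 * spw - 15.59     # approximate US school grade
        gfi = 0.4 * (wps + 100 * complex_frac)     # also an approximate grade
        return fre, fkgl, gfi

    fre, fkgl, gfi = readability(
        "Aortic stenosis narrows the aortic valve and reduces blood flow.")
    print(f"FRE={fre:.1f}  FKGL={fkgl:.1f}  GFI={gfi:.1f}")
    ```

    Against this approximation, the "10th-12th grade" originals and "6th-7th grade" AI rewrites reported above correspond to FKGL values of roughly 10-12 and 6-7. SMOG, the fourth score used, is computed analogously from polysyllabic word counts.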
  9. Patient Educ Couns. 2024 Jan 03. pii: S0738-3991(24)00002-8. [Epub ahead of print]121 108135
      OBJECTIVES: This study aimed to portray the information on cancer-related fatigue available on German health care institution websites, considering the idea of patient empowerment.
    METHODS: Based on website quality criteria, we developed a website-rating tool comprising 18 items. Descriptive analyses, a Kruskal-Wallis test, and corresponding post hoc tests comparing rating sum scores between institution groups were performed.
    RESULTS: Websites of 283 systematically compiled health care institutions were included in the rating. Cancer-related fatigue was introduced on 21.9% and detailed information was provided on 27.9% of the websites. Information material was offered on 9.2% of the websites, while fatigue treatment offers were presented on 21.6% of the websites. The rating sum scores differed between institution groups (p < 0.001), with Comprehensive Cancer Centers scoring significantly higher than the others.
    CONCLUSION: The rating revealed an overall sparse provision of information, with fatigue being addressed on less than half of the websites.
    PRACTICE IMPLICATIONS: For patients who have access to at least one introduction about fatigue, institutions need to extend their websites. Patients could further be referred to external institutions or information booklets. The naming of contact persons may help linking patients to providers.
    Keywords:  Cancer; Cancer-related fatigue; Health care institutions; Patient education; Patient involvement; Website quality
    DOI:  https://doi.org/10.1016/j.pec.2024.108135
  10. J Med Internet Res. 2024 Jan 10. 26 e48243
      BACKGROUND: eHealth websites are increasingly being used by community members to obtain information about endometriosis. Additionally, clinicians can use these websites to enhance their understanding of the condition and refer patients to them. However, poor-quality information can adversely impact users. Therefore, a critical evaluation is needed to assess and recommend high-quality endometriosis websites.
    OBJECTIVE: This study aimed to evaluate the quality of endometriosis eHealth websites and to recommend high-quality ones for the community and clinicians.
    METHODS: PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) 2020 guidelines informed 2 Google searches of international and Australian eHealth websites. The first search string used the terms "endometriosis," "adenomyosis," or "pelvic pain," whereas "Australia" was added to the second search string. Only free eHealth websites in English were included. ENLIGHT, a validated tool, was used to assess quality across 7 domains: usability, visual design, user engagement, content, therapeutic persuasiveness, therapeutic alliance, and general subjective evaluation. Websites with a total score of 3.5 or more were classified as "good" according to the ENLIGHT scoring system and are recommended as high-quality eHealth websites for information on endometriosis.
    RESULTS: In total, 117 eHealth websites were screened, and 80 were included in the quality assessment. Four high-quality eHealth websites (ie, those that scored 3.5 or more) were identified (Endometriosis Australia Facebook Page, Endometriosis UK, National Action Plan for Endometriosis on EndoActive, and Adenomyosis by the Medical Republic). These websites provided easily understood, engaging, and accurate information. Adenomyosis by the Medical Republic can be used as a resource in clinical practice. Most eHealth websites scored well (3.5 or more) in the domains of usability (n=76, 95%), visual design (n=64, 80%), and content (n=63, 79%). However, of these 63 websites, only 25 provided references and 26 provided authorship details. Few eHealth websites scored well on user engagement (n=18, 23%), therapeutic persuasiveness (n=2, 3%), and therapeutic alliance (n=22, 28%). In total, 30 (38%) eHealth websites scored well on general subjective evaluation.
    CONCLUSIONS: Although geographical location can influence the search results, we identified 4 high-quality endometriosis eHealth websites that can be recommended to the endometriosis community and clinicians. To improve quality, eHealth websites must provide evidence-based information with appropriate referencing and authorship. Factors that enhance usability, visual design, user engagement, therapeutic persuasiveness, and therapeutic alliance can lead to the successful and long-term uptake of eHealth websites. User engagement, therapeutic persuasiveness, and therapeutic alliance can be strengthened by sharing lived experiences and personal stories and by cocreating meaningful content for both the community and clinicians. Reach and discoverability can be improved by leveraging search engine optimization tools.
    TRIAL REGISTRATION: PROSPERO CRD42020185475; https://www.crd.york.ac.uk/PROSPERO/display_record.php?RecordID=185475&VersionID=2124365.
    Keywords:  adenomyosis; digital health; eHealth; eHealth websites; endometriosis; pelvic pain
    DOI:  https://doi.org/10.2196/48243
  11. Eur J Vasc Endovasc Surg. 2024 Jan 05. pii: S1078-5884(24)00015-7. [Epub ahead of print]
      OBJECTIVE: This study aimed to assess the quality of patient information material regarding elective abdominal aortic aneurysm (AAA) repair on the internet using the Modified Ensuring Quality Information for Patients (MEQIP) tool.
    METHODS: A qualitative assessment of internet based patient information was performed. The 12 most used search terms relating to AAA repair were identified using Google Trends, with the first 10 pages of websites retrieved for each term searched. Duplicates were removed, and information for patients undergoing elective AAA repair was selected. Further exclusion criteria were marketing material, academic journals, videos, and non-English language sites. The remaining websites were then MEQIP scored independently by two reviewers, producing a final score by consensus.
    RESULTS: A total of 1 297 websites were identified, with 235 (18.1%) eligible for analysis. The median MEQIP score was 18 (interquartile range [IQR] 14, 21) out of a possible 36. The highest score was 33. Websites in the 99th percentile of MEQIP scores scored > 27, and four of these six sites were online copies of hospital patient information leaflets; however, hospital sites overall had lower median MEQIP scores than most other institution types. MEQIP subdomain median scores were: content, 8 (IQR 6, 11); identification, 3 (IQR 1, 3); and structure, 7 (IQR 6, 9). Of the analysed websites, 77.9% originated from the USA (median score 17) and 12.8% originated in the UK (median score 22). Search engine ranking was related to website institution type but had no correlation with MEQIP score.
    CONCLUSION: When assessed by the MEQIP tool, most websites regarding elective AAA repair are of questionable quality. This is in keeping with studies in other surgical and medical fields. Search engine ranking is not a reliable measure of quality of patient information material regarding elective AAA repair. Health practitioners should be aware of this issue as well as the whereabouts of high quality material to which patients can be directed.
    Keywords:  AAA; Abdominal aortic aneurysm; EQIP; EVAR; MEQIP; Patient information
    DOI:  https://doi.org/10.1016/j.ejvs.2024.01.013
  12. Medicine (Baltimore). 2023 Dec 29. 102(52): e36636
      Most women hesitate to seek help from healthcare providers, as they find it difficult to share complaints of involuntary leakage or vaginal prolapse. Hence, they often turn to the websites of national and/or international bodies for patient education materials (PEMs), which are considered the most reliable sources. The crucial factor that determines their usefulness is their readability level, which makes them "easy" or "difficult" to read and is recommended not to exceed the sixth-grade level. In this study, we aimed to assess the readability levels of the Turkish-translated PEMs from the websites of the International Urogynecological Association and the European Association of Urology, and of the PEMs originally written in Turkish from the website of the Society of Urological Surgery in Turkey. All the PEMs (n = 52) were analyzed by online calculators using the Atesman formula, Flesch-Kincaid grade level, and Gunning Fog index. The readability parameters (number of sentences, words, letters, and syllables) and the readability intervals of these methods were compared among the groups using the Kruskal-Wallis test or ANOVA, with post hoc comparisons where appropriate. The readability level of all PEMs was at least in the "averagely difficult" interval according to both assessment methods. No significant differences were found among the PEM groups in terms of readability parameters and assessment methods (P > .05). Whether original or translated, international or national, the societies' PEMs' readability scores were above the recommended sixth-grade level. Thus, the PEMs need to be revised accordingly by the relevant authorities.
    DOI:  https://doi.org/10.1097/MD.0000000000036636
  13. Transl Androl Urol. 2023 Dec 31. 12(12): 1827-1833
      Background: Transurethral resection of the prostate (TURP) is a widespread, effective way to treat benign prostatic hyperplasia (BPH). Many medical students and junior clinicians increasingly turn to easily accessible online resources, such as videos on YouTube, to learn this technique. This study assessed the educational value of YouTube videos about TURP, which are popular among many young surgeons.
    Methods: We searched YouTube as of August 2, 2022 for videos matching the search terms "transurethral resection of the prostate", "benign prostatic hyperplasia", "BPH", "TURP", "benign prostatic enlargement", "bladder outlet obstruction" and "lower urinary tract symptom". We assessed the educational value of the identified videos using a custom-designed checklist.
    Results: We identified 47 relevant videos, 20 of which were posted after July 1, 2020. The average number of views was 576,379±208,535 (range, 54-1,385,713). The average quality score of the videos was 7.38±2.53 (range, 4-12) on a 15-point scale, and 20 were judged to be of low educational quality. Quality scores correlated positively with the number of likes (R=0.596, P<0.01).
    Conclusions: The educational value of most TURP videos on YouTube appears to be low, with most lacking detailed explanations of preoperative preparations and the surgical procedure. High-quality video resources about TURP need to be developed for medical students and junior surgeons. Standard quality criteria should also be developed and disseminated to ensure the production of accurate learning resources for junior clinicians.
    Keywords:  YouTube videos; surgical education; transurethral resection of the prostate (TURP)
    DOI:  https://doi.org/10.21037/tau-23-394
  14. Arch Esp Urol. 2023 Dec;76(10): 764-771
      BACKGROUND: YouTube is the second most popular website worldwide. It features numerous videos about radical prostatectomy. The aim of this study was to assess the quality of these videos and screen their benefit for patients and doctors.
    METHODS: All videos on YouTube about radical prostatectomy were analysed using specially developed software (python 2.7, numpy). Following a predefined selection process, the most relevant videos were analyzed for quality and reliability using the Suitability Assessment of Materials (SAM) score, the Global Quality Score, and other instruments.
    RESULTS: Out of 3520 search results, 179 videos were selected and analysed. Videos were watched a median of 5836 times (interquartile range (IQR): 11945.5; 18-721546). The median duration was 7.2 minutes. Of the videos, 125 were about robotic prostatectomy, and 69 each were addressed directly to patients and to doctors. Medical content was generally of low quality, while technical quality and total quality were at a high level. Reliability was good.
    CONCLUSIONS: Videos on radical prostatectomy on YouTube can serve as a source of patient information. While technical quality and reliability are classified as acceptable, medical content quality was low, warranting preselection. In contrast to Loeb et al., we did not observe a negative correlation between the number of views and scientific quality across the different scores. Our findings support the need for preselection of YouTube videos, as the potential benefit varies between videos, with a significant risk of low medical quality.
    Keywords:  YouTube; prostate cancer; prostatectomy; quality; social media
    DOI:  https://doi.org/10.56434/j.arch.esp.urol.20237610.92
  15. Arthroscopy. 2024 Jan 05. pii: S0749-8063(24)00002-1. [Epub ahead of print]
      PURPOSE: This study aimed to assess the validity and informational value of teaching material regarding anterior cruciate ligament reconstruction (ACL-R) using quadriceps tendon (QT) autograft provided on the YouTube™ video platform.
    METHODS: An extensive systematic search of the YouTube™ video platform was performed, and all videos that met the criteria were included in the analysis. The analysis of the video content was performed using the DISCERN instrument, the Journal of the American Medical Association (JAMA) benchmark criteria and the Global Quality Score (GQS). The duration of the videos, the date of publication, and the number of likes and views were recorded. Furthermore, videos were categorized based on the source (physicians, companies, patients), the subject (surgical technique, patient experience and overview [overview videos were videos in which multiple aspects were analyzed]) and the type of content (educational or subjective patient experience).
    RESULTS: A total of 88 videos were included in the analysis. Seventy-one (80.7%) videos were published by physicians, 15 (17.0%) by patients and 2 (2.3%) by companies. The majority of the videos described various surgical techniques (59; 67.0%), 72 (81.8%) had an educational nature, and the remaining 18.2% described patient experiences. The mean length of the videos was 8.21 ±7.88 minutes. The mean number of views was 3988.51 ±9792.98 (range: 9-56047), while the mean numbers of comments and likes were 30.07 ±70.07 (range: 0-493) and 4.48 ±14.22 (range: 0-82), respectively. The mean DISCERN score, JAMA score, and GQS were 27.43 ±11.56 (95% CI: 25.01-29.85; range: 17-68), 1.22 ±0.85 (95% CI: 1.04-1.40; range: 0-3), and 1.82 ±0.93 (95% CI: 1.63-2.01; range: 1-4), respectively. For all scores (DISCERN score, JAMA score, and GQS), videos published by physicians were of higher quality (p<0.05). Among all of the analyzed videos, overview videos were of the highest quality (p<0.05).
    CONCLUSIONS: YouTube™ is a fast and open-access source of mass information. The overall quality of the videos on ACL reconstruction performed using QT autograft was unsatisfactory, demonstrating low educational quality and reliability. Currently, YouTube™ cannot be recommended as a reliable source of information on ACL reconstruction with the quadriceps tendon.
    Keywords:  ACL; YouTube; anterior cruciate ligament; quadriceps; social media
    DOI:  https://doi.org/10.1016/j.arthro.2024.01.002
  16. J Prosthet Dent. 2024 Jan 11. pii: S0022-3913(23)00821-1. [Epub ahead of print]
      STATEMENT OF PROBLEM: Rehabilitation of complete edentulous arches by using the all-on-4 dental implant treatment concept is a well-established procedure. Considering the popularity of YouTube as a source for health-related information, a thorough investigation of the content-quality and reliability of videos regarding the all-on-4 concept is lacking.
    PURPOSE: The purpose of this cross-sectional analysis was to critically appraise the content-quality and reliability of YouTube videos regarding the all-on-4 dental implant treatment concept as a source of information for patients, students, and dentists.
    MATERIAL AND METHODS: A comprehensive search was performed on the YouTube website using the specific keyword "All-on-4," which was identified as the most appropriate search term by the Google Trends website. Only English language videos regarding the all-on-4 dental implant treatment concept were included for systematic analyses. Following the eligibility criteria, the included videos were assessed for their demographic characteristics and quality-content. Based on the content score, the videos were categorized as low content (LC) and moderate + high content (MHC) groups. Further, qualitative analyses were performed by using the DISCERN tool and a global quality (GQ) scale. Statistical analyses were conducted by using the Mann-Whitney U test and the Spearman correlation analysis (α=.05).
    RESULTS: Of 250 screened videos, only 73 were eligible for final analyses. The included videos presented an average 123 846 (range, 4 to 3 182 404) views with a mean duration of 528 (range, 12 to 1699) seconds. In addition, the average number of likes was 1122 (range, 0 to 3300), but, remarkably, none of the included videos received any dislikes. Overall, the mean content-quality score was 6.2 ±3.8, thus indicating low-quality content. The average DISCERN and GQ scores were 47.73 ±9.94 and 3.41 ±0.95, with the Spearman rank correlation test showing a strong positive correlation (r=.732; P<.001) among the total obtained scores. Moreover, statistically significant differences were reported between the LC and MHC groups for both DISCERN and GQ scores (P<.001).
    CONCLUSIONS: The reliability of YouTube videos regarding the all-on-4 dental implant treatment concept is questionable, as they exhibit poor content-quality, thus making them an unreliable source for patients, students, and dentists seeking accurate information.
    DOI:  https://doi.org/10.1016/j.prosdent.2023.12.008
  17. Orthop J Sports Med. 2024 Jan;12(1): 23259671231219815
      Background: Videos uploaded to YouTube do not go through a review process; therefore, videos related to medial meniscal ramp lesions may have little educational value.
    Purpose: To assess the educational quality of YouTube videos regarding ramp lesions of the meniscus.
    Study Design: Cross-sectional study.
    Methods: A standard search was performed on the YouTube website using the following terms: "ramp lesion," "posterior meniscal detachment," "ramp," "meniscocapsular," and "meniscotibial detachment," and the top 100 videos based on the number of views were included for analysis. The video duration, publication date, and number of likes and views were retrieved, and the videos were categorized based on video source (health professionals, orthopaedic company, private user), the type of information (anatomy, biomechanics, clinical examination, overview, radiologic, surgical technique), and video content (education, patient support, patient experience/testimony). The content analysis of the information in the videos was evaluated with the use of the DISCERN instrument (score range, 16-80), the Journal of the American Medical Association (JAMA) benchmark criteria (score range, 0-4), and the Global Quality Score (GQS; score range, 1-5).
    Results: A total of 74 videos were included. Of these videos, 70 (94.6%) were published by health professionals, while the remaining 4 (5.4%) were published by orthopaedic companies. Most of the videos were about surgical technique (n = 36; 48.6%) and all had an educational aim (n = 74; 100%). The mean length of the videos was 10.35 ± 17.65 minutes, and the mean online period was 18.64 ± 13.85 months. The mean DISCERN score, JAMA benchmark score, and GQS were 31.84 ± 17.14 (range, 16-72), 1.65 ± 0.87 (range, 1-4), and 2.04 ± 1.21 (range, 1-5), respectively. Videos that reported an overview about ramp lesions were the best in terms of quality for DISCERN and JAMA benchmark score, while biomechanics videos were the best according to GQS. The worst category of videos was about surgical technique, with all having lower scores.
    Conclusion: The educational content of YouTube regarding medial meniscal ramp lesions showed low quality and validity based on DISCERN score, JAMA benchmark score, and GQS.
    Keywords:  YouTube; knee; meniscotibial detachment; meniscus; ramp
    DOI:  https://doi.org/10.1177/23259671231219815
  18. Cureus. 2023 Dec;15(12): e50210
      BACKGROUND: This study aims to assess the quality and reliability of patient information in YouTube videos on transforaminal lumbar interbody fusion (TLIF). MATERIAL AND METHODS: One hundred videos were listed by entering "TLIF," "TLIF surgery," and "transforaminal interbody fusion" into the YouTube search engine. The top 50 most popular videos, based on the video power index (VPI), view ratio, and exclusion criteria, were selected for review. One consultant orthopedic surgeon and one consultant neurosurgeon analyzed the videos together. The modified DISCERN score, the Global Quality Score (GQS), the Journal of the American Medical Association (JAMA) score, and a novel interbody fusion score were used to evaluate the videos. Data on video length, view count, number of likes and dislikes, like ratio (likes x 100/(likes + dislikes)), video source, and comment rate were collected.
    RESULTS: The quality of the videos was poor according to all scoring systems, regardless of the video source. The scores of the videos published by patients and commercial sources were significantly lower than those of physicians and allied health professionals (p < 0.05). VPI and view ratios were similar across all sources.
    CONCLUSION: The study demonstrates that YouTube videos providing information related to TLIF surgery are available and accessed by the public. These results suggest that YouTube is not currently an appropriate source of information on TLIF surgery for patients. Most YouTube videos about TLIF surgery describe the surgical technique and contain limited information about patients' postoperative condition.
    Keywords:  fusion; gqs score; interbody; lumbar; quality; reliability; tlif; transforaminal; videos; youtube
    DOI:  https://doi.org/10.7759/cureus.50210
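    The like ratio used in the abstract above is defined explicitly as likes x 100/(likes + dislikes); a minimal sketch of that metric (the guard for videos with no ratings is an assumption, since the abstract does not say how such videos were handled):

    ```python
    def like_ratio(likes: int, dislikes: int) -> float:
        """Like ratio as defined in the abstract: likes x 100 / (likes + dislikes).

        Returns 0.0 for videos with no ratings (assumption; not stated in the abstract).
        """
        total = likes + dislikes
        if total == 0:
            return 0.0
        return likes * 100 / total

    # Example: a video with 450 likes and 50 dislikes
    print(like_ratio(450, 50))  # 90.0
    ```

    Note that YouTube stopped exposing public dislike counts in late 2021, so studies computing this ratio on newer data would need researcher-side access to dislike counts.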
  19. J Orthop Surg (Hong Kong). 2024 Jan-Apr;32(1): 10225536231224833
      BACKGROUND: Information about orthopedic diseases on the Internet has not been extensively assessed. Our purpose was to evaluate the quality of online information on osteosarcoma on current video-sharing platforms in mainland China. METHODS: TikTok and Bilibili were independently queried from June to July 2023 by four independent researchers using the Microsoft Edge web browser. Information about the videos and creators was recorded, and descriptive analyses were conducted.
    RESULTS: After data extraction, a total of 95 videos were included, of which 43 were uploaded by certified doctors (45.3%), with 35 videos (36.8%) uploaded by certified orthopedic surgeons. Regarding content, 78.9% of the videos provided an introduction (n = 75), 64.2% covered professional knowledge (n = 61), 28.4% covered treatment (n = 27), and 5.3% covered surgical techniques (n = 5). The mean DISCERN total score was 43.8 ± 13.4, and the mean JAMA score was 3.8 ± 0.3.
    CONCLUSIONS: Videos about osteosarcoma on current video-sharing platforms were extensive but not comprehensive or professional. Although current online videos have the potential to improve public awareness of osteosarcoma, their quality and content make them poor sources for medical education.
    Keywords:  TikTok; cross-sectional study; internet; level II; orthopedics; osteosarcoma; video platform
    DOI:  https://doi.org/10.1177/10225536231224833
  20. Nurs Res. 2024 Jan 08.
      BACKGROUND: The outbreak of coronavirus disease 2019 (COVID-19) caused severe damage to public health globally and served as a stark reminder of the potential for future pandemics. Promoting protective behaviors to prevent the spread of any contagious disease thus remains a priority. While research has shown that health beliefs can affect protective behaviors, few studies have examined the role of information-seeking in this relationship. OBJECTIVE: Based on the health belief model, this research focused on whether health beliefs affect personal protective behaviors through health information-seeking behaviors.
    METHODS: This cross-sectional study with a causal-comparative design used an online questionnaire to investigate the Taiwanese public's health beliefs, protective behaviors, and information-seeking behaviors during the COVID-19 pandemic. Data were analyzed using descriptive statistics and multiple regression analysis.
    RESULTS: Between September 2021 and January 2022, 322 valid questionnaires were collected. The results revealed that the effects of two health beliefs (self-efficacy and perceived benefits) on handwashing, social distancing, practicing good cough etiquette, and keeping one's environment clean and well-ventilated were partially mediated by the frequency of official information-seeking.
    DISCUSSION: Results of this study support the regular and timely promotion of pandemic prevention measures through official sites. Promoting official information-seeking can help enhance protective behaviors.
    DOI:  https://doi.org/10.1097/NNR.0000000000000712
  21. Ann Plast Surg. 2024 Feb 01. 92(2): 148-155
      BACKGROUND: Patient education materials are commonly reported to be difficult to understand. OBJECTIVES: We aimed to use crowdsourcing to improve patient education materials at our institution.
    METHODS: This was a department-wide quality improvement project to increase organizational health literacy. This pilot study had 6 phases: (1) evaluating preexisting patient education materials, (2) evaluating online patient education materials at the society level (American Society of Plastic Surgeons) and the government level (MedlinePlus), (3) redesigning our patient education material and reevaluating it, (4) crowdsourcing to evaluate the understandability of the new patient education material, (5) analyzing the data, and (6) incorporating crowdsourcing suggestions into the patient education material.
    RESULTS: Breast-related patient education materials were not easy to read at the institution, society, or government level. Our new implant-based breast reconstruction patient education material was easy to read, as demonstrated by the crowdsourcing evaluation: more than 90% of the participants reported our material was "very easy to understand" or "easy to understand." The crowdsourcing process took 1.5 days, with 700 workers responding to the survey, at a total cost of $9. After incorporating participants' feedback into the finalized material, its readability was at the recommended reading level, and its length was within the recommended range (between 400 and 800 words).
    DISCUSSION: Our study demonstrated a pathway for clinicians to efficiently obtain a large amount of feedback to improve patient education materials. Crowdsourcing is an effective tool to improve organizational health literacy.
    DOI:  https://doi.org/10.1097/SAP.0000000000003777
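    The abstract above does not name the readability formula behind its "recommended reading level"; as an illustration only, a common choice for grading patient education materials is the Flesch-Kincaid grade level, 0.39 x (words/sentence) + 11.8 x (syllables/word) - 15.59. The syllable counter below is a rough vowel-group heuristic, not the dictionary-based counting that dedicated readability tools use:

    ```python
    import re


    def count_syllables(word: str) -> int:
        """Rough heuristic: count groups of consecutive vowels (minimum 1)."""
        groups = re.findall(r"[aeiouy]+", word.lower())
        return max(1, len(groups))


    def flesch_kincaid_grade(text: str) -> float:
        """Flesch-Kincaid grade: 0.39*(words/sentences) + 11.8*(syllables/words) - 15.59."""
        sentences = max(1, len(re.findall(r"[.!?]+", text)))
        words = re.findall(r"[A-Za-z']+", text)
        n_words = max(1, len(words))
        syllables = sum(count_syllables(w) for w in words)
        return 0.39 * (n_words / sentences) + 11.8 * (syllables / n_words) - 15.59


    # Short, monosyllabic sentences score at or below early elementary grade levels
    print(round(flesch_kincaid_grade("The cat sat on the mat."), 2))  # -1.45
    ```

    In practice, teams producing patient materials usually rely on established tools (e.g., word-processor readability statistics) rather than hand-rolled heuristics, but the formula itself is what ties word and sentence length to a grade level.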