bims-librar Biomed News
on Biomedical librarianship
Issue of 2024-04-28
fourteen papers selected by
Thomas Krichel, Open Library Society



  1. J Hosp Librariansh. 2024;24(1): 1-9
      This column explains ways to optimize four PubMed search features: Computed Author Sort, the PubMed Identifier, the PubMed Phrase Index, and proximity search. Two case studies show how to find every citation in PubMed and how to retrieve comprehensive citations to systematic reviews. The article concludes by explaining why PubMed ignores some search terms.
    DOI:  https://doi.org/10.1080/15323269.2023.2291284
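Proximity search, one of the features covered in this column, also lends itself to scripting against the public NCBI E-utilities API. A minimal sketch that only assembles the query URL (the helper names are illustrative, not from the article; PubMed's proximity syntax is "terms"[tiab:~N], matching the quoted words within N words of each other):

```python
from urllib.parse import urlencode

# Public NCBI E-utilities search endpoint (no request is made below).
ESEARCH = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"

def proximity_query(words: str, distance: int, field: str = "tiab") -> str:
    """Return a PubMed proximity search term, e.g. '"hip pain"[tiab:~4]'."""
    return f'"{words}"[{field}:~{distance}]'

def esearch_url(term: str, retmax: int = 20) -> str:
    """Assemble an esearch URL for the given PubMed query string."""
    params = {"db": "pubmed", "term": term, "retmax": retmax}
    return f"{ESEARCH}?{urlencode(params)}"

term = proximity_query("systematic review", 3)
url = esearch_url(term)
```

Fetching the assembled URL returns an XML list of PMIDs; swapping `tiab` for `ti` restricts the proximity match to titles.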
  2. Clin Cosmet Investig Dermatol. 2024;17:853-862
      Purpose: Hidradenitis suppurativa (HS) is a complex disease that places a substantial burden on patients. The aim of this study was to evaluate the readability of online electronic materials dedicated to HS.
    Patients and Methods: The terms "hidradenitis suppurativa" and "acne inversa", translated into the 23 official European Union languages, were searched with Google. For each language, the first 50 results were assessed for suitability. Included materials focused on patient education, had no access barriers, and were not advertisements. If both terms generated the same results, duplicated materials were excluded from the analysis. The origin of each article was categorized as non-profit, online shop, dermatology clinic, or pharmaceutical company. Readability was evaluated with the Lix score.
    Results: A total of 458 articles in 22 languages were evaluated. The overall mean Lix score was 57 ± 9, classifying the included articles as very hard to comprehend. Significant differences in Lix score were found across the included languages (P < 0.001). No significant differences in Lix score were observed across origin categories (all P > 0.05).
    Conclusion: Despite the broad coverage of HS on the Internet, the complexity of the material makes it hard to comprehend. Dermatologists should ensure readable, barrier-free online educational materials; with adequate Google promotion, these would benefit both physicians and patients.
    Keywords:  acne inversa; hidradenitis suppurativa; online education; readability
    DOI:  https://doi.org/10.2147/CCID.S463861
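The Lix score used in this study is straightforward to compute: words per sentence plus the percentage of words longer than six letters. A minimal sketch (the tokenization heuristics are my own simplification, not the authors' implementation):

```python
import re

def lix(text: str) -> float:
    """Lix readability: words per sentence plus 100 * the share of long
    words (more than six letters). Higher is harder; roughly, scores
    above the mid-50s are considered very difficult."""
    words = re.findall(r"[^\W\d_]+", text)
    if not words:
        return 0.0
    # Treat sentence-ending punctuation runs as sentence boundaries.
    sentences = max(1, len(re.findall(r"[.!?:]+", text)))
    long_words = sum(1 for w in words if len(w) > 6)
    return len(words) / sentences + 100 * long_words / len(words)
```

Because the long-word share feeds in directly, jargon-dense medical text scores far above everyday prose even at the same sentence length.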
  3. Hand (N Y). 2024 Apr 25. 15589447241247332
      BACKGROUND: ChatGPT, an artificial intelligence technology, has the potential to be a useful patient aid, though the accuracy and appropriateness of its responses and recommendations on common hand surgical pathologies and procedures must be understood. Comparing the sources referenced and the characteristics of responses from ChatGPT and an established search engine (Google) on carpal tunnel surgery allows an assessment of the utility of ChatGPT for patient education.
    METHODS: A Google search of "carpal tunnel release surgery" was performed and the "frequently asked questions (FAQs)" were recorded along with their answers and sources. ChatGPT was then asked to answer the same FAQs. The FAQs were compared, and answer content was compared using word count, readability analyses, and content source.
    RESULTS: There was 40% concordance among questions asked by the programs. Google answered each question with one source per answer, whereas ChatGPT's answers were created from two sources per answer. ChatGPT's answers were significantly longer than Google's and multiple readability analysis algorithms found ChatGPT responses to be statistically significantly more difficult to read and at a higher grade level than Google's. ChatGPT always recommended "contacting your surgeon."
    CONCLUSION: A comparison of ChatGPT's responses to Google's FAQ responses revealed that ChatGPT's answers were more in-depth, from multiple sources, and from a higher proportion of academic Web sites. However, ChatGPT answers were found to be more difficult to understand. Further study is needed to understand if the differences in the responses between programs correlate to a difference in patient comprehension.
    Keywords:  carpal tunnel syndrome; diagnosis; hand; psychosocial; research and health outcomes
    DOI:  https://doi.org/10.1177/15589447241247332
  4. J Pediatr Ophthalmol Strabismus. 2024 Apr 25. 1-7
      PURPOSE: To evaluate the understandability, actionability, and readability of responses provided by the website of the American Association for Pediatric Ophthalmology and Strabismus (AAPOS), ChatGPT-3.5, Bard, and Bing Chat about amblyopia, and the appropriateness of the responses generated by the chatbots.
    METHODS: Twenty-five questions provided by the AAPOS website were directed three times to fresh ChatGPT-3.5, Bard, and Bing Chat interfaces. Two experienced pediatric ophthalmologists categorized the chatbots' responses in terms of their appropriateness. The Flesch Reading Ease (FRE), Flesch-Kincaid Grade Level (FKGL), and Coleman-Liau Index (CLI) were used to evaluate the readability of the responses of the AAPOS website and the chatbots. Understandability scores were evaluated using the Patient Education Materials Assessment Tool (PEMAT).
    RESULTS: The appropriateness of the chatbots' responses was 84.0% for ChatGPT-3.5 and Bard and 80% for Bing Chat (P > .05). For understandability (mean PEMAT-U score: AAPOS website 81.5%, Bard 77.6%, ChatGPT-3.5 76.1%, and Bing Chat 71.5%, P < .05) and actionability (mean PEMAT-A score: AAPOS website 74.6%, Bard 69.2%, ChatGPT-3.5 67.8%, and Bing Chat 64.8%, P < .05), the AAPOS website scored better than the chatbots. Three readability analyses showed that Bard had the highest mean score, followed by the AAPOS website, Bing Chat, and ChatGPT-3.5; all scores were more challenging than the recommended reading level.
    CONCLUSIONS: Chatbots have the potential to provide detailed and appropriate responses at acceptable levels. The AAPOS website has the advantage of providing information that is more understandable and actionable. The AAPOS website and the chatbots, especially ChatGPT, provided difficult-to-read material for patient education regarding amblyopia. [J Pediatr Ophthalmol Strabismus. 20XX;X(X):XXX-XXX.].
    DOI:  https://doi.org/10.3928/01913913-20240409-01
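The Flesch formulas used in this study are fixed linear combinations of words per sentence and syllables per word; only the syllable count is heuristic. A minimal sketch with a crude vowel-group syllable counter (my own approximation — published tools use more elaborate rules, so absolute scores will differ):

```python
import re

def count_syllables(word: str) -> int:
    """Crude heuristic: count vowel groups, treating a trailing 'e'
    (other than '-le'/'-ee') as silent."""
    word = word.lower()
    n = len(re.findall(r"[aeiouy]+", word))
    if word.endswith("e") and n > 1 and not word.endswith(("le", "ee")):
        n -= 1
    return max(1, n)

def flesch_scores(text: str) -> tuple[float, float]:
    """Return (Flesch Reading Ease, Flesch-Kincaid Grade Level).
    Higher FRE = easier; higher FKGL = higher US school grade."""
    words = re.findall(r"[A-Za-z]+", text)
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    syllables = sum(count_syllables(w) for w in words)
    wps = len(words) / sentences   # words per sentence
    spw = syllables / len(words)   # syllables per word
    fre = 206.835 - 1.015 * wps - 84.6 * spw
    fkgl = 0.39 * wps + 11.8 * spw - 15.59
    return fre, fkgl
```

Note that FRE and FKGL move in opposite directions on the same text, which is why studies often report both alongside a third index such as CLI.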
  5. Eur Arch Otorhinolaryngol. 2024 Apr 23.
      PURPOSE: As online health information-seeking surges, concerns mount over the quality and safety of accessible content, which can harm patients through misinformation. On one hand, the emergence of Artificial Intelligence (AI) in healthcare could help counter this; on the other hand, questions arise regarding the quality and safety of the medical information it provides. As laryngeal cancer is a prevalent head and neck malignancy, this study aims to evaluate the utility and safety of three large language models (LLMs) as sources of patient information about laryngeal cancer.
    METHODS: A cross-sectional study was conducted using three LLMs (ChatGPT 3.5, ChatGPT 4.0, and Bard). A questionnaire comprising 36 inquiries about laryngeal cancer was categorised into diagnosis (11 questions), treatment (9 questions), novelties and upcoming treatments (4 questions), controversies (8 questions), and sources of information (4 questions). Reviewers comprised three groups: ENT specialists, junior physicians, and non-medical raters, all of whom graded the responses. Each physician evaluated each question twice per model, while non-medical raters did so once. All reviewers were blinded to the model type, and the question order was shuffled. Outcome evaluations were based on a safety score (1-3) and a Global Quality Score (GQS, 1-5). Results were compared between LLMs. The study included iterative assessments and statistical validations.
    RESULTS: Analysis revealed that ChatGPT 3.5 scored highest in both safety (mean: 2.70) and GQS (mean: 3.95). ChatGPT 4.0 and Bard had lower safety scores of 2.56 and 2.42, respectively, with corresponding quality scores of 3.65 and 3.38. Inter-rater reliability was consistent, with less than 3% discrepancy. About 4.2% of responses fell into the lowest safety category (1), particularly in the novelty category. Non-medical reviewers' quality assessments correlated moderately (r = 0.67) with response length.
    CONCLUSIONS: LLMs can be valuable resources for patients seeking information on laryngeal cancer. ChatGPT 3.5 provided the most reliable and safe responses among the models evaluated.
    Keywords:  Artificial intelligence; Bard; ChatGPT; Laryngeal cancer; Oncology; Patient education
    DOI:  https://doi.org/10.1007/s00405-024-08643-8
  6. Phys Sportsmed. 2024 Apr 26. 1-7
      OBJECTIVES: This study investigates the most common online patient questions pertaining to posterior cruciate ligament (PCL) injuries and the quality of the websites providing the information.
    METHODS: Four PCL search queries were entered into Google Web Search. Questions under the 'People also ask' tab were expanded in order, and 100 results for each query were included (400 total). Questions were categorized based on Rothwell's Classification of Questions (Fact, Policy, Value). Websites were categorized by source (Academic, Commercial, Government, Medical Practice, Single Surgeon Personal, Social Media). Website quality was evaluated using the Journal of the American Medical Association (JAMA) Benchmark Criteria. Pearson's chi-squared test was used to assess categorical data, and Cohen's kappa to assess inter-rater reliability.
    RESULTS: Most questions fell into the Rothwell Fact category (54.3%). The most common question topics were Diagnosis/Evaluation (18.0%), Indications/Management (15.5%), and Timeline of Recovery (15.3%); the least common were Technical Details of Procedure (1.5%), Cost (0.5%), and Longevity (0.5%). The most common website sources were Medical Practice (31.8%) and Commercial (24.3%), while the least common were Government (8.5%) and Social Media (1.5%). The average JAMA score was 1.49 ± 1.36. Government websites had the highest JAMA score (3.00 ± 1.26) and constituted 42.5% of all websites with a score of 4/4, while Single Surgeon Personal websites had the lowest (0.76 ± 0.87, range 0-2). PubMed articles constituted 70.6% (24/34) of Government websites; of these, 70.8% (17/24) had a JAMA score of 4 and 20.8% (5/24) a score of 3.
    CONCLUSION: Patients search the internet for information regarding diagnosis, treatment, and recovery of PCL injuries and are less interested in the details of the procedure, cost, and longevity of treatment. The low JAMA scores reflect the heterogeneous quality and transparency of online information. Physicians can use this information to help guide patient expectations pre- and postoperatively.
    Keywords:  PCL; Posterior cruciate ligament; athletics; injury; knee; sports; trauma
    DOI:  https://doi.org/10.1080/00913847.2024.2346462
  7. Cancer Med. 2024 May;13(9):e7167
      BACKGROUND: Gynaecological cancer symptoms are often vague and non-specific. Quality health information is central to timely cancer diagnosis and treatment. The aim of this study was to identify and evaluate the quality of online text-based patient information resources regarding gynaecological cancer symptoms.
    METHODS: A targeted website search and a Google search were conducted to identify health information resources published by the Australian government and non-government health organisations. Resources were classified by topic (gynaecological health, gynaecological cancers, cancer, general health) and assessed for reading level (Simple Measure of Gobbledygook, SMOG), reading difficulty (Flesch Reading Ease, FRE), and understandability and actionability (Patient Education Materials Assessment Tool, PEMAT, 0-100), whereby higher scores indicate better understandability/actionability. Seven criteria were used to assess cultural inclusivity specific to Aboriginal and Torres Strait Islander people; resources meeting 3-5 items were deemed moderately inclusive and those meeting 6 or more items inclusive.
    RESULTS: A total of 109 resources were identified and 76% provided information on symptoms in the context of gynaecological cancers. The average readability was equivalent to a grade 10 reading level on the SMOG and classified as 'difficult to read' on the FRE. The mean PEMAT scores were 95% (range 58-100) for understandability and 13% (range 0-80) for actionability. Five resources were evaluated as being moderately culturally inclusive. No resource met all the benchmarks.
    CONCLUSIONS: This study highlights the inadequate quality of online resources available on pre-diagnosis gynaecological cancer symptom information. Resources should be revised in line with the recommended standards for readability, understandability and actionability and to meet the needs of a culturally diverse population.
    Keywords:  cultural inclusion; gynaecological cancer; gynaecological symptoms; health literacy; indigenous health; internet; patient information
    DOI:  https://doi.org/10.1002/cam4.7167
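SMOG, used in this study for grade-level estimates, depends only on the count of polysyllabic words (three or more syllables) per 30 sentences. A minimal sketch (the vowel-group syllable counter is my own rough heuristic, not the study's instrument):

```python
import math
import re

def syllables(word: str) -> int:
    """Crude vowel-group count; adequate for flagging 3+ syllable words."""
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def smog_grade(text: str) -> float:
    """SMOG grade = 1.0430 * sqrt(polysyllables * 30 / sentences) + 3.1291.
    The result approximates the US school grade needed to read the text."""
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z]+", text)
    polysyllables = sum(1 for w in words if syllables(w) >= 3)
    return 1.0430 * math.sqrt(polysyllables * 30 / sentences) + 3.1291
```

A text with no polysyllabic words bottoms out at the constant 3.1291, which is why SMOG is usually quoted only for passages of 30 sentences or more.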
  8. J Cosmet Dermatol. 2024 Apr 26.
      BACKGROUND: Pattern hair loss, the most common form of hair loss, affects millions in the United States. Americans are increasingly seeking health information from social media, yet healthcare professionals appear to contribute relatively little pattern hair loss content, raising serious concerns about the credibility and quality of the information available to the general public.
    OBJECTIVES: This study evaluates popular pattern hair loss-related content on Instagram, TikTok, and YouTube, aiming to understand effective engagement strategies for healthcare professionals on social media.
    METHODS: The top 60 short-form videos were extracted from Instagram, TikTok, and YouTube, using the search term "pattern hair loss" and inclusion of USA-based accounts only. Videos were categorized by creator type (healthcare vs. non-healthcare professional), content type (informational, interactional, and transactional), and analyzed for user engagement and quality, using engagement ratios and DISCERN scores, respectively.
    CONCLUSIONS: Healthcare professionals, especially dermatologists, play a crucial role in delivering credible information on social media, as reflected in their higher DISCERN scores. Multi-platform presence, frequent activity, and strategic content creation contribute to increased reach and engagement, while the duration of short-form videos does not affect engagement. The "Duet" or "Remix" options on TikTok, Instagram, and YouTube serve as valuable tools for healthcare professionals to counter misinformation. Our study underscores the importance of optimizing the educational impact of healthcare professionals at a time when the public increasingly relies on social media for medical information.
    Keywords:  Instagram; TikTok; YouTube; hair loss; social media
    DOI:  https://doi.org/10.1111/jocd.16352
  9. Healthcare (Basel). 2024 Apr 14. pii: 830. [Epub ahead of print]12(8):
      Interest in the potential therapeutic use of cannabis, especially cannabidiol (CBD), has increased significantly in recent years. On the Internet, users can find many articles devoted to its medical uses, such as reducing seizure activity in epilepsy. The aim of our work was to evaluate information published on websites, including social media, in terms of credibility and reliability against current knowledge about the use of cannabidiol-containing products in epilepsy treatment. We used publicly available links found with the Newspoint tool. The initial database included 38,367 texts; after applying the inclusion and exclusion criteria, 314 texts were analysed. Analysis was performed using the DISCERN scale and a set of questions created by the authors. In the final assessment, most of the texts (58.9%) showed a very poor level of reliability, and the average DISCERN score was 26.97 points. Considering the form of the text, the highest average score (35.73) came from entries on blog portals, whereas the lowest (18.33) came from comments and online discussion forums. Moreover, most of the texts did not contain key information regarding the indications, safety, desired effects, and side effects of CBD therapy. The study highlights the need for healthcare professionals to guide patients towards reliable sources of information and cautions against the use of unverified online materials, especially as the only FDA-approved CBD medication, Epidiolex, differs significantly from over-the-counter CBD products.
    Keywords:  CBD; Epidiolex; Internet; cannabidiol; epilepsy; social media
    DOI:  https://doi.org/10.3390/healthcare12080830
  10. J Pain Res. 2024;17:1509-1518
      Introduction: Acupuncture is commonly used to treat chronic pain, and patients often turn to public social media platforms for healthcare information about it. Our study aims to appraise the utility, accuracy, and quality of information available on YouTube, a popular social media platform, about acupuncture for chronic pain treatment.
    Methods: Using search terms such as "acupuncture for chronic pain" and "acupuncture pain relief", the top 54 videos by view count were selected. Included videos were >1 minute in duration, contained English audio, had >7000 views, and were related to acupuncture. One primary outcome of interest was categorizing each video's usefulness as useful, misleading, or neither. Another was the quality and reliability of each video, assessed using validated instruments: the modified DISCERN (mDISCERN) tool and the Global Quality Scale (GQS). Means were calculated for video production characteristics, production sources, and mDISCERN and GQS scores. Continuous and categorical outcomes were compared using Student's t-test and the chi-square test, respectively.
    Results: Of the 54 videos, 57.4% were categorized as useful, 14.8% as misleading, and 27.8% as neither. Useful videos had mean GQS and mDISCERN scores of 3.77±0.67 and 3.48±0.63, respectively, while misleading videos had mean GQS and mDISCERN scores of 2.50±0.53 and 2.38±0.52. Of the useful videos, 41.8% were produced by a healthcare institution, whereas none of the misleading videos were. However, 87.5% of the misleading videos were produced by health media, compared to only 25.8% of useful videos.
    Discussion: As patients increasingly depend on platforms like YouTube for trustworthy information on complementary health practices such as acupuncture, our study emphasizes the critical need for more high-quality videos from unbiased healthcare institutions and physicians to ensure patients receive reliable information on this topic.
    Keywords:  acupuncture; anesthesia; chronic pain; information dissemination; internet; social media
    DOI:  https://doi.org/10.2147/JPR.S459475
  11. World Neurosurg. 2024 Apr 18. pii: S1878-8750(24)00628-4. [Epub ahead of print]
      OBJECTIVE: This study aimed to evaluate the quality and reliability of YouTube videos focusing on Unilateral Biportal Endoscopic (UBE) spine surgery, a novel technique for spinal decompression in degenerative spinal disease.
    METHODS: This cross-sectional study, conducted in February 2023, involved an online search on YouTube using the term "unilateral biportal endoscopic spine surgery". Video popularity was assessed using the Video Power Index. Video reliability and quality were measured using the Global Quality Scale, the Journal of the American Medical Association benchmark criteria, and the modified DISCERN instrument.
    RESULTS: Ninety-three videos were included for evaluation. Uploader profiles were categorised by region, with 61.3% from Asia, 35.5% from the USA, 2.2% from Africa, and 1.1% from Australia. When videos were compared across three groups (South Korea, USA, and other countries), no significant differences were observed in their technical characteristics. However, the educational quality and reliability of the videos were higher for those uploaded from South Korea (p<0.001). When the videos were divided into two groups according to their educational quality, significant differences were noted in video duration, loading time, video quality, and reliability (p<0.001).
    CONCLUSIONS: Overall, the YouTube videos on UBE spine surgery showed high quality and reliability. Videos from South Korea had higher educational quality and reliability, while other characteristics were similar across videos. Furthermore, more recently uploaded and longer videos were of higher quality.
    Keywords:  Biportal; YouTube; endoscopic; quality; reliability
    DOI:  https://doi.org/10.1016/j.wneu.2024.04.063
  12. Int Ophthalmol. 2024 Apr 23. 44(1): 192
      BACKGROUND: To determine the quality and reliability of dacryocystorhinostomy (DCR) YouTube videos as patient education resources and to identify any factors predictive of video quality.
    METHODS: A YouTube search was conducted using the terms "Dacryocystorhinostomy, DCR, surgery" on 12 January 2022, with the first 50 relevant videos selected for inclusion. For each video, the following were collected: video hyperlink, title, total views, months since posting, video length, total likes/dislikes, authorship (i.e. surgeon, patient experience, or media company), and number of comments. The videos were graded independently by a resident, a registrar, and an oculoplastic surgeon using three validated scoring systems: the Journal of the American Medical Association (JAMA) benchmark criteria, DISCERN, and Health on the Net (HON).
    RESULTS: The average number of video views was 22,992, with a mean length of 488.12 s and an average of 18 comments per video. The consensus JAMA, DISCERN, and HON scores were 2.1 ± 0.6, 29.1 ± 8.8, and 2.7 ± 1.0, respectively, indicating that the included videos were of low quality; however, only DISCERN scores had good interobserver agreement. Videos posted by surgeons were superior to those by non-surgeons on mean JAMA and HON scores. No other factors were associated with the quality of educational content.
    CONCLUSION: The quality and reliability of DCR related content for patient education is relatively low. Based on this study's findings, patients should be encouraged to view videos created by surgeons or specialists in preference to other sources on YouTube.
    Keywords:  Dacryocystorhinostomy; Health education; Ophthalmology; Videos; YouTube
    DOI:  https://doi.org/10.1007/s10792-024-03139-0
  13. J Gerontol Soc Work. 2024 Apr 25. 1-16
      Caregivers of people living with dementia (PLWD) are often tasked with making decisions about their loved one's daily care and healthcare treatment, causing stress and decision-making fatigue. Many caregivers engage in health information seeking to improve their health literacy for optimal decision-making, yet little is known about the strategies they use. This study surveyed caregivers in Alabama, most of whom were African American and/or living in rural communities that have historically been underserved. The findings shed light on caregivers' experiences in seeking health-related information and their perceptions of various sources of information.
    Keywords:  Caregivers; dementia; health literacy; technology
    DOI:  https://doi.org/10.1080/01634372.2024.2339960