bims-librar Biomed News
on Biomedical librarianship
Issue of 2024–09–01
fourteen papers selected by
Thomas Krichel, Open Library Society



  1. F1000Res. 2024. 13: 652
       Background: This study aims to review the extant literature on talent management with the objective of influencing library and information management by addressing the key facets of talent management, such as talent management strategies, the importance of career development, the evaluation of talented employees, and organizational resilience.
    Methodology: Literature on talent development and career management was retrieved from scholarly papers indexed in Scopus and Web of Science to build the meticulous literature review that serves as the foundation of the present study. In light of the authors' observations, two models were developed. The literature clearly indicates that talent management plays a decisive role in promoting organizational excellence in all kinds of organizations in general and in libraries in particular.
    Results: This study provides constructive recommendations for the implementation of effective talent management and retention policies for library and information professionals. Moreover, it adds value to the corpus of existing literature and sets a platform for the forward-looking development of library management.
    Conclusion: This study provides constructive recommendations to policy makers and library administrators to foster talented employees so that library and information services can excel over the next several decades.
    Keywords:  Talent management; career growth; employment strategy; organizational resilience; retention policy; talent pool
    DOI:  https://doi.org/10.12688/f1000research.151301.2
  2. Indian J Med Ethics. 2024 Jul-Sep. IX(3): 257-258
      We chanced upon a number of errors in the PubMed entry (PMID: 24727622) for the abstract of an article published in your journal a decade ago. This prompted us to consider how PubMed entries are rectified and whether it may be important to publish an erratum in a forthcoming issue of the journal when the original source on the journal's website contains no error.
    DOI:  https://doi.org/10.20529/IJME.2024.037
  3. Anat Sci Educ. 2024 Aug 26.
      Systematic reviews and meta-analyses aggregate research findings across studies and populations, making them a valuable form of research evidence. Over the past decade, studies in medical education using these methods have increased by 630%. However, many manuscripts are not publication-ready due to inadequate planning and insufficient analyses. These guidelines aim to improve the clarity and comprehensiveness of reporting methodologies and outcomes, ensuring high quality and comparability. They align with existing standards like PRISMA, providing examples and best practices. Adhering to these guidelines is crucial for publication consideration in Anatomical Sciences Education.
    Keywords:  anatomy education; education; educational methodology
    DOI:  https://doi.org/10.1002/ase.2500
  4. World J Clin Cases. 2024 Aug 26. 12(24): 5452-5455
      Case reports, often overlooked in evidence-based medicine (EBM), play a pivotal role in healthcare research. They provide unique insights into rare conditions, novel treatments, and adverse effects, serving as valuable educational tools and generating new hypotheses. Despite their limitations in generalizability, case reports contribute significantly to evidence-based practice by offering detailed clinical information and fostering critical thinking among healthcare professionals. By acknowledging their limitations and adhering to reporting guidelines, case reports can contribute significantly to medical knowledge and patient care within the evolving landscape of EBM. This editorial explores the intrinsic value of case reports in EBM and patient care.
    Keywords:  Case reports; Clinical cases; Editorial; Evidence based medicine; Healthcare research
    DOI:  https://doi.org/10.12998/wjcc.v12.i24.5452
  5. Appl Hum Factors Ergon Conf. 2023. 115: 499-506
      Engaging students in research is a high-impact practice shown to increase graduation outcomes and sustain their pursuit of careers in science, technology, engineering, and mathematics (STEM). However, research opportunities for students early in their undergraduate studies are not widely available at most colleges or for most students. To overcome this barrier, we developed three online resources designed to introduce students to what research is and direct them on how to get started with the search for research opportunities. These resources consist of (a) two introductory videos to inspire students to learn more about research, (b) online modules on the topics of getting started with research, transferable research skills, and publications and presentations, and (c) a searchable faculty research mentor directory. We found these online resources to be an effective way to reach and engage a large number of undergraduate students who are accustomed to obtaining information on the web. These online resources can also serve as useful supplemental resources for advising staff and faculty who wish to introduce students to research.
    Keywords:  High impact practices; Human-computer interaction; Online education; Online research training; Undergraduate research
    DOI:  https://doi.org/10.54941/ahfe1004349
  6. BMC Urol. 2024 Aug 23. 24(1): 177
       PURPOSE: The diagnosis and management of prostate cancer (PCa), the second most common cancer in men worldwide, are highly complex. Hence, patients often seek knowledge through additional resources, including AI chatbots built on large language models (LLMs), such as ChatGPT and Google Bard. This study aimed to evaluate the performance of LLMs in providing education on PCa.
    METHODS: Common patient questions about PCa were collected from reliable educational websites and evaluated for accuracy, comprehensiveness, readability, and stability by two independent board-certified urologists, with a third resolving discrepancies. Accuracy was measured on a 3-point scale, comprehensiveness on a 5-point Likert scale, and readability using the Flesch Reading Ease (FRE) score and the Flesch-Kincaid (FK) Grade Level.
    RESULTS: A total of 52 questions on general knowledge, diagnosis, treatment, and prevention of PCa were provided to three LLMs. Although there was no significant difference in the overall accuracy of LLMs, ChatGPT-3.5 demonstrated superiority over the other LLMs in terms of general knowledge of PCa (p = 0.018). ChatGPT-4 achieved greater overall comprehensiveness than ChatGPT-3.5 and Bard (p = 0.028). For readability, Bard generated simpler sentences with the highest FRE score (54.7, p < 0.001) and lowest FK reading level (10.2, p < 0.001).
    CONCLUSION: ChatGPT-3.5, ChatGPT-4 and Bard generate accurate, comprehensive, and easily readable PCa material. These AI models might not replace healthcare professionals but can assist in patient education and guidance.
    Keywords:  Artificial intelligence; ChatGPT; Chatbot; Large language models; Prostate cancer
    DOI:  https://doi.org/10.1186/s12894-024-01570-0
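     [Editor's note] Several items in this issue (6, 7, 8, and 10) report Flesch Reading Ease (FRE) and Flesch-Kincaid Grade Level (FKGL) scores. The standard English-language formulas are FRE = 206.835 - 1.015*(words per sentence) - 84.6*(syllables per word) and FKGL = 0.39*(words per sentence) + 11.8*(syllables per word) - 15.59. The short Python sketch below is illustrative only; it uses a crude vowel-group syllable counter and is not the tooling used by any of the studies cited here.

        import re

        def count_syllables(word: str) -> int:
            # Rough heuristic: count vowel groups; every word counts at least one syllable.
            return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

        def readability(text: str) -> tuple[float, float]:
            # Returns (Flesch Reading Ease, Flesch-Kincaid Grade Level) for English text.
            sentences = max(1, len(re.findall(r"[.!?]+", text)))
            words = re.findall(r"[A-Za-z']+", text) or ["a"]
            syllables = sum(count_syllables(w) for w in words)
            wps = len(words) / sentences   # average words per sentence
            spw = syllables / len(words)   # average syllables per word
            fre = 206.835 - 1.015 * wps - 84.6 * spw
            fkgl = 0.39 * wps + 11.8 * spw - 15.59
            return fre, fkgl

        # Higher FRE means easier text; FKGL approximates a US school grade level.
        print(readability("Prostate cancer is common in older men. Ask your doctor about screening."))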
  7. R I Med J (2013). 2024 Sep 01. 107(9): 38-44
       BACKGROUND: Assessment of readability and reliability of online resources for orthopedic patients is an area of growing interest, but there is currently limited reporting on this topic for patellar instability (PI) and medial patellofemoral ligament reconstruction (MPFLR).
    METHODS: Utilizing the Searchresponse.io dataset, we analyzed inquiries related to PI and MPFLR. Readability and reliability were assessed using the Automated Reading Index, Flesch Reading Ease, and the JAMA benchmark criteria.
    RESULTS: Analysis of 363 frequently asked questions from 130 unique websites revealed a predominant interest in fact-based information. Readability assessments indicated that the average grade level of the resources was significantly higher than the 6th grade level and reliability varied between resources.
    CONCLUSION: Although the internet is an easily accessible resource, we demonstrate that PI and MPFLR resources are written at a significantly higher reading level than is recommended, and reliability is inconsistent amongst resources, with medical practice websites demonstrating the lowest reliability.
    Keywords:  MPFL reconstruction; medial patellofemoral ligament; patella; patellar instability
  8. Eur Arch Otorhinolaryngol. 2024 Aug 28.
       PURPOSE: Oral mucositis affects 90% of patients receiving chemotherapy or radiation for head and neck malignancies. Many patients use the internet to learn about their condition and treatments; however, the quality of online resources is not guaranteed. Our objective was to determine the most common Google searches related to "oral mucositis" and assess the quality and readability of available resources compared to ChatGPT-generated responses.
    METHODS: Data related to Google searches for "oral mucositis" were analyzed. People Also Ask (PAA) questions (generated by Google) related to searches for "oral mucositis" were documented. Google resources were rated on quality, understandability, ease of reading, and reading grade level using the Journal of the American Medical Association benchmark criteria, the Patient Education Materials Assessment Tool, the Flesch Reading Ease Score, and the Flesch-Kincaid Grade Level, respectively. ChatGPT-generated responses to the most popular PAA questions were rated using identical metrics.
    RESULTS: Google search popularity for "oral mucositis" has significantly increased since 2004. 78% of the Google resources answered the associated PAA question, and 6% met the criteria for universal readability. 100% of the ChatGPT-generated responses answered the prompt, and 20% met the criteria for universal readability when asked to write for the appropriate audience.
    CONCLUSION: Most resources provided by Google do not meet the criteria for universal readability. When prompted specifically, ChatGPT-generated responses were consistently more readable than Google resources. After verification of accuracy by healthcare professionals, ChatGPT could be a reasonable alternative to generate universally readable patient education resources.
    Keywords:  Artificial intelligence; Google analytics; Head and neck cancer; Information quality; Oral mucositis; Patient education
    DOI:  https://doi.org/10.1007/s00405-024-08913-5
  9. Angle Orthod. 2024 Aug 14.
       OBJECTIVES: To evaluate the reliability of information produced by the artificial intelligence-based program ChatGPT in terms of accuracy and relevance, as assessed by orthodontists, dental students, and individuals seeking orthodontic treatment.
    MATERIALS AND METHODS: Frequently asked questions in four basic areas related to orthodontics were prepared and submitted to ChatGPT (Version 4.0), and the answers were evaluated by three different groups (senior dental students, individuals seeking orthodontic treatment, and orthodontists). The questions covered the following areas of orthodontics: clear aligners (CA), lingual orthodontics (LO), esthetic braces (EB), and temporomandibular disorders (TMD). The answers were evaluated with the Global Quality Scale (GQS) and the Quality Criteria for Consumer Health Information (DISCERN) scale.
    RESULTS: The total mean DISCERN score for answers on CA was 51.7 ± 9.38 for students, 57.2 ± 10.73 for patients, and 47.4 ± 4.78 for orthodontists (P = .001). GQS scores for LO were compared among groups: students (3.53 ± 0.78), patients (4.40 ± 0.72), and orthodontists (3.63 ± 0.72) (P < .001). In the intergroup comparison of ChatGPT evaluations about TMD on the DISCERN scale, the highest value was given by the patient group (57.83 ± 11.47) and the lowest by the orthodontist group (45.90 ± 11.84). When the evaluation of information quality about EB was examined, GQS scores were >3 in all three groups (students: 3.50 ± 0.78; patients: 4.17 ± 0.87; orthodontists: 3.50 ± 0.82).
    CONCLUSIONS: ChatGPT has significant potential in terms of usability for patient information and education in the field of orthodontics if it is developed and necessary updates are made.
    Keywords:  Artificial intelligence; ChatGPT; Clear aligners; Lingual orthodontics; Patient information
    DOI:  https://doi.org/10.2319/031224-207.1
  10. J Med Internet Res. 2024 Aug 28. 26: e54072
       BACKGROUND: Halitosis, characterized by an undesirable mouth odor, represents a common concern.
    OBJECTIVE: This study aims to assess the quality and readability of web-based Arabic health information on halitosis as the internet is becoming a prominent global source of medical information.
    METHODS: A total of 300 Arabic websites were retrieved from Google using 3 commonly used phrases for halitosis in Arabic. The quality of the websites was assessed using benchmark criteria established by the Journal of the American Medical Association, the DISCERN tool, and the presence of the Health on the Net Foundation Code of Conduct (HONcode). The assessment of readability (Flesch Reading Ease [FRE], Simple Measure of Gobbledygook, and Flesch-Kincaid Grade Level [FKGL]) was conducted using web-based readability indexes.
    RESULTS: A total of 127 websites were examined. Regarding quality assessment, 87.4% (n=111) of websites failed to fulfill any Journal of the American Medical Association requirements, highlighting a lack of authorship (authors' contributions), attribution (references), disclosure (sponsorship), and currency (publication date). The DISCERN tool had a mean score of 34.55 (SD 7.46), with the majority (n=72, 56.6%) rated as moderate quality, 43.3% (n=55) as having a low score, and none receiving a high DISCERN score, indicating a general inadequacy in providing quality health information to make decisions and treatment choices. No website had HONcode certification, emphasizing the concern over the credibility and trustworthiness of these resources. Regarding readability assessment, Arabic halitosis websites had high readability scores, with 90.5% (n=115) receiving an FRE score ≥80, 98.4% (n=125) receiving a Simple Measure of Gobbledygook score <7, and 67.7% (n=86) receiving an FKGL score <7. There were significant correlations between the DISCERN scores and the quantity of words (P<.001) and sentences (P<.001) on the websites. Additionally, there was a significant relationship (P<.001) between the number of sentences and FKGL and FRE scores.
    CONCLUSIONS: While readability was found to be very good, indicating that the information is accessible to the public, the quality of Arabic halitosis websites was poor, reflecting a significant gap in providing reliable and comprehensive health information. This highlights the need for improving the availability of high-quality materials to ensure Arabic-speaking populations have access to reliable information about halitosis and its treatment options, tying quality and availability together as critical for effective health communication.
    Keywords:  Arabic mouth medical information; bad breath; halitosis; health information; infodemiological study; infodemiology; malodor; Arabic web-based; odor treatment; oral malodor; readability; reliable information
    DOI:  https://doi.org/10.2196/54072
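     [Editor's note] Item 10 also reports the Simple Measure of Gobbledygook (SMOG), which is derived from the count of words with three or more syllables. The authors applied web-based readability indexes to Arabic text, so the sketch below, which uses the conventional English-language constants, is illustrative only.

        import math

        def smog_grade(polysyllable_words: int, sentences: int) -> float:
            # Conventional English SMOG grade from counts of 3+-syllable words and sentences.
            return 1.0430 * math.sqrt(polysyllable_words * (30 / sentences)) + 3.1291

        # Example: 25 polysyllabic words across 30 sentences gives roughly grade 8.
        print(smog_grade(25, 30))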
  11. J Clin Med. 2024 Aug 09. 13(16): 4691. [Epub ahead of print]
      Background: Fractures of the distal radius are among the most common bone injuries, and their frequency is constantly increasing, leading to an elevated need for subsequent rehabilitation. This growing need has led to the emergence of online content aimed at providing guidance on rehabilitation. Nonetheless, unreviewed online content raises concerns about its reliability; therefore, the objective of this study was to evaluate the quality, reliability, and comprehensiveness of online videos concerning rehabilitation following a distal radius fracture.
    Methods: A total of 240 YouTube videos were screened, identifying 33 videos that met the inclusion criteria. These selected videos were evaluated by five independent experts from various professional groups, using the Global Quality Scale, the DISCERN reliability tool, and the JAMA Benchmark Score, as well as a structured set of questions to assess their comprehensiveness and coverage of pertinent aspects.
    Results: The observers' assessment of the Global Quality Scale exhibited a broad spectrum of viewpoints, indicating considerable variability in evaluations. In most cases, therapy aligned well with the diagnosed condition, and most raters deemed the indication and instruction in the videos acceptable. A proportion of 87% of the videos was deemed suitable for home training by at least three raters. However, a concerning trend emerged, as potential risks and pitfalls were scarcely addressed.
    Conclusions: The moderate overall quality of the videos and the divergence in expert opinions highlight the need for a regulatory authority to ensure adherence to guidelines and maintain high-quality content. Additionally, our results raise concerns about the applicability of established assessment tools in this context.
    Keywords:  distal radius fracture; guidance; online content; quality assessment; rehabilitation
    DOI:  https://doi.org/10.3390/jcm13164691
  12. Medicine (Baltimore). 2024 Aug 23. 103(34): e39330
      The short-video application TikTok shows great potential for disseminating health information. We assessed the content, sources, and quality of information in videos related to nonalcoholic fatty liver disease (NAFLD) on TikTok. Our study aims to identify the upload sources, content, and characteristics of NAFLD videos on TikTok and to evaluate factors related to video quality. We investigated the top 100 videos related to NAFLD on TikTok and analyzed their upload sources, content, and characteristics. Video quality was evaluated using the DISCERN tool and the Global Quality Score (GQS), and the correlation between video quality and video characteristics was further studied. In terms of video sources, the majority of NAFLD videos on TikTok (85/100, 85%) were posted by doctors, ensuring the professionalism of the content. Among the video content, disease knowledge was the most dominant, accounting for 57% (57/100) of all videos. The average DISCERN and GQS scores of all 100 videos were 39.59 (SD 3.31) and 2.99 (SD 0.95), respectively. The DISCERN and GQS data show that videos related to NAFLD do not have high quality scores on TikTok, being mainly fair (68/100, 68%) and moderate (49/100, 49%). In general, the quality of NAFLD videos with professional content and from professional sources was higher than that of videos with nonprofessional content or from nonprofessional sources; the video quality of general surgeons was better than that of physicians from other departments, and the video quality of junior physicians was better than that of senior physicians. In terms of correlations, video duration, the number of fans, and the total number of works were negatively correlated with DISCERN scores (R < 0, P < .05), while likes, comments, collections, shares, and days since upload were not significantly correlated with DISCERN or GQS scores (P > .05). The medical information on TikTok is not rigorous enough to guide patients to make accurate judgments; platforms should monitor and guide publishers to help promote and disseminate quality content.
    DOI:  https://doi.org/10.1097/MD.0000000000039330