bims-librar Biomed News
on Biomedical librarianship
Issue of 2024‒08‒11
38 papers selected by
Thomas Krichel, Open Library Society



  1. J Med Libr Assoc. 2024 Apr 01. 112(2): 145-147
      The Generalized Overview of the NIH Data Management and Sharing Policy Effective 2023.01.15 (Generalized Overview) is an instructional material that provides a basic, clear, and linear understanding of the NIH policy and its requirements. While not developing or utilizing new technology, the Generalized Overview is innovative and notable for creatively using a freely available graphic design tool to translate government policy language into an accessible and understandable infographic that can disseminate important information about the NIH DMS Policy needed by researchers and by those who support them. Because the infographic is shared via a Creative Commons license, others may fully adapt it or simply add their own institutional contact information. The Generalized Overview can be used by anyone who finds themselves responsible for publicizing and/or teaching the NIH Data Management and Sharing Policy at their library or institution. It is intended for educational purposes only and should not be used as a substitute for official guidance from the NIH.
    Keywords:  Canva; Decision Tree; Flow-chart; Infographic; NIH Data Management and Sharing Policy
    DOI:  https://doi.org/10.5195/jmla.2024.1867
  2. J Med Libr Assoc. 2024 Apr 01. 112(2): 142-144
      The DMPTool NIH Data Management and Sharing Plan (DMSP) Templates Project was launched in response to the 2023 NIH Data Management and Sharing (DMS) Policy. This new policy introduced a more structured framework for DMS Plans, featuring six key elements, a departure from the 2003 NIH DMS policy. The project aimed to simplify the process for data librarians, research administrators, and researchers by providing a template with curated guidance, eliminating the need to navigate various policies and guidelines. The template breaks out each Plan section and subsection and provides related guidance and examples at the point of need. This effort has resulted in two NIH DMSP Templates. The first is a generic template (NIH-Default) for all ICs, complying with NOT-OD-21-013 and NOT-OD-22-198. More recently, an NIMH-specific template (NIH-NIMH) was added based on NOT-MH-23-100. As of October 2023, over 5,000 DMS Plans have been written using the main NIH-Default template and the NIH-NIMH alternative template.
    Keywords:  DMPTool; Data management and sharing plans; NIH data management and sharing plans; NIH grant compliance; Research data management
    DOI:  https://doi.org/10.5195/jmla.2024.1871
  3. J Med Libr Assoc. 2024 Apr 01. 112(2): 158-163
      The twin pandemics of COVID-19 and structural racism brought into focus health disparities and disproportionate impacts of disease on communities of color. Health equity has subsequently emerged as a priority. Recognizing that the future of health care will be informed by advanced information technologies including artificial intelligence (AI), machine learning, and algorithmic applications, the authors argue that to advance towards states of improved health equity, health information professionals need to engage in and encourage the conduct of research at the intersections of health equity, health disparities, and computational biomedical knowledge (CBK) applications. Recommendations are provided as a means of engaging in this mobilization effort.
    Keywords:  Algorithms; Artificial Intelligence; Computing Methodologies; Health Equity; Health Inequalities; Health Status Disparities; Information Science; Library Science; Machine Learning
    DOI:  https://doi.org/10.5195/jmla.2024.1836
  4. J Med Libr Assoc. 2024 Apr 01. 112(2): 95-106
      Objective: This article describes the evolution of academic public health library services from standalone academic public health libraries in 2004 to centralized services by 2021.
    Methods: Five public health libraries serving public health graduate programs (SPH) at public and private institutions were visited in 2006-07. Visits comprised tours, semi-structured interviews with librarians and local health department staff, and collection of contemporary print documents. We compiled and compared visit notes across libraries. In 2022, we reviewed online materials announcing library closure or transition for timing and how services were to be subsequently provided.
    Results: Libraries and SPH were co-located and most librarians maintained public health expertise though they did not have faculty appointments in their SPHs. Specialized statistical and geographic information systems (GIS) software and data were provided in partnership, often with other system libraries. Only two libraries had strong connections to health departments: one with direct service agreements and another engaged in public health training.
    Conclusion: Academic public health libraries' relationships with SPHs and health departments did not ensure their existence as standalone entities. Following a national trend for branch libraries, public health information services were centralized into larger health or science libraries. The scope and specialization of librarian expertise continue to be valued, with several institutions having librarians dedicated to public health.
    Keywords:  Academic libraries; Branch libraries; Health departments; Library spaces; Public health; Public health librarianship
    DOI:  https://doi.org/10.5195/jmla.2024.1804
  5. J Med Libr Assoc. 2024 Apr 01. 112(2): 125-132
      Background: Academic libraries play a significant role in the student learning process. However, student needs and preferences as well as new paradigms of learning are driving libraries to transition from quiet book repositories to places of collaboration and open information. This descriptive, mixed-methods case presentation explores the transition of one library, the United States Air Force School of Aerospace Medicine Franzello Aeromedical Library, in three key areas: collection, capability, and facility. Due to the niche subject matter and audience the library serves, this case also describes how the Franzello Aeromedical Library's distinct collection and capability remained intact throughout modernization.
    Case Presentation: The Franzello Aeromedical Library's modernization project aimed to position the library as a cutting-edge resource supporting USAFSAM's education, consultation, and research mission to equip Aerospace Medicine Airmen with the skills and knowledge for healthcare delivery in austere environments. This project was approached in five phases: 1) best practices baseline, 2) baseline evaluation of library visitor needs, 3) collection weeding, 4) capability, and 5) space design and construction.
    Conclusion: As a result of this complex two-year project, several recommendations were gleaned. Use the effort as an opportunity to market library services to new audiences. Ensure all stakeholders are at the table from day one and in perpetuity to save time, and consider using purposeful decision-making models, such as Courses of Action, to make tough calls. Be prepared for delays by padding your timeline and compromise where necessary to keep the project alive. Finally, the authors recommend using in-project discovery and findings to plan for future need justification.
    Keywords:  Library modernization; aerospace medicine; collection weeding; library capability; library redesign
    DOI:  https://doi.org/10.5195/jmla.2024.1792
  6. J Med Libr Assoc. 2024 Apr 01. 112(2): 153-157
      Medical librarians work collaboratively across all units and missions of academic medical centers. One area where librarians can provide key expertise is in the building and maintenance of Research Information Management Systems (RIMS). At Penn State, the RIMS implementation team has included a medical librarian, research administrators, and marketing staff from the College of Medicine (CoM) since its inception in 2016. As our peer institutions implemented or expanded their own RIMS, the CoM team has responded to their questions regarding details about the Penn State RIMS instance. The goal of this commentary is to describe how the CoM team has worked collaboratively within Penn State to address questions related to research output, with special emphasis on details pertaining to questions from other institutions.
    Keywords:  Research networking; biomedical research; collaboration
    DOI:  https://doi.org/10.5195/jmla.2024.1887
  7. J Med Libr Assoc. 2024 Apr 01. 112(2): 133-139
      Background: Libraries provide access to databases with auto-cite features embedded into the services; however, the accuracy of these auto-cite buttons is not very high in humanities and social sciences databases.
    Case Presentation: This case compares two biomedical databases, Ovid MEDLINE and PubMed, to see if either is reliable enough to confidently recommend to students for use when writing papers. A total of 60 citations were assessed, 30 citations from each citation generator, based on the top 30 articles in PubMed from 2010 to 2020.
    Conclusions: Error rates were higher in Ovid MEDLINE than in PubMed, but neither database platform provided error-free references. The auto-cite tools were not reliable. Zero of the 60 citations examined were 100% correct. Librarians should continue to advise students not to rely solely upon citation generators in these biomedical databases.
    Keywords:  Auto-citation generator; Ovid MEDLINE; PubMed; biomedical databases; citation on demand; information literacy; librarians
    DOI:  https://doi.org/10.5195/jmla.2024.1718
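
This kind of audit can be partly scripted by comparing an auto-generated citation's fields against PubMed's own record. A minimal sketch, assuming Python with the requests library and the public NCBI E-utilities esummary endpoint; the compared fields and example values are illustrative, not the study's protocol:

```python
# Illustrative check of an auto-generated citation against PubMed's metadata.
import requests

ESUMMARY = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esummary.fcgi"

def pubmed_metadata(pmid: str) -> dict:
    """Fetch title, journal, and authors for a PMID as PubMed records them."""
    resp = requests.get(ESUMMARY, params={"db": "pubmed", "id": pmid, "retmode": "json"})
    resp.raise_for_status()
    doc = resp.json()["result"][pmid]
    return {"title": doc["title"],
            "journal": doc["source"],
            "authors": [a["name"] for a in doc["authors"]]}

def citation_errors(generated: dict, pmid: str) -> list:
    """Return the fields where a generated citation disagrees with PubMed."""
    reference = pubmed_metadata(pmid)
    return [f for f, v in reference.items() if generated.get(f) != v]

# Hypothetical usage: an empty list means no field-level discrepancies.
# print(citation_errors({"title": "...", "journal": "...", "authors": []}, "32000000"))
```
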
  8. J Med Libr Assoc. 2024 Apr 01. 112(2): 148-149
      
    Keywords:  Artificial Intelligence; Public health; rapid review
    DOI:  https://doi.org/10.5195/jmla.2024.1868
  9. BMJ Evid Based Med. 2024 Aug 06. pii: bmjebm-2023-112798. [Epub ahead of print]
      
    Keywords:  Information Science; Information Storage and Retrieval; Methods; Publishing; Systematic Reviews as Topic
    DOI:  https://doi.org/10.1136/bmjebm-2023-112798
  10. J Med Libr Assoc. 2024 Apr 01. 112(2): 73-80
      Objectives: This study aims to explore how health sciences faculty publication patterns at a large public research university have changed over time and examine how productivity relates to their information-seeking behavior and perception of the academic library.
    Methods: Two datasets were utilized: one consisted of publication records of health sciences faculty spanning a 15-year period, while the other was from a faculty survey exploring faculty's perception of and satisfaction with library resources and services related to their research.
    Results: Health sciences faculty publication patterns have changed over time, characterized by greater productivity, collaboration, and use of literature in their publications. Faculty's literature use correlates with productivity, as evidenced by both datasets. The survey revealed that faculty with more publications tend to rely more on online journals and Interlibrary Loan (ILL). Similarly, the publication data indicated that less productive faculty tended to use fewer references in their publications.
    Discussion: The publication data and survey results offer valuable insights into the health sciences faculty's information-seeking behavior and productivity. Online access to information has been effective in facilitating use of information, as indicated by the greater incorporation of references in publications.
    Conclusion: The study highlights the changing publication patterns and productivity of health sciences faculty, as well as the role academic libraries play in supporting their research and publishing activities. Although multiple variables influence faculty access to and use of information, faculty attitudes towards the library and use of the library are related to faculty research and productivity.
    Keywords:  Academic Libraries; Faculty; Faculty publications; Research Practices
    DOI:  https://doi.org/10.5195/jmla.2024.1789
  11. J Med Libr Assoc. 2024 Apr 01. 112(2): 150-152
      The Data Policy Finder is a searchable database containing librarian-curated information, links, direct quotes from relevant policy sections, and notes to help the researcher search, verify, and plan for their publication data requirements. The Memorial Sloan Kettering Cancer Center Library launched this new resource to help researchers navigate the ever-growing and widely varying body of publisher policies regarding data, code, and other supplemental materials. The project team designed this resource to encourage growth and collaboration with other librarians and information professionals facing similar challenges supporting their research communities. This resource creates another access point for researchers to connect with data management services and, as an open-source tool, it can be integrated into the workflows and support services of other libraries.
    Keywords:  Editorial Policies; Research Data Management
    DOI:  https://doi.org/10.5195/jmla.2024.1865
  12. J Med Libr Assoc. 2024 Apr 01. 112(2): 164-168
      The five-year rule must die. Despite an extensive literature search, the origins of the five-year rule remain unknown. In an era when the nursing profession is so focused on evidence-based practice, any approach that arbitrarily limits literature searches to articles published in the previous five years lacks scientific basis. We explore some reasons for the pervasiveness of the practice and suggest that librarians need to engage with nursing faculty, who are well-positioned to be change agents in this practice.
    Keywords:  5-year Rule; Date Limits; Date Range; Literature Searches; Nurses; Nursing Education; Nursing Faculty; Nursing Research; Search Limits
    DOI:  https://doi.org/10.5195/jmla.2024.1768
  13. J Med Libr Assoc. 2024 Apr 01. 112(2): 140-141
      Electronic resource reviews written by librarians are a valuable way to identify potential content platforms and stay current on new resources. Resource-focused articles can also assist with learning about useful features, training others, and marketing to potential user groups. However, articles evaluating or highlighting innovative uses of resources may be published in disparate journals or online platforms and are not collocated. Small or solo-staffed libraries may not subscribe to library and information sciences databases or journals that contain reviews of electronic resources. Many of these reviews and other useful articles, however, are open access. With this in mind, the LERRN project aimed to create a freely available citation database that brings together electronic resource reviews and other content that can assist librarians in appraising and using electronic resources.
    Keywords:  Comparisons; Databases; ERM; Electronic Resources; Overviews; Reviews
    DOI:  https://doi.org/10.5195/jmla.2024.1862
  14. J Med Libr Assoc. 2024 Apr 01. 112(2): 107-116
      Objective: Health literacy and its potential impact on the wellbeing of patrons remains a highly regarded objective among health sciences and medical librarians when considering the learning outcomes of patron communities. Librarians are positioned to champion literacy instruction activities. This study aimed to examine health information-seeking attitudes and behaviors in an academic-based employee wellness program before and after health literacy workshops were developed and facilitated by an academic health sciences librarian.
    Methods: The intervention included instruction informed by Don Nutbeam's Health Literacy Framework and the Research Triangle Institute's Health Literacy Conceptual Framework. Sixty-five participants obtained through convenience sampling attended workshops and were invited to respond to pre- and post-session surveys. Using a quantitative quasi-experimental methodology, surveys collected health literacy indicators including preferred sources and handling practices of in-person and online health information.
    Results: Findings indicated that the workshops influenced information-seeking behaviors: participants reported a decrease in social media use for health and wellness information (-36%) and medical information (-13%). An increase in the use of consumer health databases (such as MedlinePlus) was also indicated post-workshop for health and wellness information (18%) and medical information (31%).
    Conclusion: Favorable impacts are evident in this small-scale study; however, more research is needed to confirm the influence of these methods on larger and more diverse populations. Librarians should continue to develop and disseminate theory-informed tools and methods aimed at engaging various communities in constructive health information seeking practices.
    Keywords:  Health literacy; academic library; employee wellness; health promotion; instruction techniques; library partnership
    DOI:  https://doi.org/10.5195/jmla.2024.1775
  15. J Med Libr Assoc. 2024 Apr 01. 112(2): 88-94
      Objective: Wikipedia is the most frequently accessed online health information resource and is well positioned as a valuable tool for public health communication and knowledge translation. The authors aimed to explore their institution's health and medical research reach by analyzing its presence in Wikipedia articles.
    Methods: In October 2022, a comprehensive database search was constructed in PubMed to retrieve clinical evidence syntheses published by at least one author affiliated with McMaster University from 2017 to 2022, inclusive. Altmetric Explorer was queried using PubMed Identifiers and article titles to access metadata and Wikipedia citation data. In total, 3,582 health evidence syntheses with at least one McMaster University-affiliated author were analyzed.
    Results: Six percent (n=219) of health evidence syntheses from the authors' institution were cited 568 times in 524 unique Wikipedia articles across 28 different language editions. Forty-five percent of citations appeared in English Wikipedia, suggesting a broad global reach for the institution's research outputs. When adjusted for open access publications, 8% of McMaster University's health evidence syntheses appear in Wikipedia.
    Conclusion: Altmetric Explorer is a valuable tool for exploring the reach of an institution's research outputs. Isolating Altmetric data to focus on Wikipedia citations has value for any institution wishing to gain more insight into the global, community-level reach of its contributions to the latest health and medical evidence.
    Keywords:  Citations; Wikipedia; research reach
    DOI:  https://doi.org/10.5195/jmla.2024.1730
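
The Wikipedia-citation lookup the authors performed in Altmetric Explorer can be approximated with Altmetric's public v1 API. A minimal sketch; the cited_by_wikipedia_count field is an assumption about that API's response format, and this is not the study's actual Explorer workflow:

```python
# Approximate per-PMID Wikipedia citation counts via Altmetric's v1 API.
# A 404 means Altmetric tracks no attention data for that article.
import requests

def wikipedia_citations(pmid: str) -> int:
    resp = requests.get(f"https://api.altmetric.com/v1/pmid/{pmid}")
    if resp.status_code == 404:
        return 0
    resp.raise_for_status()
    return resp.json().get("cited_by_wikipedia_count", 0)

pmids = ["29050000", "31040000"]  # hypothetical PubMed identifiers
print(sum(wikipedia_citations(p) for p in pmids), "Wikipedia citations")
```
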
  16. J Crit Care Med (Targu Mures). 2024 Jan;10(1): 85-95
      Introduction: Healthcare-associated infections have a significant impact on public health, and many patients and their next-of-kin are seeking information on the internet. The study aimed to assess the quality of online written content about healthcare-associated infections available in the English, Romanian, and Hungarian languages.
    Materials and methods: The study sample included 75 websites, 25 for each language subgroup. The assessment involved examining the general characteristics, adherence to established credibility criteria, and the completeness and accuracy of informational content. The evaluation was conducted using a topic-specific, evidence-based benchmark. Two evaluators independently graded completeness and accuracy; scores were recorded on a scale from 0 to 10. A comparative analysis of websites was performed, considering pertinent characteristics, and potential factors influencing information quality were subjected to testing. Statistical significance was set at 0.05.
    Results: For the overall study sample, the average credibility, completeness, and accuracy scores were 5.1 (SD 1.7), 2.4 (SD 1.5), and 5.9 (SD 1.0), respectively. Pairwise comparison tests revealed that English websites rated significantly higher than Romanian and Hungarian websites on all three quality measures (P<0.05). Website specialization, ownership, and main goal were not associated with credibility or content ratings. However, conventional medicine websites consistently scored higher than alternative medicine and other websites across all three information quality measures (P<0.05). Credibility scores were positively but weakly correlated with completeness (rho=0.273; P=0.0176) and accuracy scores (rho=0.365; P=0.0016).
    Conclusions: The overall quality ratings of information about healthcare-associated infections on English, Romanian, and Hungarian websites ranged from intermediate to low. The description of information regarding the symptoms and prevention of healthcare-associated infections was notably unsatisfactory. The study identified website characteristics possibly associated with higher-quality online sources about healthcare-associated infections, but additional research is needed to establish robust evidence.
    Keywords:  consumer health informatics; health-related information; infodemiology; internet; misinformation; nosocomial infections; quality assessment
    DOI:  https://doi.org/10.2478/jccm-2024-0011
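
For context, the reported rank correlations take a few lines of SciPy to reproduce; the score pairs below are invented stand-ins for the per-website ratings:

```python
# Spearman correlation between per-website credibility and completeness scores.
from scipy.stats import spearmanr

credibility = [5.5, 3.0, 6.5, 4.0, 7.0, 5.0, 2.5, 6.0]
completeness = [2.5, 1.0, 4.0, 2.0, 4.5, 3.0, 1.5, 3.5]

rho, p = spearmanr(credibility, completeness)
print(f"rho={rho:.3f}, P={p:.4f}")  # the paper reports rho=0.273, P=0.0176
```
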
  17. Plast Surg (Oakv). 2024 Aug;32(3): 452-459
      Background: Lower extremity reconstructive surgery is an evolving field wherein patients rely on accessible online materials to engage with their perioperative care. This study furthers existing research in this area by evaluating the readability, understandability, actionability, and cultural sensitivity of online health materials for lower extremity reconstruction. Methods: We identified the 10 first-appearing educational sites found by searching the phrases "leg saving surgery," "limb salvage surgery," and "leg reconstruction surgery." Readability analysis was conducted with validated tools, including the Simple Measure of Gobbledygook (SMOG). Understandability and actionability were assessed with the Patient Education Materials Assessment Tool (PEMAT), while cultural sensitivity was measured with the Cultural Sensitivity Assessment Tool (CSAT). A Cohen's κ value was calculated (PEMAT and CSAT analyses) for inter-rater agreement. Results: The mean SMOG reading level for websites was 13.12 (college-freshman reading level). The mean PEMAT understandability score was 61.8% and actionability score was 26.0% (κ = 0.8022), both below the 70% acceptability threshold. The mean CSAT score was 2.6 (κ = 0.73), exceeding the 2.5 threshold for cultural appropriateness. Conclusion: Online patient education materials (PEM) for lower extremity reconstruction continue to fall below standards of readability, understandability, and actionability; however, they meet standards of cultural appropriateness. As patients rely on these materials, creators can use validated tools and positive examples from existing PEM for greater patient accessibility.
    Keywords:  health literacy; lower extremity reconstruction; patient education material
    DOI:  https://doi.org/10.1177/22925503221120548
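
Two of the measures named above are easy to sketch: the SMOG grade from its published formula, and inter-rater agreement via Cohen's kappa in scikit-learn. The vowel-group syllable counter is a rough heuristic, not a validated tokenizer, and the ratings are hypothetical:

```python
# SMOG grade from its published formula, plus Cohen's kappa for two raters.
import math
import re
from sklearn.metrics import cohen_kappa_score

def smog_grade(text: str) -> float:
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z]+", text)
    # Polysyllables approximated as words with 3+ vowel groups.
    poly = sum(1 for w in words if len(re.findall(r"[aeiouy]+", w.lower())) >= 3)
    return 1.0430 * math.sqrt(poly * (30 / sentences)) + 3.1291

# Hypothetical per-item ratings from two raters (1 = agree, 0 = disagree).
rater_a = [1, 0, 1, 1, 0, 1, 1, 0]
rater_b = [1, 0, 1, 0, 0, 1, 1, 0]
print(cohen_kappa_score(rater_a, rater_b))
```
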
  18. Medicine (Baltimore). 2024 Aug 09. 103(32): e39229
      This study aimed to investigate the quality and readability of online rhinoplasty information provided on Turkish websites. We searched for the terms "rhinoplasty" (rinoplasti) and "nose job" (burun estetiği) in Turkish using the Google search engine in May 2023. The first 30 sites for each term were included in the evaluation. We used the DISCERN tool to evaluate quality and the Atesman and Cetinkaya-Uzun formulas to assess readability. According to the Atesman formula, the readability scores of all the websites were moderately difficult. According to the Cetinkaya-Uzun formula, the readability scores of websites were at the instructional reading level. The mean total DISCERN score was 2.33 ± 0.60, indicating poor quality. No statistically significant correlations were found between the Atesman or Cetinkaya-Uzun readability scores and the DISCERN scores across all websites (P > .05). Our analysis revealed key areas in which Turkish websites can improve the quality and readability of rhinoplasty information to support decision-making.
    DOI:  https://doi.org/10.1097/MD.0000000000039229
  19. Cureus. 2024 Jul;16(7): e63800
      Introduction The internet is increasingly the first port of call for patients introduced to new treatments. Unfortunately, many websites are of poor quality, thereby limiting patients' ability to make informed health decisions. Within thoracic surgery, the treatment options for pneumothoraces may be less intuitive for patients to understand compared to procedures such as lobectomies and wedge resections. Therefore, patients must receive high-quality information to make informed treatment decisions. No study to date has evaluated online information regarding pneumothorax surgery. Knowledge regarding the same may allow physicians to recommend appropriate websites to patients and supplement remaining knowledge gaps. Objective This study aims to evaluate the content, readability, and reliability of online information regarding pneumothorax surgery. Methods A total of 11 search terms including "pneumothorax surgery," "pleurectomy," and "pleurodesis" were each entered into Google, Bing, and Yahoo. The top 20 websites found through each search were screened, yielding 660 websites. Only free websites designed for patient consumption that provided information on pneumothorax surgery were included. This criterion excluded 581 websites, leaving 79 websites to be evaluated. To evaluate website reliability, the Journal of American Medical Association (JAMA) and DISCERN benchmark criteria were applied. To evaluate the readability, 10 standardized tools were utilized including the Flesch-Kincaid Reading Ease Score. To evaluate website content, a novel, self-designed 10-part questionnaire was utilized to assess whether information deemed essential by the authors was included. It evaluated whether websites comprehensively described the surgery process for patients, including pre- and post-operative care. Website authorship and year of publication were also noted. Results The mean JAMA score was 1.69 ± 1.29 out of 4, with only nine websites achieving all four reliability criteria. The median readability score was 13.42 (IQR: 11.48-16.23), which corresponded to a 13th-14th school grade standard. Only four websites were written at a sixth-grade reading level. In the novel content questionnaire, 31.6% of websites (n = 25) did not mention any side effects of pneumothorax surgery. Similarly, 39.2% (n = 31) did not mention alternative treatment options. There was no correlation between the date of website update and JAMA (r = 0.158, p = 0.123), DISCERN (r = 0.098, p = 0.341), or readability (r = 0.053, p = 0.606) scores. Conclusion Most websites were written above the sixth-grade reading level, as recommended by the US Department of Health and Human Services. Furthermore, the exclusion of essential information regarding pneumothorax surgery from websites highlights the current gaps in online information. These findings emphasize the need to create and disseminate comprehensive, reliable websites on pneumothorax surgery that enable patients to make informed health decisions.
    Keywords:  pleurectomy; pleurodesis; pneumothorax; readability; reliability; web-based information
    DOI:  https://doi.org/10.7759/cureus.63800
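
As a reference point, the Flesch-Kincaid Reading Ease score used in this study follows a fixed formula. A minimal sketch with a crude vowel-group syllable heuristic:

```python
# Flesch-Kincaid Reading Ease from its published constants.
import re

def flesch_reading_ease(text: str) -> float:
    words = re.findall(r"[A-Za-z]+", text)
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    syllables = sum(max(1, len(re.findall(r"[aeiouy]+", w.lower()))) for w in words)
    return 206.835 - 1.015 * (len(words) / sentences) - 84.6 * (syllables / len(words))

print(flesch_reading_ease("A chest tube drains air from around the lung."))
```
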
  20. Eur J Ophthalmol. 2024 Aug 07. 11206721241272251
      INTRODUCTION: The rise in popularity of chatbots, particularly OpenAI's ChatGPT, among the general public, and their utility in the healthcare field, is a topic of present controversy. The current study aimed to assess the reliability and accuracy of ChatGPT's responses to inquiries posed by parents, specifically focusing on a range of pediatric ophthalmological and strabismus conditions.
    METHODS: Patient queries were collected via a thematic analysis and posed to ChatGPT (version 3.5) across three unique instances each. The questions were divided into 12 domains totalling 817 unique questions. All responses were scored for quality by two experienced pediatric ophthalmologists using a Likert scale. All questions were evaluated for readability using the Flesch-Kincaid Grade Level (FKGL) and character counts.
    RESULTS: A total of 638 (78.09%) questions were scored to be perfectly correct, 156 (19.09%) were scored correct but incomplete and only 23 (2.81%) were scored to be partially incorrect. None of the responses were scored to be completely incorrect. Average FKGL score was 14.49 [95% CI 14.4004-14.5854] and the average character count was 1825.33 [95%CI 1791.95-1858.7] with p = 0.831 and 0.697 respectively. The minimum and maximum FKGL scores were 10.6 and 18.34 respectively. FKGL predicted character count, R²=.012, F(1,815) = 10.26, p = .001.
    CONCLUSION: ChatGPT provided accurate and reliable information for a majority of the questions. The readability of the questions was well above the typically required standards for adults, which is concerning. Despite these limitations, it is evident that this technology will play a significant role in the healthcare industry.
    Keywords:  Pediatric ophthalmology; artificial intelligence; chatbot; health information; strabismus
    DOI:  https://doi.org/10.1177/11206721241272251
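
The FKGL metric and the regression of character count on FKGL reported above can be sketched as follows; all data points are invented placeholders, not the study's data:

```python
# FKGL from its published constants, then a simple linear regression of
# character count on FKGL, as the authors report.
import re
from scipy.stats import linregress

def fkgl(text: str) -> float:
    words = re.findall(r"[A-Za-z]+", text)
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    syllables = sum(max(1, len(re.findall(r"[aeiouy]+", w.lower()))) for w in words)
    return 0.39 * (len(words) / sentences) + 11.8 * (syllables / len(words)) - 15.59

grades = [12.8, 14.1, 14.5, 15.2, 16.9, 13.6]  # hypothetical FKGL scores
chars = [1650, 1795, 1820, 1900, 1770, 1840]   # hypothetical character counts
fit = linregress(grades, chars)
print(f"R^2={fit.rvalue**2:.3f}, p={fit.pvalue:.3f}")
```
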
  21. Hand Surg Rehabil. 2024 Aug 03. pii: S2468-1229(24)00172-5. [Epub ahead of print] 101757
      Popular artificial intelligence systems, like ChatGPT, may be used by anyone to generate humanlike answers to questions. This study assessed whether ChatGPT version 3.5 (ChatGPTv3.5) or the first five results from a Google search provide more accurate, complete, and concise answers to the most common questions patients have about carpal tunnel syndrome. Three orthopedic hand surgeons blindly graded the answers using Likert scales to assess accuracy, completeness, and conciseness. ChatGPTv3.5 and the first five Google results provide answers to carpal tunnel syndrome questions that are similar in accuracy and completeness, but ChatGPTv3.5 answers are more concise. ChatGPTv3.5, being freely accessible to the public, is therefore a good resource for patients seeking concise, Google-equivalent answers to specific medical questions regarding carpal tunnel syndrome. ChatGPTv3.5, given its lack of updated sourcing and risk of presenting false information, should not replace frequently updated academic websites as the primary online medical resource for patients.
    Keywords:  Artificial intelligence; ChatGPT; carpal tunnel; internet
    DOI:  https://doi.org/10.1016/j.hansur.2024.101757
  22. J Nurs Educ. 2024 Aug;63(8): 556-559
      BACKGROUND: Artificial intelligence (AI)-based text generators, such as ChatGPT (OpenAI) and Google Bard (now Google Gemini), have demonstrated proficiency in predicting words and providing responses to various questions. However, their performance in answering clinical queries has not been well assessed. This comparative analysis aimed to assess the capabilities of ChatGPT and Google Gemini in addressing clinical questions.
    METHOD: Separate interactions with ChatGPT and Google Gemini were conducted to obtain responses to the clinical question, posed in a PICOT (patient, intervention, comparison, outcome, time) format. To ascertain the accuracy of the information provided by the AI chatbots, a thorough examination of full-text articles was conducted.
    RESULTS: Although ChatGPT exhibited relative accuracy in generating bibliographic information, it displayed some inconsistencies in clinical content. Conversely, Google Gemini generated citations and summaries that were entirely fabricated.
    CONCLUSION: Despite generating responses that may appear credible, both AI-based tools exhibited factual inaccuracies, raising substantial concerns about their reliability as potential sources of clinical information. [J Nurs Educ. 2024;63(8):556-559.].
    DOI:  https://doi.org/10.3928/01484834-20240426-01
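
The fabricated-citation problem described here suggests a simple verification step: check whether a chatbot-supplied title exists in PubMed at all. A hedged sketch using the public E-utilities esearch endpoint; the example title is hypothetical, and a fabricated citation would typically return False:

```python
# Existence check for a chatbot-supplied reference via an exact-title search.
import requests

ESEARCH = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"

def title_in_pubmed(title: str) -> bool:
    resp = requests.get(ESEARCH, params={"db": "pubmed",
                                         "term": f'"{title}"[Title]',
                                         "retmode": "json"})
    resp.raise_for_status()
    return int(resp.json()["esearchresult"]["count"]) > 0

print(title_in_pubmed("Early mobilization and delirium in intensive care"))
```
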
  23. Foot Ankle Surg. 2024 Aug 06. pii: S1268-7731(24)00181-4. [Epub ahead of print]
      BACKGROUND: This study evaluates the accuracy and readability of Google, ChatGPT-3.5, and 4.0 (two versions of an artificial intelligence model) responses to common questions regarding bunion surgery.
    METHODS: A Google search of "bunionectomy" was performed, and the first ten questions under "People Also Ask" were recorded. ChatGPT-3.5 and 4.0 were asked these ten questions individually, and their answers were analyzed using the Flesch-Kincaid Reading Ease and Gunning-Fog Level algorithms.
    RESULTS: When compared to Google, ChatGPT-3.5 and 4.0 had a larger word count with 315 ± 39 words (p < .0001) and 294 ± 39 words (p < .0001), respectively. A significant difference was found between ChatGPT-3.5 and 4.0 compared to Google using Flesch-Kincaid Reading Ease (p < .0001).
    CONCLUSIONS: Our findings demonstrate that ChatGPT provided significantly lengthier responses than Google and there was a significant difference in reading ease. Both platforms exceeded the seventh to eighth-grade reading level of the U.S. population.
    LEVEL OF EVIDENCE: N/A.
    Keywords:  Automated intelligence; Bunion; Bunionectomy; ChatGPT; Hallux valgus; Quality improvement
    DOI:  https://doi.org/10.1016/j.fas.2024.08.002
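
The Gunning Fog grade used above also has a published formula. A minimal sketch, approximating "complex" words as those with three or more vowel groups:

```python
# Gunning Fog index: 0.4 * (mean sentence length + percent complex words).
import re

def gunning_fog(text: str) -> float:
    words = re.findall(r"[A-Za-z]+", text)
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    complex_words = sum(1 for w in words
                        if len(re.findall(r"[aeiouy]+", w.lower())) >= 3)
    return 0.4 * (len(words) / sentences + 100 * complex_words / len(words))

print(gunning_fog("A bunionectomy removes the bony bump at the base of the big toe."))
```
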
  24. Otolaryngol Head Neck Surg. 2024 Aug 06.
      OBJECTIVE: To use an artificial intelligence (AI)-powered large language model (LLM) to improve the readability of patient handouts.
    STUDY DESIGN: Review of online material modified by AI.
    SETTING: Academic center.
    METHODS: Five handout materials obtained from the American Rhinologic Society (ARS) and the American Academy of Facial Plastic and Reconstructive Surgery websites were assessed using validated readability metrics. The handouts were input into OpenAI's ChatGPT-4 with the prompt: "Rewrite the following at a 6th-grade reading level." The understandability and actionability of both native and LLM-revised versions were evaluated using the Patient Education Materials Assessment Tool (PEMAT). Results were compared using Wilcoxon rank-sum tests.
    RESULTS: The mean readability scores of the standard (ARS, American Academy of Facial Plastic and Reconstructive Surgery) materials corresponded to "difficult," with reading categories ranging between high school and university grade levels. Conversely, the LLM-revised handouts had an average seventh-grade reading level. LLM-revised handouts had better readability in nearly all metrics tested: Flesch-Kincaid Reading Ease (70.8 vs 43.9; P < .05), Gunning Fog Score (10.2 vs 14.42; P < .05), Simple Measure of Gobbledygook (9.9 vs 13.1; P < .05), Coleman-Liau (8.8 vs 12.6; P < .05), and Automated Readability Index (8.2 vs 10.7; P = .06). PEMAT scores were significantly higher in the LLM-revised handouts for understandability (91 vs 74%; P < .05) with similar actionability (42 vs 34%; P = .15) when compared to the standard materials.
    CONCLUSION: Patient-facing handouts can be augmented by ChatGPT with simple prompting to tailor information with improved readability. This study demonstrates the utility of LLMs to aid in rewriting patient handouts and may serve as a tool to help optimize education materials.
    LEVEL OF EVIDENCE: Level VI.
    Keywords:  ChatGPT; Flesch‐Kincaid; artificial intelligence; handout information; large language model; natural language processing; patient education materials; readability
    DOI:  https://doi.org/10.1002/ohn.927
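
The prompting step described above is straightforward to script. A sketch using OpenAI's Python SDK; the model name and handout text are placeholders, and the study itself used ChatGPT-4 through its chat interface rather than the API:

```python
# Sketch of the handout-rewriting step via OpenAI's Python SDK.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

handout = "..."    # original patient handout text (placeholder)
response = client.chat.completions.create(
    model="gpt-4",  # placeholder model name
    messages=[{"role": "user",
               "content": "Rewrite the following at a 6th-grade reading level.\n\n"
                          + handout}],
)
revised = response.choices[0].message.content
# The revised text would then be rescored (Flesch-Kincaid, SMOG, PEMAT, etc.).
```
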
  25. Cureus. 2024 Jul;16(7): e63865
      BACKGROUND: Artificial intelligence (AI) is a burgeoning new field that has increased in popularity over the past couple of years, coinciding with the public release of large language model (LLM)-driven chatbots. These chatbots, such as ChatGPT, can be engaged directly in conversation, allowing users to ask them questions or issue other commands. Since LLMs are trained on large amounts of text data, they can also answer questions reliably and factually, an ability that has allowed them to serve as a source for medical inquiries. This study seeks to assess the readability of patient education materials on cardiac catheterization across four of the most common chatbots: ChatGPT, Microsoft Copilot, Google Gemini, and Meta AI.
    METHODOLOGY: A set of 10 questions regarding cardiac catheterization was developed using website-based patient education materials on the topic. We then asked these questions in consecutive order to four of the most common chatbots: ChatGPT, Microsoft Copilot, Google Gemini, and Meta AI. The Flesch Reading Ease Score (FRES) was used to assess the readability score. Readability grade levels were assessed using six tools: Flesch-Kincaid Grade Level (FKGL), Gunning Fog Index (GFI), Coleman-Liau Index (CLI), Simple Measure of Gobbledygook (SMOG) Index, Automated Readability Index (ARI), and FORCAST Grade Level.
    RESULTS: The mean FRES across all four chatbots was 40.2, while overall mean grade levels for the four chatbots were 11.2, 13.7, 13.7, 13.3, 11.2, and 11.6 across the FKGL, GFI, CLI, SMOG, ARI, and FORCAST indices, respectively. Mean reading grade levels across the six tools were 14.8 for ChatGPT, 12.3 for Microsoft Copilot, 13.1 for Google Gemini, and 9.6 for Meta AI. Further, FRES values for the four chatbots were 31, 35.8, 36.4, and 57.7, respectively.
    CONCLUSIONS: This study shows that AI chatbots are capable of providing answers to medical questions regarding cardiac catheterization. However, the responses across the four chatbots had overall mean reading grade levels at the 11th-13th-grade level, depending on the tool used. This means that the materials were at the high school and even college reading level, which far exceeds the recommended sixth-grade level for patient education materials. Further, there is significant variability in the readability levels provided by different chatbots as, across all six grade-level assessments, Meta AI had the lowest scores and ChatGPT generally had the highest.
    Keywords:  artificial intelligence; cardiac catheterization; chatgpt; google gemini; meta ai; microsoft copilot; patient education materials; readability
    DOI:  https://doi.org/10.7759/cureus.63865
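
A battery like this can be assembled with the textstat package, which implements five of the six indices; FORCAST is sketched from its published formula (20 minus one-tenth of the monosyllabic-word count per 150 words). The sample answer is a placeholder:

```python
# Grade-level battery via textstat plus a hand-rolled FORCAST estimate.
import re
import textstat

def forcast(text: str) -> float:
    words = re.findall(r"[A-Za-z]+", text)
    mono = sum(1 for w in words if len(re.findall(r"[aeiouy]+", w.lower())) == 1)
    return 20 - (mono * 150 / len(words)) / 10

answer = "Cardiac catheterization is a procedure used to examine your heart."
for fn in (textstat.flesch_kincaid_grade, textstat.gunning_fog,
           textstat.coleman_liau_index, textstat.smog_index,
           textstat.automated_readability_index, forcast):
    print(fn.__name__, fn(answer))
```
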
  26. Cureus. 2024 Jul;16(7): e63820
      Background Millions of individuals every day turn to the internet for assistance in understanding their hand conditions and potential treatments. While online educational resources appear abundant, there are concerns about whether resources meet the readability recommendations agreed upon by the American Medical Association (AMA) and the National Institutes of Health (NIH). Identifying educational resources that are readable for the majority of patients could improve a patient's understanding of their medical condition, subsequently improving their health outcomes. Methods The readability of the top five websites for the 10 most common hand conditions was examined using the Flesch-Kincaid (FK) analysis, comprising the FK reading ease and FK grade level. The FK reading ease score is an indicator of how difficult a text is to comprehend, while the FK grade level score is the grade level an individual reading a particular text would need to fully understand the text. Results The average FK reading ease was 56.00, which correlates with "fairly difficult (high school)". The average FK grade level corresponded to an eighth-grade reading level, far above the sixth-grade reading level recommendation set by the AMA and NIH. Conclusion Patient education, satisfaction, and the patient-physician relationship can all be improved by providing patients with more readable educational materials. Our study shows there is an opportunity for drastic improvement in the readability of online educational materials. Guiding patients with effective search techniques, advocating for the creation of more readable materials, and having a better understanding of the health literacy barriers patients face will allow hand surgeons to provide more comprehensive care to patients.
    Keywords:  flesch-kincaid; google; hand conditions; readability; reading level
    DOI:  https://doi.org/10.7759/cureus.63820
  27. J Surg Res. 2024 Aug 03. pii: S0022-4804(24)00409-8. [Epub ahead of print]302 200-207
      INTRODUCTION: Presenting health information at a sixth-grade reading level is advised to accommodate the general public's abilities. Breast cancer (BC) is the second-most common malignancy in women, but the readability of online BC information in English and Spanish, the two most commonly spoken languages in the United States, is uncertain.
    METHODS: Three search engines were queried using: "how to do a breast examination," "when do I need a mammogram," and "what are the treatment options for breast cancer" in English and Spanish. Sixty websites in each language were studied and classified by source type and origin. Three readability frameworks in each language were applied: Flesch Kincaid Reading Ease, Flesch Kincaid Grade Level, and Simple Measure of Gobbledygook (SMOG) for English, and Fernández-Huerta, Spaulding, and the Spanish adaptation of SMOG for Spanish. Median readability scores were calculated, and the corresponding grade levels determined. The percentage of websites requiring reading abilities above the sixth-grade level was calculated.
    RESULTS: English-language websites were predominantly hospital-affiliated (43.3%), while Spanish websites predominantly originated from foundation/advocacy sources (43.3%). Reading difficulty varied across languages: English websites ranged from 5th-12th grade (Flesch Kincaid Grade Level/Flesch Kincaid Reading Ease: 78.3%/98.3% above sixth grade), while Spanish websites spanned 4th-10th grade (Spaulding/Fernández-Huerta: 95%/100% above sixth grade). SMOG/Spanish adaptation of SMOG scores showed lower reading difficulty for Spanish, with few websites exceeding sixth grade (1.7% and 0% for English and Spanish, respectively).
    CONCLUSIONS: Online BC resources have reading difficulty levels that exceed the recommended sixth grade, although these results vary depending on readability framework. Efforts should be made to establish readability standards that can be translated into Spanish to enhance accessibility for this patient population.
    Keywords:  Breast cancer information; English websites; Healthcare communication; Information accessibility; Online health information; Readability; Spanish websites
    DOI:  https://doi.org/10.1016/j.jss.2024.07.026
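
For the Spanish-language scoring, the Fernández-Huerta index adapts Flesch's formula. A sketch as it is commonly operationalized (e.g., in the textstat library); the Spanish syllable heuristic below is crude and illustrative only:

```python
# Fernandez-Huerta index: 206.84 - 60*(syllables per word)
#                         - 1.02*(words per sentence).
import re

def fernandez_huerta(text: str) -> float:
    words = re.findall(r"[A-Za-zÁÉÍÓÚáéíóúñÑüÜ]+", text)
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    syllables = sum(max(1, len(re.findall(r"[aeiouáéíóúü]+", w.lower())))
                    for w in words)
    return 206.84 - 60 * (syllables / len(words)) - 1.02 * (len(words) / sentences)

print(fernandez_huerta("La mamografía es una radiografía de la mama."))
```
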
  28. Cureus. 2024 Jul;16(7): e64114
      INTRODUCTION: ChatGPT (OpenAI, San Francisco, CA, USA) is a novel artificial intelligence (AI) application that is used by millions of people, and the numbers are growing by the day. Because it has the potential to be a source of patient information, the study aimed to evaluate the ability of ChatGPT to answer frequently asked questions (FAQs) about asthma with consistent reliability, acceptability, and easy readability.
    METHODS: We collected 30 FAQs about asthma from the Global Initiative for Asthma website. ChatGPT was asked each question twice, by two different users, to assess for consistency. The responses were evaluated by five board-certified internal medicine physicians for reliability and acceptability. The consistency of responses was determined by the differences in evaluation between the two answers to the same question. The readability of all responses was measured using the Flesch Reading Ease Scale (FRES), the Flesch-Kincaid Grade Level (FKGL), and the Simple Measure of Gobbledygook (SMOG).
    RESULTS: Sixty responses were collected for evaluation. Fifty-six (93.33%) of the responses were of good reliability. The average rating of the responses was 3.65 out of 4 total points. 78.3% (n=47) of the responses were found acceptable by the evaluators to be the only answer for an asthmatic patient. Only two (6.67%) of the 30 questions had inconsistent answers. The average readability of all responses was determined to be 33.50±14.37 on the FRES, 12.79±2.89 on the FKGL, and 13.47±2.38 on the SMOG.
    CONCLUSION: Compared to online websites, we found that ChatGPT can be a reliable and acceptable source of information for asthma patients in terms of information quality. However, all responses were of difficult readability, and none followed the recommended readability levels. Therefore, the readability of this AI application requires improvement to be more suitable for patients.
    Keywords:  artificial intelligence; asthma; chatgpt; large language models; medical education; patient information; readability; reliability
    DOI:  https://doi.org/10.7759/cureus.64114
  29. Cureus. 2024 Jul;16(7): e63857
      BACKGROUND: Online video hosting websites such as YouTube have been increasingly used by medical institutions to spread information about new and exciting topics. However, due to the large number of videos uploaded daily and the lack of peer review, few attempts have been made to assess the quantity and quality of information that is uploaded on YouTube. For this study, our team assessed the available content on the transoral robotic surgery (TORS) procedure.
    METHODS: A qualitative case study model was employed. Videos related to TORS were collected using a unified search protocol. Each video was then analyzed, and metrics of the following data points were collected: views, likes, comments, upload date, length of video, author type, author, and region of origin. Each dataset was analyzed by two distinct authors, and interrater reliability was calculated. Quantitative and qualitative statistics were curated.
    RESULTS: A total of 124 videos were analyzed for this review. The breakdown of videos was as follows: 15.32% (19) in the educational for patients category, 16.94% (21) in the educational for trainees category, 30.65% (38) in the procedural overview category, 8.87% (11) in the patient experience (PE) category, 10.48% (13) in the promotional category, 12.10% (15) in the other category, and 5.65% (7) in the irrelevant (IR) category. The total number of views across all videos analyzed was 2,589,561. The total number of likes was 14,827, and the total number of comments was 2,606. The average video length was 8.63 minutes. The most viewed category was the PE category at 1,014,738 and the most liked at 1,714. The least viewed category was IR at 21,082. The PE category had the most engagement based on combined comments and likes. The most watched video, with 774,916 views, was in the PE category under the "TORS for Thyroidectomy" search term and was titled "Thyroid Surgery (Thyroidectomy)."
    CONCLUSION: As the prevalence of online videos regarding medical devices, procedures, and treatments increases, patients and trainees alike will look toward resources such as YouTube to augment their understanding. Patients, providers, and medical education platforms should take heed of the promise and pitfalls of medical content on YouTube.
    Keywords:  adult education; otolaryngology education; sleep apnea surgery; transoral robotic surgery; youtube study
    DOI:  https://doi.org/10.7759/cureus.63857
  30. Cureus. 2024 Jul;16(7): e63769
      INTRODUCTION: The Magnetic Resonance Imaging (MRI) machine is a subset of nuclear magnetic resonance imaging technology that produces images of the body using magnetic field gradients. The MRI machine has two components: the computer-based control centre room and the adjacent MRI machine room where the patient undergoes the scan.
    AIMS: This study aimed to assess the quality and reliability of YouTube videos about MRI machines, MRI scans, and MRI claustrophobia and compare the quality and reliability of the videos among different types of uploaders.
    METHODOLOGY: The YouTube search algorithm and a Google Sheets questionnaire were used to evaluate 10 videos that satisfied the inclusion criteria of the study. The video analytics included were title, number of views, likes and dislikes, comments, duration, source, and content. The quality of each video was established using the Global Quality Score (GQS), Reliability Score, and Video Power Index (VPI), where each quantifier went through statistical analysis using SPSS software, version 21.0 (IBM Corp., Armonk, NY) to determine if there was any significance.
    RESULTS: To determine statistical differences between the groups, the Kruskal-Wallis test was used on the quantifiers GQS, reliability score, and VPI to generate p-values. The p-values were 0.467 for VPI, 0.277 for GQS, and 0.316 for reliability. All the p-values are greater than 0.05, showing that there is no statistical support for any significant difference between the groups in their VPI, GQS, and reliability scores.
    CONCLUSIONS: YouTube videos with high-quality and reliable information on MRI machines, MRI procedures, and claustrophobia, especially those uploaded by clinicians and hospitals, can provide correct information, helping patients decide to undergo these procedures and alleviate claustrophobia.
    Keywords:  global quality score; machine phobia; mri machines; mri process phobia; video power index; youtube
    DOI:  https://doi.org/10.7759/cureus.63769
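
The Kruskal-Wallis comparison reported above takes one line in SciPy; the per-group GQS scores below are invented stand-ins:

```python
# Kruskal-Wallis test of GQS scores across uploader groups.
from scipy.stats import kruskal

gqs_clinicians = [4, 5, 4, 3, 4]
gqs_hospitals = [3, 4, 4, 5]
gqs_other = [3, 3, 2, 4]

stat, p = kruskal(gqs_clinicians, gqs_hospitals, gqs_other)
print(f"H={stat:.2f}, p={p:.3f}")  # p > 0.05 would match the reported result
```
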
  31. Comput Inform Nurs. 2024 Aug 05.
      This study, conducted using the descriptive-correlational model, aims to evaluate the content, reliability, and quality of insulin pen injection videos on YouTube. The video-sharing platform YouTube was searched with the keyword "insulin pen injection." Of the 101 relevant videos, 49 were included in the study. Video contents were evaluated independently using the "Insulin Pen Injection Guide Form," their reliability using the "DISCERN Questionnaire," and their quality using the "Global Quality Scale." Of the 49 videos that met the inclusion criteria, 55.1% contained useful information, and 44.8% contained misleading information. The videos that were found to be useful were longer and had higher DISCERN and content scores. A statistically significant positive correlation was determined between the videos' DISCERN and content scores (r = 0.772, P < .001). More than half of the insulin pen injection videos available on YouTube are helpful, but the number of misleading videos is close to that of the helpful ones. Thus, it may be recommended that the insulin pen injection videos be evaluated by experts in line with evidence-based guidelines before sharing them on YouTube.
    DOI:  https://doi.org/10.1097/CIN.0000000000001182
  32. Medicine (Baltimore). 2024 Aug 09. 103(32): e39254
      Due to the lengthy and challenging nature of traumatic brain injury (TBI) rehabilitation, patients and carers increasingly rely on YouTube for information. However, no previous research has assessed the quality and reliability of these TBI rehabilitation videos on this platform. This study aims to assess the quality and reliability of YouTube videos on TBI rehabilitation. In this cross-sectional study, a YouTube search with the keyword "traumatic brain injury rehabilitation" was performed, and the first 100 videos were listed according to relevancy. After applying exclusion criteria, a total of 72 videos were included in the analysis. DISCERN, Journal of the American Medical Association, and Global Quality Score were used to evaluate the quality and reliability of the videos. Video characteristics, including the number of likes, dislikes, duration, and source of upload, were recorded. The mean DISCERN total score was determined to be 39.56 ± 8.4. Additionally, the mean Journal of the American Medical Association score was 1.93 ± 0.57, the Global Quality Score was 2.6 ± 0.81, and the DISCERN quality score was 2.55 ± 0.79. Analysis showed that videos with a longer duration (P < .001) and those uploaded earlier (P = .002) were more likely to be of higher quality. Videos produced by healthcare professionals had higher DISCERN scores (P = .049) than those uploaded by non-healthcare professionals. Examination of YouTube videos on TBI rehabilitation indicates a moderate overall quality. The study revealed that videos uploaded by healthcare professionals have higher quality. For obtaining reliable information on TBI rehabilitation, it is also advisable to prioritize videos with longer durations and earlier upload dates. Given the significant role of social media platforms in educational outreach for rehabilitation, it is crucial to enhance the quality of these videos through appropriate measures.
    DOI:  https://doi.org/10.1097/MD.0000000000039254
  33. Int J Gynaecol Obstet. 2024 Aug 09.
      OBJECTIVE: To assess the quality, reliability, and level of misinformation in TikTok videos about hysteroscopy.
    METHODS: A cross-sectional analysis of TikTok videos retrieved using "hysteroscopy" as the search term was performed. The Patient Education Materials Assessment Tool for audiovisual content (PEMAT A/V), the modified DISCERN (mDISCERN), the Global Quality Scale (GQS), the Video Information and Quality Index (VIQI), and a misinformation assessment were used.
    RESULTS: Of three hundred videos captured, 156 were excluded and 144 were included. Most videos were partially accurate or uninformative (43.8% and 34.7%, respectively). Non-healthcare providers produced more inaccurate or uninformative videos than healthcare workers (51.1% vs 4.0%; P < 0.001). Compared to content by professionals, content by patients showed increased distrust towards gynecologists (11.7% vs 0%; P = 0.012) and increased incidence of anxiety and concern towards hysteroscopy (25.5% vs 2%; P < 0.001). PEMAT A/V scores for understandability and actionability were low at 42.9% (interquartile range [IQR]: 11.1-70) and 0% (IQR: 0-0), respectively. Understandability (P < 0.001) and actionability (P = 0.001) were higher for professional-created content relative to patients' videos. Similarly, median mDISCERN score was low (1 [IQR 0-2]), with significantly higher score for healthcare professionals compared to patients (P < 0.001). Overall video quality was also low, with median VIQI and GQS score of 7 (IQR 4-11) and 1 (IQR 1-3), respectively, and significantly higher scores for healthcare workers' captions compared to patients' for both (P < 0.001 and P = 0.001, respectively).
    CONCLUSION: The quality of TikTok videos on hysteroscopy appears unsatisfactory and misinformative, with low understandability and actionability scores. Videos recorded by healthcare workers show higher quality and less misinformation than those by patients. Raising awareness regarding the low quality of medical information on social media is crucial to increase future reliability and trustworthiness.
    Keywords:  TikTok; healthcare professionals; hysteroscopy; internet; misinformation; patients; quality; reliability; social media; video
    DOI:  https://doi.org/10.1002/ijgo.15846
  34. J Med Libr Assoc. 2024 Apr 01. 112(2): 117-124
      Background: Health literacy outreach is commonplace within public and hospital libraries but less so in academic libraries, where it is often viewed as not integral. Academic health science libraries may collaborate with public libraries to provide public health information literacy programming or "train the trainer" sessions, but examples of academic health science librarians leading community health initiatives are still limited.
    Case Presentation: This case report discusses a collaborative project between Gonzaga's Foley Center Library, the School of Nursing and Human Physiology, and a local elementary school to promote health literacy for students and their families, led by an Academic Health Sciences Librarian. The project scope included delivering nutrition education to elementary school students and their families, but pandemic closures limited plans for in-person programming. Conversations with stakeholders led to additional project opportunities, including tabling at the local block party, collaborating on a campus visit for 5th and 6th graders, supporting middle school cooking classes, and creating a toolkit for elementary and middle school teachers to support curriculum about healthy body image and potential disordered eating.
    Conclusion: This project demonstrates one example of how academic libraries can partner with other campus departments to support health literacy outreach in their local communities. The pandemic made planning for in-person programming tenuous, but by expanding meetings to include staff from other areas of the university, the project team was able to tap into additional outreach opportunities. This work fostered close relationships with the local elementary school, providing the groundwork for collaborative health programming in the future, though more thorough assessment is suggested for future projects.
    Keywords:  Academic libraries; Community Engagement; Community Outreach; Health Information Literacy; Nutrition; children's health
    DOI:  https://doi.org/10.5195/jmla.2024.1678
  35. J Med Libr Assoc. 2024 Apr 01. 112(2): 81-87
      Background: NYU Langone Health offers a collaborative research block for PGY3 Primary Care residents that employs a secondary data analysis methodology. As discussions of data reuse and secondary data analysis have grown in the data library literature, we sought to understand what attitudes internal medicine residents at a large urban academic medical center had around secondary data analysis. This case report describes a novel survey on resident attitudes around data sharing.
    Methods: We surveyed internal medicine residents in three tracks: Primary Care (PC), Categorical, and Clinician-Investigator (CI), as part of a larger pilot study on implementation of a research block. All three tracks are in our institution's internal medicine program. In discussions with residency directors and the chief resident, the term "secondary data analysis" was chosen over "data reuse" due to this being more familiar to clinicians, but examples were given to define the concept.
    Results: We surveyed a population of 162 residents; 67 responded, representing a 41.36% response rate. Strong majorities of residents exhibited positive views of secondary data analysis. Moreover, in our sample, residents with curricular exposure to secondary data analysis research were more likely than those without such exposure to report that secondary data analysis takes less time and is less difficult to conduct.
    Discussion: The survey reflects that residents believe secondary data analysis is worthwhile, and this highlights opportunities for data librarians. As current residents transition into professional roles as clinicians, educators, and researchers, libraries have an opportunity to bolster support for data curation and education.
    Keywords:  GME; Graduate Medical Education; data curation; data reuse; data services; residents; secondary data analysis; surveys
    DOI:  https://doi.org/10.5195/jmla.2024.1772
  36. Health Info Libr J. 2024 Aug 05.
      BACKGROUND: The COVID-19 pandemic has compelled governments globally to formulate policies addressing the unique needs of their populations. These policies are critical in disseminating accurate information and enhancing health literacy during crises.
    OBJECTIVE: This narrative review aims to identify and assess effective information and health literacy policies implemented during pandemics.
    METHODS: A comprehensive literature search was performed across five electronic information sources (PubMed, Science Direct, ProQuest, Emerald Insight, Scopus), supplemented by Google Scholar. The analysis employed Walt and Gilson's health policy triangle framework to categorize and evaluate the findings.
    RESULTS: The review revealed that the policies could be grouped into several key categories: educational programs, laws and regulations, knowledge sharing, national programs, and different information sources. The development of these policies involved multifaceted processes influenced by political, scientific, economic, cultural and social factors, as well as the involvement of multiple stakeholders.
    CONCLUSIONS: This review offers significant insights and actionable recommendations for policymakers and stakeholders. By understanding the dimensions and components of effective information and health literacy policies, stakeholders can better prepare for and respond to future pandemics and similar health crises.
    Keywords:  access to information; health literacy; health policy; pandemic
    DOI:  https://doi.org/10.1111/hir.12544
  37. J Med Libr Assoc. 2024 Apr 01. 112(2): 64-66
      The Journal of the Medical Library Association (JMLA) has made the decision to change our "revise-at-will" policy to instead adopt firmer deadlines for manuscript resubmissions. Beginning with this issue, manuscripts returned to authors with a "revise and resubmit" decision must be resubmitted within two months of the editorial decision. Likewise, manuscripts returned to authors with a "revisions required" decision must be resubmitted within one month of the editorial decision. This editorial discusses JMLA's experience using a "revise-at-will" policy and outlines some anticipated benefits of the new resubmission deadlines.
    DOI:  https://doi.org/10.5195/jmla.2024.1902