bims-skolko Biomed News
on Scholarly communication
Issue of 2024‒06‒23
thirteen papers selected by
Thomas Krichel, Open Library Society



  1. Front Sociol. 2024;9: 1157514
      In September 2021 I made a collection of interview transcripts available for public use under a Creative Commons license through the Princeton DataSpace. The interviews include 39 conversations I had with gig workers at Amazon Flex, Uber, and Lyft in 2019 as part of a study on automation efforts within these organizations. I made this decision because (1) contributing to a publicly available data set was a requirement of my funding and (2) I saw it as an opportunity to engage in the collaborative qualitative science experiments emerging in Science and Technology Studies. This article documents my thought process and the step-by-step decisions involved in designing a study, gathering data, masking it, and publishing it in a public archive. Importantly, once I decided to publish these data, I determined that each choice about how the study would be designed and implemented had to be assessed, in a very deliberate way, for risk to the interviewees. This account is not meant to be comprehensive or to cover every possible condition a researcher may face while producing qualitative data. Rather, I aimed to be transparent both about my interview data and about the process it took to gather and publish these data, illustrating my thought process behind each design decision in hopes that it could be useful to future researchers considering their own data publishing process.
    Keywords:  archival data; interview data; open source; qualitative methods; secondary data
    DOI:  https://doi.org/10.3389/fsoc.2024.1157514
  2. J Clin Epidemiol. 2024 Jun 14. pii: S0895-4356(24)00182-3. [Epub ahead of print] 111427
      OBJECTIVES: Retraction is intended to be a mechanism to correct the published body of knowledge when necessary due to fraudulent, fatally flawed, or ethically unacceptable publications. However, the success of this mechanism requires that retracted publications be consistently identified as such and that retraction notices contain sufficient information to understand what is being retracted and why. Our study investigated how clearly and consistently retracted publications in public health are being presented to researchers.
    STUDY DESIGN AND SETTING: This is a cross-sectional study, using 441 retracted research publications in the field of public health. Records were retrieved for each of these publications from 11 resources, while retraction notices were retrieved from publisher websites and full-text aggregators. The identification of the retracted status of the publication was assessed using criteria from the Committee on Publication Ethics (COPE) and the National Library of Medicine (NLM). The completeness of the associated retraction notices was assessed using criteria from COPE and Retraction Watch.
    RESULTS: 2841 records for retracted publications were retrieved, of which fewer than half indicated that the article had been retracted. Fewer than 5% of publications were identified as retracted in all of the resources in which they were available. Within single resources, whether and how retracted publications were identified varied. Retraction notices were frequently incomplete, with no notices meeting all criteria.
    CONCLUSIONS: The observed inconsistencies and incomplete notices pose a threat to the integrity of scientific publishing and highlight the need to better align with existing best practices to ensure more effective and transparent dissemination of information on retractions.
    Keywords:  information dissemination; research misconduct; retraction; retraction notices; scholarly publishing; scientific misconduct
    DOI:  https://doi.org/10.1016/j.jclinepi.2024.111427
  3. Cureus. 2024 May;16(5): e60461
      INTRODUCTION:  The utility of ChatGPT has recently caused consternation in the medical world. While it has been utilized to write manuscripts, only a few studies have evaluated the quality of manuscripts generated by AI (artificial intelligence).
    OBJECTIVE:  We evaluate the ability of ChatGPT to write a case report when provided with a framework. We also provide practical considerations for manuscript writing using AI.
    METHODS: We compared a manuscript written by a blinded human author (10 years of medical experience) with a manuscript written by ChatGPT on a rare presentation of a common disease. We used multiple iterations of the manuscript generation request to derive the best ChatGPT output.
    PARTICIPANTS, OUTCOMES, AND MEASURES: 22 human reviewers compared the manuscripts using parameters that characterize human writing and relevant standard manuscript assessment criteria, viz., the scholarly impact quotient (SIQ). We also compared the manuscripts using the "average perplexity score" (APS), "burstiness score" (BS), and "highest perplexity of a sentence" (GPTZero parameters used to detect AI-generated content).
    RESULTS: The human manuscript had a significantly higher quality of presentation and nuanced writing (p<0.05). Both manuscripts had a logical flow. 12/22 reviewers were able to identify the AI-generated manuscript (p<0.05), but 4/22 reviewers wrongly identified the human-written manuscript as AI-generated. GPTZero software erroneously identified four sentences of the human-written manuscript to be AI-generated.
    CONCLUSION:  Though AI showed an ability to highlight the novelty of the case report and project a logical flow comparable to the human manuscript, it could not outperform the human writer on all parameters. The human manuscript showed a better quality of presentation and more nuanced writing. The practical considerations we provide for AI-assisted medical writing will help to better utilize AI in manuscript writing.
    Keywords:  average perplexity score; burstiness score; chatgpt; comparison with human writing; hypothyroidism
    DOI:  https://doi.org/10.7759/cureus.60461
  4. J Am Med Inform Assoc. 2024 Jun 14. pii: ocae139. [Epub ahead of print]
      OBJECTIVE: Investigate the use of advanced natural language processing models to streamline the time-consuming process of writing and revising scholarly manuscripts.
    MATERIALS AND METHODS: For this purpose, we integrate large language models into the Manubot publishing ecosystem to suggest revisions for scholarly texts. Our AI-based revision workflow employs a prompt generator that incorporates manuscript metadata into templates, generating section-specific instructions for the language model. The model then generates revised versions of each paragraph for human authors to review. We evaluated this methodology through 5 case studies of existing manuscripts, including the revision of this manuscript.
    RESULTS: Our results indicate that these models, despite some limitations, can grasp complex academic concepts and enhance text quality. All changes to the manuscript are tracked using a version control system, ensuring transparency in distinguishing between human- and machine-generated text.
    CONCLUSIONS: Given the significant time researchers invest in crafting prose, incorporating large language models into the scholarly writing process can significantly improve the type of knowledge work performed by academics. Our approach also enables scholars to concentrate on critical aspects of their work, such as the novelty of their ideas, while automating tedious tasks like adhering to specific writing styles. Although the use of AI-assisted tools in scientific authoring is controversial, our approach, which focuses on revising human-written text and provides change-tracking transparency, can mitigate concerns regarding AI's role in scientific writing.
    Keywords:  Manubot; artificial intelligence; large language models; scholarly publishing
    DOI:  https://doi.org/10.1093/jamia/ocae139
  5. Am J Emerg Med. 2024 Jun 08. pii: S0735-6757(24)00267-5. [Epub ahead of print] 82: 105-106
      Large Language Models (LLMs) represent a transformative advancement in the preparation of medical scientific manuscripts, offering significant benefits such as reducing drafting time, enhancing linguistic precision, and aiding non-native English speakers. These models, which generate text by learning from extensive datasets, can streamline the publication process and maintain consistency across collaborative projects. However, their limitations, including the risk of generating plausible yet incorrect text and the potential for biases, necessitate careful oversight. Ethical concerns about accuracy, authorship, and transparency need to be carefully considered. The American Journal of Emergency Medicine has adopted a policy permitting LLM use with full disclosure and author responsibility, emphasizing the need for ongoing policy evolution in response to technological advancements.
    DOI:  https://doi.org/10.1016/j.ajem.2024.06.002
  6. Lancet. 2024 Jun 15. pii: S0140-6736(24)01032-8. [Epub ahead of print] 403(10444): 2592-2593
      
    DOI:  https://doi.org/10.1016/S0140-6736(24)01032-8
  7. An Acad Bras Cienc. 2024; pii: S0001-37652024000201801. [Epub ahead of print] 96(2): e20231068
      Open access (OA) publishing provides free online access to research articles without subscription fees. In Brazil, the absence of financial support from academic institutions and limited government policies pose challenges to OA publication. Here, we used data from the Web of Science and Scopus to compare Brazilian publishing with global trends in journal accessibility and scientific quality metrics. Brazilian authors publish more OA articles, particularly in Global South journals. While OA correlates with quality for global authors, it had no such effect for Brazilian science. To maximize impact, Brazilian authors should prioritize Q1 journals regardless of OA status. Publication in high-impact or Global North journals seems more relevant for Brazilian science than OA. Our findings indicate that the present open access policy has been ineffective at improving the impact of Brazilian science, providing insights to guide the formulation of scientific public policies.
    DOI:  https://doi.org/10.1590/0001-3765202420231068
  8. Adv Health Sci Educ Theory Pract. 2024 Jun 20.
      This column is intended to address the kinds of knotty problems and dilemmas with which many scholars grapple in studying health professions education. In this article, the authors address the challenges of proofreading a manuscript. Emerging researchers might assume that someone on the production team will catch any errors, but this may not always be the case. We emphasize the importance of guiding mentees to take the process of preparing a manuscript for submission seriously.
    DOI:  https://doi.org/10.1007/s10459-024-10352-0
  9. Ecol Evol. 2024 Jun;14(6): e11543
      Many journals have strict word limits, and authors therefore spend considerable time shortening manuscripts. Here, we provide pointers for efficiently doing so while retaining key content. We include general guidance, tips for condensing the different parts of a scientific paper, and advice on what to avoid when shortening manuscripts. We hope that readers will find our guidance helpful.
    Keywords:  conciseness; revision; scientific writing; shortening; writing guidelines
    DOI:  https://doi.org/10.1002/ece3.11543