bims-skolko Biomed News
on Scholarly communication
Issue of 2024-07-14
23 papers selected by
Thomas Krichel, Open Library Society



  1. Ecol Evol. 2024 Jul;14(7): e11698
      Open science (OS) awareness and skills are increasingly becoming an essential part of everyday scientific work, as, for example, many journals require authors to share data. However, following an OS workflow can seem challenging at first, so instructions from journals and other guidelines are important. But how comprehensive are they in the field of ecology and evolutionary biology (Ecol Evol)? To find out, we reviewed 20 published OS guideline articles aimed at ecologists or evolutionary biologists, together with the data policies of 17 Ecol Evol journals, to chart the current landscape of OS guidelines in the field, find potential gaps, identify field-specific barriers to OS, and discuss solutions to overcome these challenges. We found that many of the guideline articles covered similar topics, despite being written for a narrow field or specific target audience. Likewise, many of the guideline articles mentioned similar obstacles that could hinder or postpone a transition to open data sharing. Thus, there may be a need for a more widely known, general OS guideline for Ecol Evol. Following the same guideline could also make the OS practices carried out in the field more uniform. However, some topics, such as long-term experiments and physical samples, were mentioned surprisingly seldom, although they are typical issues in Ecol Evol. Of the journals, 15 out of 17 expected or at least encouraged data sharing, either for all articles or under specific conditions (e.g., for registered reports), and 10 of those required data sharing at the submission phase. The coverage of journal data policies varied greatly, from practically non-existent to very extensive. As journals can contribute greatly by leading the way and making open data useful, we recommend that publishers and journals invest in clear and comprehensive data policies and instructions for authors.
    Keywords:  FAIR; Open Science; data policy; data sharing; guideline
    DOI:  https://doi.org/10.1002/ece3.11698
  2. Account Res. 2024 Jul 07. 1-19
      The exponential growth of MDPI and Frontiers over the last decade has been powered by their extensive use of special issues. The "special issue-ization" of journal publishing has been particularly associated with new publishers and seen as potentially "questionable." Through an extended case-study analysis of three journals owned by one of the "big five" commercial publishers, this paper explores the risks that this growing use of special issues presents to research integrity. All three case-study journals show sudden and marked changes in their publication patterns. Special issue editorials and retraction notices were analysed to determine the specifics of the special issues and the reasons for retraction, and the data were summarised with descriptive statistics. Findings suggest that these commercial publishers are also promoting special issues and that article retractions are often connected to guest editor manipulation. This underscores the threat that "special issue-ization" presents to research integrity and highlights both the risks posed by the guest editor model and the importance of extending this analysis to long-established commercial publishers. The paper emphasizes the need for an in-depth examination of the underlying structures and political economy of science, and a discussion of the rise of gaming and manipulation within higher education systems.
    Keywords:  Research integrity; academic publishing; higher education; special issues
    DOI:  https://doi.org/10.1080/08989621.2024.2374567
  3. J Gen Intern Med. 2024 Jul 09.
      The Open Access movement has transformed the landscape of medical publishing. Federal regulations regarding Open Access have expanded in the USA, and journals have adapted by offering a variety of Open Access models that range widely in cost and accessibility. For junior faculty with little to no funding, navigating this ever-changing landscape while simultaneously balancing the pressures of publication and promotion may present a particular challenge. Open Access provides the opportunity to amplify the reach and impact of scientific research, yet it often comes at a cost that may not be universally affordable. In this perspective, we discuss the impact of Open Access through the lens of junior faculty in general internal medicine. We describe the potential benefits and pitfalls of Open Access for junior faculty, with a focus on research dissemination and cost. Finally, we propose sustainable solutions at the individual and systems levels to help navigate the world of Open Access and promote career growth and development.
    DOI:  https://doi.org/10.1007/s11606-024-08921-5
  4. Naunyn Schmiedebergs Arch Pharmacol. 2024 Jul 11.
      There is a substantial body of scientific literature on the use of third-party services (TPS) by academics to assist as "publication consultants" in scholarly publishing. TPS provide a wide range of scholarly services to research teams that lack the equipment, skills, motivation, or time to produce a paper without external assistance. While services such as language editing, statistical support, or graphic design are common and often legitimate, some TPS also provide illegitimate services and send unsolicited e-mails (spam) to academics offering them. Such illegitimate TPS have the potential to threaten the integrity of the peer-reviewed scientific literature. In extreme cases, for-profit agencies known as "paper mills" even offer fake scientific publications or authorship slots for sale. The use of such illegitimate services, as well as the failure to acknowledge their use, is an ethical violation in academic publishing, while the failure to declare support from a TPS can be considered a form of contract fraud. We discuss some of the literature on TPS, highlight services currently offered by ten of the largest commercial publishers, and expect authors to be transparent about the use of these services in their publications. From an ethical/moral (i.e., non-commercial) point of view, it is the responsibility of editors, journals, and publishers, and it should be in their best interest, to ensure that illegitimate TPS are identified and prohibited, while the use of publisher-provided TPS is properly disclosed in publications.
    Keywords:  English; Ethics; Language editing; Outsourcing; Support; Translation; Unethical behavior
    DOI:  https://doi.org/10.1007/s00210-024-03177-6
  5. PeerJ Comput Sci. 2024 ;10: e2066
      Data-driven computational analysis is becoming increasingly important in biomedical research as the amount of data being generated continues to grow. However, the lack of practices for sharing research outputs, such as data, source code, and methods, affects the transparency and reproducibility of studies, which are critical to the advancement of science. Many published studies are not reproducible because insufficient documentation, code, and data are shared. We conducted a comprehensive analysis of 453 manuscripts published between 2016 and 2021 and found that 50.1% of them failed to share the analytical code. Even among those that did disclose their code, a vast majority failed to offer additional research outputs, such as data. Furthermore, only one in ten articles organized their code in a structured and reproducible manner. We discovered a significant association between the presence of code availability statements and increased code availability. Additionally, a greater proportion of studies conducting secondary analyses shared their code compared with those conducting primary analyses. In light of our findings, we propose raising awareness of code-sharing practices and taking immediate steps to enhance code availability to improve reproducibility in biomedical research. By increasing transparency and reproducibility, we can promote scientific rigor, encourage collaboration, and accelerate scientific discoveries. We must prioritize open science practices, including sharing code, data, and other research products, to ensure that biomedical research can be replicated and built upon by others in the scientific community.
    Keywords:  Accessibility; Code sharing; Data sharing; Open-access; Open-source; Reproducibility; Transparency
    DOI:  https://doi.org/10.7717/peerj-cs.2066
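      The reported link between code availability statements and actual code sharing suggests a screening step that journals or auditors could automate. The Python sketch below shows one hypothetical way to flag code-availability signals in manuscript text; the regular expressions and example texts are illustrative assumptions, not the study's actual pipeline.

        # Hypothetical screen for code-availability signals in manuscript
        # text; patterns and examples are illustrative, not the study's
        # actual method.
        import re

        CODE_PATTERNS = [
            r"code (?:is|are) (?:freely )?available",   # availability statements
            r"github\.com/[\w.-]+/[\w.-]+",             # repository links
            r"zenodo\.org/",                            # archived code/data
        ]

        def has_code_statement(text: str) -> bool:
            """Return True if any code-availability signal matches."""
            lowered = text.lower()
            return any(re.search(p, lowered) for p in CODE_PATTERNS)

        manuscripts = {
            "paper_001": "Analysis code is available at github.com/lab/project.",
            "paper_002": "Data were analysed with custom scripts.",
        }
        shared = sum(has_code_statement(t) for t in manuscripts.values())
        print(f"{shared} of {len(manuscripts)} manuscripts flag code availability")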
  6. Cas Lek Cesk. 2024 ;162(7-8): 294-297
      The advent of large language models (LLMs) based on neural networks marks a significant shift in academic writing, particularly in medical sciences. These models, including OpenAI's GPT-4, Google's Bard, and Anthropic's Claude, enable more efficient text processing through transformer architecture and attention mechanisms. LLMs can generate coherent texts that are indistinguishable from human-written content. In medicine, they can contribute to the automation of literature reviews, data extraction, and hypothesis formulation. However, ethical concerns arise regarding the quality and integrity of scientific publications and the risk of generating misleading content. This article provides an overview of how LLMs are changing medical writing, the ethical dilemmas they bring, and the possibilities for detecting AI-generated text. It concludes with a focus on the potential future of LLMs in academic publishing and their impact on the medical community.
    Keywords:  large language models (LLMs); neural networks; academic writing; artificial intelligence; transformer architecture; scientific research automation; publishing ethics; detection of AI-generated text
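      The abstract's mention of transformer architecture and attention mechanisms can be made concrete with a minimal Python sketch of scaled dot-product attention, the core operation of these models; the dimensions and values below are toy assumptions.

        # Minimal sketch of scaled dot-product attention, the building
        # block of transformer LLMs; toy dimensions only.
        import numpy as np

        def attention(Q, K, V):
            """Compute softmax(Q K^T / sqrt(d_k)) V."""
            d_k = Q.shape[-1]
            scores = Q @ K.T / np.sqrt(d_k)
            weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
            weights /= weights.sum(axis=-1, keepdims=True)
            return weights @ V

        rng = np.random.default_rng(0)
        Q, K, V = (rng.standard_normal((4, 8)) for _ in range(3))
        print(attention(Q, K, V).shape)  # (4, 8): one output per query token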
  7. Development. 2024 Jul 01. pii: dev204202. [Epub ahead of print]151(13):
      
    DOI:  https://doi.org/10.1242/dev.204202
  8. Naunyn Schmiedebergs Arch Pharmacol. 2024 Jul 06.
      Scientific fake papers, containing manipulated or completely fabricated data, are a problem that has reached dramatic dimensions. Companies known as paper mills (or, more bluntly, as "criminal science publishing gangs") produce and sell such fake papers on a large scale. The main drivers of the fake paper flood are the pressure in academic systems and (monetary) incentives to publish in respected scientific journals, and sometimes the personal desire for increased "prestige." Published fake papers cause substantial scientific, economic, and social damage. There are numerous information sources that deal with this topic from different points of view. This review aims to provide an overview of these information sources up to June 2024. Much more original research with larger datasets is needed, for example on the extent and impact of the fake paper problem and especially on how to detect fake papers, as many findings rest on small datasets, anecdotal evidence, and assumptions. A long-term solution would be to overcome the mantra of publication metrics for evaluating scientists in academia.
    Keywords:  Fabricated data; Fake paper; Manipulated data; Paper mill; Research fraud; Scientific integrity
    DOI:  https://doi.org/10.1007/s00210-024-03272-8
  9. Sci Data. 2024 Jul 11. 11(1): 760
      Scientific data are essential to advancing scientific knowledge and are increasingly valued as scholarly output. Understanding what drives dataset downloads is crucial for their effective dissemination and reuse. Our study, analysing 55,473 datasets from 69 data repositories, identifies key factors driving dataset downloads, focusing on interpretability, reliability, and accessibility. We find that while lengthy descriptive texts can deter users due to complexity and time requirements, readability boosts a dataset's appeal. Reliability, evidenced by factors like institutional reputation and citation counts of related papers, also significantly increases a dataset's attractiveness and usage. Additionally, our research shows that open access to datasets increases their downloads and amplifies the importance of interpretability and reliability. This indicates that easy access enhances the overall attractiveness and usage of datasets in the scholarly community. By emphasizing interpretability, reliability, and accessibility, this study offers a comprehensive framework for future research and guides data management practices toward ensuring clarity, credibility, and open access to maximize the impact of scientific datasets.
    DOI:  https://doi.org/10.1038/s41597-024-03591-8
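      Since the abstract singles out description length and readability as drivers of downloads, one simple way to operationalize these signals is sketched below in Python. The Flesch reading-ease score from the textstat package is an assumed stand-in; the abstract does not specify which readability metric the authors used.

        # Scoring a dataset description for length and readability.
        # textstat's Flesch reading ease is an assumed proxy for the
        # study's (unspecified) readability measure.
        import textstat

        def interpretability_signals(description: str) -> dict:
            return {
                "word_count": len(description.split()),
                "flesch_reading_ease": textstat.flesch_reading_ease(description),
            }

        desc = ("Monthly water temperature readings from 12 lake stations, "
                "2010-2020, with sensor calibration notes.")
        print(interpretability_signals(desc))  # higher Flesch = easier to read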
  10. Naunyn Schmiedebergs Arch Pharmacol. 2024 Jul 10.
      So-called "middle authors," being neither the first, last, nor corresponding author of an academic paper, have made increasing relative contributions to academic scholarship over recent decades. No work has specifically and explicitly addressed the roles, rights, and responsibilities of middle authors, an authorship position that we believe is particularly vulnerable to abuse via growing phenomena such as paper mills. Responsible middle authorship requires transparent declarations of intellectual and other scientific contributions, which journals can and should require of co-authors, and established guidelines and criteria to achieve this already exist (ICMJE/CRediT). Although publishers, editors, and authors need to collectively uphold shared responsibility for appropriate co-authorship, current models have failed science, since verification of authorship is impossible except through blind trust in authors' statements. During the retraction of a paper, while the opinion of individual co-authors might be noted in a retraction notice, the retraction itself practically erases the relevance of co-author contributions and position/status (first, leading, senior, last, co-corresponding, etc.). Paper mills may have proliferated successfully because individual authors' roles and responsibilities are not tangibly verifiable and are thus indiscernible. We draw on a historical example of manipulated research to argue that authors and editors should publish publicly available, traceable contributions to the intellectual content of an article, covering both classical authorship and technical contributions, to maximize both the visibility of individual contributions and accountability. To make our article practically more relevant to this journal's readership, we reviewed the top 50 Q1 journals in the fields of biochemistry and pharmacology, as ranked by the SJR, to appreciate which journals had adopted the ICMJE or CRediT model of authorship contribution, finding significant variation in adherence to the ICMJE guidelines and the CRediT criteria, as well as in the wording of author guidelines.
    Keywords:  Accountability; CRediT; ICMJE; Paper mills; Roles, rights, and responsibilities; Transparency; Trust
    DOI:  https://doi.org/10.1007/s00210-024-03277-3
  11. J Hand Ther. 2024 Jul 09. pii: S0894-1130(24)00052-8. [Epub ahead of print]
    Sex and Gender Research in Orthopaedic Journals Group
      
    DOI:  https://doi.org/10.1016/j.jht.2024.05.005
  12. Arthroscopy. 2024 Jul 09. pii: S0749-8063(24)00495-X. [Epub ahead of print]
      PURPOSE: To evaluate the extent to which experienced reviewers can accurately distinguish between AI-generated and original research abstracts published in the field of shoulder and elbow surgery, and to compare their performance with that of an AI-detection tool.
     METHODS: Twenty-five shoulder- and elbow-related articles published in high-impact journals in 2023 were randomly selected. ChatGPT was prompted with only the abstract title to create an AI-generated version of each abstract. The resulting 50 abstracts were randomly distributed to and evaluated by 8 blinded peer reviewers with at least 5 years of experience. Reviewers were tasked with distinguishing between original and AI-generated text. A Likert scale assessed reviewer confidence for each interpretation, and the primary reason guiding each assessment of generated text was recorded. AI output (0-100%) and plagiarism (0-100%) scores were computed using GPTZero.
     RESULTS: Reviewers correctly identified 62% of AI-generated abstracts and misclassified 38% of original abstracts as AI-generated. GPTZero reported a significantly higher probability of AI output among generated abstracts (median 56%, IQR 51-77%) than among original abstracts (median 10%, IQR 4-37%; p < 0.01). Generated abstracts scored significantly lower on the plagiarism detector (median 7%, IQR 5-14%) than original abstracts (median 82%, IQR 72-92%; p < 0.01). Correct identification of AI-generated abstracts was predominantly attributed to the presence of unrealistic data/values, whereas misidentification of original abstracts as AI-generated was primarily attributed to writing style.
    CONCLUSIONS: Experienced reviewers faced difficulties in distinguishing between human and AI-generated research content within shoulder and elbow surgery. The presence of unrealistic data facilitated correct identification of AI abstracts, whereas misidentification of original abstracts was often ascribed to writing style.
    DOI:  https://doi.org/10.1016/j.arthro.2024.06.045
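      For readers curious about the generation step in the methods, the Python sketch below reproduces its gist: asking a chat model to write a structured abstract from a title alone. The model name, prompt wording, and use of the OpenAI Python client are assumptions; the study states only that ChatGPT was prompted with the abstract title.

        # Assumed reconstruction of the abstract-generation step; the
        # model and prompt are illustrative, not the study's exact setup.
        from openai import OpenAI

        client = OpenAI()  # reads OPENAI_API_KEY from the environment

        def generate_abstract(title: str) -> str:
            response = client.chat.completions.create(
                model="gpt-4o-mini",  # assumed; the study used ChatGPT in 2023
                messages=[{
                    "role": "user",
                    "content": ("Write a structured abstract (Purpose, Methods, "
                                "Results, Conclusions) for an orthopaedic study "
                                f"titled: {title}"),
                }],
            )
            return response.choices[0].message.content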
  13. PLoS One. 2024 ;19(7): e0304807
      The rapid advances in Generative AI tools have produced both excitement and worry about how AI will impact academic writing. However, little is known about what norms are emerging around AI use in manuscript preparation or how these norms might be enforced. We address both gaps in the literature by surveying 271 academics about whether it is necessary to report ChatGPT use in manuscript preparation and by running GPT-modified abstracts from 2,716 published papers through a leading AI detection tool to see whether such detectors can distinguish different AI uses in manuscript preparation. We find that most academics do not think that using ChatGPT to fix grammar needs to be reported, but the detection software did not always draw this distinction: abstracts for which GPT was used to fix grammar were often flagged as having a high chance of being written by AI. We also find disagreement among academics on whether more substantial use of ChatGPT to rewrite text needs to be reported, and these differences were related to perceptions of ethics, academic role, and English language background. Finally, we found little difference in academics' perceptions about reporting ChatGPT and research assistant help, but significant differences in reporting perceptions between these sources of assistance and paid proofreading and other AI assistant tools (Grammarly and Word). Our results suggest that there might be challenges in getting authors to report AI use in manuscript preparation because (i) there is no uniform agreement about what uses of AI should be reported and (ii) journals might have trouble enforcing nuanced reporting requirements using AI detection tools.
    DOI:  https://doi.org/10.1371/journal.pone.0304807
  14. JDR Clin Trans Res. 2024 Jul 12. 23800844241247029
    Task Force on Design and Analysis in Oral Health Research
      Adequate and transparent reporting is necessary for critically appraising research. Yet evidence suggests that the design, conduct, analysis, interpretation, and reporting of oral health research could be greatly improved. Accordingly, the Task Force on Design and Analysis in Oral Health Research, comprising statisticians and trialists from academia and industry, empaneled a group of authors to develop methodological and statistical reporting guidelines identifying the minimum information needed to document and evaluate observational studies and clinical trials in oral health: the OHstat Guidelines. Drafts were circulated to the editors of 85 oral health journals and to Task Force members and sponsors, and discussed at a December 2020 workshop attended by 49 researchers. The final version was approved by the Task Force in September 2021, submitted for journal review in 2022, and revised in 2023. The checklist consists of 48 guidelines: 5 for introductory information, 17 for methods, 13 for statistical analysis, 6 for results, and 7 for interpretation; 7 are specific to clinical trials. Each guideline identifies relevant information, explains its importance, and often describes best practices. The article was published simultaneously in JDR Clinical and Translational Research, the Journal of the American Dental Association, and the Journal of Oral and Maxillofacial Surgery. Completed checklists should accompany manuscripts submitted for publication to these and other oral health journals to help authors, journal editors, and reviewers verify that the manuscript provides the information necessary to adequately document and evaluate the research.
    Keywords:  comparative studies; publishing/*standards; research design/standards; retrospective studies; statistical data interpretation
    DOI:  https://doi.org/10.1177/23800844241247029