bims-skolko Biomed News
on Scholarly communication
Issue of 2026–03–29
thirty-one papers selected by
Thomas Krichel, Open Library Society



  1. Science. 2026 Mar 26. 391(6792): 1311
      ArXiv splits from Cornell, aiming to raise funds to cope with rapid growth.
    DOI:  https://doi.org/10.1126/science.aeh4839
  2. Clin Transl Sci. 2026 Apr;19(4): e70532
      
    Keywords:  author; author order; director; last author; paper
    DOI:  https://doi.org/10.1111/cts.70532
  3. PLoS Med. 2026 Mar;23(3): e1005037
      In this Formal Comment, representatives from PLOS, Nature and JAMA call for action on adopting a principle-based approach for a responsible authorship culture.
    DOI:  https://doi.org/10.1371/journal.pmed.1005037
  4. J Bioeth Inq. 2026 Mar 23.
      In the digital age, where physical space constraints in scholarly publishing have largely diminished, the recurring editorial justification of "limited space" for manuscript rejection appears increasingly indefensible. This commentary critically examines the ethical and economic dimensions of this rationale, particularly when it results in redirection to fee-based open access (OA) sister journals. While peer-reviewed publication decisions are ideally grounded in scientific merit, novelty, and methodological rigor, the authors argue that the space limitation rationale may function as a pretext to curate journal branding or steer submissions toward revenue-generating platforms. The expansion of OA publishing, though intended to democratize knowledge dissemination, has in some cases evolved into a profit-driven model that imposes article processing charges on authors. Such practices risk compromising editorial transparency, fairness, and academic equity. The authors advocate for clearer editorial policies, increased transparency in decision-making, and the adoption of mechanisms such as dynamic submission dashboards to inform authors of real-time publication capacity. Rejections should be based on scholarly merit rather than logistical or commercial interests. This article calls for reinforcing the ethical foundations of academic publishing to ensure that decisions reflect genuine scholarly standards and not economic expediency.
    Keywords:  Article processing charges; Editor; Open access; Rejection; Sister journals
    DOI:  https://doi.org/10.1007/s11673-025-10537-1
  5. Ann Biomed Eng. 2026 Mar 21.
      Biomedical engineers produce knowledge and artifacts. Across the life cycle of an idea, errors can creep in. In this letter, we propose the term linguistic, orthographic, and typographical errors in science (LOTS) to represent a category of errors that threaten the truthfulness and integrity of scientific literature and engineering projects. They include documented cases of the misuse of generative artificial intelligence (GAI). LOTS consist of four categories: (1) simple spelling errors; (2) the semantic deformation of technical terms, in the form of 'tortured phrases'; (3) letter or symbol-switching; and (4) formatting errors that impact the veracity of knowledge, or distort the precision of scientific representation, such as the absence or overuse of capitalization, the incorrect use or absence of italicization, the failure to deanonymize information, cloned template text, or GAI-generated "hallucinations." We introduce small analyses to assess the incidence of dopamine-ß-hydroxylase, ß-secretase, and ß-adrenoreceptors (erroneous Eszett formats) supposedly representing dopamine-β-hydroxylase (DBH), β-secretase, and β-adrenoreceptors, respectively, in PubMed. We suggest that virtuous biomedical engineers should address LOTS. To improve the screening of LOTS, we argue in favor of a common framework for scholarly text integrity analysis.
    Keywords:  Biomedical engineering ethics; Errors in science; Neuroscience; Precision; Research ethics; Tortured phrases; Truthfulness
    DOI:  https://doi.org/10.1007/s10439-026-04084-y
  6. Arch Argent Pediatr. 2026 Mar 26. e202510813
      Introduction. Scientific dishonesty is a persistent, increasingly sophisticated phenomenon that poses a growing challenge for editorial work in biomedical journals. Objectives. To describe the process of detecting fraudulent publication of articles submitted to a scientific journal in health sciences during 2024. Methods. A retrospective observational documentary study was conducted. All original manuscripts received during 2024 by a scientific journal in health sciences were included. Each text was evaluated using the Similarity Check software and analyzed by the editorial committee in accordance with the Committee on Publication Ethics (COPE) criteria. Results. Of the 71 manuscripts evaluated, two cases of fraud were identified. The first corresponded to duplicate publication by the same author; the second, to covert plagiarism by translation from another author. In both cases, the manuscripts were rejected, the authors were notified, and the right of reply was offered, which was not satisfactory. Conclusion. Two attempts at fraudulent publication were documented in 2024, detected using similarity tools and confirmed by editorial analysis.
    Keywords:  editorial policies; plagiarism; publications, ethics; scientific misconduct
    DOI:  https://doi.org/10.5546/aap.2025-10813.eng
  7. Res Integr Peer Rev. 2026 Mar 23. pii: 10. [Epub ahead of print]11(1):
       BACKGROUND: The integration of AI in academic publishing has raised significant ethical concerns, particularly regarding the practice of prompt injection, where hidden instructions are embedded in manuscripts to manipulate AI responses in the peer review process.
    METHODS: This study employed a mixed-methods approach, combining a comprehensive content analysis of academic integrity guidelines with a survey of 194 stakeholders, including authors, peer reviewers, and journal editors from various academic fields. The survey focused on their awareness of prompt injection, perceptions of its ethical implications, and views on AI transparency in peer review.
    RESULTS: The findings reveal that a substantial proportion of participants (80%) support greater transparency in the use of AI in peer review. Many respondents reported frustrations with the inconsistencies and inefficacies of AI-generated feedback, prompting some to consider the use of prompt injection as a strategy to secure favorable review outcomes. Importantly, the analysis identified a significant gap in current definitions of research misconduct, which do not adequately address the ethical implications of AI interventions.
    CONCLUSIONS: This study highlights the urgent need for revised ethical frameworks that incorporate AI-related issues in academic publishing, advocating for policies that promote transparency and uphold the integrity of the peer review process.
    Keywords:  Academic Integrity; Artificial Intelligence; Peer Review; Prompt Injection; Research Misconduct
    DOI:  https://doi.org/10.1186/s41073-025-00187-7
  8. Front Digit Health. 2026 ;8 1807664
      
    Keywords:  access; artificial intelligence; bias; equity; ethics; governance; large langauge models
    DOI:  https://doi.org/10.3389/fdgth.2026.1807664
  9. J Clin Med. 2026 Mar 14. pii: 2215. [Epub ahead of print]15(6):
      Peer review is the cornerstone of scholarly publishing and, in medicine, the ultimate guarantor of the reliability of clinical evidence that informs guidelines, therapeutic strategies, and patient care. However, the current peer review system is increasingly strained by bias, abuse, and reviewer overload. Favoritism toward prominent authors, editorial "nepotism," coercive citation practices, superficial evaluations, and even documented cases of idea theft from confidential manuscripts undermine the trustworthiness of the scientific literature upon which clinical decisions depend. In this paper, we argue that artificial intelligence (AI) and large language models (LLMs) offer a transformative opportunity to strengthen the integrity and efficiency of medical peer review. AI-driven tools can perform rapid consistency checks, detect statistical errors or plagiarism, and enforce compliance with ethical and methodological standards across thousands of manuscripts. Early implementations of AI-guided review platforms, plagiarism detectors, and citation-anomaly algorithms demonstrate that machine assistance can make reviews more thorough, objective, and reproducible. At the same time, we acknowledge the limitations of AI, including hallucinations, a lack of human judgment, and risks to confidentiality if misused. To address these concerns, we propose a hybrid model in which AI handles routine screening and technical tasks under strict safeguards, while human experts retain final responsibility for scientific evaluation. This human-AI partnership may represent an essential step toward improving the quality, fairness, and reliability of the clinical evidence base.
    Keywords:  AI-assisted peer review; artificial intelligence; large language models; peer review; publication ethics; research integrity; reviewer bias
    DOI:  https://doi.org/10.3390/jcm15062215
  10. Nature. 2026 Mar 25.
      
    Keywords:  Computer science; Machine learning; Peer review
    DOI:  https://doi.org/10.1038/d41586-026-00893-2
  11. J Arthroplasty. 2026 Apr;pii: S0883-5403(26)00045-8. [Epub ahead of print]41(4): e47
      
    DOI:  https://doi.org/10.1016/j.arth.2026.01.045
  12. Nature. 2026 Mar;651(8107): 914-919
      The automation of science is a long-standing ambition in artificial intelligence (AI) research1,2. Although the community has made substantial progress in automating individual components of the scientific process, a system that autonomously navigates the entire research life cycle-from conception to publication-has remained out of reach. Here we present a pipeline for automating the entire scientific process end to end. We present The AI Scientist, which creates research ideas, writes code, runs experiments, plots and analyses data, writes the entire scientific manuscript, and performs its own peer review. Its ideas, execution and presentation are of sufficient quality that the manuscript generated by this AI system passed the first round of peer review for a workshop of a top-tier machine learning conference. The workshop had an acceptance rate of 70%. Our system leverages modern foundation models3-5 within a complex agentic system. We evaluate The AI Scientist in two settings: a focused mode using human-provided code templates as an initial scaffold for conducting research on a specific topic and a template-free, open-ended mode that leverages agentic search for wider scientific exploration6,7. Both settings produce diverse ideas and automatically test, report on and evaluate them. This achievement demonstrates the growing capacity of AI for making scientific contributions and signifies a potential paradigm shift in how research is conducted. As with any impactful new technology, there could be important risks, including taxing overwhelmed review systems and adding noise to the scientific literature. However, if developed responsibly, such autonomous systems could greatly accelerate scientific discovery.
    DOI:  https://doi.org/10.1038/s41586-026-10265-5
  13. Medicine (Baltimore). 2026 Mar 27. 105(13): e48147
      To compare large language models (LLMs) and human reviewers in the peer review process of manuscripts submitted to 3 ophthalmology-related journals. This retrospective study comprised 300 randomly selected manuscripts from 3 anonymized journals under 1 editor between June 2023 and July 2024. Comments from 2 LLMs (Chat Generative Pre-Trained Transformer [ChatGPT] 4o and Gemini) and human reviewers (324 ophthalmologists) were compared. LLMs were prompted to accept, accept with major or minor revisions, or reject each manuscript in addition to providing comments. A 5-point Likert scale was used to assess the "favorability" of comments and compare manuscripts that were accepted or rejected by the editor. A 4-category quality assessment was used to compare the number of comments, detail/specificity, critical analysis, and literature support. Human reviewers rejected manuscripts more frequently (73.33% vs 2.00% ChatGPT and 2.00% Gemini; P < .001) and suggested major (22.67% vs 68.00% ChatGPT and 31.33% Gemini; P < .001) or minor revisions (3.33% vs 30.00% ChatGPT and 66.33% Gemini; P < .001) less often. Human reviewers gave more negative feedback for rejected manuscripts (-1.05 vs -0.02 ChatGPT and 0.24 Gemini; P < .015). ChatGPT repeated "novelty," "sample size," and "clarity" in 75%, 60%, and 50% of cases, respectively, while Gemini did so in 80%, 70%, and 65% of cases. Both lacked specificity, omitting line numbers and references. Although it is hoped that LLMs will one day be able to augment the role of peer reviewers, in their current state, LLMs should not be used for manuscript revision.
    Keywords:  ChatGPT; LLM; artificial intelligence; grading system; machine learning; peer review; rejection
    DOI:  https://doi.org/10.1097/MD.0000000000048147
  14. Tomography. 2026 Mar 02. pii: 31. [Epub ahead of print]12(3):
      This editorial provides insights into the common situation of paper rejection, which must be managed by the authors [...].
    DOI:  https://doi.org/10.3390/tomography12030031
  15. J Formos Med Assoc. 2026 Mar 21. pii: S0929-6646(26)00259-7. [Epub ahead of print]
      
    Keywords:  Academic writing; Letter to the Editor; Practical tips
    DOI:  https://doi.org/10.1016/j.jfma.2026.03.089
  16. Adv Health Sci Educ Theory Pract. 2026 Mar 23.
      In this editorial the editor considers the beginnings and endings of scholarly inquiry and the implications of how beginnings and endings are presented in scholarly writing.
    DOI:  https://doi.org/10.1007/s10459-026-10533-z
  17. Tunis Med. 2025 Dec 27. 103(10): 1356-1361
       INTRODUCTION: Scientific publication plays a vital role in sharing research outcomes, enhancing knowledge, and fostering academic careers. However, researchers in low-income countries like Tunisia often face significant barriers, including limited access to funding, training, mentorship, and high-impact journals. These challenges can hinder their ability to publish effectively and at the right time. This study explored strategies for successful medical publication and examined the optimal timing for manuscript submission, drawing on the experiences of Tunisian researchers.
    METHODS: This perspective-based study combines a comprehensive literature review with expert-facilitated group discussions. A research session held at the Faculty of Medicine of Sousse (Tunisia) brought together 44 participants from diverse medical specialties. The session included expert presentations, group discussions, and a review of relevant literature.
    RESULTS: Timing was highlighted as a key strategic factor: submitting a manuscript upon completion of data analysis, in response to a call for papers, ahead of a major scientific event, or when the topic is particularly relevant can significantly increase the visibility and impact of the publication. Scientific publication also plays a crucial role in academic recognition and career progression. Careful planning, strategic journal selection, and adherence to editorial and ethical standards were identified as essential elements for improving publication success.
    CONCLUSION: Knowing when to publish can make all the difference. Submitting a manuscript at the right time - whether it is shortly after completing data analysis, when the topic is gaining attention, or in response to a specific call for papers - can significantly increase a study's visibility and impact. However, timing alone is not enough. With the right training, thoughtful journal selection, and strong institutional support, researchers - especially those in low- and middle-income countries - can overcome many of the barriers they face and share their work more effectively with the global scientific community.
    Keywords:  Academic Career; Low- and Middle-Income Countries; Publication Strategy; Publication Timing; Scientific Publication
    DOI:  https://doi.org/10.62438/tunismed.v103i10.6173
  18. J Clin Epidemiol. 2026 Mar 21. pii: S0895-4356(26)00127-7. [Epub ahead of print] 112252
      The exploitation of open-access health datasets by paper mills and AI-assisted workflows represents a growing threat to the integrity of clinical evidence. Spick et al. provide a valuable scientometric quantification of this crisis; however, the downstream implications for patient care warrant further examination. Fabricated analyses of widely utilized datasets such as FAERS, NHANES, and the Global Burden of Disease Study carry the potential to contaminate systematic reviews, distort prescribing behavior, compromise clinical practice guidelines, and propagate errors through AI-trained clinical decision tools. At the same time, it is essential that the response to this crisis does not inadvertently penalize legitimate scholarship. Restricting access to open datasets would disproportionately affect clinicians, trainees, and researchers in resource-limited settings who rely on these resources as a primary avenue for contributing to the evidence base. This commentary proposes an eight-measure framework designed to safeguard the open data ecosystem through transparency, expert oversight, and infrastructure, including open peer review, mandatory domain-expert endorsements, and a centralized publication registry, while preserving the foundational principles of open science and FAIR data practices.
    Keywords:  Bias; Database; Misconduct; Open Data; Public Health; Retrospective Reviews
    DOI:  https://doi.org/10.1016/j.jclinepi.2026.112252
  19. J Clin Epidemiol. 2026 Mar 25. pii: S0895-4356(26)00126-5. [Epub ahead of print] 112251
      We thank Zil-E-Ali for engaging with our findings on the exploitation of open data resources and harms to the evidence base, and for raising the question of equity and the underlying goals of Open Science [1]. The concern that blanket restrictions on open-data research would harm researchers with limited resources is valid, and we agree that the benefits of Open Science should be preserved as much as possible. Nonetheless, we have misgivings around the robustness of the proposed framework, and our work shows that technological changes are already outpacing Zil-E-Ali's suggested measures to defend against exploitation while preserving Open Science. For these reasons, we continue to argue for more assertive publisher policies as well as controlled access to open data [2].
    DOI:  https://doi.org/10.1016/j.jclinepi.2026.112251
  20. PLoS One. 2026 ;21(3): e0345417
      The importance of open research information, particularly publication metadata, is widely recognised. Crossref is one of the most important infrastructures for registering open metadata as part of DOI record registration. It is widely known, however, that the metadata of many publications is far from complete, with many publishers making certain metadata openly available, but failing to do so for other metadata elements. Publishers' ability to register this metadata with Crossref depends on their capacity to capture and retain this data in their production workflows. Manuscript submission systems are an important, yet largely overlooked, factor in the extent to which publishers make metadata available through Crossref. In this paper, we present the results of an analysis investigating the relation between the level of metadata that publishers deposit with Crossref and the submission systems that they deploy for their journals. We have looked at the 153 publishers with the largest amounts of publications in Crossref and concentrate on the four most commonly used systems: Editorial Manager, ScholarOne, Open Journal Systems (OJS) and eJournalPress. We show that some submission systems appear better suited to capturing certain metadata elements. However, there are always cases where publishers using the same system differ widely in the level of metadata they register, suggesting that technology is not the only prohibiting factor and other considerations are at play.
    DOI:  https://doi.org/10.1371/journal.pone.0345417
  21. Nurs Manag (Harrow). 2026 Mar 24.
       RATIONALE AND KEY POINTS: A writing retreat can be described as time and space away from everyday distractions that is dedicated to writing, where participants support each other to achieve specific writing goals. Writing retreats for nurses can provide a valuable structure and supportive environment for focusing on writing projects that are often considered as lacking priority. They are typically associated with academic writing in a research context, but can also have benefits for nurse leaders, nurse managers and their teams. • Writing retreats can support nurses to develop professionally and cultivate writing skills. • Writing retreats can provide time and space away from operational pressures, thereby enhancing well-being and motivation. • Writing retreats can support nurses to work on written outputs including policies, reports, training packages and business cases as well as research articles. REFLECTIVE ACTIVITY: 'How to' articles can help to update your practice and ensure it remains evidence based. Apply this article to your practice. Reflect on and write a short account of: • How this article might improve your practice when planning and delivering a writing retreat. • How you could use this information to educate nursing students or your colleagues on the appropriate technique and evidence base for planning and delivering a writing retreat.
    Keywords:  communication; creative writing; professional development; staff welfare; workforce; workforce development; writing for publication
    DOI:  https://doi.org/10.7748/nm.2026.e2202
  22. Turk J Pediatr. 2026 Feb 27. 68(1): 18-24
       BACKGROUND: Scientific congresses are critical platforms for knowledge dissemination and collaboration. The scientific value of presented abstracts is best demonstrated through their subsequent publication as full-text articles in peer-reviewed journals. This study aimed to evaluate the publication rate and characteristics of oral abstracts presented at the Turkish National Pediatric Congresses (TNPC) between 2019 and 2023.
    METHODS: Abstract books of five consecutive congresses were reviewed. The publication status of each abstract was determined through systematic searches in Web of Science, PubMed, Scopus, Google Scholar and the TR Index utilizing the title, keywords from the title and author names. Parameters such as study design, collaboration type, index status and the impact factor of the journal, the year it was published, and time to publication were analyzed. Additionally, the subspecialty of each abstract and the publication rate for each subspecialty were evaluated.
    RESULTS: Among 268 oral abstracts, 111 (41.8%) were published as full-text articles. Of these, 66 (59.5%) were published in journals indexed in the Science Citation Index Expanded. Approximately one-third (32.4%) of the articles were published in Q1 or Q2 ranked journals. The average impact factor was 1.72 ± 1.26 and the mean time to publication was 1.6 ± 1.17 years. The most common study design published was retrospective (51.3%), and the majority were single-center studies (88.3%). The highest publication rates were observed in the fields of rheumatology, adolescent medicine, and infectious diseases.
    CONCLUSION: A significant portion of the papers presented at TNPC congresses are published in peer-reviewed scientific journals. The fact that more than one-third of the published studies appear in high-impact journals demonstrates the academic quality of the papers presented at the congresses and the effectiveness of the selective evaluation process. The findings provide valuable contributions to the monitoring and development of academic productivity in the field of pediatrics in Türkiye.
    Keywords:  Publication rate; Türkiye; bibliometric analysis; pediatric abstracts; scientific congress
    DOI:  https://doi.org/10.24953/turkjpediatr.2026.7526
  23. Acad Med. 2026 Mar 23. pii: wvag080. [Epub ahead of print]
      
    Keywords:  academic writing; artificial intelligence; ethics; large language models; scholarship
    DOI:  https://doi.org/10.1093/acamed/wvag080