bims-skolko Biomed News
on Scholarly communication
Issue of 2026-04-19
28 papers selected by
Thomas Krichel, Open Library Society



  1. Nature. 2026 Apr;652(8110): 828
      
    Keywords:  Policy; Research management; Scientific community
    DOI:  https://doi.org/10.1038/d41586-026-01216-1
  2. Nature. 2026 Apr 17.
      
    Keywords:  Funding; Government; Publishing
    DOI:  https://doi.org/10.1038/d41586-026-01251-y
  3. JMIRx Med. 2026 Apr 17;7: e78139
       Background: Preprints (scientific manuscripts shared publicly prior to formal peer review) are gaining momentum across academic disciplines. However, their adoption in clinical and biomedical sciences remains limited, particularly in countries where traditional publishing norms prevail. Editorial ambiguity and a lack of national policy further complicate their use.
    Objective: This study aimed to assess the awareness, experiences, and attitudes of medical academics at Marmara University School of Medicine toward preprints and to explore the editorial landscape through both journal editor feedback and a review of journal-level preprint policies.
    Methods: A cross-sectional survey was conducted with 103 medical faculty members. The questionnaire included demographic questions, Likert scale items, and multiple-choice items assessing knowledge, familiarity, and attitudes toward preprints, as well as open-ended items to explore concerns. A "preprint test score" (0-4) was developed to quantify objective knowledge. Subgroup analyses were conducted by age (<40 vs ≥40 y) and academic discipline (basic vs clinical sciences). Additionally, all responses to open-ended questions from journal editors and 118 biomedical journals were manually reviewed for their stated stance on preprints and article processing charges (APCs). A convergent mixed methods design was used, combining a structured survey, thematic analysis of open-ended responses and editorial feedback, and a document-based review of biomedical journal policies.
    Results: Only 42.9% (n=34) of participants reported familiarity with the concept of preprints, and 13% (n=10) had previously published on a preprint server. Misconceptions about ethics, peer review, and compatibility with journal policies were common. Subgroup analysis revealed that older participants scored higher on the "preprint test" (mean 2.20, SD 1.31 vs mean 1.97, SD 1.60) and had more experience with preprint publishing (1/40, 2.5% of younger participants; 7/29, 24.1% of older participants). Further, younger academics expressed less openness toward future use (n=7, 17.5% in the younger group; n=8, 27.6% in the older group). Clinical faculty were generally more hesitant than basic science faculty, although both groups raised concerns about the academic recognition of preprints. Editorial responses reflected a mix of cautious endorsement and skepticism. Among the 118 biomedical journals reviewed, most lacked clear preprint policies, while a small number either explicitly prohibited or permitted them.
    Conclusions: There is limited awareness and cautious engagement with preprints among medical academics and editors in Türkiye. Generational and discipline-based differences further influence knowledge and attitudes. The lack of clear editorial guidance from biomedical journals may reinforce academic uncertainty. Tailored educational initiatives, transparent journal policies, and institutional support will be essential to foster a more open and inclusive scientific publishing environment.
    Keywords:  editorial policies; medical academics; preprint; publishing attitudes; survey
    DOI:  https://doi.org/10.2196/78139
  4. Nature. 2026 Apr 13.
      
    Keywords:  Computer science; Machine learning; Publishing
    DOI:  https://doi.org/10.1038/d41586-026-01199-z
  5. Taiwan J Ophthalmol. 2026 Jan-Mar;16(1): 68-80
      The rise of generative artificial intelligence (GenAI) has profoundly influenced medical research and academic writing, particularly in ophthalmology. Despite its growing relevance, there is a noticeable gap in the literature regarding its application in medical writing, including practical uses and associated limitations. This review seeks to fill this gap by first systematically reviewing the current literature on GenAI in medical paper writing. It identifies and discusses nine key applications and considerations, including idea generation, literature review, institutional review board preparation, data collection, data analysis, image generation, manuscript drafting, writing refinement, and peer review. In the second part, we explore publicly available AI tools that currently assist with medical manuscript writing. We also introduce several generative AI detection tools and discuss their accuracy and reliability. Finally, the review addresses the limitations and ethical challenges associated with the use of GenAI in medical paper writing. While GenAI has streamlined many aspects of medical paper writing, and an increasing number of AI tools have been developed for research, significant model limitations and ethical concerns persist, necessitating careful human oversight and clear guidelines. By providing a comprehensive yet focused overview, this article offers valuable insights into the effective use of GenAI in medical paper writing while acknowledging its limitations and risks. It aims to support researchers in producing high-quality, AI-enhanced publications in the field of ophthalmology.
    Keywords:  Artificial intelligence; generative artificial intelligence; medical writing; ophthalmology
    DOI:  https://doi.org/10.4103/tjo.TJO-D-25-00072
  6. Cureus. 2026 Mar;18(3): e104955
      Artificial intelligence (AI) is now embedded across medical research and practice, reshaping how evidence is generated and evaluated. Rather than lowering standards, AI has intensified editorial expectations within medical journals. Editors prioritise clinically meaningful knowledge over technical novelty, demanding a clear demonstration of how AI augments clinical reasoning and improves patient outcomes. Transparency in AI use, rigorous methodology, bias assessment, and external validation are increasingly essential. As generative AI normalises polished writing, originality is judged by intellectual and clinical contribution rather than style. Ultimately, journals seek responsible, reproducible, and ethically grounded AI-enabled research that advances patient care and public trust.
    Keywords:  artificial intelligence; artificial intelligence in medicine; medical research; publishing; research & development
    DOI:  https://doi.org/10.7759/cureus.104955
  7. Front Oncol. 2026;16: 1717048
       Background: Generative artificial intelligence (AI) is reshaping scholarly communication, yet guidance for its responsible use remains uneven across biomedical journals. We aimed to systematically assess editorial policies governing AI-assisted writing in high-impact oncology journals.
    Methods: We conducted a systematic review of publicly available editorial and normative documents, operationalized as a cross-sectional policy audit. Oncology journals with a 2023 Journal Impact Factor ≥5 (JCR 2024) were included. Author instructions, editorial policies, and publisher statements issued between January 2020 and March 2025 were analyzed across four domains: authorship, disclosure, permissible uses, and enforcement.
    Results: Sixty journals met inclusion criteria. Most journals prohibit AI systems as authors (58/60, 96.7%), reaffirming human accountability. Disclosure of AI use is mandated by 58/60 journals (96.7%), although reporting requirements vary in placement and specificity. Permissible uses are recognized by 58/60 journals (96.7%), generally limited to language editing and formatting under human supervision, while autonomous content generation or interpretation is discouraged. Enforcement provisions are present in 21/60 journals (35.0%), indicating incomplete standardization. At publisher level, disclosure adoption is universal in Elsevier (17/17), Springer Nature (20/20), AACR (6/6), Wiley (6/6), and AMA (1/1), and present in 8/10 journals in the "Other" category. Enforcement varies widely across publishers.
    Discussion: Editorial policies show strong convergence on core principles but remain heterogeneous in implementation, particularly regarding enforcement. We propose a cross-publisher "AI Policy Minimum Dataset" including standardized disclosures, defined permissible uses, and proportionate enforcement mechanisms, supported by transparent and regularly updated policy frameworks. Greater harmonization is essential to ensure integrity, accountability, and equitable use of AI in oncology publishing.
    Keywords:  artificial intelligence; editorial policy; generative AI; oncology journals; policy analysis; research integrity
    DOI:  https://doi.org/10.3389/fonc.2026.1717048
  8. Indian J Thorac Cardiovasc Surg. 2026 May;42(5): 575-576
      
    DOI:  https://doi.org/10.1007/s12055-026-02214-8
  9. J Nucl Med Technol. 2026 Apr 13. pii: jnmt.126.272579. [Epub ahead of print]
      
    DOI:  https://doi.org/10.2967/jnmt.126.272579
  10. Nurs Outlook. 2026 Mar-Apr;74(2): 102767. pii: S0029-6554(26)00090-4.
      
    DOI:  https://doi.org/10.1016/j.outlook.2026.102767
  11. Biomol Biomed. 2026 Apr 13.
      This correspondence addresses three significant concerns regarding the current peer review process for systematic reviews and meta-analyses. First, while artificial intelligence tools can enhance language and readability, their implementation necessitates transparent disclosure and diligent human oversight, as AI-generated content may contain errors, fabricated references, or misleading interpretations. Second, an overreliance on text similarity reports may promote unnecessary paraphrasing of standardized methodological descriptions, leading to unclear or convoluted phrasing without enhancing scientific originality. Third, the verification of references has increasingly burdened reviewers due to inaccurate citations and repeated security barriers encountered during source verification, which further prolongs the review process and exacerbates reviewer fatigue. We contend that journals and publishers should enhance editorial screening, utilize responsible similarity and reference-checking tools, provide clearer guidelines for systematic review and meta-analysis methods sections, and improve access systems to facilitate efficient and reliable peer review.
    DOI:  https://doi.org/10.17305/bb.2026.14264
  12. Biomol Biomed. 2026 Apr 13.
      This response to the letter expands the discussion on the evolving demands of peer review for systematic reviews and meta-analyses. We emphasize that the main concern surrounding artificial intelligence is not its limited and disclosed use for language support, but undisclosed application and insufficient human verification, which may compromise citation accuracy, interpretation, and overall trustworthiness. We also argue that similarity reports should be interpreted contextually, particularly in evidence syntheses where standardized methodological language is unavoidable, and that low similarity does not necessarily exclude manuscript manipulation. Finally, we highlight reference verification as a central research-integrity challenge that should not rest on peer reviewers alone. Preserving the credibility of evidence synthesis requires shared responsibility across authors, reviewers, editors, and publishers.
    DOI:  https://doi.org/10.17305/bb.2026.14271
  13. Microlife. 2026;7: uqag011
      Many countries with lower research & innovation capacity face persistent constraints in building stable research systems. Chronic underfunding and weak science policy reduce institutional capacity and limit researchers' career prospects. These conditions encourage brain drain, particularly among early-career scientists who seek predictable funding, transparent evaluation, and merit-based advancement. As a result, research institutions lose skilled personnel, which weakens scientific training, governance, and research output. Additionally, within this environment, predatory publishing practices create further damage. These scientific outlets reward volume over quality, thus distorting evaluation criteria. They promote negative selection by favouring speed of publication at the expense of rigorous peer review. Over time, this weakens academic standards and undermines trust in the research output. The result is a decline in scientific credibility and an overall reduction in international competitiveness. Although predatory publishing is motivated by financial gain, it results in serious institutional consequences. It directly reshapes hiring, promotion, and funding decisions in ways that disadvantage high-quality research. This contributes to the erosion of both research integrity and academic communities.
    Keywords:  EU widening countries; brain-drain; fragile research systems; predatory publishing practice; research culture
    DOI:  https://doi.org/10.1093/femsml/uqag011
  14. Br J Sports Med. 2026 Apr 14. pii: bjsports-2026-111535. [Epub ahead of print]
      
    Keywords:  Ethics; Exercise; Sport
    DOI:  https://doi.org/10.1136/bjsports-2026-111535
  15. Nature. 2026 Apr 14.
      
    Keywords:  Careers; Publishing; Research management
    DOI:  https://doi.org/10.1038/d41586-025-04132-y
  16. Health Expect. 2026 Apr;29(2): e70665
       BACKGROUND: Co-production research values the lived/living experience (LE) of people navigating health challenges. Despite this, traditional academic authorship often disregards power dynamics that participatory research seeks to address.
    MAIN BODY: This critical reflection argues for centring LE collaborators as first authors in co-production research. We analyse authorship through Foucauldian power/knowledge dynamics, Derrida's deconstructive ethics and Levinasian ethics of responsibility, drawing on Critchley's synthesis of these traditions. These multiple lenses reveal a tension: co-production research requires political solidarity ('us') to challenge epistemic injustice, yet demands ethical vigilance to preserve individual voices. The concept of porous solidarity is useful here. Acknowledging the value of experience-based expertise by means of first authorship embodies this framework: it redistributes epistemic authority and shifts research from studies about populations to studies by/with those populations.
    CONCLUSIONS: First author placement of experience-based experts where appropriate, plus explicit acknowledgement of all researchers' relevant LE where freely given, offers both an ethically responsive approach and a practical strategy for aligning co-production processes with publication practices. By centring LE collaborators as first authors and acknowledging the LE of academic researchers, co-production research teams can embody the principle rooted in disability activism, of 'nothing about us without us-leading' in academic literature. This supports epistemic justice, creates solidarity and builds capacity within communities conducting research.
    PATIENT OR PUBLIC CONTRIBUTION: This lived experience-led article was co-produced by two researchers with lived experience. Experiential expertise shaped the conception, argument development, drafting and critical revision of the manuscript and the practical implications presented.
    Keywords:  authorship; co‐production; epistemic injustice; epistemic justice; ethics; lived experience; porous solidarity; power
    DOI:  https://doi.org/10.1111/hex.70665
  17. J Hand Ther. 2026 Apr 13. pii: S0894-1130(26)00015-3. [Epub ahead of print]
      
    DOI:  https://doi.org/10.1016/j.jht.2026.02.003
  18. Res Integr Peer Rev. 2026 Apr 13;11(1): 8. [Epub ahead of print]
       INTRODUCTION: This study systematically investigates documentation gaps in survey translation within validation studies conducted by Iranian researchers, framing these gaps as a critical, yet overlooked, research integrity concern. Transparent reporting is foundational not only for methodological rigor but for enabling meaningful peer review and trust in cross-cultural findings.
    METHODS: Using a comprehensive framework that assesses documentation across three phases (input, translation process, and output), we analyzed the completeness of reported translation procedures. We further evaluated these practices against established professional standards for translation, specifically the norms of accountability, communication, and fidelity, which align with core research integrity principles.
    RESULTS: The findings reveal a pronounced and systemic imbalance: while the translation process itself is frequently documented, both preparatory (input) and resultant (output) stages are largely neglected. This selective reporting constitutes a significant transparency deficit, obscuring essential information about translation validity and severely compromising the methodological scrutiny central to peer review.
    DISCUSSION AND CONCLUSION: The results directly inform interventions to bolster research integrity, as neglecting thorough documentation creates an unrecoverable information gap for peer reviewers. This prevents proper evaluation of translation validity, a core methodological checkpoint. Therefore, the proposed priorities (e.g., mandatory reporting templates for input briefs and output decisions) are targeted interventions to make the translation process auditable. These gaps systematically exclude evidence of translators' intellectual labor and cultural mediation, eroding the transparency necessary for reproducing or trusting cross-cultural findings. The consistent pattern in Iran, mirroring LMIC (Low- and Middle-Income Country) challenges, confirms that standardizing documentation is a prerequisite for equitable peer review, ensuring that the methodological foundations of cross-cultural research, specifically translation validity, are rendered auditable and subject to effective peer review, a core safeguard of research integrity.
    Keywords:  Documentation practice; Iran; Peer review; Research integrity; Research transparency; Survey translation
    DOI:  https://doi.org/10.1186/s41073-026-00192-4
  19. J Med Libr Assoc. 2026 Apr 01. 114(2): 173-175
      As part of an effort to seek sustainable support models for Open Access (OA) publishing, the University of Maryland, Baltimore (UMB), Health Sciences and Human Services Library's (HSHSL's) Scholarly Communications Committee developed an interactive dashboard to visualize university-wide OA publishing trends. Using publication data exported from Scopus and visualized in Microsoft Power BI, the dashboard displays five years of publishing trends by OA model, publisher, journal, school, and citation count. The dashboard is fully interactive, allowing users to filter results based on school, OA model, and year. The design of the dashboard was iterative, with planning discussions taking place in Summer 2024, data model development and initial data collection in Fall 2024, refining of the visualization and data model in early Spring 2025, and the publication of the final dashboard to our website in April 2025. The dashboard continues to be refined and improved based on feedback from stakeholders, and the project team plans to incorporate data on publishing costs in Spring 2026. The project was designed for sustainability and adaptability, with a documented workflow that will be easy for future committees to implement. This innovative, replicable approach supports informed decision-making around OA publishing and provides a model that can be adopted by other academic health sciences libraries.
    Keywords:  Data Visualization; Interactive Dashboards; Open Access Publishing; Scholarly Communications
    DOI:  https://doi.org/10.5195/jmla.2026.2340
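    The core of such a dashboard is a filter-and-count aggregation over exported publication records. The sketch below is a minimal, hypothetical Python illustration of that logic, assuming a flat record layout with "year", "school", and "oa_model" fields; these field names and the helper function are illustrative assumptions, not the actual HSHSL/Power BI data model.

    ```python
    from collections import Counter

    def summarize_oa_trends(records, school=None, year=None):
        """Count publications per OA model, optionally filtered by school and/or year.

        Hypothetical field names ("school", "year", "oa_model") stand in for
        whatever columns a Scopus export actually provides.
        """
        counts = Counter()
        for rec in records:
            if school is not None and rec.get("school") != school:
                continue
            if year is not None and rec.get("year") != year:
                continue
            counts[rec.get("oa_model", "unknown")] += 1
        return dict(counts)

    # Toy records in place of a real Scopus export.
    records = [
        {"year": 2024, "school": "Medicine", "oa_model": "gold"},
        {"year": 2024, "school": "Nursing", "oa_model": "hybrid"},
        {"year": 2025, "school": "Medicine", "oa_model": "gold"},
    ]
    print(summarize_oa_trends(records, year=2024))  # → {'gold': 1, 'hybrid': 1}
    ```

    The same filtering by school, OA model, and year that the interactive dashboard exposes as slicers reduces to these conditional counts; a BI tool simply recomputes them as the user changes filters.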
  20. Nurs Outlook. 2026 Apr 16;74(3): 102774. pii: S0029-6554(26)00097-7. [Epub ahead of print]
       BACKGROUND: Quality improvement (QI) is a type of inquiry distinct from research with separate implementation designs and evaluation methods; however, many publications conflate QI terminology when research designs and methods are used.
    PURPOSE: The purpose of this study was to objectively quantify misalignment in QI reports by assessing concordance between stated project implementation design and evaluation methods with actual designs and methods used.
    METHODS: A descriptive cross-sectional design was used. The first 2025 issue for all journals listed in the Nursing Journal Directory was sought for review. Articles in which QI was the stated design/method were extracted.
    DISCUSSION: Two hundred fifty journals and 5,398 articles were reviewed, of which 75 were labeled as QI reports. Seventy-three full-text reports were retrieved, of which only 19.2% (n = 14) used QI implementation designs (i.e., implemented practice changes through iterative cycles) and 6 (8.2%) appropriately evaluated data using QI methodology. Most articles (58.9%, n = 43) employed a pre/post quasi-experimental research design using inferential statistical tests to determine differences before and after an intervention was implemented.
    CONCLUSION: Results of this study quantify the misalignment between stated project implementation design and evaluation methods with actual methods and designs used. Academic program leaders, faculty, clinicians, ethical review board personnel, peer reviewers, and journal editors are encouraged to reflect on these results and ensure they can appropriately distinguish between QI and research designs and methods.
    Keywords:  Evaluation; Guidelines; Quality improvement; Reporting
    DOI:  https://doi.org/10.1016/j.outlook.2026.102774
  21. J Glob Health. 2026 Apr 15;16: 01003
      We address the growing concern of requests for post-submission and post-acceptance changes in authorship which our editorial team has observed in recent years. We emphasise that authorship order should be agreed upon by all contributors before submission, as all authors are expected to approve the final version of their manuscript and agree on their respective authorship positions. This editorial identifies seven categories of authorship change requests, ranging from adding or removing ordinary authors to modifying first or last authorship positions, or introducing group authorship. We consider some of these requests legitimate, such as adding author(s) who performed additional analyses based on reviewers' feedback or removing author(s) disagreeing with such revisions. We consider some others indicative of potentially concerning practices, particularly those involving changes to first or last authorship positions. We place this trend within a broader context of questionable research practices, including 'honorary' or 'gift' authorships driven by institutional power dynamics and the more recent emergence of 'paper mills'. These practices seem to be increasing in frequency with the rise of artificial intelligence (AI) and large publicly available data sets, which have lowered the barriers to producing large volumes of research of questionable value. Existing safeguards developed by organisations such as the International Committee of Medical Journal Editors (ICMJE) and the Committee on Publication Ethics (COPE) are helpful, but limited in their ability to prevent such practices. To address these challenges, the Journal of Global Health introduces the GUidelines for Authors on Requesting and DIsclosing changes in Authorship Nominations (GUARDIAN), which mandate full transparency whenever authorship changes occur after submission. 
Specifically, in the part of the standard 'acknowledgements' section at the end of each paper, where the mandatory 'authorship contributions' statement is typically detailed, the authors will be required to: (i) declare that the authorship byline has been changed since submission; (ii) disclose precisely what changes to the byline occurred between these two versions; (iii) provide an explanation for this change; and (iv) provide the final authorship contributions accordingly. Supporting documentation will also be archived by our editors and may be shared upon legitimate requests. The GUARDIAN aim to deter misconduct through transparency, protect early-career researchers from authorship pressure, and improve accountability in academic publishing. Together with our previously introduced Guidelines for Reporting Analyses of Big Data Repositories Open to the Public (GRABDROP) and other integrity initiatives, the GUARDIAN represent a proactive effort to safeguard credibility of authorships, while allowing legitimate adjustments whenever they are properly justified.
    DOI:  https://doi.org/10.7189/jogh.16.01003
  22. Cad Saude Publica. 2026;42: e00273825. pii: S0102-311X2026000105401. [Epub ahead of print]
      We analyze the perceptions and challenges related to open peer review (OPR) among contributors to Cadernos de Saúde Pública (CSP), in the context of Open Science practices. Seeking to understand how authors and reviewers perceive the adoption of this model, a cross-sectional survey was conducted between January and April 2025, with 1,280 respondents among nearly 3,000 Brazilian reviewers from the past three years. The questionnaire, developed on REDCap, consisted of 20 open- and closed-ended questions. Most respondents were female (59.4%), had a PhD degree (70.6%), and had ties to public institutions (55.9%), working in Collective Health research and teaching. As for OPR, while 23.1% were in favor of disclosing the names of authors and reviewers, 24.2% were opposed and 32.7% preferred intermediate answers, revealing caution. Respondents pointed out prior knowledge between authors and reviewers (52.7%) as the main source of discomfort, followed by fears about conflicts of interest and professional constraints. Results indicate that the CSP scientific community recognizes the benefits of OPR for transparency, but also underscore the need for clear guidelines, active editorial mediation, and participant protection. Model acceptance depends on its gradual and contextualized implementation, based on dialogue, training and recognition of review work. In conclusion, OPR can strengthen integrity and trust in science if accompanied by institutional responsibility and sensitivity to the specificities of Collective Health.
    DOI:  https://doi.org/10.1590/0102-311XEN273825
  23. Sports Med Arthrosc Rev. 2026 Apr 14.
      We extend our sincere thanks to our Guest Editors, who are the editors for each issue, and Associate Editors for helping us build an outstanding library of content. These articles, written by acknowledged experts, cover key topics in orthopedic sports medicine and arthroscopy. To further our mission of delivering in-depth analysis on important topics, we are expanding into social media and podcasting to better engage our community of readers and authors. To maximize the impact of these platforms, we are introducing two new optional features: Visual Abstracts and Auditory Abstracts. These additions cater to different learning styles and broaden the reach of our content. While participation is voluntary, both features offer authors a unique opportunity to present their work in a more engaging and resonant format, benefiting both authors and readers through enhanced visibility and accessibility.
    Keywords:  Podcasts; Social Media
    DOI:  https://doi.org/10.1097/JSA.0000000000000464