bims-skolko Biomed News
on Scholarly communication
Issue of 2026-04-26
39 papers selected by
Thomas Krichel, Open Library Society



  1. Naunyn Schmiedebergs Arch Pharmacol. 2026 Apr 23.
      Science is frequently imagined as a domain untouched by human preference, where truth emerges cleanly from method. This paper challenges that image directly. Bias is a family of distortions that cause results to be skewed or unfair, so they do not accurately represent the truth. Bias settles quietly into the scaffolding of a study, shaping what gets measured, what gets ignored, and ultimately what gets believed. Bias enters research from multiple directions. At the cognitive level, confirmation bias, anchoring bias, and availability bias distort how evidence is gathered and interpreted. At the institutional level, publication bias filters the scientific record toward positive findings, while funding relationships and disciplinary hierarchies shape which questions are considered worth asking in the first place. At the cultural level, researchers' values, positionalities, and social locations color every methodological choice, often invisibly. No stage of inquiry is immune. Objectivity, this paper argues, is best understood not as an achievable state but as a regulative ideal. Transparency, reflexivity, and willingness to be corrected are its practical expressions. For Naunyn-Schmiedeberg's Archives of Pharmacology, which has navigated paper mill infiltration, AI-generated manuscripts, geographic citation disparities, and persistent gender imbalance in authorship, this carries concrete implications. It is suggested that authors, reviewers, and editors consider positionality statements, declarations, demographic monitoring, and methodological auditing as ongoing commitments. Recognizing bias is not a concession of failure. It is, paradoxically, the foundation on which trustworthy knowledge is built.
    Keywords:  Bias mitigation; Cognitive bias; Objectivity; Peer review; Publication bias; Reflexivity; Reproducibility; Value-ladenness
    DOI:  https://doi.org/10.1007/s00210-026-05362-1
  2. Nutr Health. 2026 Apr 21. 2601060261441168
      Introduction: Retractions aim to remove flawed science, yet delayed withdrawals allow erroneous nutrition research to influence reviews, guidelines, and textbooks. This bibliometric study examined causes, geographic patterns, and post-retraction impact of nutrition articles withdrawn between 2000 and 2025.
    Methods: PubMed/MEDLINE and PubMed Central were searched (30 April 2025) for records indexed as 'Retracted Publication' using human-nutrition MeSH terms. Inclusion required a human-nutrition focus and formal retraction notice. Scopus provided citation counts, document types, and author affiliations. Notices were coded into eight exclusive categories (κ = 0.86). Time-to-retraction (TTR) was the interval, in years, between publication and withdrawal. Descriptive analyses used JAMOVI.
    Results: Forty-five retracted articles were identified. Leading causes were methodological/statistical errors (18.9%), lack of ethics approval (15.1%), and data-integrity breaches (11.3%). Mean TTR was 5.7 ± 4.8 years (median = 4.2). China and the U.S. contributed most to absolute numbers (28% and 14%), while Greece and Nigeria had the highest retraction densities. The corpus accrued 1155 citations, 53% post-retraction; 62% of those appeared in reviews or meta-analyses. Citation half-life post-withdrawal was 3.3 years, and fewer than 10% included explicit warnings.
    Conclusion: Delayed retractions enable flawed findings to distort pooled estimates in secondary research. Persistent citations expose gaps in alert systems. Most retractions stem from methodological or ethical flaws, yet their influence lingers. To safeguard scientific integrity, retraction notices must be standardised and machine-readable, peer review must include automated checks, and graduate education should incorporate training in citation practices.
    Keywords:  Diet; health; nutrition; research integrity; retracted article
    DOI:  https://doi.org/10.1177/02601060261441168
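The time-to-retraction statistics reported above (mean ± SD, median) reduce to simple date arithmetic over publication and withdrawal dates. A minimal sketch of that computation, with hypothetical dates standing in for the study's records:

```python
from datetime import date
from statistics import mean, median, stdev

def years_between(published: date, retracted: date) -> float:
    """Time-to-retraction in years, using 365.25-day years."""
    return (retracted - published).days / 365.25

# Hypothetical records: (publication date, retraction date)
records = [
    (date(2010, 3, 1), date(2016, 7, 1)),
    (date(2015, 1, 15), date(2018, 2, 20)),
    (date(2008, 6, 10), date(2020, 9, 5)),
]

ttr = [years_between(p, r) for p, r in records]
print(f"mean TTR = {mean(ttr):.1f} ± {stdev(ttr):.1f} years, median = {median(ttr):.1f}")
```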
  3. Science. 2026 Apr 23. 392(6796): 347
      New investigation of ads for paper mills puts numbers to shady practice.
    DOI:  https://doi.org/10.1126/science.aei2482
  4. Environ Toxicol Chem. 2026 Apr 20. pii: vgag101. [Epub ahead of print]
      
    DOI:  https://doi.org/10.1093/etojnl/vgag101
  5. PLoS Biol. 2026 Apr;24(4): e3003746
      As team science grows, so do 'equal contribution' designations, yet this information is routinely hidden or lost, creating inequity in recognition and crediting. We must fix this problem, now.
    DOI:  https://doi.org/10.1371/journal.pbio.3003746
  6. Surgeon. 2026 Apr 21. pii: S1479-666X(26)00028-4. [Epub ahead of print]
      
    Keywords:  Academic publishing; Authorship; Research ethics; Surgical training
    DOI:  https://doi.org/10.1016/j.surge.2026.04.003
  7. Ear Nose Throat J. 2026 Apr 20. 1455613261443236
      
    Keywords:  bibliometrics; medical student publications; single-institution publication impact
    DOI:  https://doi.org/10.1177/01455613261443236
  8. BJGP Open. 2026 Apr 21. pii: BJGPO.2026.0048. [Epub ahead of print]
      
    Keywords:  Ethics; General practitioners; Primary Healthcare; Research methods
    DOI:  https://doi.org/10.3399/BJGPO.2026.0048
  9. Front Res Metr Anal. 2026 ;11 1781697
       Background: The rising complexity of publication ethics, particularly authorship disputes, necessitates exploring Large Language Models (LLMs) as potential evaluative tools. This study compares the performance of Google Gemini 2.5 Flash and DeepSeek-V3.2 against expert Committee on Publication Ethics (COPE) forum responses.
    Methods: A cross-sectional analysis including 12 COPE authorship and contributorship cases was conducted using three prompting strategies: Minimal, Deterministic, and Stochastic. Responses were scored across seven domains on a 5-point Likert scale (1 = poor, 5 = excellent) by independent raters.
    Results: Both LLMs achieved perfect scores (5 ± 0) in Actionability of Recommendations and high marks in Safety and Avoidance of Hallucination (4.88 ± 0.33). In the Consistency with COPE Principles domain, DeepSeek performed slightly better than Gemini (4.45 ± 1.0 vs. 4.12 ± 1.29), while Gemini showed a better Overall Appropriateness (4.03 ± 0.98 vs. 3.82 ± 1.29); neither difference was statistically significant. Both models struggled most with Identification of Ethical Issues (Gemini: 3.91 ± 1.33; DeepSeek: 3.82 ± 1.29). Under Minimal prompts, Gemini's ethical identification was lower (3.55 ± 1.44) compared to Deterministic/Stochastic prompts (4.09 ± 1.3). Qualitatively, Gemini recorded an 8% major disagreement rate with COPE, while DeepSeek had a 16% combined (minor and major) disagreement rate. Mean similarity scores to COPE forum experts were approximately 4 for both models. Both models missed specific legal/copyright nuances but provided unique "value-add" strategies, such as author disassociation statements and editorial de-escalation training, not present in original COPE forum advice.
    Conclusion: LLMs demonstrated a high degree of alignment with COPE expert ethical reasoning. While they possess a "legal blind spot," their ability to provide actionable and clear guidance, optimized through structured prompting, makes them valuable supplementary tools for journal editors.
    Keywords:  COPE guidelines; authorship and contributorship; large language models; prompt engineering; publication ethics
    DOI:  https://doi.org/10.3389/frma.2026.1781697
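The per-domain figures above are means and standard deviations over independent raters' 5-point Likert scores. A sketch of that aggregation, with hypothetical ratings and sample standard deviation assumed:

```python
from statistics import mean, stdev

# Hypothetical ratings from four independent raters (1 = poor, 5 = excellent)
ratings = {
    "Actionability of Recommendations": [5, 5, 5, 5],
    "Safety and Avoidance of Hallucination": [5, 5, 4, 5],
    "Identification of Ethical Issues": [4, 2, 5, 4],
}

for domain, scores in ratings.items():
    # Guard against a single-rater domain, where sample SD is undefined
    sd = stdev(scores) if len(scores) > 1 else 0.0
    print(f"{domain}: {mean(scores):.2f} ± {sd:.2f}")
```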
  10. Nurs Womens Health. 2026 Apr 17. pii: S1751-4851(26)00069-3. [Epub ahead of print]
      Artificial intelligence use in publishing has benefits, but concerns around intellectual ownership, authorship, and scholarly authenticity must be addressed.
    DOI:  https://doi.org/10.1016/j.nwh.2026.03.001
  11. Neurosurg Pract. 2026 Jun;7(2): e000230
       BACKGROUND AND OBJECTIVES: Neurosurgery Publications encourages the creation of graphical abstracts to accompany published articles. The goal of this study was to develop a pipeline for the automatic conversion of Neurosurgery Publications articles into graphical abstracts using Cascading Style Sheets (CSS) templates and iterative prompting of a frontier vision language model and to conduct a human evaluation of this pipeline.
    METHODS: We developed an automated pipeline to convert extracted manuscript content into standardized graphical abstracts. The pipeline implements a custom CSS profile designed to match existing journal standards. Using Claude Sonnet-3.5, we generated structured hypertext markup language summaries organized into 6 sections: Objectives, Background, Methods, Results, Discussion, and Conclusion. The model selected up to 2 representative figures per manuscript based on caption analysis. We evaluated performance using 100 randomly selected articles published between 2020 and 2024 (95 from Neurosurgery, 4 from Operative Neurosurgery, 1 from Neurosurgery Practice). Three Editorial Review Board members independently assessed abstracts using 3 binary criteria: (1) proper formatting, (2) factual accuracy, and (3) visual appeal.
    RESULTS: Generated graphical abstracts achieved proper formatting in 85% of cases (95% CI: 76.7%-90.7%), factual accuracy in 99% (95% CI: 94.4%-99.9%), and visual appropriateness in 82% (95% CI: 73.3%-88.3%). Overall, 70% of abstracts (95% CI: 60.5%-78.1%) met all 3 criteria and were deemed "publication ready" without manual intervention. Error analysis revealed poor figure selection (40.0%) as the most common failure mode, followed by title replacement errors from PDF extraction (26.7%).
    CONCLUSION: Our artificial intelligence-CSS pipeline demonstrates the feasibility of automating graphical abstract generation for neurosurgical manuscripts, achieving publication-ready quality in 70% of cases with 99% factual accuracy. This technology offers a scalable augmentation tool that can reduce the design burden for authors, enhancing visual scientific communication in neurosurgical publishing while complementing human expertise.
    Keywords:  Cascading Style Sheets templates; large language models; medical publishing; scientific communication; vision-language models; visual abstracts
    DOI:  https://doi.org/10.1227/neuprac.0000000000000230
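The pipeline above renders structured section summaries into a journal-styled HTML template. A toy sketch of that final assembly step follows; the six section names come from the abstract, but the markup and CSS here are illustrative assumptions, not the journal's actual profile:

```python
SECTIONS = ["Objectives", "Background", "Methods", "Results", "Discussion", "Conclusion"]

# Minimal CSS standing in for the journal's custom profile (illustrative only)
CSS = """
.ga { font-family: sans-serif; display: grid; grid-template-columns: 1fr 1fr; gap: 8px; }
.ga section { border: 1px solid #ccc; padding: 8px; }
.ga h2 { font-size: 14px; margin: 0 0 4px; }
"""

def render_graphical_abstract(summaries: dict) -> str:
    """Assemble a six-section HTML graphical abstract from model-generated summaries."""
    body = "\n".join(
        f"<section><h2>{name}</h2><p>{summaries.get(name, '')}</p></section>"
        for name in SECTIONS
    )
    return f"<style>{CSS}</style>\n<div class='ga'>\n{body}\n</div>"

html = render_graphical_abstract({"Objectives": "Automate graphical abstracts."})
```

In the published pipeline this step would consume the vision-language model's structured summaries; here a one-entry dict stands in for them.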
  12. J Eval Clin Pract. 2026 Apr;32(3): e70455
      
    Keywords:  ChatGPT; artificial intelligence; medical papers; non‐native; writing
    DOI:  https://doi.org/10.1111/jep.70455
  13. BMC Psychol. 2026 Apr 22.
      
    Keywords:  AI-assisted writing; Educational technology; Scale development; Second language (L2) writing; Self-efficacy
    DOI:  https://doi.org/10.1186/s40359-026-04503-8
  14. Elife. 2026 Apr 23. pii: e102619. [Epub ahead of print]
      The process of publishing a research article in a scientific journal inevitably involves revising the original version of the article to respond to the concerns raised by peer reviewers. In this article we describe a course module that introduces MSc students at Utrecht University in the Netherlands to this part of the publication process. During the module the students and an invited speaker actively discuss the revision process for a recent article by the speaker. Feedback from students and speakers on the module - which could be readily transferred to other courses in the life and biomedical sciences - has been largely positive.
    Keywords:  Point of View; early-career researchers; genetics; genomics; medicine; peer review; scientific publishing
    DOI:  https://doi.org/10.7554/eLife.102619
  15. Vet Evid. 2024 Jul-Sep;9(3): pii: vetevid-09-3-682. [Epub ahead of print]
      No matter where you are in the world, you have unrestricted access to the evidence published in Veterinary Evidence. You can submit to, read and share our peer-reviewed content for free and use it to enhance the quality of care you provide to animals.
    Keywords:  academic publishing; drug registration; pharmaceutical companies
    DOI:  https://doi.org/10.18849/ve.v9i3.682
  16. Int J Popul Data Sci. 2025 ;10(2): 2960
       Introduction: Open sharing of research methods and software code is fundamental to open science principles and reproducible research practices and has long been the norm in some scientific disciplines. Increasingly, scientific publishers are introducing policies to encourage or mandate sharing of research protocols and analytical code. Code sharing is especially important when research data cannot be shared, as is often the case in research using population data. However, the prevalence of protocol and code sharing in population data science research has been underexplored.
    Objectives: To assess open science practice usage by authors in real world evidence (RWE) research published in the International Journal of Population Data Science (IJPDS).
    Method: We reviewed RWE research articles publishing estimates of associations in the IJPDS from January 2019 to October 2024. We determined the proportion of published articles reporting (i) a link to a study protocol, (ii) a link to a pre-registered study protocol, (iii) a statement about the availability of the data, (iv) a link to the analytical code, and (v) reference to a reporting checklist or guideline.
    Results: None of the 41 eligible articles met all five open science domains. One article included a link to the study protocol and none cited a pre-registered protocol. Fourteen (34%) articles included a statement about data availability. No articles included a link to the analytical code, although one included it in supplementary material and two indicated availability on request. Five (12%) articles referred to using a reporting checklist. There was no clear evidence of increasing adoption of open science practices over time.
    Conclusions: Researcher alignment with international best practice for open science was poor among RWE articles published in IJPDS. Potential solutions to encourage an open science culture include increasing awareness through training and education, building Communities of Practice, providing incentives and implementing open science publication policies.
    Keywords:  data science; data sharing; open science; real world evidence; reporting guidelines; reproducibility; software; transparency
    DOI:  https://doi.org/10.23889/ijpds.v10i2.2960
  17. Account Res. 2026 Apr 22. 2661668
       BACKGROUND: Open Access (OA) agreements were introduced to remove financial barriers to scientific dissemination and promote equity in knowledge access. As Article Processing Charges (APCs) have shifted from individual researchers to institutions, access to OA publishing has become an institutional asset, unevenly distributed across institutions, countries, and career stages.
    PURPOSE: This article introduces and defines value extraction in OA - the use of access to APC coverage as leverage to obtain authorship or corresponding authorship without proportional intellectual contribution - and examines it as a structurally enabled integrity risk distinct from previously described forms of authorship abuse.
    APPROACH: We conduct a conceptual and normative analysis of the mechanisms by which OA agreements interact with metric-driven academic evaluation systems and existing research integrity frameworks, identifying governance gaps and distributional inequities produced by these interactions.
    FINDINGS: Value extraction in OA is enabled by the convergence of three factors: centralized APC control within institutions, performance metrics that privilege publication counts and corresponding authorship, and integrity frameworks that treat publishing infrastructure as an ethically neutral background condition. Researchers at less-resourced institutions, early-career researchers, and scholars in the Global South face heightened vulnerability. Existing authorship guidelines fail to address mechanisms in which infrastructural access - rather than hierarchy or prestige - functions as leverage for academic credit.
    CONCLUSIONS: Safeguards are needed at institutional, publisher, and systemic levels, including procedural firewalls between APC decisions and authorship documentation, publisher-level monitoring of authorship patterns, and reform of evaluation frameworks to decouple infrastructural access from academic credit. Future research should investigate the prevalence of value extraction using bibliometric and network-based screening approaches.
    Keywords:  Open Access; Open Access agreements; authorship ethics; evaluation metrics; research integrity
    DOI:  https://doi.org/10.1080/08989621.2026.2661668
  18. Rev Esp Cir Ortop Traumatol. 2025 May-Jun;69(3): T225-T227. pii: S1888-4415(25)00065-7.
      
    DOI:  https://doi.org/10.1016/j.recot.2025.03.011
  19. Reprod Fertil Dev. 2026 May 11. pii: RD26034. [Epub ahead of print]
      The field needs your data. Despite rapid progress in reproductive proteomics, a major barrier to scientific advancement remains the limited availability and transparency of proteomic datasets. Although more than 2000 sperm proteomics studies are indexed on PubMed, fewer than 414 datasets have been deposited in ProteomeXchange, leaving the majority of published findings effectively inaccessible for reanalysis. This Viewpoint highlights the urgent need for improved data stewardship, standardised quality control and open access to raw mass spectrometry files across reproductive biology. In this article, I outline how transparent false discovery rate control, true biological replication and clearly defined quantitative thresholds are essential for generating robust and interpretable proteomic outputs. I further discuss how interactive data platforms, such as ShinyApps, can substantially improve the accessibility and usability of these complex reproductive proteomic datasets. Using recent examples, I demonstrate how public data reanalysis can uncover species-conserved pathways, improve proteome coverage, validate biological functions and enable new discoveries and insights far beyond the aims of the original studies. Finally, I present a practical roadmap for authors, reviewers and journals to ensure that reproductive proteomics embraces the FAIR data principles, and moves towards a culture where sharing raw data, comprehensive metadata and interactive applications becomes standard practice. To support implementation, a concise checklist is provided to summarise key criteria for data availability, quality control and metadata reporting. Improving data accessibility and quality will not only strengthen individual studies, but will accelerate discovery and create a more robust, connected and future-proof foundation for reproductive biology.
    Keywords:  Shiny app; data reuse; data stewardship; open science; publicly accessible data; quality control; reproductive proteomics; roadmap
    DOI:  https://doi.org/10.1071/RD26034
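The "transparent false discovery rate control" called for above is commonly implemented in proteomics with the Benjamini-Hochberg step-up procedure over identification p-values. A generic sketch of that procedure, not the author's specific workflow:

```python
def benjamini_hochberg(pvalues: list, alpha: float = 0.05) -> list:
    """Return a keep/reject mask controlling the false discovery rate at level alpha."""
    m = len(pvalues)
    order = sorted(range(m), key=lambda i: pvalues[i])
    # Find the largest rank k with p_(k) <= (k/m) * alpha ...
    k = 0
    for rank, i in enumerate(order, start=1):
        if pvalues[i] <= rank / m * alpha:
            k = rank
    # ... then reject (keep) the k smallest p-values.
    keep = [False] * m
    for rank, i in enumerate(order, start=1):
        if rank <= k:
            keep[i] = True
    return keep

mask = benjamini_hochberg([0.001, 0.008, 0.039, 0.041, 0.6])
```

Note the step-up property: a p-value above its own threshold can still be kept if a larger-ranked p-value passes, which is why the scan records the largest passing rank before building the mask.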
  20. PLoS One. 2026 ;21(4): e0346938
       BACKGROUND: To investigate the endorsement of the 32-item Consolidated Criteria for Reporting Qualitative Research (COREQ) and Standards for Reporting Qualitative Research (SRQR) in the instructions for authors (IFA) of Chinese nursing journals. The awareness of Chinese editors of the COREQ and SRQR, together with their application and requirements for following the standards, were also investigated. These findings could assist the endorsement, application, and promotion of the COREQ and SRQR in Chinese nursing journals and improve the reporting quality of qualitative research in nursing.
    METHODS: Nursing journals were identified from the National Press and Publication Administration. The IFA and applications of the COREQ and SRQR were assessed. The editors of the journals were asked about their awareness of and demand for the COREQ checklist and SRQR standards, as well as their implementation at different stages of the publication process, including manuscript submission, editing, and peer review.
    RESULTS: A total of 29 nursing journals were included, and only 2 journals (6.9%, 2/29) mentioned the COREQ and SRQR in their IFA. Among the 24 surveyed editors, only 45.83% (11/24) and 33.33% (8/24) were aware of the COREQ and SRQR, respectively. None of the surveyed editors required authors to follow the COREQ/SRQR at the submission stage, applied the COREQ/SRQR during journal editing and processing, or asked reviewers to use the COREQ/SRQR during expert review.
    CONCLUSION: Nursing journals in China endorsing the COREQ and SRQR constitute a small percentage of the total. In addition, both awareness and application of the COREQ and SRQR were poor among nursing journal editors. Therefore, we strongly recommend that the China Periodicals Association undertake measures to encourage and support the endorsement of biomedical research reporting guidelines in nursing journals. Also, the education and training of journal editors, researchers, and medical students on biomedical research reporting guidelines should be strengthened.
    DOI:  https://doi.org/10.1371/journal.pone.0346938
  21. Neurosci Biobehav Rev. 2026 Apr 21. pii: S0149-7634(26)00150-8. [Epub ahead of print] 106693
      Crucial aspects of reproducible, replicable and reusable science include the responsiveness of study authors for clarifications and the availability of research data and analysis results. Getting in contact with authors and obtaining information or results is, however, not always straightforward. Here we report and discuss the issues and obstacles we faced when contacting authors of scientific papers with such requests. Our investigation rests on the results of a retrospective quantitative analysis of research data requests sent to authors of neuroimaging studies for a series of meta-analyses. Overall, only 52% of the requests received a reply, and only 29% contributed data or information that was relevant for the respective meta-analysis. Obtaining a response was less likely if (i) the request was sent to the contact e-mail address provided in the publication, (ii) behavioral data was requested, (iii) reminders had to be sent, or (iv) there was personal acquaintance with the contacted author. As expected, obtaining unpublished data or information from older publications was significantly more difficult than for more recent ones. We discuss possible reasons for the observed low response rates and limited sharing of information and conclude our account by providing suggestions to improve open-science practices and by pointing to a need for change in the academic system to foster better research data management for transparency and efficient reuse of results.
    Keywords:  data sharing; reproducibility; research synthesis; systematic review; transparency
    DOI:  https://doi.org/10.1016/j.neubiorev.2026.106693
  22. Med Teach. 2026 Apr 21. 1-6
      Research publications have become a defining metric for medical careers and institutional prestige. While publications are mandatory for faculty selection and promotion, institutional rankings are increasingly driven by research output. This intense, metric-driven environment has inadvertently created fertile ground for the rapid proliferation of predatory journals, leaving medical researchers-especially in the digital era-vulnerable to exploitation. In response to this growing concern, I present twelve practical and experience-based tips aimed at equipping researchers to recognize and avoid predatory journals. These insights are intended not only for early-career researchers navigating the pressures of 'publish or perish', but also for experienced faculty members who may be encountering, for the first time, the increasingly sophisticated strategies employed by predatory publishers.
    Keywords:  Fake journals; Predatory journals; Predatory publishing; Publication ethics; Scopus
    DOI:  https://doi.org/10.1080/0142159X.2026.2657386
  23. J Am Coll Clin Pharm. 2026 Feb;9(2): e70157
       BACKGROUND: Formal training in medical writing provides numerous benefits for pharmacy residents including the development of writing and scholarship fundamentals, enhancement of time and project management skills, and cultivation of mentoring relationships. Literature describing the outcomes of a dedicated medical writing rotation is limited. The objective of this study was to describe publication and citation outcomes associated with the implementation of a medical writing rotation for post-graduate year two (PGY2) pharmacy residents at a Veterans Affairs health care system.
    METHODS: All publication activities for residents that participated in the medical writing rotation between July 1, 2011 and June 30, 2024 were included for review. A literature search was conducted using the PubMed, Embase, and Scopus databases for each included resident, and both Scopus and individual journal websites were utilized to collect journal and publication level metrics.
    RESULTS: Forty PGY2 pharmacy residents (3.1 residents per year) participated in the medical writing rotation. Of those, 35 (87.5%) residents were successful in publishing at least one manuscript developed during the rotation, resulting in 32 total publications. The majority of publications were review articles (84.4%). The total number of times one of the publications was cited by another published article was 596, representing a median of 16.5 (range: 2-61) citations per publication. The calculated h-index for all publications from the rotation was 16. A median of 1 (range: 1-3) pharmacy preceptors were involved as a co-author for each publication with a total of 15 different preceptors co-authoring at least one publication. After the completion of the rotation, 28 of 38 (73.7%) residents authored at least one future publication.
    CONCLUSION: Implementation of a medical writing rotation helps residents develop foundational scholarship skills to make meaningful contributions to the medical literature while supporting preceptor development and resident collaboration.
    Keywords:  authorship; peer review; pharmacists; pharmacy residencies; publishing; scholarship
    DOI:  https://doi.org/10.1002/jac5.70157
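The h-index reported for the rotation's publications follows the standard definition: the largest h such that h of the publications have at least h citations each. A minimal sketch with hypothetical per-paper citation counts:

```python
def h_index(citations: list) -> int:
    """Largest h such that h papers have at least h citations each."""
    cites = sorted(citations, reverse=True)
    h = 0
    for i, c in enumerate(cites, start=1):
        if c >= i:
            h = i  # the i-th most-cited paper still has >= i citations
        else:
            break
    return h

# Hypothetical citation counts for ten publications
print(h_index([61, 40, 33, 25, 18, 16, 12, 9, 3, 2]))
```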
  24. Eco Environ Health. 2026 Jun;5(2): 100233
      To maximize the impact of our contributions, we strive to perfect our scientific writing. Much of the existing guidance on how to effectively structure reviews originates from anecdotal opinions and guidelines set out by the journals themselves. This makes it less clear what ultimately determines the number of citations of review papers. Citation frequencies partly depend on the topic of the review, and on the innovative nature of the ideas within the review. However, the language norms and the narrative flow within a review might also play an important role in the eventual acceptance of the ideas. Here, we analyzed the text of review papers published in 2020 in the top ten journals in ecology. Citation counts correlated with two of the four psychometrics tested, as well as the word count of the contributions, explaining an aggregate of 1.9% of total variation. We further observed relationships in citation counts with two descriptors of the article structure. We identify linguistic traits correlated with citation frequency in ecology, with potential relevance across other disciplines. A solid theoretical background on best practices in review writing would be transformative in terms of contributing to tools for further improving the impact of reviews, but also to assist their preliminary editorial evaluation.
    Keywords:  Computerized text analysis; Ecology and Evolution; Narrative arc; Psychometrics; Scientific writing
    DOI:  https://doi.org/10.1016/j.eehl.2026.100233
  25. Curr Med Res Opin. 2026 Apr 22. 1-18
       INTRODUCTION: The first iteration of Good Practice for Conference Abstracts and Presentations (GPCAP), published in 2019, set the baseline for general recommendations and expectations for scientific conference abstracts and presentations. While individual conference guidelines must be followed, these recommendations aim to provide principles and best practice for pharmaceutical company-sponsored research. The purpose of conferences is the prompt communication of new data for dissemination and discussion within the context of short deadlines and relevant audiences. These updated recommendations aim to provide support for all individuals with a vested interest in the communication of scientific data at conferences.
    PURPOSE: To provide principles and best practice covering the preparation and presentation of pharmaceutical-supported conference material. Feedback from the previous iteration, interviews with experts, and general revisions have been incorporated into these updated recommendations, aligning with Good Publication Practice (GPP) 2022.
    RESULTS: New sections have been added to cover topics that have risen in prominence since the first iteration: patient engagement, accessibility, and inclusivity; artificial intelligence; and enhanced content. Other sections cover authorship, copyright, citations, and encores, and have been updated accordingly.
    CONCLUSION: Conferences remain the key arena for communication and discussion of scientific data in real time as new developments within the scientific field continue to evolve rapidly. These updated recommendations provide principle-based, practical, insights-driven recommendations and suggestions on how to submit and present company-sponsored research with high standards and a commitment to consistency, transparency, and integrity for the scientific community. The aim is to make conferences the best they can be for all interested parties, within the context of pharmaceutical company-sponsored research.
    Keywords:  Accessibility; best practice; ethics; inclusivity; patient engagement; recommendations; scientific conferences
    DOI:  https://doi.org/10.1080/03007995.2026.2656541
  26. Arch Prev Riesgos Labor. 2026 Apr 20. 29(2): 113-114
      The recent inclusion of Archivos de Prevención de Riesgos Laborales in the Diamond Discovery Hub (DDH) represents public verification of a commitment this journal has maintained since its origins: that knowledge on occupational health and safety be accessible without economic or institutional barriers….
    DOI:  https://doi.org/10.12961/aprl.2026.29.2.700
  27. Tumori. 2026 Apr 19. 3008916261441119
      The current scientific ecosystem is characterized by a systemic crisis driven by the Publish or Perish culture and an exponential growth in publication volumes that outpaces the number of active researchers. Here, we highlight the limitations of traditional bibliometric indicators, such as the Impact Factor and H-index, which have become targets for manipulation and enable inflationary business models, including the proliferation of special issues. To address these distortions, we explore the transformative potential of Open Science and the adoption of FAIR data principles to move from a model based on blind trust to one rooted in verifiability. Furthermore, we examine innovative evaluation frameworks, such as the independent peer review model, and the integration of artificial intelligence through Technology Assisted Research Assessment (TARA), emphasizing that human judgment must remain central to ensuring research integrity. Ultimately, the transition from quantitative metrics to qualitative assessment is an ethical duty necessary to safeguard the credibility of oncological research and the quality of patient care.
    Keywords:  artificial intelligence; bibliometrics; oncology; open science; peer review; publication ethics; research integrity
    DOI:  https://doi.org/10.1177/03008916261441119
  28. J Eval Clin Pract. 2026 Apr;32(3): e70448
      
    Keywords:  clinical decision‐making; conflict of interest; ethics; meta‐analysis; publishing/standards; research integrity; transparency
    DOI:  https://doi.org/10.1111/jep.70448
  29. BMC Nurs. 2026 Apr 20.
      
    Keywords:  Academic promotion; Descriptive phenomenology; Jordan; Nursing faculty; Publication stress; Research integrity
    DOI:  https://doi.org/10.1186/s12912-026-04529-8