bims-skolko Biomed News
on Scholarly communication
Issue of 2025-09-21
twenty-two papers selected by
Thomas Krichel, Open Library Society



  1. OTO Open. 2025 Jul-Sep;9(3):e70158
       Objective: To evaluate the clarity of retraction notices in otolaryngology journals and examine the relationship between retraction notice clarity and improper post-retraction citations.
    Study Design: A retrospective analysis of retracted articles in otolaryngology journals from journal inception to August 1, 2024.
    Setting: Articles were selected from leading otolaryngology journals with citation data retrieved from major academic databases.
    Methods: Retracted articles were identified using the Retraction Watch Database. Citation patterns were analyzed through Google Scholar and Scopus. Retraction notices were evaluated for adherence to Committee on Publication Ethics (COPE) guidelines. The study included 80 retracted articles, with 1398 citations in Google Scholar and 714 in Scopus. Primary outcomes included the proportion of retraction notices meeting COPE guidelines and the rate of improper post-retraction citations.
    Results: Retraction notices adhered to COPE guidelines in 52.5% of cases (N = 42). Among 80 retracted articles, only 42.5% were labeled as retracted across all platforms. Alarmingly, 98.2% of citations that occurred after articles were retracted did not acknowledge their retracted status. Clearer retraction notices correlated with fewer improper citations. Proper labeling across all platforms led to a 52.89% reduction in citation rates, whereas any missing labels resulted in only a 28.72% reduction.
    Conclusion: Clarity in retraction notices significantly impacts improper citation rates. Standardized, prominently displayed retraction notices adhering to ethical guidelines can reduce misinformation. Strengthening retraction practices and improving database integration are recommended to enhance the effectiveness of retractions and maintain scientific integrity.
    Keywords:  ethical publishing; ethics; misinformation; otolaryngology; retractions
    DOI:  https://doi.org/10.1002/oto2.70158
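    A note on the arithmetic in this entry: the headline figures (98.2% of post-retraction citations ignoring retracted status, and differing citation-rate reductions by labeling completeness) reduce to proportions over a citation-level table. The sketch below shows one way such a tally could be computed in Python; the file and column names are invented for illustration and are not the authors' actual dataset or code.

      import pandas as pd

      # Hypothetical citation-level table: one row per citation of a retracted
      # article, with invented column names (not the study's data dictionary):
      #   cited_after_retraction   (0/1: citation occurred after the retraction date)
      #   acknowledges_retraction  (0/1: citing paper notes the retracted status)
      #   labeled_on_all_platforms (0/1: retraction flagged on every platform)
      citations = pd.read_csv("citations.csv")

      # Share of post-retraction citations that fail to acknowledge the retraction.
      post = citations[citations["cited_after_retraction"] == 1]
      improper_rate = 1 - post["acknowledges_retraction"].mean()
      print(f"Improper post-retraction citations: {improper_rate:.1%}")

      # How often fully labeled vs. incompletely labeled articles keep being
      # cited after retraction (a rough proxy for the reported reductions).
      print(citations.groupby("labeled_on_all_platforms")["cited_after_retraction"].mean())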
  2. EXCLI J. 2025;24:1019-1022
      Post-publication scrutiny of the literature occasionally reveals errors that have slipped past peer reviewers and editors. Microscopes, as used in scanning electron microscopy (SEM), form an integral part of the evidence-based methodology of many biomedical studies. A 2025 preprint (DOI: 10.31219/osf.io/4wqcr) claimed that a body of literature in indexed and ranked journals may have published potentially incorrect SEM-based evidence, noting that in about 2400 cases the model or maker of the SEM instrument stated in the text (e.g., in the methodology section) does not match the information shown in the figures or micrographs. One possible explanation is that those analyses and/or equipment were outsourced to third-party services without the outsourcing being declared. Homing in on a subset of that preprint's 2400 cases, this study examined 23 of the 94 flagged papers published in the mega open access journal Heliyon, comparing textual descriptors of SEM in the methods sections against SEM descriptors in the figures' micrographs. Only two papers showed an unequivocal discord between textual and figure descriptors at the level of model and maker, while 16 of the 23 papers had no methodological description of SEM at all. Heliyon editors need to investigate these omissions and discrepancies and correct the articles wherever applicable. See also the graphical abstract (Fig. 1).
    Keywords:  SEM; TEM; ethics; honesty; medical communication; scientific ethos; truth
    DOI:  https://doi.org/10.17179/excli2025-8605
  3. Anal Chem. 2025 Sep 15.
      Artificial intelligence (AI) is increasingly present across all phases of analytical chemistry, not only in experimental workflows but also in the way scientific writing is produced, evaluated, and published. This perspective offers a critical reflection on the growing use of AI tools as writing copilots in the field, focusing on novel yet underexplored practices such as literature review support, manuscript drafting, and AI-assisted peer review. While tools like ChatGPT, SciSpace, and Grammarly are becoming commonplace in manuscript preparation, their integration also raises important concerns about authorship transparency, originality, and the homogenization of scientific voice. The article highlights both the opportunities and limitations of these technologies. A comparative analysis is presented to summarize the main strengths, weaknesses, opportunities, and threats associated with AI use in scientific communication. This work advocates for the responsible adoption of these tools, the development of ethical guidelines, and the inclusion of AI training in analytical chemistry curricula. By encouraging the scientific community to reflect on these changes collectively, we hope to ensure that AI enhances, rather than undermines, the critical thinking and creativity that define scientific authorship.
    DOI:  https://doi.org/10.1021/acs.analchem.5c03767
  4. Front Artif Intell. 2025;8:1644098
      The use of Generative AI (GenAI) in scientific writing has grown rapidly, offering tools for manuscript drafting, literature summarization, and data analysis. However, these benefits are accompanied by risks, including undisclosed AI authorship, manipulated content, and the emergence of papermills. This perspective examines two key strategies for maintaining research integrity in the GenAI era: (1) detecting unethical or inappropriate use of GenAI in scientific manuscripts and (2) using AI tools to identify mistakes in scientific literature, such as statistical errors, image manipulation, and incorrect citations. We reviewed the capabilities and limitations of existing AI detectors designed to differentiate human-written text (HWT) from machine-generated text (MGT), highlighting performance gaps, genre sensitivity, and vulnerability to adversarial attacks. We also investigate emerging AI-powered systems aimed at identifying errors in published research, including tools for statistical verification, citation validation, and image manipulation detection. Additionally, we discuss recent publishing industry initiatives to counter AI-driven papermills. Our investigation shows that while these developments are not yet sufficiently accurate or reliable for use in academic assessment, they mark early but promising steps toward scalable, AI-assisted quality control in scholarly publishing.
    Keywords:  AI detection; artificial intelligence; generative AI; research ethics; research integrity; responsible research
    DOI:  https://doi.org/10.3389/frai.2025.1644098
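    As a purely illustrative baseline for the human-written vs. machine-generated distinction discussed in this entry, the sketch below fits a TF-IDF bag-of-words classifier on a few toy placeholder sentences. It does not correspond to any of the detectors reviewed in the paper, which rely on richer signals (perplexity, watermarks, stylometry) and far larger training corpora.

      from sklearn.feature_extraction.text import TfidfVectorizer
      from sklearn.linear_model import LogisticRegression
      from sklearn.pipeline import make_pipeline

      # Toy placeholder sentences standing in for labeled training collections.
      human_texts = [
          "We measured binding affinity in three biological replicates.",
          "Samples were stored at -80 degrees Celsius until sectioning.",
      ]
      machine_texts = [
          "In conclusion, this comprehensive study underscores the multifaceted importance of the findings.",
          "Furthermore, the results highlight significant implications for future research directions.",
      ]
      texts = human_texts + machine_texts
      labels = [0] * len(human_texts) + [1] * len(machine_texts)  # 0 = human, 1 = machine

      # Word/bigram frequencies feeding a logistic regression classifier.
      detector = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)),
                               LogisticRegression(max_iter=1000))
      detector.fit(texts, labels)
      print(detector.predict_proba(
          ["Moreover, these findings comprehensively underscore their broader significance."]))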
  5. Res Ethics. 2025 Jun 21.
      Researchers have been using generative artificial intelligence (GenAI) to support writing manuscripts for several years now. However, as GenAI evolves and scientists are using it more frequently, the case for mandatory disclosure of GenAI for writing assistance continues to diverge from the initial justifications for disclosure, namely (1) preventing researchers from taking credit for work done by machines; (2) enabling other researchers to critically evaluate a manuscript and its specific claims; and (3) helping editors determine if a submission satisfies their editorial policies. Our initial position (communicated through previous publications) regarding GenAI use for writing assistance was in favor of mandatory disclosure. Nevertheless, as we show in this paper, we have changed our position and now support instituting a voluntary disclosure policy because currently (1) the credit due to machines for assisting researchers is moving below the threshold of requiring recognition; (2) it is impractical (if not impossible) to accurately specify what parts of the text are human-/GenAI-generated; and (3) disclosures could increase biases against non-native speakers of the English language and compromise the integrity of the peer review system. Consequently, we argue, it should be up to the authors of manuscripts to disclose their use of GenAI for writing assistance. For example, in disciplines where writing is the hallmark of originality, or when authors believe disclosure is beneficial, a voluntary checkbox in manuscript submission systems, visible only after publication (rather than a free-text note in the manuscripts), would be preferable.
    Keywords:  artificial intelligence; disclosure; editorial policies; peer review; publication ethics; writing
    DOI:  https://doi.org/10.1177/17470161251345499
  6. J Am Pharm Assoc (2003). 2025 Sep 12:102925. pii: S1544-3191(25)00604-1. [Epub ahead of print]
      The process of responding to peer review comments can sometimes be confusing and emotional. This article aims to provide guidance to authors on how to navigate this aspect of the academic publishing process. It details the peer review process generally, formatting responses, negotiating timelines, leveraging co-authors for assistance, navigating conflicts and disagreements, dealing with unprofessional reviews, and handling manuscript rejections. The authors emphasize the importance of constructive engagement with reviewer feedback, even when challenging, and offer strategies for managing the submission process effectively.
    Keywords:  journal article; peer review; research
    DOI:  https://doi.org/10.1016/j.japh.2025.102925
  7. J Bioeth Inq. 2025 Sep 19.
      Peer review is the cornerstone of scholarly publishing, ensuring the quality and credibility of academic research. As Article Processing Charges (APCs) continue to rise, many journals provide only symbolic rewards to reviewers, such as certificates of appreciation and/or minimal discount vouchers, raising ethical concerns about fairness and the marginalization of scholarly labour. This commentary explores the disparity between the financial gains of journals and the absence of compensation for reviewers, who are crucial to maintaining research standards. It questions whether the current model appropriately recognizes reviewers' contributions and advocates for genuine compensation structures, including financial rewards, substantial reductions in APCs, and professional recognition. Additionally, the article highlights the impact of these inequities on early-career researchers and scholars from less affluent regions, suggesting that equitable compensation could improve the sustainability and efficiency of the peer review process. By addressing these ethical concerns, scholarly publishing can better support the essential work of reviewers while fostering a more just and inclusive scholarly environment.
    Keywords:  Academic; Article processing fee; Compensation; Editor; Peer review; Publishing
    DOI:  https://doi.org/10.1007/s11673-025-10459-y
  8. Front Public Health. 2025;13:1676987
      
    Keywords:  data sharing; global health; infectious disease; pandemic; surveillance
    DOI:  https://doi.org/10.3389/fpubh.2025.1676987
  9. Integr Med Res. 2026 Mar;15(1):101229
       Background: Data sharing can reduce research waste, enable researchers to avoid duplicating efforts, and allow resources to be effectively directed towards addressing new clinical questions. This study aimed to evaluate data sharing practices and identify associated factors in acupuncture meta-analyses.
    Methods: A PubMed search identified meta-analyses of any type of acupuncture (April 2022 to December 2023). Journal guidelines were classified by data sharing policies, and their associations with data availability statements (DASs) and data availability were examined using chi-squared tests or generalised estimating equation analyses.
    Results: Of 3713 studies, 300 were included. Articles published in journals with data sharing policies were more likely to include DASs compared to those without (75.8% vs. 21.7%, p < 0.001). DASs were more frequently present when journals mandated sharing rather than merely recommended it (94.6% vs. 59.2%, p < 0.001). While no significant association was found between the presence of DASs or sharing policies and data availability, articles from mandating journals had higher odds of data provision than those from recommending journals (OR 1.58, 95% CI [1.11, 2.25]). Non-Complementary and Alternative Medicine (CAM) journal articles outperformed those in CAM journals in DAS inclusion (79.1% vs. 49.3%, p < 0.001), though data accessibility was comparable (71.6% vs. 69.3%, p = 0.826). Impact factor was not significantly associated with any aspect of data sharing practices (all p > 0.05).
    Conclusions: Mandatory journal data sharing policies were associated with more frequent inclusion of DASs and provision of raw data, but neither a policy nor a DAS alone ensured reusable datasets. Mandatory policies paired with adequate training and support may help improve transparency, promote reusability and reproducibility of results, and reduce research waste.
    Keywords:  Acupuncture; Data sharing; Meta-analysis; Reproducibility; Transparency
    DOI:  https://doi.org/10.1016/j.imr.2025.101229
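    The analyses named in this entry (chi-squared tests and generalised estimating equations, with odds ratios and confidence intervals) can be illustrated with standard Python libraries. The sketch below is one plausible setup under invented file and column names; the paper does not report its exact model specification, so treat this as an assumption-laden illustration rather than the authors' analysis.

      import numpy as np
      import pandas as pd
      import statsmodels.api as sm
      import statsmodels.formula.api as smf
      from scipy.stats import chi2_contingency

      # Hypothetical extraction sheet: one row per included meta-analysis.
      # Column names are illustrative, not the authors' codebook:
      #   journal_policy  ("mandates", "recommends", or "none")
      #   has_das         (0/1: article includes a data availability statement)
      #   data_available  (0/1: raw data could actually be obtained)
      #   journal_id      (identifier of the publishing journal)
      df = pd.read_csv("included_studies.csv")

      # Chi-squared test: any sharing policy vs. presence of a DAS.
      table = pd.crosstab(df["journal_policy"] != "none", df["has_das"])
      chi2, p, dof, expected = chi2_contingency(table)
      print(f"chi-squared = {chi2:.2f}, p = {p:.4f}")

      # GEE with an exchangeable working correlation to allow for multiple
      # articles from the same journal; one plausible specification only.
      model = smf.gee("data_available ~ C(journal_policy)", groups="journal_id",
                      data=df, family=sm.families.Binomial(),
                      cov_struct=sm.cov_struct.Exchangeable())
      result = model.fit()
      print(np.exp(result.params))      # odds ratios
      print(np.exp(result.conf_int()))  # 95% confidence intervals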
  10. Proc Biol Sci. 2025 Sep;292(2055):20251394
      Data and code are essential for ensuring the credibility of scientific results and facilitating reproducibility, areas in which journal sharing policies play a crucial role. However, in ecology and evolution, we still do not know how widespread data- and code-sharing policies are, how accessible they are, and whether journals support data and code peer review. Here, we first assessed the clarity, strictness and timing of data- and code-sharing policies across 275 journals in ecology and evolution. Second, we assessed initial compliance with journal policies using submissions from two journals: Proceedings of the Royal Society B (Mar 2023-Feb 2024: n = 2340) and Ecology Letters (Jun 2021-Nov 2023: n = 571). Our results indicate the need for improvement: across 275 journals, 22.5% encouraged and 38.2% mandated data-sharing, while 26.6% encouraged and 26.9% mandated code-sharing. Journals that mandated data- or code-sharing typically required it for peer review (59.0% and 77.0%, respectively), which decreased when journals only encouraged sharing (40.3% and 24.7%, respectively). Our evaluation of policy compliance confirmed the important role of journals in increasing data- and code-sharing but also indicated the need for meaningful changes to enhance reproducibility. We provide seven recommendations to help improve data- and code-sharing, and policy compliance.
    Keywords:  journal policy; open science; peer review; replicability; reproducibility; transparency
    DOI:  https://doi.org/10.1098/rspb.2025.1394
  11. Res Social Adm Pharm. 2025 Sep 16. pii: S1551-7411(25)00453-X. [Epub ahead of print]
       INTRODUCTION: The Granada Statements were developed to improve the quality and visibility of pharmacy practice research by encouraging consistency in reporting. However, little is known about how these guidelines are interpreted in low- and middle-income countries (LMICs), where professional roles and services may differ. Examining these perspectives can highlight both barriers and opportunities for wider uptake.
    AIM: This study explored how clinical and social pharmacy researchers perceive the Granada Statements, focusing on the challenges, enablers, and strategies that could enhance their application in resource-limited contexts.
    METHOD: A qualitative design was adopted, using focus group discussions with researchers in Türkiye. Data were thematically analyzed through collaborative coding and interpretation. Special attention was given to the Statements' key areas, including terminology, journal selection, perceptions of relevance, and proposed improvements.
    RESULTS: Participants regarded the Statements as a useful framework for clarifying expectations, promoting consistency, and stimulating dialogue about research quality. Barriers included difficulties applying standardized terminology in evolving service contexts, challenges in translating technical terms, undervaluation of LMIC research, financial constraints in open access publishing, and discouraging peer review experiences. Suggested enablers included templates, illustrative examples, modular adoption, culturally sensitive glossaries, and training with editors. A global classification framework for benchmarking pharmacy practice was also proposed.
    CONCLUSION: This study shows that the Granada Statements have the potential to act not only as reporting guidance but also as a framework for more intentional, theory-driven, and globally relevant pharmacy practice research. Flexibility, contextual sensitivity, and institutional support are key to achieving this vision.
    Keywords:  Clinical pharmacy; Developing countries; Pharmacy practice; Pharmacy research; Publishing standards; Qualitative research; Social pharmacy
    DOI:  https://doi.org/10.1016/j.sapharm.2025.09.004
  12. J Coll Med S Afr. 2024;2(1):64
      Article processing charges (APCs) for open access (OA) journals perpetuate inequities in scientific knowledge. High APCs systematically restrict low- and middle-income country (LMIC) researchers from contributing to research knowledge, preventing the dissemination of high-value, high-quality, and sustainable LMIC-driven solutions. Otolaryngology journals are no exception. The authors propose solutions to rectify the inequities that APCs create in academic publishing, including innovative approaches adopted by several major journals. Addressing these inequities requires medical society and journal editorial board leadership to ensure equitable APC policies.
    Keywords:  article processing charges; global otolaryngology; global surgery; open access; research equity
    DOI:  https://doi.org/10.4102/jcmsa.v2i1.64
  13. Nature. 2025 Sep;645(8081):809-812
      
    Keywords:  Careers; Peer review; Publishing; Scientific community
    DOI:  https://doi.org/10.1038/d41586-025-02922-y
  14. Perspect Med Educ. 2025;14(1):560-569
       Introduction: The journey through submission, rejection, and eventual publication of scholarly work tests academic researchers' resilience. Dealing with rejection without succumbing to burnout or impostor syndrome requires a growth mindset. This paper analyses one author's manuscript rejections over five years and offers recommendations to academic researchers on dealing with manuscript rejection.
    Methods: This retrospective longitudinal mixed-methods study included one author's rejected submissions from 2019 to 2023. Quantitative data on manuscript rejection characteristics (number of rejections, subsequent publication, submission field and research type, journal location and impact factor, and nature of rejection: desk rejection, rejection after review, or rejection after revision) were analysed descriptively. Qualitative data (narrative text indicating reasons for desk rejection) were analysed thematically. Ethics approval was obtained.
    Results: Eighty submissions of 47 manuscripts were rejected, including 65% desk rejections. Most manuscripts were rejected once (60%) or twice (26%), and 77% were subsequently published. Most submissions were to journals in Africa (56%), concerned postgraduate student research (63%), and were in the field of medicine (71%). Themes related to reasons for desk rejection included not meeting journal requirements (scope, focus, criteria, or priority), manuscript inadequacy (novelty, relevance, methodology, or contribution), and ethical issues (similarity indices or ethics documentation).
    Discussion: This study of manuscript rejections received by one author over five years revealed that most rejected manuscripts were subsequently published and that desk rejection was the most common form. Our findings support the literature on normalizing and destigmatizing rejection and on bolstering resilience, so that academic researchers can manage both the technical, manuscript-related revisions and the inevitable emotional responses to rejection, ensuring healthy longevity in their scholarly careers.
    DOI:  https://doi.org/10.5334/pme.1727