bims-skolko Biomed News
on Scholarly communication
Issue of 2025-04-06
23 papers selected by
Thomas Krichel, Open Library Society



  1. Commun Med (Lond). 2025 Apr 01. 5(1): 99
      In this Perspective article, we call for a fairer approach to authorship practice in collaborative biomedical research to promote equity and inclusiveness. Current practice does not adequately recognise all contributors involved in different stages of the work and may exacerbate preexisting inequalities. Here, we discuss some key features of contemporary collaborative research practice that complicate authorship decisions. These include the project size, complexity of multidisciplinary team involvement and researchers having varying degrees of expertise and experience. We conclude by making some suggestions to address these concerns.
    DOI:  https://doi.org/10.1038/s43856-025-00815-9
  2. Curr Med Res Opin. 2025 Apr 03. 1-3
      
    Keywords:  bibliometric analysis; open access publishing; scientific misconduct
    DOI:  https://doi.org/10.1080/03007995.2025.2488949
  3. Curr Med Res Opin. 2025 Apr 03. 1-2
      
    Keywords:  bibliometrics; grey publishers; knowledge synthesis; open access publishing; reporting guidelines; scientometrics
    DOI:  https://doi.org/10.1080/03007995.2025.2488948
  4. Musculoskelet Sci Pract. 2025 Mar 19. pii: S2468-7812(25)00064-5. [Epub ahead of print] 77: 103316
       BACKGROUND: Following clinical practice guidelines is widely recommended for treating many musculoskeletal diagnoses, including low back pain, but it is unknown whether clinical practice guidelines for low back pain reference publications from predatory journals or include retracted publications.
    OBJECTIVE: Assess whether clinical practice guidelines for low back pain reference publications from predatory journals or include retracted publications.
    DESIGN: Meta-research.
    METHODS: Clinical practice guidelines focusing on the management of adults with low back pain published between January 2010 and June 2024 were included. All referenced publications in each guideline were evaluated for predatory categorization using a systematic process that included assessing publisher/journal websites, the Directory of Open Access Journals, Beall's List and major literature databases. The Retraction Watch Database was used to assess retraction status.
    RESULTS: Nineteen clinical practice guidelines with 1617 unique publications met inclusion criteria. The majority of publications (1598/1617; 98.8%) were categorized as 'non-predatory.' Fourteen publications (0.9%) were categorized as 'predatory,' two (0.1%) 'presumed predatory,' and three (0.2%) were retracted. Four guidelines cited 'predatory' and/or 'presumed predatory' publications, and four guidelines cited the retracted publications.
    CONCLUSION: Only 1.2% of the cited publications included in clinical practice guidelines for the management of low back pain were deemed predatory or retracted, implying that guideline recommendations are unlikely to be influenced by their inclusion. There are currently no standard criteria for how to manage the inclusion of these publications in guidelines or systematic reviews. Future research should investigate the development of a valid and reliable checklist that allows authors to assess for and manage the presence of predatory or retracted publications.
    Keywords:  Clinical practice guidelines; Low back pain; Predatory; Retraction
    DOI:  https://doi.org/10.1016/j.msksp.2025.103316
  5. Res Integr Peer Rev. 2025 Mar 31. 10(1): 3
       BACKGROUND: Journals and publishers vary in the methods they use to detect plagiarism, when they implement these methods, and how they respond when plagiarism is suspected both before and after publication. This study aims to determine the policies and procedures of oncology journals for detecting and responding to suspected plagiarism in unpublished and published manuscripts.
    METHODS: We reviewed the websites of each journal in the Oncology category of Journal Citation Reports' Science Citation Index Expanded (SCIE) to determine how they detect and respond to suspected plagiarism. We collected data from each journal's website, or publisher webpages directly linked from journal websites, to ascertain what information about plagiarism policies and procedures is publicly available.
    RESULTS: There are 241 extant oncology journals included in SCIE, of which 224 (92.95%) have a plagiarism policy or mention plagiarism. Text similarity software or other plagiarism checking methods are mentioned by 207 of these (92.41%, and 85.89% of the 241 total journals examined). These text similarity checks occur most frequently at manuscript submission or initial editorial review. Journal or journal-linked publisher webpages frequently report following guidelines from the Committee on Publication Ethics (COPE) (135, 56.01%).
    CONCLUSIONS: Oncology journals report similar methods for identifying and responding to plagiarism, with some variation based on the breadth, location, and timing of plagiarism detection. Journal policies and procedures are often informed by guidance from professional organizations, like COPE.
    Keywords:  Cancer research; Ethics; Plagiarise; Plagiarism; Plagiarize; Research misconduct; Scholarly communications; Scientific misconduct; Text similarity
    DOI:  https://doi.org/10.1186/s41073-025-00160-4
  6. PLoS One. 2025;20(4): e0320444
      Is it possible to reliably evaluate the quality of peer reviews? We study this question driven by two primary motivations - incentivizing high-quality reviewing using assessed quality of reviews and measuring changes to review quality in experiments. We conduct a large scale study at the NeurIPS 2022 conference, a top-tier conference in machine learning, in which we invited (meta)-reviewers and authors to voluntarily evaluate reviews given to submitted papers. First, we conduct a randomized controlled trial to examine bias due to the length of reviews. We generate elongated versions of reviews by adding substantial amounts of non-informative content. Participants in the control group evaluate the original reviews, whereas participants in the experimental group evaluate the artificially lengthened versions. We find that lengthened reviews are scored as (statistically significantly) higher quality than the original reviews. Additionally, in analysis of observational data we find that authors are positively biased towards reviews recommending acceptance of their own papers, even after controlling for confounders of review length, quality, and different numbers of papers per author. We also measure disagreement rates of 28%-32% between multiple evaluations of the same review, which is comparable to that of paper reviewers at NeurIPS. Further, we assess the amount of miscalibration of evaluators of reviews using a linear model of quality scores and find that it is similar to estimates of miscalibration of paper reviewers at NeurIPS. Finally, we estimate the amount of variability in subjective opinions around how to map individual criteria to overall scores of review quality and find that it is roughly the same as that in the review of papers. Our results suggest that the various problems that exist in reviews of papers - inconsistency, bias towards irrelevant factors, miscalibration, subjectivity - also arise in reviewing of reviews.
    DOI:  https://doi.org/10.1371/journal.pone.0320444
  7. J Am Acad Dermatol. 2025 Mar 28. pii: S0190-9622(25)00545-6. [Epub ahead of print]
      
    Keywords:  Anonymization; Bias; Journalology; Double-blinded Review; Ethics; Open peer-review; Peer-review Process; Publishing; Single-blinded Review; Transparency
    DOI:  https://doi.org/10.1016/j.jaad.2025.03.072
  8. J Neurol Surg Rep. 2025 Jan;86(1): e45-e49
      Academic scholarship is an increasingly emphasized component of undergraduate medical education (UME), in particular since the USMLE Step 1 examination transitioned to a pass/fail grading scheme in 2022. Peer review is a cornerstone of academic publishing, but essentially no formal training exists at the UME or graduate medical education levels to prepare trainees for participation in the process as authors or reviewers. This clinical research primer presents an introductory set of guidelines and pearls to empower trainee participation in the peer-review process as both authors and reviewers. We outline a systematic approach to manuscript evaluation and recommend a nonlinear strategy that begins with the Abstract and Methods, followed by Figures, Tables, and Results, concluding with the Discussion. This framework includes guidelines for constructing effective reviews, from initial summary and overall recommendations to specific, actionable comments. Participation in peer review can also advance trainees' scholarly development by exposing gaps in literature that inspire new research questions and developing their ability to anticipate and address potential reviewer critiques in their own manuscript preparation. While initial implementation requires close supervision from experienced mentors, this structured approach streamlines the peer-review learning process and provides substantial benefits for all participants in academic publishing, enhancing both mentorship relationships and scholarly development.
    Keywords:  GME; UME; education; medical student; peer review
    DOI:  https://doi.org/10.1055/a-2554-2357
  9. Clin Exp Dent Res. 2025 Feb;11(1): e70122
       OBJECTIVES: The aim of this study was to evaluate the risk of editorial bias in the field of Dentistry by examining surrogate measures which can be readily extracted from published randomized controlled trials (RCTs) in a journal of high impact factor.
    MATERIAL AND METHODS: RCTs published between January 2019 and March 2023 were manually downloaded. Data related to author affiliation, dates of submission and first publication, study location, review time, compliance with Consolidated Standards for Reporting Trials (CONSORT) checklist, ethics approval number, clinical trial registration time, reported outcomes, and eligibility criteria in registries and sample size calculation were extracted.
    RESULTS: A total of 40 RCTs were included in this cross-sectional study. The mean review time was 165.38 ± 91.40 days, with 55% of RCTs exceeding a 120-day review time. A total of 23 RCTs (57.5%) were compliant with the CONSORT statement. The review time of RCTs with editorial co-authorship was significantly shorter than the review time of RCTs that had no authors from the editorial team (91.75 ± 42.03 vs. 239.00 ± 63.00 days; p < 0.001).
    CONCLUSIONS: RCTs with editorial co-authorship in the field of Dentistry were statistically favored in the initial screening or peer-review process, having significantly shorter review times.
    PRACTICAL IMPLICATIONS: Scientific journals should adopt a double-blind peer-review process that is thorough, fair, and transparent to improve the quality of published research. To address any concerns related to editorial co-authorship, editors should explicitly explain the peer-review process in a commentary added to the published paper.
    Keywords:  publication delay; randomized controlled trial; review time; risk of bias
    DOI:  https://doi.org/10.1002/cre2.70122
  10. Mymensingh Med J. 2025 Apr;34(2): 592-597
      Artificial Intelligence (AI) is revolutionizing various fields, including scientific writing, which traditionally relies on human intellectual effort. This review article explores the evolving role of AI in scientific writing, highlighting its applications, challenges and ethical implications. The adoption of AI in scientific writing offers several key advantages that make it attractive to researchers. AI-powered tools are employed to scan large volumes of academic literature quickly and efficiently. AI writing assistants have become increasingly sophisticated in generating human-like text. AI-driven language models can help authors who are not native English speakers to produce high-quality, well-written manuscripts. Another significant benefit is the ability to handle large amounts of data efficiently. Moreover, AI tools reduce the risk of plagiarism by detecting unintentional similarities between newly drafted manuscripts and previously published work. AI systems are also being developed to assist with the peer-review process. Automated tools use AI to analyze manuscripts, checking for completeness, adherence to journal guidelines and even suggesting potential reviewers. Furthermore, AI aids in citation management by helping researchers organize and insert references correctly. Despite the benefits of AI in scientific writing, several ethical considerations and challenges accompany its adoption. A significant concern relates to the potential over-reliance on AI for generating text and performing critical analyses. Additionally, the question of authorship becomes increasingly complex with the involvement of AI in writing. Another significant issue concerns the potential for bias in AI-generated content. AI models are trained on vast amounts of data, which often reflect existing biases in published literature. This is particularly concerning in fields such as healthcare, where biased research could have serious consequences for patient care and treatment outcomes. Finally, the use of AI raises concerns about data privacy and security. As AI continues to evolve, it is essential for the scientific community to establish guidelines that ensure the responsible use of these tools, maximizing their benefits while mitigating potential risks.
  11. Int J Gynaecol Obstet. 2025 Apr 02.
      
    Keywords:  ChatGPT; artificial intelligence; human touch; journal; manuscript
    DOI:  https://doi.org/10.1002/ijgo.70135
  12. Cureus. 2025 Feb;17(2): e79864
      For those treating patients with rare diseases, there may be a disproportionate clinical reliance on the literature, compared with those treating patients with common problems. Moreover, the rare disease literature consists of a preponderance of case reports. Together, these factors place a higher burden for accuracy on authors of case reports of patients with rare diseases. Our decades of experience with the rare congenital craniofacial myopathy Freeman-Sheldon syndrome (now Freeman-Burian syndrome) and other rare diseases suggest that accurate and current information may not efficiently proliferate in the rare disease literature - a potentially significant clinical and scholarly concern. Based on our experience of reading case reports of patients with Freeman-Burian syndrome, we suggest mutually supporting mitigation strategies. Our quality-improvement strategies for rare disease case reports emphasize a careful search of recent literature, not exclusively case reports, in-person clinical experience with the patient described, and involvement of a rare disease expert as bedrocks for improving case report accuracy. We propose that objectively demonstrating the patient's findings relative to accepted diagnostic criteria, presenting the clinical course within a known disease mechanism or cautiously proposing a new one, and adhering to the relevant case report guidelines can help construct a stronger case report. We hope the wide dissemination of these quality improvement strategies among authors, editors, peer reviewers, and readers will improve the accuracy and completeness of case reports involving rare diseases to ensure the best chances for advancing clinical care and science for this often marginalized patient population.
    Keywords:  case report; clinical reasoning; clinical relevance; craniofacial dysostosis; freeman-burian syndrome; freeman-sheldon syndrome; medical writing; rare diseases; research methodology; whistling face syndrome
    DOI:  https://doi.org/10.7759/cureus.79864
  13. BMJ Evid Based Med. 2025 Apr 03. pii: bmjebm-2024-113364. [Epub ahead of print]
       OBJECTIVES: To investigate the reporting, data sharing and spin (using reporting strategies to emphasise the benefit of non-significant results) in acupuncture randomised controlled trials (RCTs).
    DESIGN: Cross-sectional meta-epidemiological study.
    DATA SOURCES: Eligible studies indexed in MEDLINE, Embase, CENTRAL, CBM, CNKI, Wanfang Data and VIP Database between 1 January 2014 and 1 May 2024.
    ELIGIBILITY CRITERIA: Peer-reviewed acupuncture RCTs using traditional medicine (TM), published in English or Chinese, with two parallel arms, conducted in humans.
    MAIN OUTCOME MEASURES: We assessed (1) the reporting of acupuncture RCTs by the Consolidated Standards for Reporting Trials (CONSORT) 2010 statement and STandards for Reporting Interventions in Clinical Trials of Acupuncture (STRICTA) checklist; (2) the data sharing level by the International Committee of Medical Journal Editors (ICMJE) data sharing statement; (3) spin frequency and level by the prespecified spin strategies.
    RESULTS: This study evaluated 476 eligible studies, of which 166 (34.9%) explored the specific efficacy or safety of acupuncture in the nervous system, 68 (14.3%) in the motor system and 61 (12.8%) in the digestive system. 244 (57.7%) studies used conventional acupuncture, 296 (62.2%) used a multicentre study design and 369 (77.5%) were supported by institutional funding. 312 (65.5%) eligible studies were poorly reported. Sufficient reporting rates for items of the CONSORT 2010 statement and the STRICTA checklist ranged from 0.63% to 97.5%, and 32 (59.3%) items were sufficiently reported by less than 50% of studies. Regarding data sharing, only 66 (17.2%) studies followed the ICMJE data sharing statement, 49 (14.5%) required requesting the data from the authors, and only 5 (1.5%) provided data via open access. Spin was identified in 408 (85.7%) studies (average spin frequency: 2.94), and 59 (37.2%) of the studies with non-significant primary outcomes showed spin.
    CONCLUSIONS: This study found that the reporting of acupuncture RCTs showed low compliance with the CONSORT 2010 statement, the STRICTA checklist and the ICMJE data sharing statement, and that spin appeared frequently. Journal policies on using reporting guidelines, data sharing and equitable consideration of non-significant results might enhance the reporting of acupuncture RCTs.
    TRIAL REGISTRATION NUMBER: This study was registered at the Open Science Framework (OSF): https://doi.org/10.17605/OSF.IO/2WTE6 and https://doi.org/10.17605/OSF.IO/9XDN4.
    Keywords:  Acupuncture; Methods
    DOI:  https://doi.org/10.1136/bmjebm-2024-113364
  14. Clin Med (Lond). 2025 Mar 27. pii: S1470-2118(25)00022-3. [Epub ahead of print] 100304
      The quality of statistical reporting in biomedical journals remains insufficient despite the introduction of SAMPL Guidelines in 2015. These guidelines aim to improve clarity and accuracy but are underutilized by authors and editorial boards. Common deficiencies include unclear descriptions of statistical test purposes, inadequate reporting of effect sizes, poor analysis of assumptions, and limited consideration of outliers. Addressing these challenges requires broader adoption of SAMPL recommendations, improved statistical literacy among researchers and editors, and stronger editorial oversight. To enhance transparency and reliability in biomedical research, the SAMPL Guidelines should become standard practice, supported by targeted training and clear guidance for authors.
    Keywords:  Biostatistics; Clinical Medicine; SAMPL Guidelines; Statistical analysis; Statistical reviews
    DOI:  https://doi.org/10.1016/j.clinme.2025.100304
  15. Microsurgery. 2025 May;45(4): e70057
       BACKGROUND: The general practice of journal editors publishing original articles in their own journals has been examined in several reviews. No such study has been reported for plastic surgery journals. This study analyzes editorial publication practice in plastic surgery journals over an 8-year period.
    METHODS: We performed a retrospective analysis of twelve PubMed-indexed journals: Plastic and Reconstructive Surgery (PRS), Plastic and Reconstructive Surgery Global Open (PRS-GO), Annals of Plastic Surgery, Aesthetic Surgery Journal, Journal of Plastic, Reconstructive & Aesthetic Surgery (JPRAS), Journal of Plastic, Reconstructive, & Aesthetic Surgery Open (JPRAS-Open), The Journal of Craniofacial Surgery, Archives of Plastic Surgery, the Journal of Plastic Surgery and Hand Surgery, Indian Journal of Plastic Surgery, Microsurgery, and Journal of Reconstructive Microsurgery. We reviewed all articles published between 2014 and 2021 to identify articles authored by the journal's editor. Editorials and articles appearing in supplements were excluded from this analysis.
    RESULTS: The proportion of editor authorship ranged from 0% to 5.88%. We found that the editors of PRS and the Journal of Plastic Surgery and Hand Surgery had a significantly greater authorship proportion than the editors of the other journals reviewed.
    CONCLUSION: This study found that almost all the studied journals had original articles published by their respective editors. Two journals, PRS and the Journal of Plastic Surgery and Hand Surgery, had higher rates of editor article publication compared with the other journals.
    Keywords:  editors; publication trends; publishing
    DOI:  https://doi.org/10.1002/micr.70057
  16. Transplant Cell Ther. 2025 Apr. pii: S2666-6367(25)01066-8. [Epub ahead of print] 31(4): 187-189
      
    DOI:  https://doi.org/10.1016/j.jtct.2025.03.003