bims-skolko Biomed News
on Scholarly communication
Issue of 2025-03-23
sixteen papers selected by
Thomas Krichel, Open Library Society



  1. Nature. 2025 Mar 19.
      
    Keywords:  Institutions; Publishing; Research data; Research management
    DOI:  https://doi.org/10.1038/d41586-025-00710-2
  2. Eur J Cardiothorac Surg. 2025 Mar 21. pii: ezaf092. [Epub ahead of print]
      
    Keywords:  Equity; Open Access; Publishing
    DOI:  https://doi.org/10.1093/ejcts/ezaf092
  3. Front Med (Lausanne). 2025;12:1557024
      
    Keywords:  artificial intelligence; editors; papermill; research integrity; trust
    DOI:  https://doi.org/10.3389/fmed.2025.1557024
  4. Pediatr Pulmonol. 2025 Mar;60(Suppl 1):S111-S113
      Pediatric respiratory disease is a major cause of morbidity and mortality worldwide, and children in low- and middle-income countries are disproportionately affected. Because high-income countries have more public and private resources to support research, and health care systems that can incorporate new, high-cost therapies, they are overrepresented in the biomedical literature relative to the size of their populations. This brief review discusses innovations in publishing practices that increase opportunities for authors from less-resourced environments to have their work published, expanding global knowledge and promoting global health equity.
    DOI:  https://doi.org/10.1002/ppul.27445
  5. Cell Mol Life Sci. 2025 Mar 17. 82(1): 120
      Since its discovery in the middle of the 20th century, research into autophagy has undergone a spectacular expansion, particularly in the early 1990s. A number of physiological processes involving autophagy have been revealed, and important human pathologies have been associated with perturbations in autophagy. In 2008, the "Guidelines for the use and interpretation of assays for monitoring autophagy" were launched with the purpose of collecting in a single document all the available information for monitoring autophagy, which, it was thought, might be useful both for established groups and for new scientists attracted to the field. The usefulness and success of these Guidelines have led to the publication of subsequent editions every four years, a task in which a growing number of authors have become involved and, consequently, been included in the list of contributors. However, this worthy initiative and its closely associated metric parameters have had important scholarly repercussions in terms of perceived merit, grants and financial support obtained, professional careers, and other areas of scientific activity. All these aspects are carefully examined in this contribution.
    Keywords:  Authorship; Autophagy; Ethics; Metric parameters; Scholar repercussions
    DOI:  https://doi.org/10.1007/s00018-025-05650-8
  6. J Clin Neurosci. 2025 Mar 19. pii: S0967-5868(25)00165-1. [Epub ahead of print] 111193
      This letter discusses findings from a recent study comparing AI-generated and human-written neurosurgery articles. The study reveals that AI-generated articles exhibit higher readability scores (Lix: 35 vs. 26; Flesch-Kincaid: 10 vs. 8) but may lack depth of analysis. Evaluators correctly identified AI authorship with 61% accuracy, and preferences were nearly even between AI-generated (47%) and human-written (44%) articles. While AI improves accessibility and efficiency in academic writing, its limitations in clinical experience, originality, and nuanced analysis highlight the need for human oversight. AI should be integrated as a complementary tool rather than as a replacement for human expertise. Future research should focus on refining AI's analytical capabilities and ensuring ethical use in scientific publishing.
    Keywords:  AI-generated content; Academic publishing; Artificial intelligence; Neurosurgery articles; Readability
    DOI:  https://doi.org/10.1016/j.jocn.2025.111193
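    The readability indices cited in the entry above (Lix and the Flesch-Kincaid grade level) are simple word- and sentence-count formulas. The minimal Python sketch below illustrates them; the syllable counter is a crude vowel-group heuristic, so the scores are only approximate and will not exactly match published readability tools.

      import re

      def _words(text):
          return re.findall(r"[A-Za-z]+(?:'[A-Za-z]+)?", text)

      def _sentences(text):
          return [s for s in re.split(r"[.!?]+", text) if s.strip()]

      def _syllables(word):
          # Crude heuristic: count groups of consecutive vowels (at least one).
          return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

      def flesch_kincaid_grade(text):
          words, sents = _words(text), _sentences(text)
          syllables = sum(_syllables(w) for w in words)
          return 0.39 * len(words) / len(sents) + 11.8 * syllables / len(words) - 15.59

      def lix(text):
          # Lix = words per sentence + percentage of words longer than six letters.
          words, sents = _words(text), _sentences(text)
          long_words = sum(1 for w in words if len(w) > 6)
          return len(words) / len(sents) + 100.0 * long_words / len(words)

      sample = ("Artificial intelligence can draft readable manuscripts. "
                "Human reviewers still judge depth, originality, and clinical nuance.")
      print(round(lix(sample), 1), round(flesch_kincaid_grade(sample), 1))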
  7. Cureus. 2025 Feb;17(2): e79086
       INTRODUCTION: With the rapid proliferation of artificial intelligence (AI) tools, important questions about their applicability to manuscript preparation have been raised. This study explores the methodological challenges of detecting AI-generated content in neurosurgical publications, using existing detection tools to highlight both the presence of AI content and the fundamental limitations of current detection approaches.
    METHODS: We analyzed 100 randomly selected manuscripts published between 2023 and 2024 in high-impact neurosurgery journals using a two-tiered approach to identify potential AI-generated text. The text was classified as AI-generated if both a robustly optimized bidirectional encoder representations from transformers pretraining approach (RoBERTa)-based AI classification tool yielded a positive classification and the text's perplexity score was less than 100. Chi-square tests were conducted to assess differences in the prevalence of AI-generated text across manuscript sections, topics, and types. To eliminate bias introduced by the more structured nature of abstracts, a subgroup analysis excluding abstracts was also conducted.
    RESULTS: Approximately one in five (20%) manuscripts contained sections flagged as AI-generated. Abstracts and methods sections were disproportionately identified. After excluding abstracts, the association between section type and AI-generated content was no longer statistically significant.
    CONCLUSION: Our findings highlight both the increasing integration of AI in manuscript preparation and a critical challenge in academic publishing as AI language models become increasingly sophisticated and traditional detection methods become less reliable. This suggests the need to shift focus from detection to transparency, emphasizing the development of clear disclosure policies and ethical guidelines for AI use in academic writing.
    Keywords:  academic integrity; academic neurosurgery; academic publishing; artificial intelligence; ethics
    DOI:  https://doi.org/10.7759/cureus.79086
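    A minimal sketch of the two-tiered screen described in the methods of the entry above: a RoBERTa-based classifier must flag the text and its perplexity must fall below 100. The abstract does not name the exact tools, so the model identifiers below ("roberta-base-openai-detector" for the classifier, GPT-2 for perplexity) are stand-in assumptions, not the authors' pipeline.

      import math
      import torch
      from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                                GPT2LMHeadModel, GPT2TokenizerFast)

      # Stand-in detector model (assumption): the paper does not name its exact tool.
      DETECTOR = "roberta-base-openai-detector"

      def roberta_flags_ai(text):
          tok = AutoTokenizer.from_pretrained(DETECTOR)
          clf = AutoModelForSequenceClassification.from_pretrained(DETECTOR)
          inputs = tok(text, return_tensors="pt", truncation=True)
          with torch.no_grad():
              probs = clf(**inputs).logits.softmax(dim=-1)
          # Label order is model-specific; index 0 is assumed to be "machine-generated".
          return probs[0, 0].item() > 0.5

      def gpt2_perplexity(text):
          tok = GPT2TokenizerFast.from_pretrained("gpt2")
          lm = GPT2LMHeadModel.from_pretrained("gpt2")
          enc = tok(text, return_tensors="pt", truncation=True, max_length=1024)
          with torch.no_grad():
              loss = lm(**enc, labels=enc["input_ids"]).loss
          return math.exp(loss.item())

      def flag_section(text):
          # Two-tiered rule from the abstract: classifier positive AND perplexity < 100.
          return roberta_flags_ai(text) and gpt2_perplexity(text) < 100.0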
  8. Int J Ther Massage Bodywork. 2025 Mar;18(1): 1-4
      Peer review is a vital component of scholarly publishing, ensuring that research adheres to the highest standards of rigor, relevance, and integrity. For the International Journal of Therapeutic Massage & Bodywork (IJTMB), peer reviewers play a critical role in advancing the field by providing constructive feedback and supporting the development of impactful research. This editorial outlines the expectations for IJTMB reviewers, emphasizing objectivity, inclusivity, cultural competence, and timeliness. Practical guidelines for conducting a thorough review are provided. Additionally, the editorial highlights key resources available to reviewers. By working together, reviewers, editors, and authors can strengthen evidence-based practice in therapeutic massage and bodywork.
    Keywords:  Massage therapy; evidence-based practice; peer review; reviewer guidelines
    DOI:  https://doi.org/10.3822/ijtmb.v18i1.1205
  9. BMC Med Res Methodol. 2025 Mar 14. 25(1): 71
       BACKGROUND: Although medical journals endorse reporting guidelines, authors often struggle to find and use the right one for their study type and topic. The UK EQUATOR Centre developed the GoodReports website to direct authors to appropriate guidance. Pilot data suggested that authors did not improve their manuscripts when GoodReports.org advised them to use a particular reporting guideline at the journal submission stage. User feedback suggested that the checklist format of most reporting guidelines does not encourage use during manuscript writing. We tested whether providing customized reporting guidance within writing templates for use throughout the writing process resulted in clearer and more complete reporting than only giving advice on which reporting guideline to use.
    DESIGN AND METHODS: GRReaT was a two-group parallel 1:1 randomized trial with a target sample size of 206. Participants were lead authors at an early stage of writing up a health-related study. Eligible study designs were cohort, cross-sectional, and case-control studies, randomized trials, and systematic reviews. After randomization, the intervention group received an article template including items from the appropriate reporting guideline and links to explanations and examples. The control group received a reporting guideline recommendation and general advice on reporting. Participants sent their completed manuscripts to the GRReaT team before submitting them for publication, and the manuscripts were assessed for the completeness of each item in the title, methods, and results sections of the corresponding reporting guideline. The primary outcome was reporting completeness against the corresponding reporting guideline. Participants were not blinded to allocation; assessors were blinded to group allocation. As a recruitment incentive, all participants received a feedback report identifying missing or inadequately reported items in these three sections.
    RESULTS: Between 9 June 2021 and 30 June 2023, we randomized 130 participants, 65 to the intervention and 65 to the control group. We present findings from the assessment of reporting completeness for the 37 completed manuscripts we received, 18 in the intervention group and 19 in the control group. The mean (standard deviation) proportion of completely reported items from the title, methods, and results sections of the manuscripts (primary outcome) was 0.57 (0.18) in the intervention group and 0.50 (0.17) in the control group. The mean difference between the two groups was 0.069 (95% CI -0.046 to 0.184; p = 0.231). In the sensitivity analysis, when partially reported items were counted as completely reported, the mean (standard deviation) proportion of completely reported items was 0.75 (0.15) in the intervention group and 0.71 (0.11) in the control group. The mean difference between the two groups was 0.036 (95% CI -0.127 to 0.055; p = 0.423).
    CONCLUSION: As the dropout rate was higher than expected, we did not reach the recruitment target, and the difference between groups was not statistically significant. We therefore found no evidence that providing authors with customized article templates that include items from reporting guidelines increases reporting completeness. We discuss the challenges faced when conducting the trial and suggest how future research testing innovative ways of improving reporting could be designed to improve recruitment and reduce dropouts.
    Keywords:  Education; Reporting guidelines; Reproducibility; Standards
    DOI:  https://doi.org/10.1186/s12874-025-02518-0
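    As a rough cross-check of the primary-outcome interval reported in the entry above, an ordinary two-sample (Welch) 95% confidence interval computed from the published group means, standard deviations, and sample sizes approximately reproduces the reported 0.069 (95% CI -0.046 to 0.184). The trial's actual analysis model is not described in the abstract, so this is only an illustrative reconstruction.

      import math
      from scipy import stats

      def welch_ci(m1, sd1, n1, m2, sd2, n2, level=0.95):
          # Two-sample (Welch) confidence interval for a difference in means.
          se = math.sqrt(sd1**2 / n1 + sd2**2 / n2)
          df = (sd1**2 / n1 + sd2**2 / n2) ** 2 / (
              (sd1**2 / n1) ** 2 / (n1 - 1) + (sd2**2 / n2) ** 2 / (n2 - 1))
          t = stats.t.ppf(0.5 + level / 2, df)
          diff = m1 - m2
          return diff, (diff - t * se, diff + t * se)

      # Intervention: mean 0.57, SD 0.18, n = 18; control: mean 0.50, SD 0.17, n = 19
      print(welch_ci(0.57, 0.18, 18, 0.50, 0.17, 19))
      # ~ (0.07, (-0.047, 0.187)); the paper reports 0.069 (95% CI -0.046 to 0.184)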
  10. Trials. 2025 Mar 18. 26(1): 93
      Systematic reviews and meta-analyses are essential tools for synthesizing evidence from multiple studies. Recently, trial sequential analyses (TSAs) have gained popularity as a component of meta-analyses, helping researchers dynamically monitor evidence as new studies are incorporated. This article introduces a meta-epidemiological study aimed at evaluating the reproducibility of TSAs within systematic reviews published in 2023. Two independent investigators assessed and reproduced the main TSA for each included systematic review. Our search in PubMed yielded a convenience sample of 98 systematic reviews. Only 28% (27/98) of the included TSAs provided sufficient data to calculate the required information size, an essential element for assessing statistical power and conducting TSAs. Among these, 81% (22/27) provided the necessary data to determine decision boundaries and Z-curves in TSAs. Overall, full reproducibility was achieved for only 13% (13/98) of TSAs. Specifically, for binary outcomes, 65% (47/72) of TSAs failed to report event rates in control groups, and 44% (32/72) did not report relative risk reductions. For continuous outcomes, 53% (17/32) failed to report minimally relevant differences, and 72% (23/32) did not report variances. These elements are crucial for TSA reproducibility. Moreover, the reproducibility of TSAs was associated with journal impact factors and adherence to the PRISMA guidelines. A collective effort is needed from systematic review authors, peer reviewers, and journal editors to improve the reproducibility of TSAs.
    Keywords:  Meta-analysis; Reproducibility; Systematic review; Trial sequential analysis
    DOI:  https://doi.org/10.1186/s13063-025-08799-6
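    For context on why missing control-group event rates and relative risk reductions block reproduction of a TSA: both feed directly into the required information size. The sketch below uses the conventional two-group sample-size formula for a binary outcome; real TSA software applies further adjustments (for example, a diversity adjustment, only indicated here), so treat this as an approximation rather than the reviewed papers' exact method.

      from scipy.stats import norm

      def required_information_size(control_event_rate, rel_risk_reduction,
                                    alpha=0.05, power=0.90, diversity=0.0):
          # Conventional two-group sample-size formula for a binary outcome.
          p_c = control_event_rate
          p_e = p_c * (1.0 - rel_risk_reduction)   # anticipated experimental-arm event rate
          delta = p_c - p_e                        # absolute risk difference
          p_bar = (p_c + p_e) / 2.0                # pooled event proportion
          z_a = norm.ppf(1.0 - alpha / 2.0)
          z_b = norm.ppf(power)
          n_total = 4.0 * (z_a + z_b) ** 2 * p_bar * (1.0 - p_bar) / delta ** 2
          # Diversity (D^2) adjustment used in TSA software (assumed form).
          return n_total / (1.0 - diversity)

      # Example: 40% control event rate, 20% relative risk reduction, 90% power
      print(round(required_information_size(0.40, 0.20)))   # about 1513 participants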
  11. Prehosp Disaster Med. 2025 Mar 18. 1-3
      The scientific manuscript review process can often seem daunting and mysterious to authors. Medical journals frequently do not describe the peer-review process in detail, which can lead to further frustration for authors, peer reviewers, and readers. This editorial describes the updated manuscript review process for Prehospital and Disaster Medicine, in the hope of increasing clarity and transparency in the review process.
    Keywords:  conflict of interest; journal review process; peer review; retractions
    DOI:  https://doi.org/10.1017/S1049023X25000172