bims-skolko Biomed News
on Scholarly communication
Issue of 2025–04–27
fifteen papers selected by
Thomas Krichel, Open Library Society



  1. Pathog Immun. 2025;10(2): 69-73
      The biomedical publications industry is vital to progress in science and health care. We observe that this industry has become unnecessarily complex and expensive for researchers and readers, impeding the sharing of research findings. In this perspective, we offer selected critiques of this industry and suggest how it might be improved.
    Keywords:  Biomedical Publications; Journal Formatting; Open Access; Peer Review; Publication Fees; Scientific Publications
    DOI:  https://doi.org/10.20411/pai.v10i2.819
  2. J Clin Pharmacol. 2025 Apr 23.
      Systematic reviews hold significant academic weight, but poor execution can render them misleading and unreliable. To help improve the quality of systematic reviews, the peer review process plays a crucial role. Peer reviewing systematic reviews requires a distinct skill set compared to reviewing primary research studies. Systematic reviews differ in their methodology and reporting standards, necessitating a structured approach to evaluation. This commentary offers guidance on best practice when peer reviewing systematic reviews, with an emphasis on synthesis of quantitative data from clinical trials. In this article, nine key topics are covered, namely correct classification of review type, adherence to systematic methods, pre-registration, methodological and reporting quality, search strategy evaluation, risk of bias assessment, evidence synthesis methods, data and code availability, and use of standardized assessment tools. By helping to ensure best practice is followed for each of these topics, peer reviewers can play a crucial role in upholding the methodological integrity of systematic reviews, ensuring they contribute reliable and meaningful evidence to the scientific literature.
    Keywords:  AMSTAR; PRISMA; ROBIS; meta‐analysis; peer review; systematic reviews
    DOI:  https://doi.org/10.1002/jcph.70036
  3. J Oral Maxillofac Surg. 2025 Mar 28. pii: S0278-2391(25)00187-9. [Epub ahead of print]
       BACKGROUND: The peer review process faces challenges of reviewer fatigue and bias. Artificial intelligence (AI) may help address these issues, but its application in the oral and maxillofacial surgery peer review process remains unexplored.
    PURPOSE: The purpose of the study was to measure and compare manuscript review performance among 4 large language models and human reviewers. Large language models are AI systems trained on vast text datasets that can generate human-like responses.
    STUDY DESIGN/SETTING/SAMPLE: In this cross-sectional study, we evaluated original research articles submitted to the Journal of Oral and Maxillofacial Surgery between January and December 2023. Manuscripts were randomly selected from all submissions that received at least one external peer review.
    PREDICTOR VARIABLE: The predictor variable was source of review: human reviewers or AI models. We tested 4 AI models: Generative Pretrained Transformer-4o and Generative Pretrained Transformer-o1 (OpenAI, San Francisco, CA), Claude (version 3.5; Anthropic, San Francisco, CA), and Gemini (version 1.5; Google, Mountain View, CA). These models will be referred to by their architectural design characteristics, ie, dense transformers, sparse-expert, multimodal, and base transformer, to highlight their technical differences rather than their commercial identities.
    OUTCOME VARIABLES: Primary outcomes included reviewer recommendations (accept = 3 to reject = 0) and responses to 6 Journal of Oral and Maxillofacial Surgery editor questions. Secondary outcomes comprised a temporal stability analysis (consistency of AI evaluations over time), domain-specific assessments (methodology, statistical analysis, clinical relevance, originality, and presentation clarity; 1 to 5 scale), and model clustering patterns.
    ANALYSES: Agreement between AI and human recommendations was assessed using weighted Cohen's kappa. Intermodel reliability and temporal stability (24-hour interval) were evaluated using intraclass correlation coefficients. Domain scoring patterns were analyzed using multivariate analysis of variance with post hoc comparisons and hierarchical clustering.
    RESULTS: From 22 manuscripts, human reviewers rejected 15 (68.2%), while AI rejection rates were statistically significantly lower (0 to 9.1%, P < .001). AI models demonstrated high consistency in their evaluations over time (intraclass correlation coefficient = 0.88, P < .001) and showed moderate agreement with human decisions (κ = 0.38 to 0.46).
    CONCLUSIONS: While AI models showed reliable internal consistency, they were less likely to recommend rejection than human reviewers. This suggests their optimal use is as screening tools complementing expert human review rather than as replacements.
    DOI:  https://doi.org/10.1016/j.joms.2025.03.015
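    The agreement and stability statistics named in the Analyses above (weighted Cohen's kappa, intraclass correlation) are standard measures; the minimal Python sketch below shows one way to compute them with scikit-learn and pingouin. The ratings, the quadratic weighting, and the library choices are illustrative assumptions, not the study's actual data or code.

      # Illustrative only: hypothetical recommendation scores (0 = reject ... 3 = accept),
      # not data from the JOMS study.
      import pandas as pd
      from sklearn.metrics import cohen_kappa_score
      import pingouin as pg

      human = [0, 1, 0, 2, 3, 1, 0, 2]   # hypothetical human reviewer recommendations
      ai = [1, 1, 2, 2, 3, 2, 1, 2]      # hypothetical AI model recommendations

      # Weighted Cohen's kappa treats the 0-3 recommendation scale as ordinal,
      # penalizing large disagreements more than adjacent ones.
      kappa = cohen_kappa_score(human, ai, weights="quadratic")
      print(f"weighted kappa: {kappa:.2f}")

      # Temporal stability: the same model rating the same manuscripts twice
      # (eg, 24 hours apart) can be summarized with an intraclass correlation.
      long = pd.DataFrame({
          "manuscript": list(range(8)) * 2,
          "occasion": ["t0"] * 8 + ["t24h"] * 8,
          "rating": ai + [1, 1, 2, 3, 3, 2, 1, 2],   # hypothetical repeat ratings
      })
      icc = pg.intraclass_corr(data=long, targets="manuscript",
                               raters="occasion", ratings="rating")
      print(icc[["Type", "ICC"]])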
  4. Int J Periodontics Restorative Dent. 2025 Apr 25. 45(3): 293-299
      The use of artificial intelligence (AI) is rapidly expanding. While it comes with some drawbacks, it also offers numerous advantages. One significant application of AI is chatbots, which utilize natural language processing and machine learning to provide information, answer queries, and assist users. AI has various applications, and dentistry is no exception. The authors conducted an experiment to assess the application of AI, particularly OpenAI's ChatGPT, used with Google Apps Script in various stages of information gathering and manuscript preparation in parallel with conventional human-driven approaches. AI can serve as a valuable instrument in manuscript preparation; however, relying solely or predominantly on AI for manuscript writing is insufficient if the goal is to produce a high-quality article for publication in a peer-reviewed, high-impact journal that can contribute to the advancement of science and society.
    DOI:  https://doi.org/10.11607/prd.7022
  5. Front Artif Intell. 2025;8: 1546064
       Introduction: The widespread application of artificial intelligence in academic writing has triggered a series of pressing legal challenges.
    Methods: This study systematically examines critical issues, including copyright protection and academic integrity, using comparative research methods. We establish a risk assessment matrix to quantitatively analyze various risks in AI-assisted academic writing along three dimensions: impact, probability, and mitigation cost, thereby identifying high-risk factors.
    Results: The findings reveal that AI-assisted writing challenges fundamental principles of traditional copyright law, with judicial practice tending to position AI as a creative tool while emphasizing human agency. Regarding academic integrity, new risks, such as "credibility illusion" and "implicit plagiarism," have become prominent in AI-generated content, necessitating adaptive regulatory mechanisms. Research data protection and personal information security face dual challenges that require technological and institutional innovations.
    Discussion: Based on these findings, we propose a three-dimensional regulatory framework of "transparency, accountability, and technical support" and present systematic policy recommendations from institutional design, organizational structure, and international cooperation perspectives. The research results deepen understanding of the legal attributes of AI creation, promote theoretical innovation in digital-era copyright and academic ethics, and provide practical guidance for academic institutions in formulating AI usage policies.
    Keywords:  academic integrity; academic writing; artificial intelligence; copyright protection; data security; legal regulation
    DOI:  https://doi.org/10.3389/frai.2025.1546064
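    The risk assessment matrix described in the Methods above scores risks on impact, probability, and mitigation cost, but the abstract does not give the scoring rule. The short Python sketch below is a hypothetical illustration of how such a three-dimensional matrix might rank risks; the example risks (beyond the two named in the abstract), scales, and formula are invented for illustration and are not the authors' method.

      # Hypothetical three-dimensional risk matrix; the dimensions come from the
      # abstract, but the entries, scales, and scoring rule are invented here.
      risks = {
          # risk: (impact, probability, mitigation_cost), each on a 1-5 scale
          "credibility illusion": (4, 4, 2),
          "implicit plagiarism": (5, 3, 3),
          "research data leakage": (5, 2, 4),
          "authorship ambiguity": (3, 3, 2),
      }

      def risk_score(impact, probability, mitigation_cost):
          # One plausible rule: likelihood-weighted impact, inflated when
          # mitigation is expensive (harder-to-fix risks rank higher).
          return impact * probability * (1 + mitigation_cost / 5)

      ranked = sorted(risks.items(), key=lambda kv: risk_score(*kv[1]), reverse=True)
      for name, dims in ranked:
          print(f"{name:22s} score = {risk_score(*dims):5.1f}")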
  6. Oncologist. 2025 Apr 04. pii: oyaf042. [Epub ahead of print] 30(4):
       INTRODUCTION: Findings from early phase studies are not always placed in the public domain. This study aims to explore why many early phase clinical oncology studies are not published, as well as identify the potential barriers investigators encountered in the publication process.
    METHODS: Semi-structured interviews were conducted among investigators with experience in early phase clinical oncology studies. Interviews were analyzed using reflexive thematic analysis.
    RESULTS: Twenty-one investigators were interviewed. The majority worked in Europe (n = 13), while other investigators were based in North America (n = 4), Asia (n = 2) or Oceania (n = 2). We identified three reasons why investigators believed publishing early phase clinical trial results was important: (1) there is an ethical and moral responsibility; (2) there should be no loss of knowledge to society; and (3) there should be no waste of resources. Four main barriers in the publication process of early phase clinical trials were identified: (1) practical barriers (eg, the increased complexity and number of trials/trial sites), (2) insufficient resources (eg, money, time, and human resources), (3) limited motivation (eg, limited intrinsic motivation of the investigator or limited prospect of return for the sponsor), and (4) inadequate collaboration (eg, different interests between industry partners and investigators). Finally, five major stakeholders were identified that can potentially contribute to improving the publication process: (1) journal editors, (2) sponsors, (3) investigators, (4) regulatory bodies, and (5) society. Investigator suggestions for improving this process, for each stakeholder, are presented.
    CONCLUSIONS: This study highlights the barriers experienced in publishing early phase clinical trials. Recognizing and acknowledging these barriers is crucial to devise effective strategies to improve the publishing and public sharing of early phase clinical trials.
    Keywords:  ethics; medical oncology; publication; qualitative research
    DOI:  https://doi.org/10.1093/oncolo/oyaf042
  7. Account Res. 2025 Apr 21. 1-19
       BACKGROUND: Researchers in low- and middle-income countries (LMICs) confront multifactorial challenges when publishing their manuscripts. Here, we aimed to quantify Arab researchers' perceptions of these challenges.
    MATERIALS AND METHODS: We distributed an online questionnaire to Arab researchers from 17 countries, the majority of which were LMICs.
    RESULTS: Among 286 respondents, 71.7% experienced rejection of at least one manuscript. The main reasons for manuscript rejection included being outside the journals' scopes (46.1%) and lacking novelty (35.1%). Over one-third of the respondents believed they might have faced bias in the review process because of their Arab origin. More than 60% thought that having a Western coauthor would lead to their manuscripts being reviewed more favorably. Moreover, 60% thought it would be easier to publish in open-access journals. Over 75% of our respondents were aware of predatory journals, and an alarming 17.1% had published in such journals.
    CONCLUSION: To improve the quality of scholarly publications and address publishing challenges, we propose strengthening research training, enhancing language support, and increasing the representation of LMIC researchers in editorial roles. These measures aim to foster inclusivity in peer review and ensure a more diverse academic publishing landscape.
    Keywords:  Arab region; Bias; Low- and middle-income countries (LMICs); open access; predatory journals
    DOI:  https://doi.org/10.1080/08989621.2025.2489544
  8. J Clin Epidemiol. 2025 Apr 17. pii: S0895-4356(25)00125-8. [Epub ahead of print] 111792
      We developed SPIRIT 2025 and CONSORT 2025 together and in a similar manner. To ascertain new evidence to inform both updates, we completed two scoping reviews. The results of the scoping reviews helped inform the SPIRIT and CONSORT Delphis. Following two Delphi rounds and an international online consensus meeting, checklists for both reporting guidelines were drafted. SPIRIT 2025 contains a 34-item checklist. Compared to the original SPIRIT 2013 reporting guideline, SPIRIT 2025 includes two new items, revisions to five, and the deletion or merging of five, alongside a new Open Science section. Emphasis on harm assessment, intervention description, and patient/public involvement was also strengthened. CONSORT 2025, compared to CONSORT 2010, is a 30-item checklist with seven new items, three revised, and one deleted, with content integrated from existing CONSORT extensions (Harms, Outcomes, Non-pharmacological Treatment, and TIDieR). As with SPIRIT 2025, CONSORT 2025 also includes a new Open Science section. Both reporting guidelines are accompanied by updated and extensive explanation and elaboration papers intended as pedagogical aids to facilitate their use. Widespread adoption of SPIRIT 2025 and CONSORT 2025 by investigators, funders, ethics committees, journals, and regulators should improve the quality and usability of research, ultimately benefiting patients and the broader research ecosystem.
    DOI:  https://doi.org/10.1016/j.jclinepi.2025.111792
  9. J Clin Oncol. 2025 Apr 22. JCO2402472
      The purpose of this essay is to take readers of the Journal on my challenging journey of writing a memoir describing my patients and career.
    DOI:  https://doi.org/10.1200/JCO-24-02472
  10. Farm Comunitarios. 2025 Apr 15. 17(2): 5-10
       Abstract: The Granada statements were a result of the need to strengthen clinical, social and administrative pharmacy practice as an area of knowledge that translates into practice, research and policy. As a response, a group of clinical and social pharmacy practice journal editors launched an initiative in Granada in 2022 to discuss ways to improve the quality of publications in this area, which culminated in the Granada statements. Eighteen statements were developed, clustered into six main domains: 1) the appropriate use of terminology; 2) developing impactful abstracts; 3) having the required peer reviews; 4) preventing journal scattering; 5) more effective and wiser use of journal and article performance metrics; and 6) authors' selection of the most appropriate pharmacy practice journal to submit their work. The full Granada statements have been published in 14 journals.(1-14) These pioneering statements are rooted in similar endeavors undertaken by scholars in other health professions groups, fostering the concept of interdisciplinary consensus and advancing the scientific paradigm.
    Keywords:  International Collaboration of Pharmacy Journal Editors (ICPJE)
    DOI:  https://doi.org/10.33620/FC.2173-9218.(2025).08