bims-skolko Biomed News
on Scholarly communication
Issue of 2025-07-27
28 papers selected by
Thomas Krichel, Open Library Society



  1. Niger Med J. 2025 Mar-Apr;66(2): 681-691
       Background: Academic publication is a cornerstone of advancing nursing science, as it provides evidence-based research that guides clinical practice and education. However, the pressure to publish for career advancement has led to concerns about behaviours such as the 'urge to publish' and 'panic publishing'. The perceptions behind these trends remain unclear. This study aimed to assess Indian nurse academicians' perceptions of academic publication.
    Methodology: This cross-sectional study surveyed Indian nurse academicians from Institutes of National Importance (INIs) of India, selected using a convenience sampling technique. Data were collected through an online self-structured questionnaire covering socio-demographic details and perceptions of academic publishing. Data analysis was performed using SPSS Version 26.0, with descriptive and inferential statistics. A significance level of p < 0.05 was used for statistical associations.
    Results: Of the respondents, 66.8% were female, and 92.3% had published before [median 12 (IQR 7-38) articles], with 21.7% preferring PubMed-indexed journals. Nearly two-thirds of participants (64.7%) reported spending an excessive amount of time on the publication process. There were differences by gender: men were more prone to assessing publication metrics (p < 0.05), ignoring other facets of life for publishing (p = 0.005), and self-reporting publication addiction-like behaviours (18% vs 4% for women).
    Conclusion: Academic publishing is a crucial but stress-inducing aspect of nurse academicians' careers. These findings underscore the need for a balanced approach that values quality over quantity in academic publishing. Institutions should promote ethical research practices, provide support to manage publication pressures, and foster a more sustainable academic environment.
    Keywords:  Academia; Nurse Academician; Perception; Publications
    DOI:  https://doi.org/10.71480/nmj.v66i2.806
  2. Nature. 2025 Jul 22.
      
    Keywords:  Careers; Publishing; SARS-CoV-2; Vaccines
    DOI:  https://doi.org/10.1038/d41586-025-01920-4
  3. Am J Nurs. 2025 Aug 01. 125(8): 14
      Letters from a former U.S. Justice Department prosecutor raise concern.
    DOI:  https://doi.org/10.1097/AJN.0000000000000123a
  4. Turk J Anaesthesiol Reanim. 2025 Jul 22.
      
    Keywords:  Ethics; letter to editor; medical jousting; publication ethics; scholarly criticism
    DOI:  https://doi.org/10.4274/TJAR.2025.251893
  5. Nature. 2025 Jul 18.
      
    Keywords:  Careers; Lab life; Peer review; Publishing
    DOI:  https://doi.org/10.1038/d41586-025-01954-8
  6. Can J Cardiol. 2025 Jul 16. pii: S0828-282X(25)00631-2. [Epub ahead of print]
      
    Keywords:  Publishing; academics; open access
    DOI:  https://doi.org/10.1016/j.cjca.2025.07.018
  7. Worldviews Evid Based Nurs. 2025 Aug;22(4): e70063
      
    Keywords:  nursing education; peer review; podcast; social media
    DOI:  https://doi.org/10.1111/wvn.70063
  8. J Clin Orthop Trauma. 2025 Oct;69: 103116
       Background: Scientific writing is essential for orthopedic residents, enabling academic contributions and evidence-based practice. However, challenges such as time constraints, lack of training, and language barriers hinder their writing. AI-based tools like ChatGPT offer potential solutions to improve writing efficiency and quality. This study evaluates the impact of ChatGPT on orthopedic residents' scientific writing by assessing their performance before and after AI assistance in a single cohort.
    Methods: Thirty-six orthopedic residents first underwent structured training in scientific writing. They then wrote an article within six weeks without AI. In the second phase, they used ChatGPT to write another article over six weeks. Articles were assessed based on six criteria: understanding article writing, identifying research problems, knowledge of research types, understanding research methods, mastering writing techniques, and using correct language.
    Results: Following the AI writing intervention, residents showed significant improvement in understanding article writing (4.3 ± 0.6 vs. 3.1 ± 0.8, p < 0.001), mastering writing techniques (4.4 ± 0.6 vs. 2.7 ± 0.9, p < 0.001), and using correct language (4.6 ± 0.5 vs. 2.6 ± 1.0, p < 0.001). Residents also had slightly higher scores in identifying research problems, understanding research methods, and knowledge of research types, but these differences were not statistically significant. Despite improvements in language quality and article structure, AI-generated content lacked originality and critical depth. [A sketch of this kind of paired pre/post comparison follows this entry.]
    Conclusion: ChatGPT enhances writing efficiency and quality but does not replace critical thinking and originality. AI should supplement, not replace, human oversight in scientific writing. Integrating AI-assisted writing into medical education, alongside proper research methodology training, may optimize benefits for orthopedic residents.
    Keywords:  Artificial intelligence; Medical education; Natural language processing; Scientific writing
    DOI:  https://doi.org/10.1016/j.jcot.2025.103116
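    The pre/post scores above are paired measurements on the same 36 residents, so a paired test is the natural comparison. Below is a minimal Python sketch of that kind of analysis; the scores are fabricated to loosely match the reported means and SDs, and the abstract does not state which test the authors actually used.

      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(42)

      # Hypothetical per-resident scores (1-5 scale) for one criterion;
      # illustrative only, NOT the study's data.
      before = np.clip(rng.normal(3.1, 0.8, size=36), 1, 5)
      after = np.clip(rng.normal(4.3, 0.6, size=36), 1, 5)

      # Paired t-test: each resident serves as their own control.
      res = stats.ttest_rel(after, before)

      print(f"before: {before.mean():.2f} +/- {before.std(ddof=1):.2f}")
      print(f"after:  {after.mean():.2f} +/- {after.std(ddof=1):.2f}")
      print(f"paired t = {res.statistic:.2f}, p = {res.pvalue:.4f}")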
  9. J Cardiovasc Pharmacol. 2025 Jul 22.
    Artificial Intelligence in Medical Publishing (AIMPub) Working Group
      Artificial intelligence (AI) is increasingly integrated into medical publishing and may improve efficiency and accuracy, but serious concerns persist regarding ethical implications, authorship attribution, and content reliability. We aimed to understand the perspectives of medical journal editors on AI. A structured online questionnaire was developed and distributed to Editors-in-Chief of medical journals worldwide. The survey comprised 27 concise questions exploring demographics, journal practices, and perspectives on AI in editorial workflows. Quantitative data were analyzed using descriptive statistics to summarize usage patterns, perceived benefits, risks, and future expectations. A total of 59 Editors-in-Chief completed the survey (response rate: 19%), with replies suggesting substantial variability in beliefs and attitudes towards AI in medical journal publishing. AI tools were already in use at 49% of journals, mainly for plagiarism detection (76%) and data verification (35%). Only 9% of respondents reported that their journals used AI for both scientific and linguistic review. Time savings (79%) and cost reduction (43%) were the most commonly cited benefits, while concerns included potential bias (71%) and lack of accountability (60%). Overall, 81% of respondents anticipated a major role for AI in publishing within 10 years. Exploratory analyses suggested several potential associations between replies and respondent or journal features, requiring further validation in future surveys. In conclusion, this survey suggests that Editors-in-Chief are cautiously adopting AI in their editorial workflows, supporting its operational use while explicitly calling for clear guidance to address ethical and regulatory concerns.
    Keywords:  Artificial intelligence; Editor; Journal; Publishing; Research; Survey
    DOI:  https://doi.org/10.1097/FJC.0000000000001738
  10. Hand Surg Rehabil. 2025 Jul 19. pii: S2468-1229(25)00147-1. [Epub ahead of print] 102225
      While the peer review process remains the gold standard for evaluating the quality of scientific articles, it is facing a crisis due to rising submission volumes and prolonged review times. This study assessed ChatGPT's ability to formulate editorial decisions and produce peer reviews for surgery-related manuscripts. We tested the hypothesis that ChatGPT's peer review quality exceeds that of human reviewers. Eleven published articles in the field of hand surgery, initially rejected by one journal and later accepted by another, were anonymized by removing the title page from the original PDF submission; ChatGPT 4o and o1 were then asked to determine each article's eligibility for publication and to generate a peer review. The policy prohibiting the submission of unpublished manuscripts to large language models was not violated, as all articles had already been published at the time of the study. An experienced hand surgeon assessed all peer reviews (the original human reviews from both the rejecting and accepting journals, plus the ChatGPT-generated ones) using the ARCADIA score, which consists of 20 items rated from 1 to 5 on a Likert scale. The average acceptance rate of ChatGPT 4o was 95%, while that of ChatGPT o1 was 98%. The concordance of ChatGPT 4o's decisions with those of the journal with the highest impact factor was 32%, whereas that of ChatGPT o1 was 29%; concordance with the journal with the lowest impact factor was 68% for ChatGPT 4o and 71% for o1. [A sketch of this kind of concordance calculation follows this entry.] The ARCADIA scores of peer reviews generated by human reviewers (2.8 for journals that accepted the article and 3.2 for those that rejected it) were lower than those of ChatGPT 4o (4.8) and o1 (4.9). In conclusion, ChatGPT can optimize the peer review process for scientific articles if it receives precise instructions to avoid "hallucinations." Many of its functionalities surpass human capabilities, but managing its limitations rigorously is essential to improving publication quality.
    Keywords:  ChatGPT; artificial intelligence; hand surgery; large language model; peer review; scientific article
    DOI:  https://doi.org/10.1016/j.hansur.2025.102225
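    Concordance figures like the 32% and 68% above are simple percent-agreement calculations between two decision lists. A minimal Python sketch with made-up decisions (not the study's data):

      # Hypothetical accept/reject decisions for 11 manuscripts;
      # illustrative only, NOT the study's actual decisions.
      journal = ["reject", "accept", "accept", "reject", "accept",
                 "accept", "reject", "accept", "accept", "accept", "reject"]
      chatgpt = ["accept", "accept", "accept", "reject", "accept",
                 "accept", "accept", "accept", "accept", "accept", "accept"]

      # Percent agreement: share of manuscripts where both decisions match.
      matches = sum(j == c for j, c in zip(journal, chatgpt))
      print(f"concordance: {matches / len(journal):.0%}")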
  11. Hosp Pediatr. 2025 Jul 25. pii: e2025008326. [Epub ahead of print]
      Authorship confers both credit and responsibility for original scientific research. It is a highly prized, currency-like resource in academic medicine, and discussing it transparently with colleagues can feel as uncomfortable as talking about salaries. Junior researchers especially need to feel confident in their approach to navigating contentious authorship scenarios because they arise so commonly in academic research. Fortunately, there are simple tools researchers can employ to successfully prevent and navigate most authorship conflict. With these tools, researchers can systematically identify authors, outline author roles, recognize common forms of authorship conflict, and escalate conflict to appropriate channels for resolution.
    DOI:  https://doi.org/10.1542/hpeds.2025-008326
  12. Biomol Biomed. 2025 Jul 20.
      A systematic review with meta-analysis (SRMA) represents the pinnacle of evidence, but its validity depends on methodological rigor. This narrative review synthesizes recommendations from major reporting frameworks, namely the Preferred Reporting Items for Systematic Reviews and Meta-Analyses 2020 (PRISMA 2020), the Meta-Analysis of Observational Studies in Epidemiology (MOOSE), and the Preferred Reporting Items for Overviews of Reviews (PRIOR), into a concise checklist for peer reviewers. The checklist addresses common sources of bias that often escape editorial assessment. It first outlines how reviewers should assess the rationale for an SRMA by identifying existing syntheses on the same topic and determining whether the new work provides substantive novelty or a significant update. Best practices are summarized for protocol registration, comprehensive search strategies, study selection and data extraction, risk-of-bias evaluation, and context-appropriate statistical modeling, with a specific focus on heterogeneity, small-study effects, and data transparency. [A sketch of the standard heterogeneity statistics follows this entry.] Case examples highlight frequent pitfalls, such as unjustified pooling of heterogeneous designs and selective outcome reporting. Guidance is also provided for formulating balanced, actionable review comments that enhance methodological integrity without extending editorial timelines. This checklist equips editors and reviewers with a structured tool for systematic appraisal across clinical disciplines, ultimately improving the reliability, reproducibility, and clinical utility of future SRMAs.
    DOI:  https://doi.org/10.17305/bb.2025.12979
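    Two of the checklist's statistical concerns, heterogeneity and model choice, reduce to a handful of formulas. Below is a minimal Python sketch of DerSimonian-Laird random-effects pooling with Cochran's Q and I-squared; the per-study effect sizes are invented for illustration and do not come from any real meta-analysis.

      import numpy as np

      # Hypothetical per-study log odds ratios and their variances.
      y = np.array([0.30, 0.10, 0.55, -0.05, 0.40])
      v = np.array([0.04, 0.02, 0.09, 0.03, 0.06])

      # Fixed-effect (inverse-variance) pooling.
      w = 1 / v
      pooled_fe = np.sum(w * y) / np.sum(w)

      # Cochran's Q and the DerSimonian-Laird between-study variance tau^2.
      q = np.sum(w * (y - pooled_fe) ** 2)
      df = len(y) - 1
      c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
      tau2 = max(0.0, (q - df) / c)

      # I^2: share of total variability attributable to heterogeneity.
      i2 = max(0.0, (q - df) / q) * 100

      # Random-effects pooling adds tau^2 to each study's variance.
      w_re = 1 / (v + tau2)
      pooled_re = np.sum(w_re * y) / np.sum(w_re)
      se_re = np.sqrt(1 / np.sum(w_re))

      print(f"tau^2 = {tau2:.3f}, I^2 = {i2:.1f}%")
      print(f"random-effects estimate: {pooled_re:.3f} "
            f"(95% CI {pooled_re - 1.96 * se_re:.3f} "
            f"to {pooled_re + 1.96 * se_re:.3f})")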
  13. Nature. 2025 Jul;643(8073): 913
      
    Keywords:  Publishing; Scientific community
    DOI:  https://doi.org/10.1038/d41586-025-02317-z
  14. JACC Adv. 2025 Jul 17. pii: S2772-963X(25)00220-0. [Epub ahead of print] 4(8): 101802
      
    Keywords:  paper mill; peer review; plagiarism; research misconduct
    DOI:  https://doi.org/10.1016/j.jacadv.2025.101802
  15. Br J Radiol. 2025 Jul 24. pii: tqaf174. [Epub ahead of print]
       OBJECTIVES: We aimed to establish the extent to which prognostic models published in highly indexed radiological journals cite and adhere to generally accepted reporting guidelines.
    METHODS: We identified articles reporting multivariable prognostic models, developed using regression and published in the top 3 indexed general radiological journals, December 2022 to May 2023 inclusive. We determined whether they cited the generally accepted reporting guideline, TRIPOD (Transparent Reporting of a multivariable prediction model for Individual Prognosis or Diagnosis). We scored adherence to individual TRIPOD domains to determine reporting quality both overall and for specific areas.
    RESULTS: We included 140 articles. Only 4% (n = 6) cited TRIPOD, with just one including the checklist. TRIPOD adherence was poor overall, with a median score of 57% (interquartile range, IQR, 48% to 64%; range 30% to 87%). Particularly poorly reported domains were title (2% adherence), abstract (3%), and statistical analysis (5%). Only 38% of articles (n = 53) named a statistician author. Only one journal mentioned the TRIPOD guidelines in its "Instructions for Authors", and even it did not mandate checklist submission.
    CONCLUSIONS: The large majority of prognostic models published in highly indexed radiological journals neither cited TRIPOD nor fulfilled its recommendations.
    ADVANCES IN KNOWLEDGE: Authors should adhere to the TRIPOD statement so that their work is reported with sufficient clarity, and radiological journals should stipulate adherence for authors submitting prognostic models for publication.
    Keywords:  Models; Multivariate Analysis; Prognostic; Radiology; Regression Analysis; Statistical
    DOI:  https://doi.org/10.1093/bjr/tqaf174
  16. Nature. 2025 Jul 22.
      
    Keywords:  Careers; Publishing; Research data; Scientific community
    DOI:  https://doi.org/10.1038/d41586-025-02312-4
  17. BMJ Open. 2025 Jul 22. 15(7): e097148
       OBJECTIVE: The International Committee of Medical Journal Editors requires data sharing statements in trial publications, but whether cardiology journals request data sharing statements in clinical trial submissions is unclear. We performed a survey to assess whether cardiology journals request data sharing statements in clinical trials.
    DESIGN, SETTING, DATA SOURCE AND PARTICIPANTS: All cardiac and cardiovascular systems journals that published clinical trials from January 2019 to December 2022 were included. The study outcome was journal requests for data sharing statements. Multivariable logistic regression analysis was used to examine the association between journal characteristics and journal requests. We also explored whether journal requests aligned with their subsequently published clinical trials.
    RESULTS: A total of 126 journals were included, of which 96 (76.2%) requested data sharing statements in clinical trials. Elsevier journals and Consolidated Standards of Reporting Trials (CONSORT) endorsement had increased adjusted odds of requesting data sharing statements, with ORs of 5.74 (95% CI 1.45 to 22.70) and 7.21 (95% CI 2.69 to 19.32), respectively. Of the 78 journals that requested statements, 24 (30.8%) nonetheless did not publish any data sharing statement in their trial reports. [A sketch of this kind of adjusted odds ratio analysis follows this entry.]
    CONCLUSIONS: Approximately one in four cardiology journals did not request data sharing statements on clinical trial submissions, while a substantial inconsistency existed between journal requests and the actual publications of statements in their published trial reports.
    Keywords:  CARDIOLOGY; Clinical Trial; Health informatics; Surveys and Questionnaires
    DOI:  https://doi.org/10.1136/bmjopen-2024-097148
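    Adjusted odds ratios like those above come from exponentiating multivariable logistic regression coefficients. A minimal Python sketch with statsmodels, using a fabricated dataset; the variable names ("elsevier", "consort", "requests_dss") and coefficients are placeholders, not the study's actual coding or results.

      import numpy as np
      import pandas as pd
      import statsmodels.formula.api as smf

      rng = np.random.default_rng(0)
      n = 126  # same order of magnitude as the 126 journals studied

      # Fabricated journal-level predictors: publisher and CONSORT
      # endorsement, each coded 0/1.
      df = pd.DataFrame({
          "elsevier": rng.integers(0, 2, n),
          "consort": rng.integers(0, 2, n),
      })

      # Simulate the outcome (journal requests data sharing statements)
      # from an assumed logistic model.
      logit = -0.2 + 1.5 * df["elsevier"] + 1.8 * df["consort"]
      df["requests_dss"] = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

      # Multivariable logistic regression; exponentiated coefficients
      # are adjusted odds ratios with 95% confidence intervals.
      fit = smf.logit("requests_dss ~ elsevier + consort", data=df).fit(disp=False)
      print(np.exp(fit.params))      # adjusted odds ratios
      print(np.exp(fit.conf_int()))  # 95% CIs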
  18. J Thorac Cardiovasc Surg. 2025 Jul 21. pii: S0022-5223(25)00618-X. [Epub ahead of print]
    Cardiothoracic Ethics Forum
      
    Keywords:  Ethics; Law and regulation; Peer review; Professional affairs; Surgical ethics
    DOI:  https://doi.org/10.1016/j.jtcvs.2025.07.024
  19. Ewha Med J. 2024 Jul;47(3): e44
       Objectives: The objective of this study was to develop a reporting guideline for epidemiological survey reports, referred to as "Guidelines for Survey Reporting (G-SURE)."
    Methods: To develop G-SURE, we adopted a systematic approach, starting with a detailed review of recent survey reports in Public Health Weekly Report, Eurosurveillance, and Morbidity and Mortality Weekly Report and an analysis of current reporting standards. After drafting the guidelines, our team conducted an in-depth internal evaluation to assess their effectiveness and applicability. We then refined the guidelines based on insights from external experts and potential users, particularly those with significant experience in survey reporting. The plan also includes ongoing efforts to widely share the guidelines and update them periodically, incorporating new findings and user feedback.
    Results: G-SURE will provide a structured framework for reporting outbreak investigations, comprising a detailed checklist and Explanation & Elaboration documents. These will improve the transparency, consistency, and quality of public health documentation.
    Conclusion: In this protocol article, we introduce G-SURE, a guideline developed to improve epidemiological survey research. G-SURE addresses the critical need for uniform reporting standards in epidemiological surveys, aiming to improve the quality and relevance of research outcomes in this area. This guideline is also designed to be a key resource for peer reviewers and editors, aiding them in efficiently assessing the thoroughness and accuracy of survey reports. By providing consistent reporting criteria, G-SURE seeks to minimize the confusion and irregularities often encountered in the process of scientific publication.
    Keywords:  Public health; Reporting guideline; Study protocol; Survey report
    DOI:  https://doi.org/10.12771/emj.2024.e44
  20. Ecol Evol. 2025 Jul;15(7): e71837
      Open science, in which work and knowledge are developed openly and shared in full, offers critical resources that give students insight into the process of research in many fields. There are extensive opportunities within the environmental sciences to incorporate open science into undergraduate courses. Seven major open science concepts that align with professional research activities could be used in teaching undergraduate environmental science courses: open-access papers, pre-prints, open data, open-source software, published code, collaborative tools for version control, and open notebooks. Here, we assessed the use of these open science concepts in connection with the European Union pillars of open science, outlining key benefits, challenges, and how these tools can be used in undergraduate environmental science courses. Specifically, these tools support a framework for open science structured around eight pillars, providing incentives to collaborate, enhancing transparency and openness, and promoting diversity and inclusivity. Collectively, these tools support the teaching of environmental science content, as many of the skills gained relate directly to analyzing environmental topics and data while supporting transparency to collaborators and stakeholders. This creates learning opportunities that include finding and reusing data, team collaboration, and reading and working with code. Further endorsing the use of open science in environmental science courses can enhance them, as these tools align with professional research activities currently in use, including publishing data collected in labs, pre-print publishing of capstone papers or lab reports, openly publishing analysis code, and publishing field notes.
    Keywords:  collaborative learning; education; higher education; open data; open science; science education; teaching
    DOI:  https://doi.org/10.1002/ece3.71837
  21. Ann Surg Oncol. 2025 Jul 19.
       BACKGROUND: Less than half of scientific presentations result in manuscript publication. This study aims to assess the manuscript publication rate for oral breast surgery abstracts at three national surgical conferences and to examine factors associated with successful publication.
    METHODS: We performed a retrospective review of orally presented breast surgery abstracts from the American Society of Breast Surgeons (ASBrS), American College of Surgeons (ACS), and Society of Surgical Oncology (SSO) annual meetings from 2017 to 2022. Univariate and multivariate logistic and linear regression models were used to examine factors predictive of publication, journal impact factor, and time from presentation to publication.
    RESULTS: A total of 441 oral presentations met inclusion criteria. Most presenting first authors were trainees (residents or fellows; 56.8%) or attendings/faculty (26.7%). The overall manuscript publication rate was 60.5% (n = 267): 81.1% for ASBrS, 43.0% for ACS, and 63.4% for SSO. In unadjusted models, compared with medical students, trainees (OR 2.2, 95% CI 1.1-4.5) and attendings/faculty (OR 2.9, 95% CI 1.4-6.4) were more likely to publish (p < 0.05). On multivariate analysis, abstracts presented at ACS (OR 0.2, 95% CI 0.11-0.36, p < 0.001) and SSO (OR 0.4, 95% CI 0.25-0.79, p < 0.001) had lower odds of publication than those presented at ASBrS. The median journal impact factor was 3.7 and the median time to publication was 5 months (IQR 2-12.5). [A sketch of the unadjusted odds ratio calculation follows this entry.]
    CONCLUSION: First author role and meeting type were predictive of manuscript publication for breast surgery presentations at national meetings on unadjusted and multivariate models, respectively. These findings may inform abstract submissions and underscore the need to support medical students in publishing their work.
    Keywords:  Breast; Conference; Manuscript; Oncology; Presentation; Publication; Surgery
    DOI:  https://doi.org/10.1245/s10434-025-17852-2
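    An unadjusted odds ratio like the OR 2.2 (95% CI 1.1-4.5) for trainees versus medical students can be computed directly from a 2x2 table. A minimal Python sketch with invented cell counts (the abstract does not report the underlying counts):

      import math

      # Hypothetical 2x2 table (NOT the study's counts):
      #                    published   not published
      # trainees               a             b
      # medical students       c             d
      a, b, c, d = 160, 90, 20, 25

      # Unadjusted odds ratio and Wald 95% confidence interval.
      or_hat = (a * d) / (b * c)
      se_log_or = math.sqrt(1/a + 1/b + 1/c + 1/d)
      lo = math.exp(math.log(or_hat) - 1.96 * se_log_or)
      hi = math.exp(math.log(or_hat) + 1.96 * se_log_or)
      print(f"OR = {or_hat:.2f} (95% CI {lo:.2f} to {hi:.2f})")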