bims-skolko Biomed News
on Scholarly communication
Issue of 2025-06-08
thirty-two papers selected by
Thomas Krichel, Open Library Society



  1. Nature. 2025 Jun 04.
      
    Keywords:  Medical research; Publishing; Scientific community
    DOI:  https://doi.org/10.1038/d41586-025-01739-z
  2. Pak J Med Sci. 2025 May;41(5): 1261-1263
      
    Keywords:  Editorial process; Peer review; Research integrity; Simulated Training program
    DOI:  https://doi.org/10.12669/pjms.41.5.12262
  3. mBio. 2025 Jun 05. e0043025
      Peer review is the process by which the quality of scholarly work is assessed prior to being published, presented, or funded. The consequences of flawed research entering the public domain in the "post-truth" era highlight the need to improve peer review quality, which we believe can be achieved by standardizing training. Here, we aim to enhance the quality of published literature by presenting a systematic guide to train new reviewers (and aid experienced ones) in the art of peer reviewing the rigor of scientific manuscripts.
    Keywords:  education; peer review; reproducibility; rigor; scientific premise; training; transparency
    DOI:  https://doi.org/10.1128/mbio.00430-25
  4. J Nurs Scholarsh. 2025 Jun 05.
      
    Keywords:  ChatGPT; artificial intelligence; review
    DOI:  https://doi.org/10.1111/jnu.70019
  5. Mov Disord Clin Pract. 2025 Jun 05.
      
    Keywords:  case reports; genotype–phenotype correlation; movement disorders; peer review; phenomenology
    DOI:  https://doi.org/10.1002/mdc3.70173
  6. J Dent. 2025 May 30. pii: S0300-5712(25)00311-2. [Epub ahead of print] 105867
       OBJECTIVE: Artificial intelligence (AI) is increasingly used in dental research for diagnosis, treatment planning, and disease prediction. However, many dental AI studies lack methodological rigor, transparency, or reproducibility, and no dedicated peer-review guidance exists for this field.
    METHODS: Editors and reviewers from the ITU/WHO/WIPO AI for Health - Dentistry group participated in a structured survey and group discussions to identify key elements for reviewing AI dental research. A draft of the recommendations was circulated for feedback and consensus.
    RESULTS: The consensus from editors and reviewers identified four key indicators of high-quality AI dental research: (1) relevance to a real clinical or methodological problem, (2) robust and transparent methodology, (3) reproducibility through data/code availability or functional demos, and (4) adherence to ethical and responsible reporting practices. Common reasons for rejection included lack of novelty, poor methodology, limited external testing, and overstated claims. Four essential checks were proposed to support peer review: the study should address a meaningful clinical question, follow appropriate reporting guidelines (e.g., DENTAL-AI, STARD-AI), clearly describe reproducible methods, and use precise, justified, and clinically relevant wording.
    CONCLUSION: Editors and reviewers play a critical role in improving the quality of AI research in dentistry. This guidance aims to support more robust peer review and contribute to the development of reliable, clinically relevant, and ethically sound AI applications in dentistry.
    Keywords:  Artificial intelligence; Deep learning; Dentistry; Machine learning; Peer-review
    DOI:  https://doi.org/10.1016/j.jdent.2025.105867
  7. J Korean Med Sci. 2025 Jun 02. 40(21): e170
      Artificial intelligence (AI) has shown its ability to transform academic writing and publishing. It offers significant benefits, including enhanced efficiency, consistency, and integrity. However, these advancements are accompanied by ethical concerns (particularly around authorship, originality, and transparency) and the need for human oversight in peer review and editorial processes. In this study we explore AI for ethics checks in journal submissions. Specific AI platforms, such as YesChat for bias detection, Turnitin's iThenticate for plagiarism, Proofig for image integrity, and GPTZero for AI-generated content, can identify ethical breaches through tailored prompts and queries. Additionally, AI is increasingly used to detect missing or vague ethics statements, conflicts of interest, and citation manipulation by analyzing structured text and databases. AI-enhanced tools like Elsevier's Editorial Manager and Enago Read assist in ensuring compliance with journal-specific ethical guidelines and streamline peer review. Moreover, emerging algorithms, such as CIDRE, have shown promise in identifying abnormal citation behaviors. As AI accuracy improves, these platforms are expected to be integrated directly into submission systems, enhancing research integrity, transparency, and accountability.
    Keywords:  Academic Publishing; Artificial Intelligence; Authorship; Ethics; Peer Review
    DOI:  https://doi.org/10.3346/jkms.2025.40.e170
  8. Nature. 2025 Jun 03.
      
    Keywords:  Communication; Media; Publishing; Technology
    DOI:  https://doi.org/10.1038/d41586-025-01527-9
  9. J Arthroplasty. 2025 Jun 02. pii: S0883-5403(25)00612-6. [Epub ahead of print]
      In recent years, there has been a rise in the use of artificial intelligence (AI) in medical research, including within the field of orthopaedic surgery[1-4]. The increased volume and availability of digital data due to the widespread adoption of electronic health record systems provide rich datasets for training AI models. In addition, advancements in AI methodology and computational hardware have allowed for more powerful and flexible models. Despite these promising developments, the inherently multidisciplinary nature of medical AI research presents a challenge for reporting findings. The algorithms used to train AI models are mathematically complex, and medical data is notoriously messy. These difficulties complicate the effective communication and publication of research findings to the broader medical community. Consequently, there is a growing need for standardized reporting guidelines that can bridge this communication gap and ensure the transparency, reproducibility, and clinical relevance of AI research in orthopaedic surgery. The need for such standardization is particularly urgent given the combination of the recency of AI-driven research with the rapid pace of adoption. The concepts are unfamiliar to many readers and journal reviewers; thus, guidelines are critical to ensure quality and transparency. Currently, there are 17 reporting guidelines registered on the Enhancing the Quality and Transparency of Health Research (EQUATOR) network relating to AI and machine learning[5]. Many of these guidelines are study-type specific and have been adapted from earlier guidelines, such as the TRIPOD[6], SPIRIT[7], CONSORT[8], CHEERS[9], and DECIDE[10] guidelines. Others have created more generic recommendations on AI reporting, such as the MINIMAR[11], CAIR[12], and MI-CLAIM[13] guidelines, but these often do not take the intricacies of medical research into account. 
Also, there have been subspecialty-specific guidelines on AI research proposed, such as STREAM-URO[14] (urology), PRIME[15] (cardiology), and CLAIM [16,17] (radiology)[18,19]. A list of these guidelines and their item domains has been provided in Table 1. The purpose of this paper was to provide general guidelines and important considerations when publishing AI-derived findings in orthopaedic surgery. For more general information regarding AI in orthopaedics, we refer to these papers as a primer [20-26].
    DOI:  https://doi.org/10.1016/j.arth.2025.05.093
  10. Chiropr Man Therap. 2025 Jun 04. 33(1): 23
       BACKGROUND: In the realm of biomedical research articles, authors typically utilize descriptive statistics to outline the characteristics of their study samples. The standard deviation (SD) serves to illustrate variability among the individuals in a sample, whereas the standard error of the mean (SEM) conveys the level of uncertainty associated with the sample mean's representation of the population mean. It is not unusual for authors of scientific articles to incorrectly utilize the SEM rather than the SD when explaining data variability. This is problematic because the SEM is consistently smaller than the SD, which could cause readers to underestimate variation in the data. In medical journals, inappropriate use has been found in 14-64% of articles. Moreover, in the field of musculoskeletal health and manual medicine, there is a noticeable absence of literature on the appropriate presentation of statistics.
    AIM: The aim of this study was to map the frequency of inappropriate reporting of SEM in articles published over a three-year period in three prominent journals in manual medicine.
    METHODS: In this critical analysis, all articles in three journals - BMC Chiropractic and Manual Therapies (CMT), Journal of Manipulative and Physiological Therapeutics (JMPT) and Musculoskeletal Science and Practice: An International Journal of Musculoskeletal Physiotherapy (MSP) - published between 2017 and 2019 were analysed based on descriptive statistics that inappropriately or vaguely reported SEMs.
    RESULTS: In total, 790 articles were analysed from the three journals, 487 of which were found to report the SEM. Among these articles, we identified a frequency of 1.4% of inadequate SEM use. The investigation also showed that in 2.5% of the cases, authors did not clarify whether the ± sign presented in text, tables or figures expressed SDs or SEMs.
    CONCLUSION: There was a low frequency (1.4%) of inaccurately reported SEMs in scientific journals focusing on manual medicine, which was notably lower than studies conducted in other fields. Additionally, it was noted that in 2.5% of the articles, the ± sign was not adequately defined, which could lead to confusion among readers and hinder the interpretation of the results.
    Keywords:  Variability of the study sample; Descriptive statistics; Standard deviation; Standard error of the mean; Statistics; Statistics as a topic
    DOI:  https://doi.org/10.1186/s12998-025-00587-y
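    [Editor's note] The SD/SEM distinction in the abstract above can be made concrete with a short numerical sketch; the sample values below are invented purely for illustration:

    ```python
    import math
    import statistics

    # Hypothetical sample of 10 measurements (illustrative values only)
    sample = [4.2, 5.1, 3.8, 6.0, 4.9, 5.5, 4.4, 5.8, 5.0, 4.6]

    n = len(sample)
    sd = statistics.stdev(sample)   # sample SD: variability among individuals
    sem = sd / math.sqrt(n)         # SEM: uncertainty of the sample mean

    # Because SEM = SD / sqrt(n), the SEM is always smaller than the SD
    # for n > 1, so reporting "mean +/- SEM" where "mean +/- SD" is meant
    # understates the variability in the data.
    print(f"mean = {statistics.mean(sample):.2f}, SD = {sd:.2f}, SEM = {sem:.2f}")
    ```

    Growing the sample shrinks the SEM (the mean is pinned down more precisely) while the SD stabilizes around the population spread, which is why the two must not be interchanged.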
  11. J Vet Pharmacol Ther. 2025 Jun 06.
      Reproducibility and replicability of study results are crucial for advancing scientific knowledge. However, achieving these goals is often challenging, which can compromise the credibility of research and incur immeasurable costs for the progression of science. Despite efforts to standardize reporting with guidelines, the description of statistical methodology in manuscripts often remains insufficient, limiting the possibility of replicating scientific studies. A thorough, transparent, and complete report of statistical methods is essential for understanding study results and mimicking statistical strategies implemented in previous studies. This review outlines the key statistical reporting elements required to replicate statistical methods in most current veterinary pharmacology studies. It also offers a protocol for statistical reporting to aid in manuscript preparation and to assist trialists and editors in the collective effort to advance veterinary pharmacology research.
    Keywords:  assumptions; experimentation; replicability; reproducibility; statistics
    DOI:  https://doi.org/10.1111/jvp.70001
  12. Adv Clin Exp Med. 2025 Jun 05.
      Instructions for authors issued by editorial offices of scientific journals require periodic critical analysis in order to maintain their clarity and understandability. In this editorial, selected aspects of the guidelines issued by Advances in Clinical and Experimental Medicine were reappraised by the editors of this journal: regulations concerning financial disclosure and conflict of interest, as well as acknowledgements, equal contribution of 2 or more authors, tables, figures, and references were discussed. Reasons were provided for those rules which (based on the editors' experience and expertise) may not seem obvious to authors, e.g., why equal contribution is permitted while co-first authorship is not. Multiple examples of papers fulfilling the analyzed rules in a copybook fashion were provided. In the Conclusions, it was briefly discussed whether some of the rules specified in the instructions for authors could be enforced only after acceptance for publication (e.g., rules concerning the numeration of tables and figures, or acknowledgements). It was also explained why the other rules listed above should be fulfilled before peer review commences, for 3 reasons: (1) information about funding sources and conflicts of interest is crucial for the ethical integrity of the whole work and cannot be added at a later stage; (2) satisfactory quality of tables and figures is a prerequisite for peer review; and (3) resolving many issues after acceptance would be cumbersome (e.g., reducing the number of tables or figures) or would at least significantly extend the time required for editing.
    Keywords:  co-authorship; editor; instructions for authors; scientific journal; scientific publishing
    DOI:  https://doi.org/10.17219/acem/205025
  13. Cochrane Evid Synth Methods. 2024 Apr;2(4): e12054
     Background: Well-conducted systematic reviews contribute to informing clinical practice and public health guidelines. Between 2008 and 2018, Cochrane authors in sub-Saharan Africa published progressively fewer Cochrane Reviews compared with non-Cochrane reviews. The objective of this study was to determine what motivated trained Cochrane authors in sub-Saharan Africa to conduct and publish non-Cochrane reviews over Cochrane Reviews.
    Methods: We conducted a mixed-methods exploratory sequential study. We purposely selected 12 authors, each of whom had published at least one Cochrane and one non-Cochrane review, for in-depth, semi-structured interviews. We manually coded and analysed the qualitative data using a Grounded Theory approach and used the results to inform the survey questions. Subsequently, we surveyed 60 authors with similar publishing experience. We analysed the quantitative data using descriptive and inferential statistics.
    Results: Facilitators to publishing with Cochrane were a high impact factor, rigorous research, and visibility. Among the barriers, the main categories were the protracted time to complete Cochrane Reviews, the complex title registration process, and inconsistencies between Cochrane Review groups regarding editorial practices. In the survey, authors confirmed rigorous research and reviewing processes (84%), high impact factor (77%), and good mentorship (73%) as facilitators. The major barriers included Cochrane's long reviewing process (70%) and Cochrane's complicated title registration (50%). Among authors with publishing experience in the previous 10 years below the 95th percentile of systematic review publications, there was no significant difference between the medians for Cochrane (1) and non-Cochrane (0) reviews (p = 0.06). Similarly, for those at or above the 95th percentile, there was no significant difference between the medians for Cochrane (4) and non-Cochrane (6) reviews (p = 0.344).
    Conclusion: Authors considered the visibility and relevance of Cochrane research as a trade-off point. They continued publishing with Cochrane despite the barriers that they encountered. However, the concerns raised by many authors are worth addressing.
    Keywords:  Cochrane Reviews; mixed‐methods; non‐Cochrane reviews; publication practices; sub‐Saharan Africa
    DOI:  https://doi.org/10.1002/cesm.12054
  14. Adv Simul (Lond). 2025 Jun 02. 10(1): 31
      In response to Cheng et al.'s article on ethical recommendations for artificial intelligence (AI)-assisted academic writing, we propose an expanded ethical discourse to address the evolving role of AI in scholarly communication. While applauding the authors' foundational framework, we argue for greater disciplinary specificity, clearer thresholds for AI contribution, and broader consideration of systemic risks including linguistic bias, environmental impact, and corporate concentration. We advocate for the development of a graded typology of AI involvement, institution-led regulatory mechanisms, and integration of ethical AI use into editorial and research training practices. These enhancements are essential for building equitable, transparent, and sustainable AI governance in academic publishing.
    DOI:  https://doi.org/10.1186/s41077-025-00362-2
  15. JDS Commun. 2025 May;6(3): 452-457
      The launch of generative artificial intelligence (GenAI) tools has catalyzed considerable discussion about the potential impacts of these systems within the scientific article preparation process. This symposium paper seeks to summarize current recommendations on the use of GenAI tools in scientific article preparation, and to provide speculations about the future challenges and opportunities of GenAI use in scientific publishing. Due to the dynamic nature of these tools and the rapid advancement of their sophistication, the most important recommendation is that ongoing engagement and discussion within the scientific community about these issues is critical. When using GenAI tools in scientific article preparation, humans are ultimately accountable and responsible for the products produced. Given that accountability, an expert panel convened by the National Academies of Sciences, Engineering, and Medicine recently proposed principles of GenAI use in science communication, including (1) transparent disclosure and attribution; (2) verification of AI-generated content and analyses; (3) documentation of artificial intelligence (AI)-generated data; (4) a focus on ethics and equity; and (5) continuous monitoring, oversight, and public engagement. In addition to the importance of human accountability, many publishers have established consistent policies suggesting that GenAI tools should not be used for peer reviewing, figure generation or manipulation, or assigned authorship on scientific articles. Along with the potential ethical challenges associated with GenAI use in scientific publishing, there are numerous potential benefits. Herein we summarize example conversations demonstrating the capacity of GenAI tools to support the article preparation process, and an example standard operating procedure for human-AI interaction in article preparation.
Finally, diverse broader questions about the impact of GenAI tools on communication, knowledge, and advancement of science are raised for rumination.
    DOI:  https://doi.org/10.3168/jdsc.2024-0707
  16. Front Med (Lausanne). 2025; 12: 1518399
       Background: To enhance reproducibility and transparency, the International Committee of Medical Journal Editors (ICMJE) required that all trial reports submitted after July 2018 must include a data sharing statement (DSS). Accordingly, emerging biomedical journals required trial authors to include a DSS in submissions for publication if trial reports were accepted. Nevertheless, it was unclear whether endocrinology and metabolism journals had this request for DSS of clinical trial reports. Therefore, we aimed to explore whether endocrinology and metabolism journals requested DSS in clinical trial submissions, and their compliance with the declared request in published trial reports.
    Methods: Journals that were from the category of "Endocrinology & Metabolism" defined by Journal Citation Reports (JCR, as of June 2023) and published clinical trial reports between 2019 and 2022, were included for analysis. The primary outcome was whether a journal explicitly requested a DSS in its manuscript submission instructions for clinical trials, which was extracted and verified in December 2023. We also evaluated whether these journals indeed included a DSS in their published trial reports that were published between December 2023 and May 2024.
    Results: A total of 141 endocrinology and metabolism journals were included for analysis, among which 125 (88.7%) requested DSS in clinical trial submissions. Journals requesting DSS had a significantly lower JCR quartile and higher impact factor than journals without a DSS request. Among the 90 journals requesting DSS, 14 (15.6%) did not publish any DSS in their published trial reports between December 2023 and May 2024.
    Conclusion: Over 10% of endocrinology and metabolism journals did not request a DSS in clinical trial submissions. More than 15% of the journals that declared a DSS requirement in their submission instructions did not publish any DSS in their published trial reports. More efforts are needed to improve the practice of endocrinology and metabolism journals in requesting and publishing DSS for clinical trial reports.
    Keywords:  ICMJE; clinical trial; data sharing; endocrinology; metabolism
    DOI:  https://doi.org/10.3389/fmed.2025.1518399
  17. Cochrane Evid Synth Methods. 2024 Sep;2(9): e70002
     Objectives: To assess the publication rate, and the time from registration to publication, of systematic intervention reviews from South American countries registered in PROSPERO in 2020.
    Study Design and Setting: Cross-sectional study. We searched PROSPERO for protocols of systematic reviews of interventions with affiliation in South America during 2020. We randomly extracted 10% and searched databases to identify their publication status.
    Results: We identified 1361 intervention systematic reviews with South American affiliation registered in PROSPERO during 2020. We assessed a random sample of 10% (n = 135). The publication rate in indexed journals was 36.9% (n = 41). The median time to publication was 1.6 years (IQR 0.9-2.1).
    Conclusion: The publication rate of South American PROSPERO registers is low. These findings emphasize the need for further efforts to improve publication rates and increase the visibility of South American research in the global scientific community.
    Keywords:  PROSPERO registry; South America; Systematic reviews; intervention studies; meta‐epidemiology; publication bias
    DOI:  https://doi.org/10.1002/cesm.70002
  18. Clin Microbiol Infect. 2025 May 31. pii: S1198-743X(25)00277-0. [Epub ahead of print]
      
    DOI:  https://doi.org/10.1016/j.cmi.2025.05.029
  19. Med Clin (Barc). 2025 May 30. pii: S0025-7753(25)00264-7. [Epub ahead of print] 165(2): 107036
      
    DOI:  https://doi.org/10.1016/j.medcli.2025.107036
  20. Cochrane Evid Synth Methods. 2024 Apr;2(4): e12053
       Aim: We aimed to investigate authorship issues after the implementation of an authorship declaration form in a Cochrane Review Group.
    Methods: The Cochrane Colorectal Group uses an authorship declaration form that consists of three parts: (1) manuscript information, (2) documentation for roles according to the four authorship criteria of the International Committee of Medical Journal Editors (ICMJE), and (3) identification information of individual authors and signed approval. The manuscripts' contact authors were responsible for collecting the forms from all coauthors. This observational cohort study reports on all authorship issues in authorship declaration forms collected from February 2020 to December 2023.
    Results: We received 276/277 authorship declaration forms or replies from authors (response rate 99.6%) from 44 manuscripts, including 52% protocols and 48% reviews. Authorship issues were present in 14/44 (32%) of the manuscripts, and the most common issue was that not all authors fulfilled all four ICMJE authorship criteria. Six gift authors were removed from by-lines. Issues in nine of the 14 manuscripts were resolved after the author groups were informed about the ICMJE authorship criteria and the guidance from the Committee on Publication Ethics (COPE). The issues in the remaining five manuscripts remained unresolved because the manuscripts were transferred or rejected and thus ceased to be developed by the Cochrane Colorectal Group.
    Conclusion: Authorship issues were raised in almost one-third of manuscripts. Most issues were resolved and six gift authorships were prevented. The awareness of authorship criteria is sharpened when all authors are individually asked to fill out and sign a form. This could help decrease the rate of unethical authorships in Cochrane publications and contribute to more ethical and robust evidence production.
    Keywords:  authorship; cohort; editorial policies; publishing; review literature as topic
    DOI:  https://doi.org/10.1002/cesm.12053
  21. Cochrane Evid Synth Methods. 2024 Apr;2(4): e12050
       Introduction: Systematic reviews play a crucial role in informing clinical decision-making, policy formulation, and evidence-based practice. However, despite the existence of well-established guidelines, inadequately executed and reported systematic reviews continue to be published. These highly cited reviews not only pose a threat to the credibility of science but also have substantial implications for medical decision-making. This study aims to evaluate and recommend improvements to the author instructions of biomedical and health journals concerning the conducting and reporting of systematic reviews.
    Methods: A sample of 168 journals was selected based on systematic reviews published between 2020 and 2021, taking into account their Altmetric attention score, citation impact, and mentions in Altmetric Explorer. Author instructions were downloaded, and data extraction was carried out using a standardized web form. Two reviewers independently extracted data, and discrepancies were resolved by a third reviewer. The findings were presented using descriptive statistics, and recommendations for editorial teams were formulated. The protocol is registered with the Open Science Framework Registries (osf.io/bym8d).
    Results: One-third of the journals lack tailored guidance for systematic reviews, as demonstrated by the absence of references to conducting or reporting guidelines, protocol registration, data sharing, and the involvement of an information specialist. Half of the author instructions do not include a dedicated section on systematic reviews, hampering the findability of tailored information. The involvement of information specialists is seldom acknowledged. Ultimately, the absence of an update date in most author instructions raises concerns about the incorporation of the most recent developments and tools for systematic reviews.
    Conclusion: Journals that make substantial contributions to synthesizing evidence in biomedicine and health are missing an opportunity to provide clear guidance within their author instructions regarding the conducting and reporting of reliable systematic reviews. This not only fails to inform future authors but also potentially compromises the quality of this frequently published research type. Furthermore, there is a need for greater recognition of the added value of information specialists to the systematic review and publishing processes. This article provides recommendations drawn from the study's observations, aiming to help editorial teams enhance author instructions and, consequently, potentially assisting systematic reviewers in improving the quality of their reviews.
    Keywords:  author guideline; author instruction; editorial recommendation; information specialist; publishing; reporting guideline; systematic review
    DOI:  https://doi.org/10.1002/cesm.12050
  22. Cochrane Evid Synth Methods. 2024 Aug;2(8): e12099
     Background: Historically, peer reviewing has focused on the importance of research questions/hypotheses, appropriateness of research methods, risk of bias, and quality of writing. Until recently, the issues related to trustworthiness (including but not limited to plagiarism and fraud) have been largely neglected because of a lack of awareness and a lack of adequate tools and training. We set out to identify all relevant papers that have tackled the issue of trustworthiness assessment, in order to identify key domains that have been suggested as an integral part of any such assessment.
    Methods: We searched the literature for publications of tools, checklists, or methods used or proposed for the assessment of trustworthiness of randomized trials. Data items (questions) were extracted from the included publications and transcribed into Excel, including the assessment domain. Both authors then independently recategorised each data item into five domains (governance, plausibility, plagiarism, reporting, and statistics).
    Results: From the 41 publications we extracted a total of 284 questions and framed 77 summary questions grouped in five domains: governance (13 questions), plausibility (17 questions), plagiarism (4 questions), reporting (29 questions), and statistics (14 questions).
    Conclusion: The proposed menu of domains and questions should encourage peer reviewers, editors, systematic reviewers and developers of guidelines to engage in a more formal trustworthiness assessment. Methodologists should aim to identify the domains and questions that should be considered mandatory, those that are optional depending on the resources available, and those that could be discarded because of lack of discriminatory power.
    DOI:  https://doi.org/10.1002/cesm.12099
  23. Med Sci Monit. 2025 Jun 01. 31 e949923
      The Consolidated Standards of Reporting Trials (CONSORT) statement was first published in 1996 and emphasized the importance of accurate and complete reporting of clinical trials. In 2013, the Standard Protocol Items: Recommendations for Interventional Trials (SPIRIT) reporting guidelines for trial protocols were first published. There have been several extensions of CONSORT as new developments have been incorporated into clinical trials, and conditions have changed. For the first time, in 2025, the CONSORT and SPIRIT statements for clinical trials have been published simultaneously, aiming to harmonize these essential guidelines. However, it is important to recognize the practical challenges and complexities that have inevitably developed during the past three decades since the first CONSORT statement was published. This editorial aims to describe the opportunities and challenges of harmonizing the publication of the 2025 updates of the CONSORT and SPIRIT statements.
    DOI:  https://doi.org/10.12659/MSM.949923
  24. Nature. 2025 Jun 02.
      
    Keywords:  Communication; Information technology; Machine learning; Publishing
    DOI:  https://doi.org/10.1038/d41586-025-01661-4
  25. Ann Plast Surg. 2025 Jun 01. 94(6S Suppl 4): S559-S561
     ABSTRACT: Academic journals can expand scientific content by developing supplement issues based on academic society meetings. From 2014 to 2023, Annals of Plastic Surgery published between 42 and 143 supplemental articles per year, representing between 212 and 719 supplemental pages per year, in collaboration with academic society meetings. This feature can serve as a model for other journals exploring expansion strategies.
    Keywords:  article; issue; journal; manuscript; meeting; society; supplemental
    DOI:  https://doi.org/10.1097/SAP.0000000000004381
  26. Kathmandu Univ Med J (KUMJ). 2025 Jan-Mar; 22(88): 123-126
      Structured scientific writing in medicine is seldom a part of curricula, especially in non-native English-speaking countries. However, with the right tools and strategies, young researchers and academicians can be assured of artful dissemination of their research. The aim of this study is to propose a checklist that can help authors structure a polished scholarly manuscript. To achieve this, the authors carried out a literature search across prominent databases, including PubMed, MEDLINE, and Global Index Medicus, to investigate the common reasons for retraction or rejection of manuscripts between 2020 and 2023. The inclusion criteria were as follows: reviews, observational studies, commentaries, and editorials published in English since 2020 in the field of healthcare. A total of 32 results were identified, eight of which met the inclusion criteria. The eight included studies were from the fields of dentistry, cardiology, neurology, spine surgery, anaesthesiology, nursing, and medically assisted reproduction. The most common reasons for article rejection or retraction were academic misconduct, design errors, unintentional errors, and data fraud. To help authors avoid these flaws, the G.R.A.P.E. (Grammar, Reference Management, Archiving, Plagiarism, Equator-Network) checklist is proposed. Satisfying this checklist can result in a well-knit manuscript. The common reasons for article rejection/retraction can be avoided should students and academicians use the recommended strategies and tools as per the proposed checklist.
  27. Nature. 2025 Jun 02.
      
    Keywords:  Careers; Lab life; Publishing; Scientific community
    DOI:  https://doi.org/10.1038/d41586-025-01516-y
  28. Arch Rehabil Res Clin Transl. 2025 Mar;7(1): 100419
       Objective: To evaluate the submission guidelines of physical medicine and rehabilitation (PM&R) journals regarding their policies on the use of artificial intelligence (AI) in manuscript preparation.
    Design: Cross-sectional study, including 54 MEDLINE-indexed PM&R journals, selected by searching "Physical and Rehabilitation Medicine" as a broad subject term for indexed journals. Non-English journals, conference-related journals, and those not primarily focused on PM&R were excluded.
    Setting: PM&R journals.
    Participants: Not applicable.
    Interventions: Not applicable.
    Main Outcome Measures: Reviewing policies regarding the use of AI and comparing CiteScore, Source Normalized Impact per Paper (SNIP), Scientific Journal Ranking (SJR), and Impact Factor (IF) between journals with an AI policy and those without.
    Results: Of the 54 PM&R journals, only 46.3% had an AI policy. Among these, none completely banned AI use or allowed unlimited use without a declaration. Most journals (52%) permitted AI for manuscript editing with a required declaration, 44% allowed unlimited AI use with a declaration, and only 4% allowed AI-assisted editing without any declaration. No significant difference was found in scientometric scores between journals with and without AI policies (P>.05).
    Conclusions: Under half of MEDLINE-indexed PM&R journals had guidelines regarding the use of AI. None of the journals with AI policies entirely prohibited its use, nor did any allow unrestricted use without a declaration. Journals with defined AI policies did not demonstrate higher citation rates or scientometric scores.
    Keywords:  Artificial intelligence; Guideline; Journals; Machine Learning; Rehabilitation
    DOI:  https://doi.org/10.1016/j.arrct.2024.100419