bims-skolko Biomed News
on Scholarly communication
Issue of 2025-09-28
37 papers selected by
Thomas Krichel, Open Library Society



  1. J Oral Rehabil. 2025 Sep 25.
      Are we truly synthesizing new evidence, or simply restating it?
    Keywords:  bibliometric analysis; protocol registration; systematic reviews
    DOI:  https://doi.org/10.1111/joor.70059
  2. PLoS Biol. 2025 Sep;23(9): e3003368
      Sharing knowledge is a basic tenet of the scientific community, yet publication bias arising from the reluctance or inability to publish negative or null results remains a long-standing and deep-seated problem, albeit one that varies in severity between disciplines and study types. Recognizing that previous endeavors to address the issue have been fragmentary and largely unsuccessful, this Consensus View proposes concrete and concerted measures that major stakeholders can take to create and incentivize new pathways for publishing negative results. Funders, research institutions, publishers, learned societies, and the research community all have a role in making this an achievable norm that will buttress public trust in science.
    DOI:  https://doi.org/10.1371/journal.pbio.3003368
  3. Naunyn Schmiedebergs Arch Pharmacol. 2025 Sep 24.
      Integrity of academic publishing is increasingly undermined by fake publications massively produced by commercial "editing services" (so-called "paper mills"). These services use AI-supported production techniques at scale and sell fake publications to students, scientists, and physicians under pressure to advance their careers. Because the scale of fake publications in biomedicine is unknown, we developed an easy-to-apply rule to red-flag potentially fake publications and estimate their number. After analyzing questionnaires sent to authors of published papers, we developed simple classification rules and tested them in a 9-step bibliometric analysis in a sample of 17,120 publications listed in PubMed®. We first validated various simple rules and finally applied a multifactorial tallying rule comparing 400 known fakes with 400 random (presumed) non-fakes. This rule was then applied to 1,000 random publications each from 2020 and 2023. The multifactorial tallying rule was the best red-flagging method, with a 94% sensitivity and only an 11.5% false-alarm rate. The rate of red-flagged articles increased during the last decade, reaching an estimated 14.9% in 2020 and 16.3% in 2023. Countries with the highest proportion of red-flagged publications were China, India, Iran, Russia, and Turkey, with China and India the largest absolute contributors globally. Applying Bayes' rule resulted in an estimate of 5.8% actual fakes in the biomedical literature. Given 1.86 million Scimago-listed biomedical publications in 2023, we estimate the actual number of true fakes at 107,800 articles per year, growing steadily. Scientific publications in biomedicine can be red-flagged as potentially fake using fast-and-frugal classification rules to earmark them for subsequent scrutiny. When applying Bayes' rule, the annual true scale of fake publishing in biomedicine is about 19 times that of the 5,671 biomedical retractions in 2023.
This scale of fraudulent publishing is concerning as it can damage trust in science, endanger public health, and impact economic spending. But fake detection tools can enable retractions of fake publications at scale and help prevent further damage to the permanent scientific record.
    Keywords:  Biomedical science; Fake; Paper mill; Research integrity; Science fraud; Trust
    DOI:  https://doi.org/10.1007/s00210-025-04275-9
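The abstract's 5.8% estimate follows from inverting the mixture of flagged fakes and false alarms; a minimal sketch using only the figures reported in the abstract (the solver itself is our illustration, not the authors' code):

```python
# Recover the true prevalence p of fakes from the observed red-flag rate,
# given flag_rate = sensitivity * p + false_alarm * (1 - p).
def true_fake_prevalence(flag_rate, sensitivity=0.94, false_alarm=0.115):
    return (flag_rate - false_alarm) / (sensitivity - false_alarm)

p_2023 = true_fake_prevalence(0.163)   # 16.3% red-flagged in 2023
print(round(p_2023 * 100, 1))          # ~5.8% estimated actual fakes
print(round(1.86e6 * p_2023))          # ~1.08e5 articles/year, close to the abstract's 107,800
```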
  4. PLoS Med. 2025 Sep 26. 22(9): e1004774
      Systematic fraud threatens the integrity of science, with paper and review mills distorting the evidence base in medicine and global health. Data transparency-once seen mainly as a driver of discovery-must now be recognized as a frontline defense against misconduct. Only through open data and coordinated action can we safeguard trust in research and its impact on health.
    DOI:  https://doi.org/10.1371/journal.pmed.1004774
  5. J Occup Environ Hyg. 2025 Sep 23. 1-9
      The present assessment was undertaken to develop an understanding of the occurrence and status of peer review in biological science-oriented journals in the first few decades of the 20th century. This research centered on whether peer review would have been a realistic expectation/demand for that era for experimentally oriented biologists. The analysis indicates that the peer review process in 17 major biologically oriented journals from the United States was created principally in the early decades of the 20th century. These journals included those relating to both botanical (e.g., plant physiology, plant pathology) and zoological (e.g., biochemistry, physiology, immunology, genetics) research domains. These findings represent the first integrative evaluation of experimentally oriented biological journals concerning their historical peer-review activities. The information is based on summarized articles in the journals concerning their peer-review process, similar assessments provided by related professional societies that published the journals, as well as the preserved papers of some journal editors, which included actual peer-review documents of that era. This assessment indicates that formal peer review was commonly practiced amongst many of the leading biologically oriented US-based journals during that era. These findings are useful in evaluating the publication strategy of Hermann J. Muller as it relates to the avoidance of peer review regarding his novel claim to have induced gene mutations via exposure to X-rays.
    Keywords:  Cancer risk assessment; dose response; history of science; linear non-threshold; mutation; radiation
    DOI:  https://doi.org/10.1080/15459624.2025.2549014
  6. Orthop J Sports Med. 2025 Sep;13(9): 23259671251371234
       Background: In the past few years, there has been an increase in the use of artificial intelligence (AI)-based large language models, including ChatGPT, in scientific research. These models have shown promise in drafting high-quality articles; however, there have been concerns regarding their ethical use in generating original research.
    Purpose/Hypothesis: The purpose of this study was to quantify the percentage of AI use in articles that were published in major sports medicine journals before and after the release of ChatGPT. It was hypothesized that AI use has changed and increased over time.
    Study Design: Cross-sectional study.
    Methods: All articles that were published from 2023 to 2024 in the 5 sports medicine journals with the highest impact factors were identified (Arthroscopy: The Journal of Arthroscopic and Related Surgery [Arthroscopy], Orthopaedic Journal of Sports Medicine [OJSM], The American Journal of Sports Medicine [AJSM], British Journal of Sports Medicine [BJSM], and Knee Surgery, Sports Traumatology, Arthroscopy [KSSTA]). After removing tables, figures, and references, full texts were assessed for AI-generated content using ZeroGPT. To establish an AI-generated content threshold, articles published before the release of ChatGPT were also assessed for AI-generated content. A 28.69% threshold was determined from 518 articles published before the release of ChatGPT. Articles published after the release of ChatGPT that exceeded this threshold were analyzed across journals and publication dates using chi-square and regression analyses.
    Results: Among the 3596 articles published after the release of ChatGPT and included in this study, 3.28% exceeded the established threshold. Moreover, Arthroscopy was flagged as having the highest AI use among all 5 journals (Arthroscopy = 7.17%; OJSM = 4.01%; AJSM = 3.34%; BJSM = 1.42%; KSSTA = 0.93%; P < .001). Finally, temporal analysis identified a significant rise in the use of AI, increasing from 2.38% in January 2023 to 6.25% in December 2024 (r² = 0.34; P < .003).
    Conclusion: AI use in sports medicine research remains low but is steadily rising. Editorial policies allowing AI usage may, in turn, perpetuate its use in published sports medicine articles.
    Keywords:  artificial intelligence; manuscript writing; research integrity; sports medicine
    DOI:  https://doi.org/10.1177/23259671251371234
  7. Nature. 2025 Sep 23.
      
    Keywords:  Machine learning; Publishing; Scientific community; Software
    DOI:  https://doi.org/10.1038/d41586-025-03046-z
  8. PeerJ Comput Sci. 2025 ;11 e2953
      Artificial intelligence (AI) text detection tools are considered a means of preserving the integrity of scholarly publication by identifying whether a text is written by humans or generated by AI. This study evaluates three popular tools (GPTZero, ZeroGPT, and DetectGPT) through two experiments: first, distinguishing human-written abstracts from those generated by ChatGPT o1 and Gemini 2.0 Pro Experimental; second, evaluating AI-assisted abstracts where the original text has been enhanced by these large language models (LLMs) to improve readability. Results reveal notable trade-offs in accuracy and bias, disproportionately affecting non-native speakers and certain disciplines. This study highlights the limitations of detection-focused approaches and advocates a shift toward ethical, responsible, and transparent use of LLMs in scholarly publication.
    Keywords:  AI text detection tools; Accuracy-bias trade-off; ChatGPT; DetectGPT; Fairness in scholarly publication; GPTZero; Gemini; Large language models (LLMs); Non-native authors; ZeroGPT
    DOI:  https://doi.org/10.7717/peerj-cs.2953
  9. BMC Med Ethics. 2025 Sep 26. 26(1): 120
       OBJECTIVE: The information age has transformed technologies across disciplines. Generative artificial intelligence (GenAI), as an emerging technology, has integrated into scientific research. Recent studies identify GenAI-related scientific research integrity concerns. Using Complex Adaptive Systems (CAS) theory, this research examines risk factors and preventive measures for each agent within the scientific research integrity management system during GenAI adoption, providing new perspectives for integrity management.
    METHOD: This study applies CAS theory to analyze the scientific research integrity management system, identifying four core micro-level agents: researchers, research subjects, scientific research administrators, and academic publishing institutions. It examines macro-system complexity, agent adaptability, and the impact of agent interactions on the overall system. This framework enables analysis of GenAI's effects on the research integrity management system.
    RESULTS: The scientific research integrity management system exhibits structural, hierarchical, and multidimensional complexities, with internal circulation of policy, funding, and information elements. In response to GenAI integration, four micro-level agents-researchers, research subjects, scientific research administrators, and academic publishing institutions-adapt their behaviors to systemic changes. Through these interactions, behavioral outcomes emerge at the macro level, driving evolution of the research integrity management system.
    CONCLUSIONS: Risks of scientific misconduct permeate the entire research process and require urgent governance. This study recommends that scientific research administrators promptly define applicable boundaries for GenAI in research to guide researchers. Concurrently, they should collaborate with relevant departments to establish regulatory frameworks addressing potential GenAI-related misconduct. Academic publishing institutions must assume quality assurance responsibilities by strengthening review and disclosure protocols. Furthermore, research integrity considerations should be systematically integrated into GenAI's technological development and refinement.
    HIGHLIGHTS:
    ● Develops an analytical framework grounded in Complex Adaptive Systems (CAS) theory to map evolving interactions among researchers, research subjects, scientific research administrators, and academic publishing institutions within GenAI-integrated research ecosystems.
    ● Identifies self-reinforcing dynamics between GenAI adoption and integrity governance, wherein adaptive rule adjustments by agents reshape system-wide integrity thresholds.
    ● Proposes adaptive governance mechanisms that balance innovation safeguards with integrity guardrails, emphasizing context-sensitive policy calibration over universal solutions.
    Keywords:  Generative artificial intelligence; Research integrity; Scientific research management
    DOI:  https://doi.org/10.1186/s12910-025-01288-0
  10. Indian J Tuberc. 2025 Oct. pii: S0019-5707(25)00137-4. [Epub ahead of print] 72(4): 453-454
      
    DOI:  https://doi.org/10.1016/j.ijtb.2025.06.011
  11. Clin Chem Lab Med. 2025 Sep 16.
      
    Keywords:  artificial intelligence; confidentiality; editorial policy; peer review; research integrity
    DOI:  https://doi.org/10.1515/cclm-2025-1140
  12. Neurosurg Rev. 2025 Sep 27. 48(1): 670
    Council of State Neurosurgical Societies
      As artificial intelligence (AI), particularly large language models (LLMs), continues to progress, its impact on academic publishing, both in manuscript drafting and peer review, has attracted considerable attention. In neurosurgery, where journals serve a crucial role in disseminating research, formal guidelines regarding AI remain relatively underexplored. Our study aims to investigate the current state of AI policies among prominent neurosurgical journals, focusing on their role in manuscript preparation and peer review. Thirty-eight neurosurgical journals were identified by searching the Johns Hopkins University of Medicine Welch Medical Library, combined with National Library of Medicine subject terms. Each journal's author instructions, editorial policies, and peer-review guidelines were examined for explicit AI usage policies, focusing on manuscript preparation and peer review. Tasks such as writing assistance, data analysis, figure generation, and citation management were documented if identified. Any stated requirements, prohibitions, and disclosure practices for AI were recorded, as well as instances where no policy existed. Of the 38 journals surveyed, 31 (81.6%) had AI use guidelines, 9 (23.7%) based on individual journal-level explicit policies and 22 (57.9%) based on publisher-level guidelines. The majority of journals (n = 30, 78.9%) provided guidelines for using AI in manuscript preparation, with most prohibiting its inclusion as an author. Most journals allow AI involvement in readability improvements, grammar correction, and style editing but mandate its transparent disclosure. Fewer journals (n = 13, 34.2%) specified AI policies for peer review, although those that did mention AI often prohibited its use for evaluating submissions due to confidentiality concerns. Although many neurosurgical journals now acknowledge AI's role in manuscript preparation, guidelines for AI-driven peer review remain scarce.
Given AI's rapid advancement, establishing clear, comprehensive, and standardized AI policies will be critical for upholding transparency, quality, and efficiency in neurosurgical publishing.
    Keywords:  ChatGPT; Large language model; Preprint; Publication
    DOI:  https://doi.org/10.1007/s10143-025-03793-7
  13. Swiss Dent J. 2025 Sep 23. 135(3): 1-15
      The introduction and advancement of large language models (LLMs), such as ChatGPT, DeepSeek, and Google Gemini, present both opportunities and challenges for peer review in dental research. In this article, we propose a framework to inform the discourse on the responsible use of LLMs in dental peer review. We conducted a cross-sectional review of peer review policies from the top 50 dental journals, based on their 2024 Journal Impact Factor, to assess current guidance on LLM use. Our analysis revealed variability across dental journals: some journals permit restricted LLM use under specific conditions, while many either prohibit their use or lack explicit policies. Key concerns regarding LLM use identified by the authors include potential breaches of confidentiality, ambiguity in authorship, reduced reviewer accountability, and inherent limitations of LLMs in terms of domain-specific expertise and factual accuracy. Our proposed framework addresses confidentiality safeguards, suggested appropriate LLM applications, areas requiring caution, disclosure requirements, and accountability standards. It emphasizes that reviewers retain full responsibility for all submitted content, irrespective of LLM assistance. To protect confidentiality, the framework encourages offline or locally hosted LLMs. It also recommends regular policy reviews and reviewer training. This framework aims to support the thoughtful adoption of LLMs in dental research publishing. When employed judiciously, LLMs offer potential benefits in improving review clarity and efficiency, particularly for reviewers writing in a non-native language. However, their use must be grounded in clear ethical principles to ensure the integrity of dental peer review.
    Keywords:  Artificial intelligence; Dentistry; Editorial Policies; Machine Learning; Peer Review Research; Scholarly Communication
    DOI:  https://doi.org/10.61872/sdj-2025-03-01
  14. PLoS One. 2025 ;20(9): e0331871
      The integrity of peer review is fundamental to scientific progress, but the rise of large language models (LLMs) has introduced concerns that some reviewers may rely on these tools to generate reviews rather than writing them independently. Although some venues have banned LLM-assisted reviewing, enforcement remains difficult as existing detection tools cannot reliably distinguish between fully generated reviews and those merely polished with AI assistance. In this work, we address the challenge of detecting LLM-generated reviews. We consider the approach of performing indirect prompt injection via the paper's PDF, prompting the LLM to embed a covert watermark in the generated review, and subsequently testing for presence of the watermark in the review. We identify and address several pitfalls in naïve implementations of this approach. Our primary contribution is a rigorous watermarking and detection framework that offers strong statistical guarantees. Specifically, we introduce watermarking schemes and hypothesis tests that control the family-wise error rate across multiple reviews, achieving higher statistical power than standard corrections such as Bonferroni, while making no assumptions about the nature of human-written reviews. We explore multiple indirect prompt injection strategies-including font-based embedding and obfuscated prompts-and evaluate their effectiveness under various reviewer defense scenarios. Our experiments find high success rates in watermark embedding across various LLMs. We also empirically find that our approach is resilient to common reviewer defenses, and that the bounds on error rates in our statistical tests hold in practice. In contrast, we find that Bonferroni-style corrections are too conservative to be useful in this setting.
    DOI:  https://doi.org/10.1371/journal.pone.0331871
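The multiple-testing problem the abstract describes can be illustrated with the plain Bonferroni baseline the paper compares against (the authors' own schemes are more powerful; the token counts, chance rate, and helper names below are our hypothetical illustration, not the paper's method):

```python
from math import comb

def binom_tail(k_obs, n, q):
    # P(X >= k_obs) for X ~ Binomial(n, q): chance that a human-written
    # review contains at least k_obs of the n planted watermark phrases
    return sum(comb(n, i) * q**i * (1 - q)**(n - i) for i in range(k_obs, n + 1))

def bonferroni_flags(hits, n_tokens, q, alpha=0.05):
    # Flag review j iff its p-value clears the Bonferroni-corrected
    # threshold alpha / m, which controls the family-wise error rate
    # across all m reviews
    m = len(hits)
    return [binom_tail(k, n_tokens, q) < alpha / m for k in hits]

# 3 reviews, 10 planted marker phrases each, 5% chance rate per phrase
print(bonferroni_flags([9, 1, 0], n_tokens=10, q=0.05))  # [True, False, False]
```

Only the review saturated with watermark phrases is flagged; the correction's conservatism on borderline counts is exactly the power loss the paper's tailored hypothesis tests are designed to avoid.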
  15. Front Physiol. 2025 ;16 1661509
      
    Keywords:  peer review; publishing; review process; scientific criticism; scientific critique
    DOI:  https://doi.org/10.3389/fphys.2025.1661509
  16. J Clin Epidemiol. 2025 Sep 19. pii: S0895-4356(25)00313-0. [Epub ahead of print] 111980
       OBJECTIVES: Editors and reviewers of research manuscripts may have conflicts of interest that impact their evaluations. We aimed to characterise medical journals' conflict of interest policies for editors and peer reviewers.
    STUDY DESIGN AND SETTING: In this cross-sectional study, we randomly sampled 277 medical journals from Clarivate Journal Citation Reports. Two authors independently retrieved public conflict of interest policies and disclosures for editors and peer reviewers from journal websites, and retrieved publishers' policies when journals also referred to them (January to June 2024). We used content analysis to analyse policies and multivariable mixed-effects logistic regressions to estimate the associations between journal characteristics and having a policy.
    RESULTS: After excluding 27 journals, we included 250 medical journals in English, of which 177 (71%) had a conflict of interest policy for editors and 174 (70%) for peer reviewers. Of journals with a policy, 137 (77%) and 129 (74%) described disclosure requirements, 160 (90%) and 163 (94%) management strategies, 124 (70%) and 106 (61%) policy enforcement strategies, and 17 (10%) and 15 (9%) processes for appealing decisions. All four concepts were addressed in 16 (9%) policies for editors and 11 (6%) for peer reviewers. Having a policy for editors was associated with higher journal impact factor (adjusted odds ratio (OR): 1.28; 95% confidence interval (CI): 1.05-1.56) and Committee on Publication Ethics (COPE) membership (OR: 3.50; 95% CI: 1.42-8.65). Having a policy for peer reviewers was associated with higher journal impact factor (OR: 1.16; 95% CI: 0.97-1.37) and open access journal (OR: 4.59; 95% CI: 1.11-18.93). For a subgroup of journals referring to their publishers' policy, the content was concordant for 5 (11%) of 45 journals for editors and 4 (9%) of 47 journals for peer reviewers. Of 250 journals, 14 (6%) had public declarations of interest from editors, and 3 (1%) from peer reviewers.
    CONCLUSION: More than two-thirds of medical journals have conflict of interest policies for editors and reviewers; however, policies vary in comprehensiveness. There is potential to improve the content of conflict of interest policies and the transparency of interests in medical journals.
    PLAIN LANGUAGE SUMMARY: Before a scientific study is published as a research paper in a medical journal, it is evaluated by the journal editors and other researchers, known as peer reviewers. This process is used to assess and ensure the quality and trustworthiness of the research, and to assist editors in deciding whether to publish the paper. Editors and peer reviewers, however, are not necessarily neutral and may have personal interests that can influence their opinions. For example, they may have personal relationships with the study authors or have financial relationships with a company whose product is investigated in the study, which could result in conflicts of interest. Although most journals have policies addressing study authors' conflicts of interest, little is known about the practices and policies of medical journals concerning editors' and peer reviewers' conflicts of interest. In our study, we randomly selected 250 medical journals and examined their public conflict of interest policies for editors and peer reviewers, as well as whether these policies aligned with their publishers' policies. Additionally, we assessed whether the interests of editors and peer reviewers were publicly disclosed. We estimated the proportion of journals with available policies, assessed which journal characteristics were associated with having a policy, and analysed the content of policies. Of the 250 medical journals, we found that 177 (71%) journals had a conflict of interest policy for editors and 174 (70%) for peer reviewers, but their interests were very rarely publicly disclosed. The policies often contained limited information and were often only described in detail in the publishers' policies, and sometimes information in the journal's and the publisher's policies was in disagreement. 
Finally, policies rarely describe how journal staff assess interests, how these assessments may influence the journal's editorial process, and how journals enforce the consequences of policy violations. There is substantial potential for medical journals to improve their conflict of interest policies for editors and peer reviewers, as well as the transparency of their interests in medical journals.
    Keywords:  Conflicts of interest; clinical research; editorial management; editors; medicine; peer review; policy; publishing
    DOI:  https://doi.org/10.1016/j.jclinepi.2025.111980
  17. J Perinat Med. 2025 Sep 23.
       OBJECTIVES: Traditional peer review faces critical challenges including systematic bias, prolonged delays, reviewer fatigue, and lack of transparency. These failures violate ethical obligations of beneficence, justice, and autonomy while hindering scientific progress and costing billions annually in academic labor. We propose an ethically guided hybrid peer review system that integrates generative artificial intelligence with human expertise while addressing fundamental shortcomings of current review processes.
    METHODS: We developed the FAIR Framework (Fairness, Accountability, Integrity, and Responsibility) through systematic analysis of peer review failures and integration of AI capabilities. The framework employs standardized prompt engineering to guide AI evaluation of manuscripts while maintaining human oversight throughout all stages.
    RESULTS: FAIR addresses bias through algorithmic detection and standardized evaluation protocols, ensures accountability via transparent audit trails and documented decisions, maintains integrity through secure local AI processing and confidentiality safeguards, and upholds responsibility through ethical oversight and constructive feedback mechanisms. The hybrid model automates repetitive tasks including initial screening, methodological verification, and plagiarism detection while preserving human judgment for novelty assessment, ethical evaluation, and final decisions.
    CONCLUSIONS: The FAIR Framework offers a principled solution to peer review inefficiencies by combining AI-enabled consistency and speed with essential human expertise. This hybrid approach reduces review delays, eliminates systematic bias, and enhances transparency while maintaining confidentiality and editorial control. Implementation could significantly reduce the estimated 100 million hours of global reviewer time annually while improving review quality and equity across diverse research communities.
    Keywords:  artificial intelligence; hybrid systems; medical publishing; peer review; research ethics; scientific publishing
    DOI:  https://doi.org/10.1515/jpm-2025-0285
  18. Res Integr Peer Rev. 2025 Sep 22. 10(1): 20
       BACKGROUND: Reporting guidelines are key tools for enhancing the transparency and reproducibility of research. To support responsible reporting, such guidelines should also address ethical considerations. However, the extent to which these elements are integrated into reporting checklists remains unclear. This study aimed to evaluate how ethical elements are incorporated in these guidelines.
    METHODS: We identified reporting guidelines indexed on the "Enhancing the Quality and Transparency of Health Research (EQUATOR) Network" website. On 30 January 2025, a random sample of 128 reporting guidelines and extensions was drawn from a total of 657. For each, we retrieved the associated development publication and extracted data into a standardised table. The assessed ethical elements included COI disclosure, sponsorship, authorship criteria, data sharing guidance, and protocol development and study registration. Data extraction for the first 13 guidelines was conducted independently and in duplicate. After achieving 100% agreement, the remaining data were extracted by one author, following "A MeaSurement Tool to Assess Systematic Reviews" (AMSTAR)-2 recommendations.
    RESULTS: The dataset comprised 101 original guidelines and 27 extensions of existing guidelines. Half of the included guidelines were published from 2015 onward, with 32.0% published between 2020 and 2024. The median year of publication was 2016. Approximately 90 of the 128 assessed guidelines focused on clinical studies. Over 70% of the guidelines did not include items related to conflicts of interest (COI) or sponsorship. Only 8.6% addressed COI and sponsorship jointly in a single item, while fewer than 9% covered them as two separate items. Notably, only two guidelines (1.6%) provided instructions for using the ICMJE disclosure form to report potential conflicts of interest. Nearly 20% of the guidelines offered guidance on study registration. Fewer than 30% recommended the development of a research protocol, and only 18.8% provided guidance on protocol sharing. Additionally, fewer than 10% of the checklists included guidance on authorship criteria or data sharing.
    CONCLUSION: Ethical considerations are insufficiently addressed in current reporting guidelines. The absence of standardised items on COIs, funding, authorship, and data sharing represents a missed opportunity to promote transparency and research integrity. Future updates to reporting guidelines should systematically incorporate these elements.
    Keywords:  Conflict of interest; EQUATOR; Ethics; Ethics in publishing; Reporting checklists; Reporting guidelines; Transparency
    DOI:  https://doi.org/10.1186/s41073-025-00180-0
  19. Health Behav Res. 2024 Oct;7(4):
      The introduction and discussion sections play pivotal roles in peer-reviewed manuscripts, yet many authors struggle with these sections. This commentary describes the significance of the introduction and discussion sections for successful publishing, identifies essential components of these sections, and provides recommendations for writing quality introductions and discussions. The introduction defines the problem to be addressed, identifies what is known and unknown about the problem, and states the study purpose. It begins broadly by introducing the area of interest, narrows to identify the specific focus and gap in knowledge, and finally ends with the aim of the present study, seamlessly leading to the methods and results sections. Discussion sections restate the study purpose, interpret the most compelling findings, situate them within the context of existing literature and frameworks, describe study limitations, and provide recommendations for future research and practice. The discussion ends with a brief conclusion paragraph explaining the study's relevance and implications to the field. The introduction and discussion sections are the "bookends" of the scientific manuscript. Successful bookends increase the chances of framing science, getting manuscripts published, and contributing to scientific literature.
    DOI:  https://doi.org/10.4148/2572-1836.1258
  20. Tomography. 2025 Sep 02. pii: 102. [Epub ahead of print] 11(9):
      This editorial provides insights on plagiarism, self-plagiarism, and redundant publications, which all represent a serious and common form of misconduct in research [...].
    DOI:  https://doi.org/10.3390/tomography11090102
  21. Dermatol Online J. 2025 Jun 15. 31(3):
      Statistical mistakes can undermine research credibility. Identifying common errors may help researchers avoid them in future studies. This study evaluated the frequency and types of statistical mistakes in dermatology journal articles and identified article characteristics that predict these errors. A cross-sectional analysis was conducted on articles published in the 2023 volumes of 8 dermatology journals. Articles were screened for statistical tests, with a target sample of 200 selected pseudorandomly. Multivariable logistic regressions assessed predictors of statistical mistakes, including journal impact factor, statistician involvement, funding source, first author highest degree, and statistical package. Of the 189 articles analyzed, 78% contained at least one statistical mistake. Reporting mistakes were found in 67% and test selection errors in 46%. The absence of statistician involvement (aOR 2.49, P=0.03) and low journal impact factor (aOR 3.82, P=0.02) predicted the presence of at least one mistake. This sample from 8 journals is not representative of all dermatology literature. Original data were not available for testing of test assumptions, so appropriate test selection was determined using statistical conventions. Statistical mistakes are prevalent in dermatology literature. Researchers should review statistical best practices and consider involving a statistician in their work.
    DOI:  https://doi.org/10.5070/D331365357
  22. J Child Orthop. 2025 Sep 18. 18632521251380440
      The Journal of Children's Orthopedics has compiled a special collection of scientific publications from Chinese centers accepted for publication in the journal. Through this collection, the Journal of Children's Orthopedics demonstrates its commitment to promoting global knowledge sharing and collaboration in pediatric orthopedic surgery. The articles in the collection undergo the same rigorous peer review process as other articles. Once a publication is assigned to an issue, it is automatically added to the Special Chinese Collection on the Journal of Children's Orthopedics website, where it can be easily downloaded. The Special Chinese Collection's open access policy increases the visibility and global reach of Journal of Children's Orthopedics articles, promoting accelerated citations and collaborations. The Journal of Children's Orthopedics is an ideal platform for collecting and disseminating high-quality, relevant scientific publications in pediatric orthopedic surgery from China. The Special Chinese Collection showcases innovative research, encourages knowledge sharing, and fosters cultural exchange, promoting the development of a global community of researchers and clinicians dedicated to advancing the field of pediatric orthopedic surgery and improving children's lives worldwide.
    Keywords:  China; outcome; pediatric orthopedics; special collection; treatment
    DOI:  https://doi.org/10.1177/18632521251380440
  23. J Nurs Scholarsh. 2025 Sep 24.
       INTRODUCTION: Randomized controlled trials (RCTs) are essential for evidence-based nursing care. However, the quality of reporting and adherence to methodological standards in Latin American nursing journals remains unclear. This study evaluates the characteristics, reporting quality, and potential risk of bias of RCTs published in Latin American nursing journals.
    OBJECTIVE: To assess the reporting compliance and risk of bias of RCTs published in Latin American nursing journals.
    DESIGN: Meta-research study.
    METHODS: A comprehensive handsearch of 29 Latin American nursing journals was performed covering publications from 2000 to 2024. Identified RCTs were assessed for adherence to CONSORT reporting guidelines and evaluated for risk of bias. Outcomes were classified using the COMET taxonomy. A descriptive analysis was performed.
    RESULTS: A total of 6377 references were screened, identifying 34 eligible RCTs, most published after 2018. The median CONSORT compliance was 19 reported items (IQR 16-22). High compliance (> 90%) was observed in abstract reporting items, study objectives, and participant selection criteria. However, critical methodological features such as randomization procedures, blinding, and protocol registration showed low adherence (< 40%). Risk of bias was mostly rated as having "some concerns", largely due to insufficient reporting. According to the COMET taxonomy, the most frequently reported outcome domains were "Delivery of care" and "Physical functioning".
    CONCLUSIONS: Reporting compliance and risk of bias of RCTs published in Latin American nursing journals present significant gaps, particularly in key methodological domains. These shortcomings hinder transparency, reproducibility, and integration into evidence synthesis. Strengthening editorial policies and enforcing reporting standards could enhance the quality and reliability of published research in Latin American nursing journals.
    Keywords:  evidence synthesis; evidence‐based nursing; handsearch methodology; nursing health care; nursing research
    DOI:  https://doi.org/10.1111/jnu.70049
  24. Radiologia (Engl Ed). 2025 Sep-Oct. pii: S2173-5107(25)00105-3. [Epub ahead of print]67(5): 101576
       INTRODUCTION: Scientific journals are a fundamental tool for the dissemination of evidence-based medicine. The scientific quality of a journal is related to the level of evidence of its publications. The aim of our study is to analyse and quantify changes in the levels of evidence assigned to articles published in the Radiología journal over the last six years.
    MATERIAL AND METHODS: We evaluated articles published in Radiología from 2018 to 2023. A critical reading of the selected articles was carried out and a level of evidence was assigned using two scales that are specific to the field of radiology (Insights into Imaging and the 2011 Oxford Centre for Evidence-Based Medicine). Pearson residuals were used to establish differences in the level of evidence over the years, with a p-value < 0.05 being considered statistically significant. The level of agreement between the two scales for assessing levels of evidence was also compared using the Kappa coefficient.
    RESULTS: Of the total 404 publications in Radiología from 2018 to 2023, 275 articles were included for analysis. There was evidence of a progressive increase in the level of evidence for the publications, with a peak in 2023, consistently on both scales (p = 0.043). A Kappa coefficient of 0.92 was obtained in the analysis of agreement between scales (almost perfect agreement).
    CONCLUSION: The level of evidence of publications in the Radiología journal has increased significantly, peaking in 2023.
    Keywords:  Evidence-based medicine; Evidence-based radiology; Level of evidence; Medicina basada en la evidencia; Nivel de evidencia; Radiología basada en la evidencia; Radiología journal; Revista Radiología
    DOI:  https://doi.org/10.1016/j.rxeng.2025.101576
  25. Curr Res Transl Med. 2025 Sep 19. pii: S2452-3186(25)00053-4. [Epub ahead of print]73(4): 103544
       BACKGROUND: Ensuring accurate statistical reporting is critical in oncology research, where data-driven conclusions impact clinical decision-making. Despite standardized guidelines such as the Statistical Analyses and Methods in the Published Literature (SAMPL), adherence remains inconsistent. This study evaluates the performance of Gemini Advanced 2.0 Flash, an AI model, in assessing compliance with SAMPL guidelines in oncology research articles.
    METHODS: A total of 100 original research articles published in four peer-reviewed oncology journals (October 2024-February 2025) were analyzed. Gemini Advanced 2.0 Flash assessed adherence to ten key SAMPL guidelines, categorizing each as "not met," "partially met," or "fully met." AI evaluations were compared with independent assessments by a statistical editor, with agreement quantified using Cohen's Kappa coefficient.
    RESULTS: The overall weighted Kappa coefficient was 0.77 (95% CI: 0.6-0.94), indicating substantial agreement between AI and manual assessment. Full agreement (Kappa = 1) was found for four guidelines, including naming statistical packages and reporting confidence intervals. High agreement was observed for specifying statistical methods (Kappa = 0.85) and confirming test assumptions (Kappa = 0.75). Moderate agreement was noted for summarizing non-normally distributed data (Kappa = 0.42) and specifying test directionality (Kappa = 0.43). The lowest agreement (Kappa = 0.37) was observed in multiple comparison adjustments due to missing justifications for post hoc tests.
    CONCLUSION: AI-assisted evaluation showed substantial agreement with expert assessment, demonstrating its potential in statistical review. However, discrepancies in specific guidelines suggest human oversight remains essential for ensuring statistical rigor in oncology research. Further refinement of AI models may enhance their reliability in scientific publishing.
    Keywords:  Artificial intelligence; Biostatistics; Oncology
    DOI:  https://doi.org/10.1016/j.retram.2025.103544
  26. Australas Psychiatry. 2025 Sep 27. 10398562251382462
      INTRODUCTION: Data sharing is the practice of making de-identified participant-level data available for use by other researchers. It increases the potential of a dataset to answer new questions, accelerates knowledge creation and increases research integrity by allowing conclusions to be replicated, verified or corrected. Data sharing helps fulfil the ethical obligation to make the most of research participants' contributions to science.
    ANALYSIS AND EVIDENCE: There is evidence that research participants and the general public are supportive of data sharing. However, those who conducted the original studies may be reluctant to share data, datasets may be difficult to access, and there may be ethical and governance concerns.
    DISCUSSION: This paper describes the Mental Health Node, an Australian Government initiative that aims to increase mental health data sharing. The Mental Health Node works with primary researchers (those who conduct original studies) and secondary researchers (those who reuse data generated by others) to promote ethical data sharing that respects the role of primary researchers and the privacy concerns of research participants.
    CONCLUSION: Primary and secondary researchers can collaborate to maximise the value of data collected. This paper includes recommendations for good practice in data sharing and links to resources.
    Keywords:  data reuse; data sharing; secure research environment; trusted research environment
    DOI:  https://doi.org/10.1177/10398562251382462
  27. Nat Microbiol. 2025 Sep 26.
    Data Reuse Consortium
      Science benefits from rapid open data sharing, but current guidelines for data reuse were established two decades ago, when databases were several million times smaller than they are today. These guidelines are largely unfamiliar to the scientific community, and, owing to the rapid increase in biological data generated in the past decade, they are also outdated. As a result, there is a lack of community standards suited to the current landscape and inconsistent implementation of data sharing policies across institutions. Here we discuss current sequence data sharing policies and their benefits and drawbacks, and present a roadmap to establish guidelines for equitable sequence data reuse, developed in consultation with a data consortium of 167 microbiome scientists. We propose the use of a Data Reuse Information (DRI) tag for public sequence data, which will be associated with at least one Open Researcher and Contributor ID (ORCID) account. The machine-readable DRI tag indicates that the data creators prefer to be contacted before data reuse, and simultaneously provides data consumers with a mechanism to get in touch with the data creators. The DRI aims to facilitate and foster collaborations, and serve as a guideline that can be expanded to other data types.
    DOI:  https://doi.org/10.1038/s41564-025-02116-2
  28. Emerg Med Australas. 2025 Oct;37(5): e70142
      Journal editors play a pivotal yet often unseen role in shaping the direction and integrity of academic discourse. Their responsibilities include coordinating peer review, ensuring ethical oversight and curating content that reflects both relevance and scholarly merit. In an era marked by misinformation and growing scepticism toward experts, editorial processes serve as a safeguard for public trust in scientific publishing. This reflective account draws on personal experience as a section editor for Emergency Medicine Australasia, highlighting the transition from trainee contributor to a steward of original research and reviews. Editorial servitude has offered valuable insights into academic publishing, improved writing skills and a deeper understanding of complex subject matters. Editors influence scholarly inquiry through thoughtful manuscript selection, reviewer engagement and constructive feedback. Although the path to editorial roles is rarely direct, it begins with opportunities to demonstrate capability. Far from passive arbiters, editors are the invisible architects of academia and custodians of academic credibility.
    DOI:  https://doi.org/10.1111/1742-6723.70142
  29. JCPP Adv. 2025 Sep;5(3): e70036
      In this editorial, we reflect on a milestone year for JCPP Advances, marked by our first Journal Impact Factor and significant growth in submissions, readership, and citations. We highlight expanded editorial expertise, strengthened commitments to open science, and new initiatives such as Registered Reports. Recent indexing across PsycINFO, PubMed, Scopus, and Web of Science enhances our global visibility. The September 2025 issue exemplifies our dedication to rigorous, impactful research, including evidence syntheses, participatory studies, and methodological innovation. Together, these developments position JCPP Advances as a leading open-access platform advancing child and adolescent mental health research worldwide.
    Keywords:  child and adolescent mental health; impact factor; open science; registered reports
    DOI:  https://doi.org/10.1002/jcv2.70036