bims-skolko Biomed News
on Scholarly communication
Issue of 2025-06-01
27 papers selected by
Thomas Krichel, Open Library Society



  1. Int J Epidemiol. 2025 Apr 12;54(3). pii: dyaf058. [Epub ahead of print]
       BACKGROUND: The COVID-19 pandemic induced an unprecedented response from the scientific research community. Previous studies have described disruption of the norms of academic publishing during this time. This study uses an epidemiological statistical toolkit alongside machine-learning methods to investigate the functioning of the scientific information-generation and -consumption ecosystem throughout the pandemic.
    METHODS: A dataset of 17 million scientific research papers that were published between January 2019 and December 2022 was analysed. Data on citations and Altmetrics were harvested, and topic modelling was applied to abstracts. COVID-19-related articles were identified from title text. We investigated publication dynamics, correlations between citation metrics and Altmetrics, rates of publication in preprints, and temporal trends in topics, and compared these metrics in COVID-19 papers vs non-COVID-19 papers.
    RESULTS: Throughout 2020-22, 3.7% of English-language research output was on the topic of COVID-19. Journal articles on COVID-19 were published at a consistent rate during this period, while preprints peaked in early 2020 and decreased thereafter. COVID-19 preprints had lower publication rates in the peer-reviewed literature than other preprints, particularly those that were preprinted during early 2020. COVID-19 research received significantly more media and social media attention than non-COVID-19 research, and preprints received more attention, on average, than journal articles, with attention peaking during the initial wave and subsequent peaks corresponding to the emergence of novel variants. COVID-19 articles exhibited a higher correlation between Altmetrics and citation metrics compared with non-COVID-19 publications, suggesting a strong alignment between scientific and public attention.
    CONCLUSION: This study provides a comprehensive description of the rapid expansion of COVID-19 research, revealing evolving research areas and waxing and waning public interest across different topics. Preprints played an important role in disseminating scientific findings, but the level of coverage of preprinted findings emphasizes the need for guidelines in handling preprint research in media, particularly during a pandemic.
    Keywords:  Altmetrics; COVID-19; bibliometrics; pandemics; policy
    DOI:  https://doi.org/10.1093/ije/dyaf058
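    A minimal sketch (not the authors' pipeline) of the kind of Altmetrics-versus-citations comparison described above, assuming a hypothetical input file and column names:

# Hypothetical sketch: compare the citation/Altmetric correlation for
# COVID-19 vs non-COVID-19 papers. The file and column names are assumed.
import pandas as pd
from scipy.stats import spearmanr

papers = pd.read_csv("papers.csv")  # assumed columns: title, citations, altmetric_score

# Flag COVID-19-related articles from title text, as in the study.
covid_terms = ("covid-19", "sars-cov-2", "coronavirus")
papers["is_covid"] = papers["title"].str.lower().str.contains("|".join(covid_terms), na=False)

for is_covid, group in papers.groupby("is_covid"):
    rho, p = spearmanr(group["citations"], group["altmetric_score"])
    label = "COVID-19" if is_covid else "non-COVID-19"
    print(f"{label}: Spearman rho = {rho:.2f} (p = {p:.3g}, n = {len(group)})")
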
  2. Res Integr Peer Rev. 2025 May 27. 10(1): 8
       BACKGROUND: The proliferation of generative artificial intelligence (AI) has facilitated the creation and publication of fraudulent scientific articles, often in predatory journals. This study investigates the extent of AI-generated content in the Global International Journal of Innovative Research (GIJIR), where a fabricated article was falsely attributed to me.
    METHODS: The entire GIJIR website was crawled to collect article PDFs and metadata. Automated scripts were used to extract the number of probable in-text citations, DOIs, affiliations, and contact emails. A heuristic based on the number of in-text citations was employed to identify the probability of AI-generated content. A subset of articles was manually reviewed for AI indicators such as formulaic writing and missing empirical data. Turnitin's AI detection tool was used as an additional indicator. The extracted data were compiled into a structured dataset, which was analyzed to examine human-authored and AI-generated articles.
    RESULTS: Of the 53 examined articles with the fewest in-text citations, at least 48 appeared to be AI-generated, while five showed signs of human involvement. Turnitin's AI detection scores confirmed high probabilities of AI-generated content in most cases, with scores reaching 100% for multiple papers. The analysis also revealed fraudulent authorship attribution, with AI-generated articles falsely assigned to researchers from prestigious institutions. The journal appears to use AI-generated content both to inflate its standing through misattributed papers and to attract authors aiming to inflate their publication record.
    CONCLUSIONS: The findings highlight the risks posed by AI-generated and misattributed research articles, which threaten the credibility of academic publishing. Ways to mitigate these issues include strengthening identity verification mechanisms for DOIs and ORCIDs, enhancing AI detection methods, and reforming research assessment practices. Without effective countermeasures, the unchecked growth of AI-generated content in scientific literature could severely undermine trust in scholarly communication.
    Keywords:  Academic fraud; Author misattribution; Fake article; Generative AI; Predatory journal; Scientific integrity
    DOI:  https://doi.org/10.1186/s41073-025-00165-z
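    The flagging heuristic above rests on counting probable in-text citations; below is a minimal illustrative sketch of such a counter, assuming PDF text extraction with pdfminer.six. The regexes, threshold, and helper names are assumptions, not the author's code:

# Hypothetical sketch of an in-text citation heuristic like the one described:
# count citation-like patterns in text extracted from an article PDF and flag
# articles with very few as possibly AI-generated.
import re
from pdfminer.high_level import extract_text

def count_intext_citations(pdf_path: str) -> int:
    text = extract_text(pdf_path)
    # Parenthetical author-year citations, e.g. (Smith, 2021) or (Smith et al., 2020)
    author_year = re.findall(r"\([A-Z][A-Za-z]+(?: et al\.)?,? \d{4}\)", text)
    # Numeric bracket citations, e.g. [12] or [3, 7]
    numeric = re.findall(r"\[\d+(?:\s*,\s*\d+)*\]", text)
    return len(author_year) + len(numeric)

def probably_ai_generated(pdf_path: str, min_citations: int = 3) -> bool:
    # A very low citation count is treated as a weak signal only, not proof.
    return count_intext_citations(pdf_path) < min_citations
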
  3. Diagnosis (Berl). 2025 May 22.
       OBJECTIVES: The challenges posed by questionable journals to academia are very real, and being able to detect hijacked journals would be valuable to the research community. Using an artificial intelligence (AI) chatbot may be a promising approach to early detection. The purpose of this research is to analyze and benchmark the performance of different AI chatbots in identifying hijacked medical journals.
    METHODS: This study utilized a dataset comprising 21 previously identified hijacked journals and 10 newly detected hijacked journals, alongside their respective legitimate versions. ChatGPT, Gemini, Copilot, DeepSeek, Qwen, Perplexity, and Claude were selected for benchmarking. Three question types were developed to assess AI chatbots' performance in providing information about hijacked journals, identifying hijacked websites, and verifying legitimate ones.
    RESULTS: The results show that current AI chatbots can provide general information about hijacked journals, but cannot reliably identify either real or hijacked journal titles. While Copilot performed better than others, it was not error-free.
    CONCLUSIONS: Current AI chatbots are not yet reliable for detecting hijacked journals and may inadvertently promote them.
    Keywords:  AI chatbot; artificial intelligence; circular economy; circular society; hijacked journals; sustainable development goals
    DOI:  https://doi.org/10.1515/dx-2025-0043
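    A minimal sketch of how such a benchmark could be scored once each chatbot's verdicts on labelled journal websites are collected; the URLs and data structures are illustrative assumptions, and querying the chatbots themselves is out of scope here:

# Hypothetical sketch: score each chatbot's accuracy against journal websites
# whose hijacked/legitimate status is already known.
from collections import defaultdict

labelled_sites = {
    "https://legitimate-journal.example.org": True,   # legitimate (illustrative URL)
    "https://hijacked-clone.example.com": False,      # hijacked (illustrative URL)
}

def score_chatbots(verdicts: dict[tuple[str, str], bool]) -> dict[str, float]:
    """verdicts maps (chatbot_name, url) -> True if the chatbot called the site legitimate."""
    correct, total = defaultdict(int), defaultdict(int)
    for (bot, url), said_legitimate in verdicts.items():
        total[bot] += 1
        correct[bot] += int(said_legitimate == labelled_sites[url])
    return {bot: correct[bot] / total[bot] for bot in total}
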
  4. Nature. 2025 May 27.
      
    Keywords:  Media; Peer review; Publishing; Software; Technology
    DOI:  https://doi.org/10.1038/d41586-025-01488-z
  5. JMA J. 2025 Apr 28. 8(2): 662-663
      
    Keywords:  artificial intelligence; experiment; journal; regulation; writing
    DOI:  https://doi.org/10.31662/jmaj.2024-0411
  6. Cad Saude Publica. 2025;41(5): e00067325. pii: S0102-311X2025000500100. [Epub ahead of print]
      
    DOI:  https://doi.org/10.1590/0102-311XEN067325
  7. Nature. 2025 May 28.
      
    Keywords:  Publishing; Research management; Scientific community
    DOI:  https://doi.org/10.1038/d41586-025-01448-7
  8. Nature. 2025 May 28.
      
    Keywords:  Publishing; Research management; Scientific community
    DOI:  https://doi.org/10.1038/d41586-025-01664-1
  9. Eur Heart J. 2025 May 30. pii: ehaf359. [Epub ahead of print]
       BACKGROUND AND AIMS: Cardiovascular disease is a leading cause of mortality, with significant investments in research to improve treatment and prevention. Data sharing enhances transparency, reproducibility, and collaboration, yet data sharing statement (DSS) inclusion remains inconsistent. This study evaluates DSS prevalence, content, and influencing factors in high-impact cardiology journals, examines journal policy influence, and assesses data sharing feasibility by contacting authors who indicated data availability.
    METHODS: A cross-sectional analysis was conducted to assess DSS inclusion in top cardiology, selected general medicine, emergency medicine, and orthopaedic surgery journals. A systematic PubMed search identified clinical studies published from 2020 to 2023. Logistic regression models assessed factors associated with DSS inclusion, while thematic analysis categorized DSS content. Corresponding authors who indicated data availability upon request were contacted to evaluate follow-through.
    RESULTS: Among 2941 articles, 1004 (34.14%) included a DSS. Data sharing statement prevalence varied by discipline: cardiology (52%), general medicine (96%), emergency medicine (12%), and orthopaedic surgery (14%). Policy enforcement drove DSS inclusion, with post-policy articles significantly more likely to contain a DSS. Funding status, study design, article access, and impact factor also influenced DSS presence. Thematic analysis identified conditional availability and gatekeeping as dominant DSS themes. Of authors who stated data were available upon request, only 31% ultimately provided access.
    CONCLUSIONS: Data sharing statement inclusion in cardiology research remains inconsistent, with journal policies playing a key role in increasing prevalence. However, real-world data-sharing practices often fall short of stated commitments. Addressing logistical and financial barriers will be essential to improving data availability in cardiology research.
    Keywords:  Cardiology; Cross-sectional; Data; Reuse; Sharing; Transparency
    DOI:  https://doi.org/10.1093/eurheartj/ehaf359
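    A minimal sketch of a logistic regression of the kind described, modelling DSS inclusion on policy period, funding, study design, access type, and impact factor; the input file and column names are assumptions, not the authors' analysis:

# Hypothetical sketch: logistic regression of data sharing statement (DSS)
# inclusion on article-level factors. File and column names are assumed.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

articles = pd.read_csv("articles.csv")  # assumed: has_dss (0/1), post_policy (0/1),
                                        # funded (0/1), study_design, open_access (0/1),
                                        # impact_factor

model = smf.logit(
    "has_dss ~ post_policy + funded + C(study_design) + open_access + impact_factor",
    data=articles,
).fit()
print(model.summary())
print(np.exp(model.params).round(2))  # odds ratios
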
  10. Behav Res Methods. 2025 May 30. 57(7): 182
    PECANS Extended Working Group
      Are scientific papers providing all essential details necessary to ensure the replicability of study protocols? Are authors effectively conveying study design, data analysis, and the process of drawing inferences from their results? These represent only a fraction of the pressing questions that cognitive psychology and neuropsychology face in addressing the "crisis of confidence." This crisis has highlighted numerous shortcomings in the journey from research to publication. To address these shortcomings, we introduce PECANS (Preferred Evaluation of Cognitive And Neuropsychological Studies), a comprehensive checklist tool designed to guide the planning, execution, evaluation, and reporting of experimental research. PECANS emerged from a rigorous consensus-building process through the Delphi method. We convened a panel of international experts specialized in cognitive psychology and neuropsychology research practices. Through two rounds of iterative voting and a proof-of-concept phase, PECANS evolved into its final form. The PECANS checklist is intended to serve various stakeholders in the fields of cognitive sciences and neuropsychology, including: (i) researchers seeking to ensure and enhance reproducibility and rigor in their research; (ii) journal editors and reviewers assessing the quality of reports; (iii) ethics committees and funding agencies; (iv) students approaching methodology and scientific writing. PECANS is a versatile tool intended not only to improve the quality and transparency of individual research projects but also to foster a broader culture of rigorous scientific inquiry across the academic and research community.
    Keywords:  Guidelines; Open science; Replicability; Reproducibility; Transparency
    DOI:  https://doi.org/10.3758/s13428-025-02705-3
  11. JMA J. 2025 Apr 28. 8(2): 667-668
      
    Keywords:  letter; manuscript; paper; skill; training
    DOI:  https://doi.org/10.31662/jmaj.2025-0015
  12. Rehabil Nurs. 2025 May 28.
       BACKGROUND: Research studies, evidence-based practice, or continuous quality improvement projects are an integral part of everyday rehabilitation nursing practice. Patients, third-party payers, and accreditors demand clinical practice based on research results and/or project evidence. Rehabilitation nurses need to consider that the only way to build nursing science is to convey their work to others through scientific publication.
    AIM: The purpose of this article is to share a beginner writer's guide with 12 writing habits grounded in six principles that underlie written words, leading toward manuscript preparation and publication.
    APPROACH: Rehabilitation nurses who are clinicians, leaders, and managers at all educational levels and in all practice settings with completed research or projects have potential to become authors.
    OUTCOMES: Writers can acquire these habits and principles for success that increase or refine their skills to author scientific publications, supporting and advancing nursing science.
    DISCUSSION AND IMPLICATIONS FOR PRACTICE: These 12 writing habits, based on six principles for success in publishing scientific work, are a learned process. Publications enhance the care provided and the quality of life for those affected by disability and chronic illness.
    CONCLUSION: All writers will find these habits and principles helpful in constructing a scientific manuscript suitable for publication.
    Keywords:  Evidence-based practice; publishing; quality improvement; rehabilitation nursing; research; writing
    DOI:  https://doi.org/10.1097/RNJ.0000000000000499
  13. PLoS One. 2025;20(5): e0324760
      This study investigates the linguistic challenges encountered by Chinese academics in science and engineering when writing for English-language scholarly publications. Employing a mixed-methods approach, the research draws on survey responses from 732 participants and insights from semi-structured interviews with 13 interviewees. Sentence construction emerged as the most significant challenge, followed by issues with vocabulary selection, cohesive devices, coherence, and grammar, with notable variation across academic ranks. Common strategies to address these challenges include utilizing online tools, seeking peer support, and employing professional editing services. The findings offer actionable recommendations for tailored academic writing training, institutional support, and the integration of advanced technological tools, aiming to enhance publication success rates among non-native English-speaking scholars.
    DOI:  https://doi.org/10.1371/journal.pone.0324760
  14. Nature. 2025 May 30.
      
    Keywords:  Authorship; Careers; Lab life; Publishing
    DOI:  https://doi.org/10.1038/d41586-025-01614-x
  15. J Bone Joint Surg Am. 2025 May 30.
       BACKGROUND: The integration of artificial intelligence (AI), particularly large language models (LLMs), into scientific writing has led to questions about its ethics, prevalence, and impact in orthopaedic literature. While tools have been developed to detect AI-generated content, the interpretation of AI detection percentages and their clinical relevance remain unclear. The aim of this study was to quantify AI involvement in published orthopaedic manuscripts and to establish a statistical threshold for interpreting AI detection percentages.
    METHODS: To establish a baseline, 300 manuscripts published in the year 2000 were analyzed for AI-generated content with use of ZeroGPT. This was followed by an analysis of 3,374 consecutive orthopaedic manuscripts published after the release of ChatGPT. A 95% confidence interval was calculated in order to set a threshold for significant AI involvement. Manuscripts with AI detection percentages above this threshold (32.875%) were considered to have significant AI involvement in their content generation.
    RESULTS: Empirical analysis of the 300 pre-AI-era manuscripts revealed a mean AI detection percentage (and standard deviation [SD]) of 10.84% ± 11.02%. Among the 3,374 post-AI-era manuscripts analyzed, 16.7% exceeded the AI detection threshold of 32.875% (2 SDs above the baseline for the pre-AI era), indicating significant AI involvement. No significant difference was found between primary manuscripts and review studies (percentage with significant AI involvement, 16.4% and 18.2%, respectively; p = 0.40). The rate of significant AI involvement varied across journals, ranging from 5.6% in The American Journal of Sports Medicine to 38.3% in The Journal of Bone & Joint Surgery (p < 0.001).
    CONCLUSIONS: This study examined AI assistance in the writing of published orthopaedic manuscripts and provides the first evidence-based threshold for interpreting AI detection percentages. Our results revealed significant AI involvement in 16.7% of recently published orthopaedic literature. This finding highlights the importance of clear guidelines, ethical standards, responsible AI use, and improved detection tools to maintain the quality, authenticity, and integrity of orthopaedic research.
    DOI:  https://doi.org/10.2106/JBJS.24.01462
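    The threshold logic above reduces to simple arithmetic on the pre-AI-era baseline (mean plus two standard deviations); a minimal sketch, with helper names that are illustrative rather than the authors' code:

# Hypothetical sketch of the threshold arithmetic described: flag a manuscript
# when its ZeroGPT score exceeds the pre-AI-era mean plus two standard deviations.
def ai_involvement_threshold(baseline_mean: float, baseline_sd: float) -> float:
    """Mean + 2 SD of the pre-AI-era detection scores."""
    return baseline_mean + 2 * baseline_sd

def has_significant_ai_involvement(score: float, threshold: float) -> bool:
    return score > threshold

# Reported baseline: mean 10.84%, SD 11.02% -> about 32.88%, consistent with the
# paper's 32.875% once rounding of the published summary statistics is allowed for.
threshold = ai_involvement_threshold(10.84, 11.02)
print(round(threshold, 2))  # 32.88
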
  16. Radiologia (Engl Ed). 2025 May-Jun;67(3): 251-252. pii: S2173-5107(25)00074-6. [Epub ahead of print]
      
    DOI:  https://doi.org/10.1016/j.rxeng.2025.02.002
  17. J Epidemiol Popul Health. 2025 May 26;73(3): 203103. pii: S2950-4333(25)00297-6. [Epub ahead of print]
      
    DOI:  https://doi.org/10.1016/j.jeph.2025.203103
  18. PLoS One. 2025;20(5): e0322696
      Grant peer review processes are pivotal in allocating substantial research funding, yet concerns about their reliability persist, primarily due to low inter-rater agreement. This study aims to examine factors associated with agreement among peer reviewers in grant evaluations, leveraging data from 134,991 reviews across four Norwegian research funders. Using a cross-classified linear regression model, we will explore the relationship between inter-rater agreement and multiple factors, including reviewer similarity, experience, expertise, research area, application characteristics, review depth, and temporal trends. Our findings are expected to shed light on whether similarity between reviewers (gender, age), their experience, or expertise correlates with higher agreement. Additionally, we will investigate whether characteristics of the applications, such as funding amount, research area, or variability in project size, affect agreement levels. By analyzing applications from diverse disciplines and funding schemes, this study aims to provide a comprehensive understanding of the drivers of inter-rater agreement and their implications for grant peer review reliability. The results will inform improvements to peer review processes, enhancing the fairness and validity of funding decisions. All data and analysis scripts will be publicly available, ensuring transparency and reproducibility.
    DOI:  https://doi.org/10.1371/journal.pone.0322696
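    A simplified, hypothetical sketch of an analysis in this spirit: pairwise reviewer disagreement regressed on reviewer- and application-level factors, with crossed random effects approximated via statsmodels variance components. The file, column names, and model specification are assumptions and need not match the registered analysis:

# Hypothetical, simplified sketch of a cross-classified disagreement model.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

reviews = pd.read_csv("review_pairs.csv")  # assumed: abs_score_diff, same_gender,
                                           # experience_gap, research_area, funder,
                                           # application_id, reviewer_pair_id

model = smf.mixedlm(
    "abs_score_diff ~ same_gender + experience_gap + C(research_area) + C(funder)",
    data=reviews,
    groups=np.ones(len(reviews)),  # single stratum, so the components below are crossed
    re_formula="0",                # no per-group random intercept
    vc_formula={"application": "0 + C(application_id)",
                "pair": "0 + C(reviewer_pair_id)"},
).fit()
print(model.summary())
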
  19. J Pediatr Soc North Am. 2024 Nov;9: 100072
       Background: Open access (OA) articles are freely accessible online, either on the publisher/journal website or in a repository, a publicly available, free-of-charge online database. The primary aim of this study was to investigate whether OA publication confers a citation advantage in pediatric orthopaedics.
    Methods: Pediatric orthopaedic studies published in English from January 1, 2012 to June 30, 2012 were identified through the Excerpta Medica database, Cochrane, and PubMed. Abstract screening and full-text evaluation were performed in duplicate. Citation counts over 10 years following publication, 2012 journal impact factor, OA status, type of OA, journal field, geographic location of senior author and journal publication, study design, study focus, subspecialty, level of evidence, and presence of funding were recorded. Statistical analyses were performed using independent samples t-tests, 1-way analysis of variance, χ2 tests, and multiple regression analysis.
    Results: Of this study's 989 pediatric orthopaedic articles, 43.8% were OA. The mean citation count was 19.8 ± 24.4 on Web of Science. A higher percentage of non-OA articles than OA articles were published in a journal from North America, had a senior author from North America, were indexed in Journal Citation Reports, and were published in orthopaedic journals (P < .001). In multiple regression analysis, OA publication, higher levels of evidence, publication in a journal with a higher impact factor, having a senior author from Europe or North America, and study funding were associated with significantly increased citation counts. OA articles were cited an additional 3 times, on average, over 10 years.
    Conclusions: Open access publication in pediatric orthopaedics confers an advantage of 3 extra citations over a decade, on average.
    Key Concepts: (1) Publishing pediatric orthopaedic articles open access confers an advantage of 3 additional citations over a decade compared to non-open access publication. (2) 43.8% of pediatric orthopaedic articles are published open access.
    Level of Evidence: III.
    Keywords:  Citation rate; Open access; Publication trends
    DOI:  https://doi.org/10.1016/j.jposna.2024.100072
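    A minimal sketch of a multiple regression along the lines described, with the 10-year citation count modelled on open access status and covariates; the file and column names are assumptions, and the coefficient on open_access would correspond to the roughly 3-citation advantage reported:

# Hypothetical sketch: OLS regression of 10-year citation counts on OA status
# and study-/journal-level covariates. File and column names are assumed;
# open_access is assumed to be coded 0/1.
import pandas as pd
import statsmodels.formula.api as smf

articles = pd.read_csv("pediatric_ortho_articles.csv")

model = smf.ols(
    "citations_10y ~ open_access + impact_factor_2012 + C(level_of_evidence) "
    "+ C(senior_author_region) + funded",
    data=articles,
).fit()
print(model.params["open_access"])  # expected to be about +3 citations per the study
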