bims-skolko Biomed News
on Scholarly communication
Issue of 2025-09-07
35 papers selected by
Thomas Krichel, Open Library Society



  1. Health Promot Int. 2025 Sep 03. pii: daaf146. [Epub ahead of print]40(5):
      Medical writing is a key element in pharmaceutical companies' efforts to shape the relevant medical science literature. As part of what is called 'publication planning', medical writing can influence the knowledge base on which prescribers make decisions, and can build specific claims in targeted sales efforts. Most publication planning is done by hired medical education and communication companies (MECCs), with the rest done by other commercial entities, such as units of pharmaceutical companies or of contract research organizations, that provide essentially the same services as MECCs. Here we provide an estimate of the number of MECCs and comparable entities contributing to the medical science literature in English. To identify these companies, we collected data from Web of Science (858 named firms from 20 498 papers mentioning medical writing assistance), LinkedIn (410 company profiles), and Google and DuckDuckGo (68 company websites). After removing duplicates and false positives, we found 1148 MECCs and other comparable entities providing medical writing services. More than 50% of Web of Science papers that acknowledged medical writing support were sponsored by only ten pharmaceutical companies. Most of the remaining papers in our database were sponsored by other pharmaceutical, device, and biotechnology companies. This study likely undercounts MECCs, because it depends on some level of transparency in publications or other leakage of information. Combining multiple data sources should, however, limit this undercount. The study does not identify MECCs that work exclusively in languages other than English.
    Keywords:  industry-funded research; medical education and communication companies; medical writing industry; pharmaceutical companies; scientific communication
    DOI:  https://doi.org/10.1093/heapro/daaf146
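    The abstract above describes merging company names gathered from Web of Science, LinkedIn, and web searches and then removing duplicates, but not how the merging was done. A minimal Python sketch of that kind of name normalization and deduplication, using made-up company names and normalization rules that are assumptions rather than the authors' actual procedure, might look like this:
      # Illustrative sketch only: normalize and deduplicate company names
      # gathered from several sources. Rules and example names are assumed,
      # not taken from the study.
      import re

      def normalize(name):
          """Lowercase, drop punctuation, and strip common corporate suffixes."""
          name = name.lower()
          name = re.sub(r"[^\w\s]", " ", name)                    # drop punctuation
          name = re.sub(r"\b(inc|ltd|llc|gmbh|co)\b", " ", name)  # drop suffixes
          return " ".join(name.split())                           # collapse whitespace

      sources = {
          "web_of_science": ["Acme Medical Writing, Inc.", "ScribeWell Ltd"],
          "linkedin":       ["Acme Medical Writing", "MedComms Partners LLC"],
          "websites":       ["ScribeWell", "MedComms Partners"],
      }

      unique = {normalize(n) for names in sources.values() for n in names}
      print(len(unique), sorted(unique))  # 3 distinct entities after merging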
  2. Nature. 2025 Sep 04.
      
    Keywords:  Peer review; Publishing; Scientific community
    DOI:  https://doi.org/10.1038/d41586-025-02809-y
  3. World J Methodol. 2025 Sep 20. 15(3): 98795
      The rise of the "fishing reviewer" phenomenon presents a significant threat to the integrity of academic publishing, undermining the credibility of the peer review process and eroding trust in scientific journals. This editorial explores the risk factors contributing to this troubling trend and identifies key indicators to recognize such reviewers. To address this issue, we propose strategies, including enhanced reviewer vetting, comprehensive training, and transparent recognition policies to foster a culture of accountability and ethical conduct in scholarly review. By implementing these measures, we can safeguard the quality and credibility of academic research.
    Keywords:  Academic research; Editor; Fishing reviewer; Scholarly community; Scientific journal; Scientific publication
    DOI:  https://doi.org/10.5662/wjm.v15.i3.98795
  4. Korean J Radiol. 2025 Sep;26(9): 801-803
      
    Keywords:  Academic publishing; Audio summary; Generative artificial intelligence; Journal; Large language model; NotebookLM; Podcast
    DOI:  https://doi.org/10.3348/kjr.2025.0845
  5. Big Data Cogn Comput. 2024 Oct;pii: 133. [Epub ahead of print]8(10):
      We assessed 19,000 scientific introductions to measure the level of undisclosed use of ChatGPT in scientific papers published in 2023 and early 2024. We applied a "stylistics" approach that has previously been shown to be effective at differentiating AI-generated text from human-written text in a variety of venues. Ten different MDPI journals were selected for this study, and the rate of use of undisclosed AI writing in these journals was fairly consistent across the journals. We estimate that ChatGPT was used for writing or significant editing in about 1 to 3% of the introductions tested. This analysis is the first systematic study of detecting undisclosed ChatGPT in published manuscripts in cases where obvious indicators, such as phrases like "regenerate response", are not present. The work demonstrates that generative AI is not polluting mainstream journals to any appreciable extent and that the overwhelming majority of scientists remain hesitant to embrace this tool for late-stage writing and editing.
    Keywords:  AI text detection; ChatGPT; MDPI; XGBoost; academic publication; ethics; generative AI
    DOI:  https://doi.org/10.3390/bdcc8100133
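    The "stylistics" approach mentioned above pairs surface features of the text with an XGBoost classifier (per the keywords). The actual feature set and corpus are not given in the abstract, so the following sketch is purely illustrative, with assumed features and a toy labelled corpus standing in for the 19,000 introductions:
      # Illustrative sketch only: a stylistic AI-text classifier in the spirit
      # of the study above. Features and toy texts are assumptions, not the
      # authors' feature set or data.
      import numpy as np
      from xgboost import XGBClassifier

      def stylistic_features(text):
          words = text.split()
          sentences = [s for s in text.split(".") if s.strip()]
          return [
              len(words) / max(len(sentences), 1),              # mean sentence length
              sum(len(w) for w in words) / max(len(words), 1),  # mean word length
              text.count(",") / max(len(words), 1),             # comma density
          ]

      # Placeholder corpus; real use would supply thousands of labelled introductions.
      human_texts = ["We measured outcomes in a small cohort.",
                     "Prior work is sparse, and the results conflict."]
      ai_texts = ["In recent years, the field has witnessed remarkable advancements.",
                  "This study aims to provide a comprehensive overview of the topic."]
      X = np.array([stylistic_features(t) for t in human_texts + ai_texts])
      y = np.array([0, 0, 1, 1])  # 0 = human-written, 1 = AI-generated

      clf = XGBClassifier(n_estimators=50, max_depth=3)
      clf.fit(X, y)
      print(clf.predict(X))  # sanity check on the toy training data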
  6. Reg Anesth Pain Med. 2025 Sep 02. pii: rapm-2025-106852. [Epub ahead of print]
       INTRODUCTION: The use of artificial intelligence (AI) in the scientific process is advancing at a remarkable speed, thanks to continued innovations in large language models. While AI provides widespread benefits, including editing for fluency and clarity, it also has drawbacks, including fabricated content, perpetuation of bias, and lack of accountability. The editorial board of Regional Anesthesia & Pain Medicine (RAPM) therefore sought to develop best practices for AI usage and disclosure.
    METHODS: A steering committee from the American Society of Regional Anesthesia and Pain Medicine used a modified Delphi process to address definitions, disclosure requirements, authorship standards, and editorial oversight for AI use in publishing. The committee reviewed existing publication guidelines and identified areas of ambiguity, which were translated into questions and distributed to an expert workgroup of authors, reviewers, editors, and AI researchers.
    RESULTS: Two survey rounds, with 91% and 87% response rates, were followed by focused discussion and clarification to identify consensus recommendations. The workgroup achieved consensus on recommendations to authors about definitions of AI, required items to report, disclosure locations, authorship stipulations, and AI use during manuscript preparation. The workgroup formulated recommendations to reviewers about monitoring and evaluating the responsible use of AI in the review process, including the endorsement of AI-detection software, identification of concerns about undisclosed AI use, situations where AI use may necessitate the rejection of a manuscript, and use of checklists in the review process. Finally, there was consensus about AI-driven work, including required and optional disclosures and the use of checklists for AI-associated research.
    DISCUSSION: Our modified Delphi study identified practical recommendations on AI use during the scientific writing and editorial process. The workgroup highlighted the need for transparency, human accountability, protection of patient confidentiality, editorial oversight, and iterative updates. The proposed framework enables authors and editors to harness AI's efficiencies while maintaining the fundamental principles of responsible scientific communication and may serve as an example for other journals.
    Keywords:  EDUCATION; Methods; TECHNOLOGY
    DOI:  https://doi.org/10.1136/rapm-2025-106852
  7. J Thorac Oncol. 2025 Aug 28. pii: S1556-0864(25)00985-2. [Epub ahead of print]
      
    DOI:  https://doi.org/10.1016/j.jtho.2025.07.133
  8. Malays J Pathol. 2025 Aug;47(2): 209
      No abstract available.
  9. Am Heart J Plus. 2025 Oct;58: 100586
     Background: Artificial intelligence (AI) technologies are rapidly evolving and offer efficiencies in manuscript generation; however, these technologies have raised concerns about the potential for bias, errors, and plagiarism. In response, some journals have updated their author guidelines to address AI use.
    Methods: We assessed author guidelines for 213 MEDLINE-indexed cardiovascular journals to evaluate policies on AI use in manuscript writing. Journal metrics such as CiteScore, Journal Impact Factor (JIF), Journal Citation Indicator (JCI), Source Normalized Impact per Paper (SNIP), and SCImago Journal Rank (SJR) were compared between journals with and without AI policies. We further analyzed the association between AI policy adoption and society affiliation. We reviewed the criteria for listing AI as an author and allowances for AI-generated content.
    Results: Of 213 journals, 170 (79.8 %) had AI policies consistent across evaluations. Policies were present in 115 of 147 (78 %) cardiology journals and 113 of 127 (89 %) vascular journals. Furthermore, 111 of 143 (77.6 %) society-affiliated journals had AI-use policies, as did 59 of 70 (84.2 %) unaffiliated journals. Journal metrics did not significantly differ between journals with and without AI policies (P > 0.05). Among journals with policies, 156 out of 158 (98.7 %) excluded AI as authors, while all allowed AI-assisted content.
    Conclusion: Many cardiovascular journals address AI-generated content, but gaps remain in policies and disclosure requirements for AI-created manuscripts. The presence of AI-use policies was independent of journal metrics or society affiliation.
    Keywords:  AI; AI policies; Artificial intelligence; Cardiology journals; Ethical standards; Journal policy; LLM; Large language models; Vascular journals
    DOI:  https://doi.org/10.1016/j.ahjo.2025.100586
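    The abstract above reports that journal metrics did not differ significantly between journals with and without AI policies (P > 0.05) but does not name the test used. A small sketch of such a two-group comparison, assuming a Mann-Whitney U test and made-up CiteScore values, is:
      # Hypothetical sketch of the kind of comparison reported above.
      # The test choice and the sample values are assumptions.
      from scipy.stats import mannwhitneyu

      citescore_with_policy = [4.1, 6.3, 2.8, 5.0, 7.2, 3.9]  # made-up values
      citescore_without_policy = [3.7, 5.5, 2.9, 6.1, 4.4]    # made-up values

      stat, p = mannwhitneyu(citescore_with_policy, citescore_without_policy,
                             alternative="two-sided")
      print(f"U = {stat:.1f}, P = {p:.3f}")  # P > 0.05 means no significant difference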
  10. J Oral Maxillofac Surg. 2025 Sep;pii: S0278-2391(25)00259-9. [Epub ahead of print]83(9): 1065-1066
      
    DOI:  https://doi.org/10.1016/j.joms.2025.04.026
  11. J Oral Maxillofac Surg. 2025 Sep;pii: S0278-2391(25)00258-7. [Epub ahead of print]83(9): 1065
      
    DOI:  https://doi.org/10.1016/j.joms.2025.04.025
  12. PLoS One. 2025;20(9): e0331697
      Responsible data sharing in clinical research can enhance the transparency and reproducibility of research evidence, thereby increasing the overall value of research. Since 2024, more than 5,000 journals have adhered to the International Committee of Medical Journal Editors (ICMJE) Data Sharing Statement (DSS) to promote data sharing. However, due to the significant effort required for data sharing and the scarcity of academic rewards, data availability in clinical research remains suboptimal. This study aims to explore the impact of biomedical journal policies and available supporting information on the implementation of data availability in clinical research publications. This cross-sectional study will select 303 journals and their latest publications as samples from the biomedical journals listed in the Web of Science Journal Citation Reports, based on stratified random sampling according to the 2023 Journal Impact Factor (JIF). Two researchers will independently extract journal data-sharing policies from the submission guidelines of eligible journals and data-sharing details from publications using a pre-designed form from Apr 2025 to Dec 2025. The data sharing levels of publications will be based on the openness of the data-sharing mechanism. Binomial logistic regression analyses will be used to identify potential journal factors that affect publication data-sharing levels. This protocol has been registered in Open Science Framework (OSF) Registries: https://doi.org/10.17605/OSF.IO/EX6DV.
    DOI:  https://doi.org/10.1371/journal.pone.0331697
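    The protocol above plans a binomial logistic regression of publication data-sharing levels on journal-level factors. A minimal sketch of that kind of model, with made-up variables and data (the variable names are assumptions, not the protocol's extraction form), could be:
      # Hypothetical sketch of the planned analysis described above.
      # Variable names and data are illustrative assumptions only.
      import numpy as np
      import pandas as pd
      import statsmodels.formula.api as smf

      # One made-up row per sampled publication.
      df = pd.DataFrame({
          "data_shared":      [1, 0, 1, 1, 0, 0, 1, 0, 1, 0],  # 1 = data openly available
          "policy_mandatory": [1, 0, 1, 1, 0, 0, 1, 1, 0, 0],  # journal requires a data-sharing statement
          "jif":              [12.1, 2.3, 8.7, 5.4, 1.9, 3.2, 6.6, 9.8, 4.1, 2.7],
      })

      # Binomial logistic regression of data availability on journal factors.
      model = smf.logit("data_shared ~ policy_mandatory + jif", data=df).fit(disp=False)
      print(np.exp(model.params))  # odds ratios for each journal factor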
  13. BMC Med. 2025 Sep 01. 23(1): 510
       BACKGROUND: The International Committee of Medical Journal Editors (ICMJE) recommends that trial authors must specify data sharing plans when trials are registered and published, yet this uptake remains unclear. We aimed to assess the practice of data sharing plans in trial registration platforms and the concordance between registered and published data sharing plans.
    METHODS: We included clinical trials published between 2021 and 2023 in six high-profile journals (The Lancet, The New England Journal of Medicine, JAMA, BMJ, JAMA Internal Medicine, and Annals of Internal Medicine) that enrolled participants no earlier than 2019 and were registered on clinical trial platforms. One study outcome was data sharing plans in the trial registration platform, where trials that clearly responded "yes" to "Plan to share" were considered to be planning to share data (including study protocols, statistical analysis plans, analytic codes, and individual participant data). The concordance between registered and published plans to share data was also assessed, defined as plans either to share data (Yes/Yes) or not to share data (No/No) in both registration and publications. Univariate analyses were used to assess associations between trial characteristics and registered plans to share data, and between trial characteristics and concordance.
    RESULTS: Of the 383 included registration IDs, only 44.6% (171/383) planned to share data in registration. Trials with drug versus non-drug interventions had increased odds of registering plans to share data (OR = 2.71, 95% CI: 1.63, 4.63). There were seven trial publications, each pooling two trials and having two registration IDs. We selected the registration IDs with a later start date, resulting in 376 trial publications for concordance assessment. Over half (216/376, 57.4%) had discordance between registration and publications. COVID-19-related trials were associated with decreased odds of data sharing concordance (OR = 0.59, 95% CI: 0.37, 0.91). Additionally, significant discordance was consistently found in statistical analysis plans or study protocols, analytic codes, and individual participant data.
    CONCLUSIONS: Most registered trials do not specify plans to share data. More than half of published trials have data sharing discordance between registration and publication. Efforts are required to improve the reporting and reliability of plans to share clinical trial data.
    TRIAL REGISTRATION: This study was registered on the Open Science Framework ( https://osf.io/k6etb ).
    Keywords:  Clinical trial; Data sharing; Plan to share data; Trial publication; Trial registration
    DOI:  https://doi.org/10.1186/s12916-025-04328-z
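    The univariate odds ratios above (e.g., OR = 2.71, 95% CI: 1.63, 4.63 for drug versus non-drug interventions) come from 2x2 comparisons. A worked sketch of an odds ratio with a 95% Wald confidence interval, using made-up counts rather than the study's data, is:
      # Hypothetical sketch: odds ratio and 95% Wald CI from a 2x2 table,
      # the kind of univariate analysis reported above. Counts are made up.
      import math

      # Rows: drug vs non-drug trials; columns: registered plan to share data (yes/no).
      a, b = 120, 80   # drug trials:     plan yes, plan no
      c, d = 51, 132   # non-drug trials: plan yes, plan no

      odds_ratio = (a * d) / (b * c)
      se_log_or = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
      lower = math.exp(math.log(odds_ratio) - 1.96 * se_log_or)
      upper = math.exp(math.log(odds_ratio) + 1.96 * se_log_or)
      print(f"OR = {odds_ratio:.2f}, 95% CI: {lower:.2f} to {upper:.2f}")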
  14. Clin Dermatol. 2025 Aug 26. pii: S0738-081X(25)00211-1. [Epub ahead of print]
      Assessing academic performance in dermatology is an interesting and evolving challenge. Early-career researchers often look for clear indicators to identify leading authors; however, reliance on single measures such as citation counts or the h-index provides only a limited view of scholarly influence. Using diverse bibliometric indicators from Scopus, we observed that author rankings shifted considerably depending on the metric applied, reflecting the lack of agreement on how best to capture academic impact. Similarly, we noted that ethical publications (letters, notes, and related formats) in dermatology may contribute to scholarly discussions and institutional practices but often receive modest citation profiles, highlighting the gap between measurable indicators and genuine value. Inflated authorship practices and citation manipulation further complicate fair evaluation. Fractional authorship models and multidimensional frameworks-which consider publication type, journal quality, collaboration, funding strength, and broader societal contributions-may offer more balanced perspectives. We suggest that institutions, journals, and training programs promote the ethical use of metrics and integrate qualitative assessments alongside quantitative ones. Such an approach can foster fairness, transparency, and meaningful recognition within dermatology and academic medicine more broadly.
    DOI:  https://doi.org/10.1016/j.clindermatol.2025.08.002
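    The fractional authorship models mentioned above can be made concrete with a short sketch. The two weighting schemes below (an equal split and harmonic, position-weighted credit) are common conventions assumed for illustration; the article's preferred scheme is not specified here:
      # Illustrative sketch of fractional authorship credit. The weighting
      # schemes are assumed conventions, not taken from the article above.
      def equal_credit(n_authors):
          """Each of n authors receives 1/n of the publication credit."""
          return [1 / n_authors] * n_authors

      def harmonic_credit(n_authors):
          """The i-th author receives (1/i) / sum(1/k for k = 1..n)."""
          total = sum(1 / k for k in range(1, n_authors + 1))
          return [(1 / i) / total for i in range(1, n_authors + 1)]

      print(equal_credit(4))                             # [0.25, 0.25, 0.25, 0.25]
      print([round(c, 2) for c in harmonic_credit(4)])   # [0.48, 0.24, 0.16, 0.12]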
  15. BMJ Paediatr Open. 2025 Sep 02. pii: e003717. [Epub ahead of print]9(1):
      
    Keywords:  Cross-Sectional Studies; Epidemiology; Health services research; Qualitative research
    DOI:  https://doi.org/10.1136/bmjpo-2025-003717
  16. Ecol Evol. 2025 Sep;15(9): e71964
      Equity in scientific publishing requires removing financial barriers, structural transformation, and inclusive practices that empower researchers from historically marginalized regions. Here, we reflect on Wiley's recent initiatives supporting Brazilian researchers to integrate into the international publishing ecosystem, including discounted rates for open-access article processing charges, the Wiley-CAPES transformative agreement, and in-country capacity-building events. While some challenges persist, such as linguistic barriers and funding access, we underscore the importance of meaningful local engagement and the coordinated actions among publishers and funding agencies that are supporting a more equitable publishing ecosystem.
    Keywords:  Global South; capacity‐building; decolonial science; global collaboration; scientific publishing
    DOI:  https://doi.org/10.1002/ece3.71964
  17. Eur J Ophthalmol. 2025 Sep;35(5): 1525-1526
      Since January 2025, the European Journal of Ophthalmology has entered a phase of change aimed at strengthening its role as a platform for European researchers. We have restructured the editorial board into specialized sections, introduced new content areas focused on AI, public health, and evidence-based medicine, and we are replacing case reports with a new section dedicated to clinical images. We have also launched official social media accounts to share content more widely and are working on ways to better recognize the contributions of our reviewers. These changes reflect a long-term commitment to building a journal that represents and supports European ophthalmology on the global stage.
    Keywords:  Health economics < glaucoma; cornea / external disease; glaucoma; lens / cataract; pediatric ophthalmology; retina
    DOI:  https://doi.org/10.1177/11206721251367117
  18. Artif Life. 2025 Sep 04. 31(3): 249
      
    DOI:  https://doi.org/10.1162/ARTL.e.11