bims-skolko Biomed News
on Scholarly communication
Issue of 2025-11-09
thirteen papers selected by
Thomas Krichel, Open Library Society



  1. Eur Urol Oncol. 2025 Nov 03. pii: S2588-9311(25)00282-2. [Epub ahead of print]
      Artificial intelligence (AI) could help optimise patient care, but trust and ethical concerns are limiting its clinical adoption. Current practices for reporting evidence-based guidelines, notably a lack of open access and ambiguity in phrasing, are a barrier to mitigating these concerns. We discuss the issues involved and suggest a path forward to improve AI adoption in evidence-based health care.
    DOI:  https://doi.org/10.1016/j.euo.2025.10.008
  2. Sci Rep. 2025 Nov 04. 15(1): 38604
      Scientific journals often rely on informal methods to evaluate reviewers, such as editor ratings and author feedback. Reviewer self-assessment offers a promising yet underexplored approach to improving the peer-review process. This study examined the factors associated with reviewers' self-assessments. We surveyed 642 reviewers and editors from three Information Systems (IS) conferences (January-February 2020), and 144 responses were analyzed using quantitative inferential statistics. Most respondents were male (72.2%) and based in Europe (59%). We found no significant association between self-assessment and conventional experience markers (reviewing and publishing experience). In contrast, significant associations were observed between higher self-assessment and the perceived importance of feedback from editors (χ² = 19.689, p ≈ 0.002), feedback from authors (χ² = 25.168, p < 0.001), and formal training (χ² = 14.64, p ≈ 0.047). Although our sample comes from IS settings, these mechanisms are process-based; the findings may therefore extend to the broader peer-review ecosystem. Overall, organizational interventions, structured feedback from editors and authors, and formal training are more closely related to reviewers' self-assessments than accumulated publishing or reviewing experience.
    Keywords:  Empirical study; Peer-review; Reviewer experience; Self-assessment; Survey questionnaire
    DOI:  https://doi.org/10.1038/s41598-025-22352-0
  3. Nature. 2025 Nov 07.
      
    Keywords:  Computer science; Ethics; Publishing
    DOI:  https://doi.org/10.1038/d41586-025-03664-7
  4. R Soc Open Sci. 2025 Nov;12(11): 251805
      Science relies on integrity and trustworthiness. But scientists under career pressure are lured to purchase fake publications from 'paper mills' that use AI-generated data, text and image fabrication. The number of low-quality or fraudulent publications is rising to hundreds of thousands per year, which, if unchecked, will damage the scientific and economic progress of our societies. The result is editor and reviewer fatigue, irreproducible or misguided experiments, disinformation and escalating costs that devour taxpayer funding intended for research. It is high time to re-evaluate current publishing models and outline a global plan to stop this unhealthy development. A conference was therefore organized by the Royal Swedish Academy of Sciences to draft an action plan with specific recommendations, as follows. (i) Academia should resume control of publishing using non-profit publishing models (e.g. diamond open access). (ii) Adjust incentive systems to reward quality, not quantity, in a reputation economy where the gaming of publication numbers and citation metrics distorts the perception of academic excellence. (iii) Implement mechanisms, independent of publishers, to prevent and detect fake publications and fraud. (iv) Draft and implement legislation, regulations and policies to increase publishing quality and integrity. This is a call to action for universities, academies, science organizations and funders to unite and join this effort.
    Keywords:  AI; artificial intelligence; fake publications; fraud; libraries; paper mill; scholarly academies; science integrity
    DOI:  https://doi.org/10.1098/rsos.251805
  5. Naturwissenschaften. 2025 Nov 03. 112(6): 85
      DeepSeek and Grok 3 have emerged as strong competitors to established AI models, particularly the widely adopted ChatGPT. Accurate handling of data from retracted scientific articles has proven to be a significant challenge for AI as an assistant in scientific research. It is critical to understand whether and how three AI models handle information from retracted articles when answering scientific questions. We collected retracted articles, used the AI models to generate questions, and analyzed the answers. The answers were compared and evaluated across the three AI models. Here we show that these three models used 84 of 93 retracted articles in their answers about stem cells. ChatGPT-4o retrieved 74 of 93 (80%) articles and recognized the retraction status of 46 (62%) of them. DeepSeek found only one retracted article and did not recognize its retraction status. Grok 3 retrieved 69 (74%) articles and recognized the retraction status of 46 (67%) of them. When the retracted articles were not identified, ChatGPT fabricated articles in 5 of 19 (26%) of its answers, and Grok 3 fabricated articles in 15 of 24 (63%) of its answers. In 82 of 93 (88%) answers, DeepSeek fabricated articles in various forms. The answering styles of ChatGPT-4o, DeepSeek, and Grok 3 are characterized, respectively, by accurate and straightforward answers, a tangential structure with guesswork, and comprehensive and detailed answers. Analysis with non-retracted articles revealed similar patterns across these models. This study suggests that, while no model is perfect, DeepSeek performed the worst when facing in-depth, real-world scientific challenges. Much improvement is needed before any of these AI models becomes problem-free and valuable for scientists.
    Keywords:  AI; Article; Cell; ChatGPT; DeepSeek; Grok3; Publication; Retraction; Stem
    DOI:  https://doi.org/10.1007/s00114-025-02036-5
  6. Glob Adv Integr Med Health. 2025 Jan-Dec;14: 27536130251384272
       Background: Detailed intervention reporting is essential to the interpretation, replication, and eventual translation of music-based interventions (MBIs) into practice. Despite the availability of the Reporting Guidelines for Music-based Interventions (RG-MBI, published 2011), multiple reviews reveal sustained problems with reporting quality and consistency. To address this, we convened an interdisciplinary expert panel to update and improve the utility and validity of the existing guidelines using a rigorous Delphi approach. The resulting updated checklist includes 12 items across eight areas considered essential to ensure transparent reporting of MBIs.
    Methods: The purpose of this explanation and elaboration document is to facilitate consistent understanding, use, and dissemination of the revised RG-MBI. Members of the interdisciplinary expert panel collaborated to create the resulting guidance statement.
    Results: This guidance statement offers: (1) the scope and intended use of the RG-MBI, (2) an explanation for each checklist item, with examples from published studies, and (3) two published studies with annotations indicating where the authors reported each checklist item.
    Conclusion: Broader uptake of the RG-MBI by study authors, editors, and peer reviewers will lead to better reporting of MBI trials and, in turn, facilitate greater replication of research, improve cross-study comparisons and meta-analyses, and increase implementation of findings.
    Keywords:  guidance statement; intervention; music; music therapy; reporting guidelines
    DOI:  https://doi.org/10.1177/27536130251384272
  7. Glob Adv Integr Med Health. 2025 Jan-Dec;14: 27536130251384199
       Background: Detailed intervention reporting is essential to interpretation, replication, and translation of music-based interventions (MBIs). The 2011 Reporting Guidelines for Music-Based Interventions were developed to improve transparency and reporting quality of published research; however, problems with reporting quality persist. This represents a significant barrier to advances in MBI scientific research and translation of findings to practice.
    Methods: The purpose of this study was to update and validate the 2011 reporting guidelines using a rigorous Delphi approach that involved an interdisciplinary group of MBI researchers, and to develop an explanation and elaboration guidance statement to support dissemination and usage. We followed the methodological framework for developing reporting guidelines recommended by the EQUATOR Network and guidance recommendations for developing health research reporting guidelines. Our three-stage process included: (1) an initial field scan, (2) a consensus process using Delphi surveys (two rounds) and Expert Panel meetings, and (3) development and dissemination of an explanation and elaboration document.
    Results: First-round survey findings revealed that the original checklist items captured content that investigators deemed essential to MBI reporting; however, the findings also revealed problems with item wording and terminology. Subsequent Expert Panel meetings and the second-round survey centered on reaching consensus on item language. The revised RG-MBI checklist has a total of 12 items pertaining to eight components of MBIs: name, theory/scientific rationale, content, interventionist, individual/group, setting, delivery schedule, and treatment fidelity.
    Conclusion: We recommend that authors, journal editors, and reviewers use the RG-MBI guidelines, in conjunction with methods-based guidelines (e.g., CONSORT), to accelerate MBI research and improve its scientific rigor.
    Keywords:  interventions; music; music therapy; reporting guidelines; reporting quality
    DOI:  https://doi.org/10.1177/27536130251384199
  8. Am J Vet Res. 2025 Oct 22. 86(11). pii: ajvr.86.11.editorial. [Epub ahead of print]
      
    DOI:  https://doi.org/10.2460/ajvr.86.11.editorial
  9. Soc Stud Sci. 2025 Nov 02. 3063127251386079
      Concerns over the complexity and costs of drug development have led some to consider whether practices of open science should be extended to pharmaceuticals, a space known for entrenched intellectual property regimes. In this article, I trace the emergence of collective action to apply open science to the research and making of drugs, an area I call open pharma. Drawing on in-depth interviews with open pharma leaders and document analysis of journal articles, organizational policies, and websites, I show that open pharma resembles other scientific/intellectual movements in developing new research practices and transmitting new ideas for sharing data. At the same time, the sociotechnical space of pharmaceuticals is deeply entwined in capitalist political economic structures (legal, regulatory, and financial markets) that shape how actors frame and organize their work. I identify key narratives that actors use to frame the movement and mobilize others, often drawing on market logics. I illustrate the active building and institutionalizing of open pharma infrastructure through the establishment of organizations and open science policies. And I describe structural barriers to open pharma in universities with publishing and commercialization imperatives, which are frequently translated into patent imperatives. 'Open' is often defined and operationalized in particular ways, prioritizing public data sharing of early research (which may later be privatized) over such interventions as public clinical trials and commercialization, raising the question of where, when, and for whom open pharma is beneficial.
    Keywords:  biomedical advancement; open science; patents; pharmaceutical industry
    DOI:  https://doi.org/10.1177/03063127251386079
  10. Indian J Med Ethics. 2025 Jul-Sep;X(3): 244-245
      Medical students face authorship issues as they become increasingly involved in research. Senior researchers often claim undue credit, while students lack support and awareness of their rights. Fear of retaliation and power imbalances worsen the issue. Solutions such as ethics training, student representation on research committees, and mandatory formal authorship agreements have been proposed. These can create a more ethical research environment for future medical professionals.
    DOI:  https://doi.org/10.20529/IJME.2025.039
  11. Health Care Sci. 2025 Oct;4(5): 355-358
      
    Keywords:  FDA policy; clinical trials; data security; genetic engineering; scientific collaboration
    DOI:  https://doi.org/10.1002/hcs2.70038
  12. NPJ Precis Oncol. 2025 Nov 06. 9(1): 341
      Translational research in metastatic cancer is limited by insufficient metastatic samples. Post-mortem tissue donation programs address this issue by facilitating comprehensive sample collection. Sustaining such programs requires an open science environment (OSE) to ensure multidisciplinary collaboration, research standards, and patient privacy. While open science is often visible only at publication, we demonstrate the benefit of developing it in the upstream phases of research by presenting the OSE from our institutional post-mortem tissue donation program UPTIDER (NCT04531696). It contains (i) an electronic case report form capturing >750 clinical features, including treatment lines and metastases; (ii) a laboratory information management system tracking >100 metadata features, from logistical to anatomical information; (iii) a code versioning system; (iv) long-term data and sample storage; and (v) code and data sharing upon publication. By ensuring up-to-date access to information, our OSE shows the potential to accelerate translational research. While our OSE was tailored to UPTIDER, we believe our experiences can inspire others.
    DOI:  https://doi.org/10.1038/s41698-025-01110-5
  13. Int J Qual Methods. 2025 Jan-Dec;24
      Qualitative research data, such as data from focus groups and in-depth interviews, are increasingly made publicly available and used by secondary researchers, which promotes open science and improves research transparency. This has prompted concerns about the sensitivity of these data, participant confidentiality, data ownership, and the time burden and cost of de-identifying data. As more qualitative researchers (QRs) share sensitive data, they will need support to share responsibly. Few repositories provide qualitative data sharing guidance, and currently, researchers must manually de-identify data prior to sharing. To address these needs, our qualitative data sharing (QDS) team worked to identify and reduce ethical and practical barriers to sharing qualitative research data in health sciences research. We developed specific QDS guidelines and tools for data de-identification, depositing, and sharing. Additionally, we developed and tested Qualitative Data Sharing (QuaDS) software to support qualitative data de-identification. We assisted 28 qualitative health science researchers in preparing and de-identifying data for deposit in a repository. Here, we describe the process of recruiting, enrolling, and assisting QRs in using the guidelines and software, and we report on the revisions we made to our processes and software based on feedback from QRs and curators and on observations made by project team members. Through our pilot project, we demonstrate that qualitative data sharing is feasible and can be done responsibly.
    Keywords:  data confidentiality; data repositories; qualitative data anonymization; qualitative data sharing; qualitative researchers; research compliance; social sciences; software
    DOI:  https://doi.org/10.1177/16094069251329607