bims-skolko Biomed News
on Scholarly communication
Issue of 2025-08-17
thirty-six papers selected by
Thomas Krichel, Open Library Society



  1. Account Res. 2025 Aug 10. 1-22
       BACKGROUND: There is growing concern about the scale of journal retractions across the global science system, and about the implications of the increase in retractions for the scientific record and for research integrity. This systematic review aims to further our understanding of existing research on retractions and to offer recommendations for further research.
    METHOD: This systematic review employs a topographical review approach. It examines the volume and growth trajectory of the journal literature on retractions since the first research paper on retractions was published in 1998 and offers insights into publication trends and patterns over this period, focusing on the composition of this knowledge base in terms of research contexts, research methods, and research themes.
    RESULTS: The vast majority of the scholarship on retractions involves quantitative overviews, often relying on basic descriptive statistical analyses of retraction trends and patterns. The results clearly demonstrate that sensitivities and stigma around retractions mean that there have been very few published qualitative studies, and little attention has been paid to the perspectives and experiences of the retracted scholars themselves. Almost no papers have explored the links between the career pressures placed on researchers, the commercial focus of many academic publishers, and the role of 'paper mills' in facilitating authorship in indexed journals.
    CONCLUSIONS: The paper concludes with a call for more holistic and qualitative research on these aspects of retractions and makes a series of practical and policy recommendations.
    Keywords:  Retractions; higher education; publishing; research integrity
    DOI:  https://doi.org/10.1080/08989621.2025.2542203
  2. Eur J Nucl Med Mol Imaging. 2025 Aug 16.
       PURPOSE: To assess nuclear medicine researchers' experiences and attitudes toward image fraud, as well as their perspectives on preventive measures.
    METHODS: This survey targeted corresponding authors who published in three nuclear medicine journals between 2021 and 2024. Participants were asked about their experiences related to medical image fraud, as well as their views on its prevalence, causes, and potential preventive measures.
    RESULTS: Of the 2,837 corresponding authors invited, 284 (10.0%) completed the survey. Most of the 284 respondents were mid-career European male MDs with over 10 years of research experience. While 91% reported never feeling pressured to falsify medical images, 13.7% admitted to falsifying images in the past five years, and 38.7% had witnessed colleagues engaging in such practices. Common forms included cherry-picking, unauthorized image reuse, and misleading enhancements. In the past five years, 1.1% admitted using AI to falsify medical images, while 2.8% reported witnessing colleagues do so. No demographic factors were significantly associated with misconduct. Key drivers cited were publication pressure, competition, and aesthetic expectations. Respondents emphasized the need for greater transparency, oversight, and cultural change. Current safeguards were generally considered ineffective. Stricter policies, increased awareness, and AI tools were suggested as potential solutions.
    CONCLUSIONS: Image fraud in nuclear medicine research appears to be relatively prevalent. It is more frequently witnessed in colleagues than self-reported by individual researchers. The findings highlight the need to foster a culture of research integrity and to adopt stronger preventive measures, including greater awareness, stricter journal policies, and improved controls.
    Keywords:  Fraud; Nuclear medicine; Research; Scientific misconduct
    DOI:  https://doi.org/10.1007/s00259-025-07515-5
  3. Proc Natl Acad Sci U S A. 2025 Aug 19. 122(33): e2507394122
      Scientific institutions like funding agencies and journals rely on peer reviewers to select among competing submissions. How does the geographical diversity of reviewers affect which authors are selected? If reviewers typically favor submissions from their own countries, but reviewers from only some countries are well represented in the reviewer pool, this can create a "geographical representation bias" favoring authors from those well-represented countries. Using administrative data on 204,718 submissions to 60 STEM journals from the Institute of Physics Publishing, we find support for representation bias. Reviewers from the same country as the corresponding author are 4.78 percentage points more likely to review positively than other reviewers of the same manuscript. Authors from the United States of America, China, and India are 8 to 9 times more likely to be evaluated by same-country reviewers than authors from less-represented countries with similar incomes. Furthermore, an instrumental variables analysis of an anonymization policy shock shows that anonymizing submissions does not significantly reduce same-country homophily. Thus, investments in reviewer diversification may be necessary to mitigate the structural advantage of authors from major science-producing countries and avoid blind spots in collective knowledge.
    Keywords:  anonymization; bias; geographic inequality; peer review; publishing
    DOI:  https://doi.org/10.1073/pnas.2507394122
  4. Naunyn Schmiedebergs Arch Pharmacol. 2025 Aug 14.
      Traditional peer review (TPR), despite being touted as the bedrock by which scientific knowledge is screened, vetted, and validated, is riddled with biases, limitations, and abuses, eroding trust not only in this publishing model but also in the scientific record that claims to be peer-reviewed. Two models proposed to fortify the TPR model, open peer review (OPR) and preprints, have themselves shown biases, limitations, and risks of abuse. OPR journals that claim to be peer reviewed should only be rewarded, in terms of indexing and metrics, when they can prove that they have conducted peer review, i.e., when peer review reports are open, named, and transparent, ensuring that authors, editors, and journals (encompassing publishers) are accountable for what has been published. In this narrative review, it is argued that classifying a journal as peer reviewed is complex because peer reports might lie between superficial and detailed on one axis, and between useless and informative on another. A theoretical classification is proposed that separates journals into six categories, five of which would render a journal "whitelisted" while the sixth renders a journal "blacklisted" or "predatory". However, this simplistic classification risks clustering any journal that claims to be peer reviewed into a single basket, amplifying the reputational risk factor underlying TPR and OPR, and accentuating how deep the peer review crisis really is.
    Keywords:  Accountability; Editorial responsibilities; Incentives; Open peer review; Post-publication peer review; Predatory publishing; Scholarly communication; Transparency
    DOI:  https://doi.org/10.1007/s00210-025-04486-0
  5. Cureus. 2025 Jul;17(7): e87817
      The peer review system, fundamental to scientific quality control, faces a significant crisis. As journal editors, we often need to send up to 35 invitations just to secure two reviewers, confronting daily the collapse of voluntary participation. This reflects a critical imbalance: while publication pressure intensifies, willingness to evaluate diminishes, creating "literature elephantiasis", i.e., an overwhelming proliferation of papers exceeding human processing capacity. Current compensation models, relying on token recognition and database access, fail to incentivize quality engagement and may encourage ethically problematic practices like excessive self-citation. The unchecked infiltration of artificial intelligence into peer review, with minimal enforcement, further undermines system integrity. We propose transforming peer reviewers into professional referees, modeled on sports officiating. This radical solution involves formal training and certification for reviewers, equipping them to assess scientific merit, methodology, and ethics comprehensively. Like sports referees supported by assistants, scientific referees would collaborate with specialists, including statisticians, methodology experts, and reference checkers, ensuring thorough evaluation while distributing workload effectively. Funding would come from publishers or research funders, recognizing peer review as an essential, compensated component of the research lifecycle. Implementation faces challenges, including publisher resistance and funding allocation, which we address through phased transition strategies. This professionalization addresses current inequities where conscientious scientists shoulder disproportionate reviewing burdens while others contribute minimally. Professional reviewers would view evaluation as valued career development rather than unwelcome obligation. Critics citing independence concerns overlook the sports analogy: referees maintain impartiality through professional standards despite league compensation. Quality scientific evaluation requires dedicated expertise, adequate training, and fair remuneration. Science deserves better than a system dependent on goodwill and guilt: it needs professional referees now.
    Keywords:  peer review in journals; peer reviewers; reviewers; scientific publishing; workers’ compensation
    DOI:  https://doi.org/10.7759/cureus.87817
  6. JMIR Res Protoc. 2025 Aug 14. 14: e64640
    GAMER Working Group
       BACKGROUND: The integration of artificial intelligence (AI) has revolutionized medical research, offering innovative solutions for data collection, patient engagement, and information dissemination. Powerful generative AI (GenAI) tools and other similar chatbots have emerged, facilitating user interactions with virtual conversational agents. However, the increasing use of GenAI tools in medical research presents challenges, including ethical concerns, data privacy issues, and the potential for generating false content. These issues necessitate standardization of reporting to ensure transparency and scientific rigor.
    OBJECTIVE: The Generative Artificial Intelligence Tools in Medical Research (GAMER) project aims to establish comprehensive, standardized guidelines for reporting the use of GenAI tools in medical research.
    METHODS: The GAMER guidelines are being developed following the methodology recommended by the Enhancing the Quality and Transparency of Health Research (EQUATOR) Network, involving a scoping review and expert Delphi consensus. The scoping review searched PubMed, Web of Science, Embase, CINAHL, PsycINFO, and Google Scholar (for the first 200 results) using keywords like "generative AI" and "medical research" to identify reporting elements in GenAI-related studies. The Delphi process involves 30-50 experts with ≥3 years of experience in AI applications or medical research, selected based on publication records and expertise across disciplines (eg, clinicians and data scientists) and regions (eg, Asia and Europe). A survey using a 7-point scale will establish consensus on checklist items. The testing phase invites authors to apply the GAMER checklist to GenAI-related manuscripts and provide feedback via a questionnaire, while experts assess reliability (κ statistic) and usability (time taken, 7-point Likert scale). The study has been approved by the Ethics Committee of the Institute of Health Data Science at Lanzhou University (HDS-202406-01).
    RESULTS: The GAMER project was launched in July 2023 by the Evidence-Based Medicine Center of Lanzhou University and the WHO Collaborating Centre for Guideline Implementation and Knowledge Translation, and it concluded in July 2024. The scoping review was completed in November 2023. The Delphi process was conducted from October 2023 to April 2024. The testing phase began in March 2025 and is ongoing. The expected outcome of the GAMER project is a reporting checklist accompanied by relevant terminology, examples, and explanations to guide stakeholders in better reporting the use of GenAI tools.
    CONCLUSIONS: GAMER aims to guide researchers, reviewers, and editors in the transparent and scientific application of GenAI tools in medical research. By providing a standardized reporting checklist, GAMER seeks to enhance the clarity, completeness, and integrity of research involving GenAI tools, thereby promoting collaboration, comparability, and cumulative knowledge generation in AI-driven health care technologies.
    INTERNATIONAL REGISTERED REPORT IDENTIFIER (IRRID): DERR1-10.2196/64640.
    Keywords:  ChatGPT; Delphi method; chatbots; generative AI; large language models; reporting guidelines; transparency
    DOI:  https://doi.org/10.2196/64640
  7. Stroke. 2025 Aug 15.
       BACKGROUND: Large language models (LLMs) are artificial intelligence (AI) tools that can generate human expert-like content and be used to accelerate the synthesis of scientific literature, but they can spread misinformation by producing misleading content. This study sought to characterize distinguishing linguistic features in differentiating AI-generated from human-authored scientific text and evaluate the performance of AI detection tools for this task.
    METHODS: We conducted a computational synthesis of 34 essays on cerebrovascular topics (12 generated by large language models [Generative Pre-trained Transformer 4, Generative Pre-trained Transformer 3.5, Llama-2, and Bard] and 22 by human scientists). Each essay was rated as AI-generated or human-authored by up to 38 members of the Stroke editorial board. We compared the collective performance of experts versus GPTZero, a widely used online AI detection tool. We extracted and compared linguistic features spanning syntax (word count, complexity, and so on), semantics (polarity), readability (Flesch scores), grade level (Flesch-Kincaid), and language perplexity (or predictability) to characterize linguistic differences between AI-generated versus human-written content.
    RESULTS: Over 50% of the stroke experts who reviewed the study essays correctly identified 10 (83.3%) of the AI-generated essays as AI, whereas they misclassified 7 (31.8%) of the human-written essays as AI. GPTZero accurately classified 12 (100%) of the AI-generated and 21 (95.5%) of the human-written essays. However, the tool relied on only a few key sentences for classification. Compared with human essays, AI-generated content had a lower word count and complexity and exhibited significantly lower perplexity (median, 15.0 versus 7.2; P<0.001), lower readability scores (Flesch median, 42.1 versus 26.4; P<0.001), and a higher grade level (Flesch-Kincaid median, 13.1 versus 14.8; P=0.006).
    CONCLUSIONS: Large language models generate scientific content with measurable differences from human-written text, but these differences are not consistently identifiable even by human experts and require complex AI detection tools. Given the challenges that experts face in distinguishing AI from human content, technology-assisted tools are needed wherever human provenance is essential to safeguard the integrity of scientific communication.
    Keywords:  artificial intelligence; essay; humans; large language models; natural language processing
    DOI:  https://doi.org/10.1161/STROKEAHA.125.051913
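    The Flesch measures cited in the results above are simple functions of sentence, word, and syllable counts. As a point of reference only (this is not code from the paper), a minimal Python sketch of the Flesch reading ease and Flesch-Kincaid grade level, assuming plain-text input and a naive vowel-group syllable counter; the function names and sample text are illustrative:

      import re

      def count_syllables(word: str) -> int:
          # Naive heuristic: count runs of consecutive vowels; at least one syllable per word.
          return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

      def flesch_scores(text: str) -> tuple[float, float]:
          sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
          words = re.findall(r"[A-Za-z']+", text)
          syllables = sum(count_syllables(w) for w in words)
          wps = len(words) / len(sentences)                    # words per sentence
          spw = syllables / len(words)                         # syllables per word
          reading_ease = 206.835 - 1.015 * wps - 84.6 * spw    # Flesch reading ease
          grade_level = 0.39 * wps + 11.8 * spw - 15.59        # Flesch-Kincaid grade level
          return reading_ease, grade_level

      sample = ("Stroke is a leading cause of disability. "
                "Early reperfusion therapy improves functional outcomes.")
      ease, grade = flesch_scores(sample)
      print(f"Flesch reading ease: {ease:.1f}, Flesch-Kincaid grade: {grade:.1f}")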
  8. Ann Med Surg (Lond). 2025 Aug;87(8): 5353-5355
      Large language models (LLMs) have transformed medical research and scientific publishing by facilitating manuscript preparation, literature synthesis, and editorial processes, yet they pose significant threats to research integrity through the generation of potentially pseudoscientific content. Current AI detection algorithms demonstrate inconsistent reliability, particularly against paraphrased or humanized content, while LLM integration in peer review compromises expert critical evaluation and homogenizes scientific discourse. These systems exhibit documented bias against non-male, non-white researchers, compounding ethical concerns. Heterogeneous editorial policies regarding AI disclosure across medical journals create regulatory gaps that enable undetected misconduct. However, excessive focus on detection over content quality risks establishing a counterproductive "AI phobia" that impedes legitimate technological integration. Preserving research credibility requires standardized disclosure frameworks, enhanced detection algorithms, comprehensive privacy safeguards, and mandatory AI watermarking systems to maintain scientific integrity while accommodating technological advancement in research practices.
    Keywords:  AI regulation; artificial intelligence; authorship; large language models; medical ethics
    DOI:  https://doi.org/10.1097/MS9.0000000000003498
  9. J Nutr Biochem. 2025 Aug 09. pii: S0955-2863(25)00209-8. [Epub ahead of print] 110046
      The use of Artificial Intelligence (AI) for peer review is gaining interest among journals and editorial boards because of the length of time required for the scientific peer review process and the large number of new submissions. The application of AI using a large language model (LLM) like OpenAI's ChatGPT is a valid, rapid means of searching published articles that examine diets in rodent studies. The information gathered can be used to evaluate rodent diets and nutrients during peer review or in developing studies and preparing appropriate experimental designs for future nutrition and biomedical research with rodents. However, it is vital that AI be used only to supplement and assist the human process of peer review and the final decision for publication. The use of ChatGPT has great potential to improve scientific peer review and assist researchers in developing experimental designs for nutrition research. The target of our AI application is to improve understanding of why dietary and ingredient effects impact the interpretation of findings in metabolism, biochemistry, molecular and gene expression, physiology, health, and disease research in rodents. AI applications in validating diet approaches used in rodent studies can complement the human peer review process of scientific journals.
    Keywords:  Artificial intelligence; ChatGPT; Diets; Health; Metabolism; Nutrients; Rodents
    DOI:  https://doi.org/10.1016/j.jnutbio.2025.110046
  10. Clin J Oncol Nurs. 2025 Aug 04. 29(4): 268-269
      CJON has maintained editorial and scholarly integrity, with a commitment to publication ethics, across three decades. CJON has educated readers about the impact of ghostwritten manuscripts, authorship criteria, and embedded a.
    Keywords:  artificial intelligence; oncology nursing; publishing; technology
    DOI:  https://doi.org/10.1188/25.CJON.268-269
  11. ACS Appl Mater Interfaces. 2025 Aug 13.
      The evolution of large language models (LLMs) is reshaping the landscape of scientific writing, enabling the generation of machine-written review papers with minimal human intervention. This paper presents a pipeline for the automated production of scientific survey articles using Retrieval-Augmented Generation (RAG) and modular LLM agents. The pipeline processes user-selected literature or citation network-derived corpora through vectorized content, reference, and figure databases to generate structured, citation-rich reviews. Two distinct strategies are evaluated: one based on manually curated literature and the other on papers selected through citation network analysis. Results demonstrate that increasing the input materials' diversity and quantity improves the generated output's depth and coherence. Although current iterations produce promising drafts, they fail to meet top-tier publication standards, particularly in critical analysis and originality. Results were obtained for a case study on a particular topic, namely, Langmuir and Langmuir-Blodgett films, but the proposed pipeline applies to any user-selected topic. The paper concludes with suggestions of how the system could be enhanced through specialized modules and discusses broader implications for scientific publishing, including ethical considerations, authorship attribution, and the risk of review proliferation. This work represents an opportunity to discuss the advantages and pitfalls introduced by the possibility of using AI assistants to support scientific knowledge synthesis.
    Keywords:  AI; large language models; machine written; scientific review writing
    DOI:  https://doi.org/10.1021/acsami.5c08837
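    The pipeline described above combines a vectorized literature database with LLM agents that draft sections from retrieved passages. As an illustration of the retrieval-augmented generation (RAG) step only, and not the authors' implementation, here is a minimal Python sketch that indexes a toy corpus with TF-IDF, retrieves the passages most relevant to a section query, and assembles them into a prompt for a downstream LLM; the corpus, query, and function names are illustrative assumptions:

      from sklearn.feature_extraction.text import TfidfVectorizer
      from sklearn.metrics.pairwise import cosine_similarity

      corpus = [
          "Langmuir monolayers form at the air-water interface from amphiphilic molecules.",
          "Langmuir-Blodgett deposition transfers monolayers onto solid substrates layer by layer.",
          "Graphene oxide films can be assembled by spin coating from aqueous dispersions.",
      ]

      def retrieve(query: str, documents: list[str], k: int = 2) -> list[str]:
          # TF-IDF stands in for the paper's vector database; rank documents by cosine similarity.
          vectorizer = TfidfVectorizer(stop_words="english")
          doc_vectors = vectorizer.fit_transform(documents)
          query_vector = vectorizer.transform([query])
          scores = cosine_similarity(query_vector, doc_vectors).ravel()
          top = scores.argsort()[::-1][:k]
          return [documents[i] for i in top]

      query = "How are Langmuir-Blodgett films deposited onto substrates?"
      context = retrieve(query, corpus)
      prompt = "Write a cited review paragraph using the sources below:\n" + "\n".join(context)
      print(prompt)  # this prompt would then be passed to an LLM writing agent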
  12. Ir J Med Sci. 2025 Aug 11.
       BACKGROUND: Open access (OA) and subscription-based (SB) journals are key models in academic publishing. OA journals focus on accessibility, while SB journals often emphasize subscription revenue, leading to variations in metrics such as impact factor and citation performance.
    AIM: To compare metrics of OA and SB physical therapy and rehabilitation journals using Scimago Journal & Country Rank (SJCR).
    METHODS: The study analyzed 266 journals, including 92 OA and 174 SB journals. Metrics such as impact factor, H-index, Scimago Journal Rank (SJR), total citations, citations-to-documents ratio, quartile rankings, and article processing charges (APCs) were examined.
    RESULTS: SB journals showed significantly higher H-index values (p < 0.001). Higher APCs in OA journals were strongly correlated with higher impact factors (r = 0.703, p < 0.001), SJR (r = 0.727, p < 0.001), total citations (r = 0.586, p < 0.001), and H-index (r = 0.520, p < 0.001). Quartile rankings indicated better performance for OA journals (p < 0.05).
    CONCLUSION: While SB journals exhibited higher H-index values, OA journals performed similarly in other metrics. The correlation between higher APCs and improved performance in OA journals underscores the role of financial investment. Both models are crucial for disseminating research in physical therapy and rehabilitation, highlighting the importance of editorial standards and rigorous peer review.
    Keywords:  Academic publishing; Journal metrics; Open access journal; Physical therapy; Rehabilitation; Subscription-based journal
    DOI:  https://doi.org/10.1007/s11845-025-04038-8
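    The article processing charge correlations reported above (e.g., r = 0.703 with impact factor) are standard Pearson coefficients computed over journal-level metrics. As a reminder of how such a coefficient and its p-value are obtained, a minimal Python sketch; the values below are illustrative toy numbers, not the SJCR data analyzed in the study:

      from scipy.stats import pearsonr

      apcs = [500, 1200, 1800, 2500, 3200, 4000]        # article processing charges (USD), toy values
      impact_factors = [0.8, 1.4, 2.1, 2.9, 3.3, 4.5]   # matching journal impact factors, toy values

      r, p = pearsonr(apcs, impact_factors)             # Pearson correlation and two-sided p-value
      print(f"r = {r:.3f}, p = {p:.4f}")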
  13. JMA J. 2025 Jul 15. 8(3): 1018-1019
      
    Keywords:  duplicate publication; manuscript; paper; retraction; transparency
    DOI:  https://doi.org/10.31662/jmaj.2025-0242
  14. Postgrad Med J. 2025 Aug 12. pii: qgaf126. [Epub ahead of print]
      The title and abstract are the most visible and frequently accessed components of a scientific article, often determining whether a manuscript is read, cited or even considered for peer review. Alongside the title and abstract, the inclusion of well-chosen keywords significantly enhances searchability and visibility in academic databases. A well-crafted title encapsulates the essence of the study, ensuring clarity, conciseness and searchability, while an effective abstract provides a succinct yet comprehensive summary of the research. Despite their critical role in scientific communication, many authors struggle with optimizing these elements. This article reviews best practices in scientific communication, with a practical orientation. We discuss how structured frameworks (Preferred Reporting Items for Systematic Reviews and Meta-Analyses, Consolidated Standards of Reporting Trials, Strengthening the Reporting of Observational Studies in Epidemiology) enhance rigor and visibility. New authors often overlook journal-readership alignment, targeting an appropriate scope, and the "So what?" test for relevance: critical questions all researchers should pose before submission. Despite being grounded in conventional best practices, this work aims to bridge persistent gaps in real-world manuscript preparation. By addressing common pitfalls and integrating examples and models, this article helps researchers improve both the impact and acceptance potential of their manuscripts through more strategic use of titles, keywords and abstracts.
    Keywords:  education and training; general medicine; medical education & training
    DOI:  https://doi.org/10.1093/postmj/qgaf126
  15. Acad Med. 2025 Aug 13.
       ABSTRACT: Various resources exist for conducting program evaluations, but these resources do not specify features of a scholarly program evaluation. In this commentary, the authors use Glassick's criteria for scholarship (clear goals, adequate preparation, appropriate methods, significant results, effective presentation, and reflective critique) to define what counts as a scholarly program evaluation. Then they use a hypothetical scenario common in medical education to describe how an educator could design and share program evaluation findings for a Research in Medical Education Research Report submission, an Innovation Report submission to Academic Medicine, and a MedEdPortal submission.
    DOI:  https://doi.org/10.1097/ACM.0000000000006192
  16. PLoS Comput Biol. 2025 Aug;21(8): e1013283
      Many-author non-empirical papers include recommendations or consensus statements, catalogs of ideas, roadmaps for future research, calls to action, or "how to" articles. These papers have great potential to change the conversation or address unmet needs within research communities. Large, diverse authorship teams can create valuable resources that no individual co-author could create independently. Achieving these goals, however, requires a very different approach than researchers typically use to prepare papers with fewer authors. In the process we describe, a small team of lead writers typically leads the content generation and writing processes. Many co-authors collaborate to create content and provide feedback throughout the writing process. Lead writers face many challenges, including defining the content and structure of the paper, coordinating complex logistics, preparing themselves and co-authors for a unique writing experience, and managing high-volume feedback. Here, we outline ten simple rules for leading a many-author non-empirical paper. These rules guide readers through the content generation and writing processes and highlight practical solutions to common challenges. While these rules were developed by preparing non-empirical papers with at least 30 authors, some rules may apply to research papers or non-empirical papers with fewer authors. Lead writers can also use our companion paper, which shares ten simple rules for being a co-author on a many-author non-empirical paper, to prepare co-authors for an efficient and effective collaborative process.
    DOI:  https://doi.org/10.1371/journal.pcbi.1013283
  17. Med Hypothesis Discov Innov Ophthalmol. 2025;14(2): 40-49
       Background: Systematic reviews and meta-analyses (SRMAs) are central to evidence-based ophthalmology, influencing clinical guidelines and treatment decisions. However, the rapid increase in SRMA publications has exposed serious ethical concerns, including selective reporting, duplicate publication, plagiarism, authorship misconduct, and undeclared conflicts of interest. Despite established frameworks such as Preferred Reporting Items for Systematic Reviews and Meta-analyses (PRISMA), International Prospective Register of Systematic Reviews (PROSPERO), and International Committee of Medical Journal Editors (ICMJE), ethical compliance remains inconsistent, undermining the credibility of synthesized evidence. We aimed to examine the ethical landscape of SRMAs with a particular focus on ophthalmology, highlighting common pitfalls, evaluating current guidelines, and providing practical recommendations to ensure that these reviews are conducted and reported with the highest ethical standards, ultimately safeguarding the integrity of the evidence base that underpins clinical eye care.
    Methods: A structured literature search was conducted in PubMed, Scopus, Web of Science, and Google Scholar through May 2025 using combinations of the terms "systematic review," "meta-analysis," "ethics," "research integrity," and "ophthalmology." Relevant guidelines, peer-reviewed studies, and editorials were synthesized to identify ethical pitfalls and propose best practice solutions.
    Results: We illustrate these challenges with ophthalmology-specific examples and highlight the downstream impact of unethical SRMAs on clinical practice and public trust. We also propose actionable recommendations for researchers, editors, and institutions to enhance the ethical quality of SRMAs, including improved training in research integrity, stricter enforcement of reporting guidelines, and increased editorial oversight. By addressing these ethical dimensions, the ophthalmic community can ensure that SRMAs not only meet methodological benchmarks but also reflect the core values of scientific honesty, accountability, and patient-centeredness. Approximately one-third of ophthalmology SRMAs fail to assess bias or comply with PRISMA guidelines. Industry-sponsored reviews have shown a tendency to favor commercially linked interventions, raising objectivity concerns. Key ethical concerns include: lack of protocol registration, selective inclusion of studies, inclusion of retracted or flawed trials, duplicate or plagiarized data, and authorship and disclosure misconduct.
    Conclusions: To protect the integrity of ophthalmic evidence synthesis, SRMAs must adhere to the highest ethical standards. Researchers should commit to transparent, methodologically rigorous, and ethically sound practices. Journals and institutions must enforce compliance, provide oversight, and support education in research integrity. Field-specific adaptations of reporting standards may further support ethical clarity. Ultimately, ethical SRMAs are critical to preserving trust, guiding responsible care, and fulfilling their intended role as trustworthy instruments in advancing evidence-based ophthalmology.
    Keywords:  ethics in publishing; meta-analysis as topic; mixed treatment meta-analysis; ophthalmology; publishing; research misconduct; systematic review; systematic reviews as a topic
    DOI:  https://doi.org/10.51329/mehdiophthal1522
  18. Curr Res Physiol. 2025;8: 100157
      Most biomedical science students arriving at UK universities have very limited experience of writing scientifically and little insight into the process involved in producing a peer-reviewed academic publication. To support them, we created an interactive, online tutorial to improve their scientific writing by looking at aspects including the construction of a logical argument and the use of figures and referencing, as well as providing an overview of the publication process. The tutorial was delivered in an in-person teaching workshop at the University of Bristol and offered as an optional, online-only activity at the University of Cambridge, in both cases to first-year physiology students. In Bristol, 68% of 152 students and, in Cambridge, 67% of 561 students engaged with the interactive tutorial. These students were invited to complete before-and-after surveys, with questions relating to their confidence in and understanding of the topics covered. Feedback from students in both institutions was overwhelmingly positive, with a statistically significant increase in reported confidence and understanding following completion of the tutorial. We propose the use of similar interactive tutorials as a simple, low-investment way in which training in scientific writing can be included in undergraduate science curricula, to help students prepare for what is expected in coursework, exam essays and their postgraduate careers.
    DOI:  https://doi.org/10.1016/j.crphys.2025.100157
  19. Adv Health Sci Educ Theory Pract. 2025 Aug 11.
      This article is the third in a series exploring the research supervision relationship. In the first two articles, the authors presented an introduction to the mentor-mentee relationship and presented some of the trickier conceptual issues involved in supporting academic writing. In this article, the authors turn to the more practical question of concrete ways by which mentors can effectively provide useful feedback to mentees about their written work.
    DOI:  https://doi.org/10.1007/s10459-025-10467-y
  20. Learn Publ. 2024 Jan;37(1): 22-29
      Funders, publishers, scholarly societies, universities, and other stakeholders need to be able to track the impact of programs and policies designed to advance data sharing and reuse. With the launch of the NIH data management and sharing policy in 2023, establishing a pre-policy baseline of sharing and reuse activity is critical for the biological and biomedical community. Toward this goal, we tested the utility of mentions of research resources, databases, and repositories (RDRs) as a proxy measurement of data sharing and reuse. We captured and processed text from the Methods sections of open access biological and biomedical research articles published in 2020 and 2021 and made available in PubMed Central. We used natural language processing to identify text strings to measure RDR mentions. In this article, we demonstrate our methodology, provide normalized baseline data sharing and reuse activity in this community, and highlight actions authors and publishers can take to encourage data sharing and reuse practices.
    DOI:  https://doi.org/10.1002/leap.1586
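    The core measurement described above is text matching over Methods-section prose to detect mentions of research resources, databases, and repositories (RDRs). As a rough illustration of that idea only (the study's actual NLP pipeline, RDR dictionary, and corpus are not reproduced here), a minimal Python sketch with an illustrative name list and sample text:

      import re
      from collections import Counter

      # Illustrative RDR names; the study used a much larger controlled list.
      RDR_NAMES = ["GenBank", "Gene Expression Omnibus", "GEO", "Dryad", "Zenodo", "figshare"]

      def count_rdr_mentions(methods_text: str) -> Counter:
          # Count case-insensitive whole-phrase matches of each RDR name.
          counts = Counter()
          for name in RDR_NAMES:
              pattern = r"\b" + re.escape(name) + r"\b"
              counts[name] = len(re.findall(pattern, methods_text, flags=re.IGNORECASE))
          return counts

      sample = ("RNA-seq reads were deposited in the Gene Expression Omnibus (GEO). "
                "Phenotype tables are available on Zenodo.")
      print(count_rdr_mentions(sample))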
  21. Integr Med Res. 2025 Sep;14(3): 101199
       Background: Detailed intervention reporting is essential to interpretation, replication, and translation of music-based interventions (MBIs). The 2011 Reporting Guidelines for Music-Based Interventions were developed to improve transparency and reporting quality of published research; however, problems with reporting quality persist.
    Methods: The purpose of this study was to update and validate the 2011 reporting guidelines using a rigorous Delphi approach that involved an interdisciplinary group of MBI researchers, and to develop an explanation and elaboration guidance statement to support dissemination and usage. We followed the methodological framework for developing reporting guidelines recommended by the EQUATOR Network and guidance recommendations for developing health research reporting guidelines. Our three-stage process included: (1) an initial field scan, (2) a consensus process using Delphi surveys (two rounds) and Expert Panel meetings, and (3) development and dissemination of an explanation and elaboration document.
    Results: First-round survey findings revealed that the original checklist items were capturing content that investigators deemed essential to MBI reporting; however, they also revealed problems with item wording and terminology. Subsequent Expert Panel meetings and the second-round survey centered on reaching consensus on item language. The revised RG-MBI checklist has a total of 12 items that pertain to eight components of MBIs, including name, theory/scientific rationale, content, interventionist, individual/group, setting, delivery schedule, and treatment fidelity.
    Conclusion: We recommend that authors, journal editors, and reviewers use the RG-MBI guidelines, in conjunction with methods-based guidelines (e.g., CONSORT) to accelerate and improve the scientific rigor of MBI research.
    Keywords:  Interventions; Music; Music therapy; Reporting Quality; Reporting guidelines
    DOI:  https://doi.org/10.1016/j.imr.2025.101199
  22. Adv Health Sci Educ Theory Pract. 2025 Aug 13.
      In this editorial the editor considers the form of scholarly conversations and commentaries, their qualities and limitations, and the implications for scholarly communication in health professions education.
    DOI:  https://doi.org/10.1007/s10459-025-10466-z
  23. J Arthroplasty. 2025 Sep. pii: S0883-5403(25)00903-9. [Epub ahead of print] 40(9): 2215-2216
      
    DOI:  https://doi.org/10.1016/j.arth.2025.07.039
  24. BJPsych Open. 2025 Aug 13. 11(5): e179
      In celebrating the 10th anniversary of BJPsych Open, this editorial review serves as a personal reflection and an overview of the birth, growth, expansion and excellence of the Journal as well as an introduction to the BJPsych Open 10th Anniversary Thematic Series. Specific emphasis is placed on changes and advances in productivity, the editorial board, publishing, thematic series, topical articles and focus on ethics. Further, articles of importance to our stakeholders are noted (top cited/downloaded, highlighted articles, articles of the month). The remit and vision for BJPsych Open remains unchanged: a general psychiatric journal with high-quality, methodologically rigorous and relevant publications, with relevance to the advancement of clinical care, patient outcomes, the scientific literature, research and policy. The Journal's continued quality, growth and international recognition speak to its place in scientific literature, to the RCPsych mission to disseminate knowledge and to its bright future. As Editor-in-Chief, I note the debt of gratitude owed to an exemplary multidisciplinary team and the honour and privilege of serving in this role.
    Keywords:  BJPsych Open anniversary; academic psychiatry; editorial board; ethics; methodological rigour and quality; metrics; publishing and productivity; remit, vision and future; reviews/research articles/editorials/commentaries; thematic series
    DOI:  https://doi.org/10.1192/bjo.2025.10773