bims-skolko Biomed News
on Scholarly communication
Issue of 2026-05-10
twenty-two papers selected by
Thomas Krichel, Open Library Society



  1. BMJ Open. 2026 May 08. 16(5): e104128
      Open Science aims to fight misinformation and improve trust in scientific research; it encourages the reliability and accessibility of evidence, reduces inequalities through the democratisation of scientific knowledge and focuses scientific endeavours on issues of societal significance. As a multisponsor collaboration committed to driving positive change, Open Pharma has a multifaceted vision for scientific research publications funded by pharmaceutical companies ('company research publications') that aligns strongly with Open Science tenets. This new vision statement outlines our forward-looking principles for company research publications, both for short-term attainment ('immediate') and long-term commitment ('ultimate'). Together, the principles provide a framework for positive collective action by all stakeholders involved in the development and dissemination of peer-reviewed company research publications. Underpinned by our central commitment to transparency for company research publications, we outline goals for: universal access to these publications; provision of peer-reviewed plain language summaries of the publications to aid comprehension among non-specialist readers; leveraging author and institutional metadata to advance transparency, discoverability and research impact; working towards FAIR (Findability, Accessibility, Interoperability, and Reuse) data principles through cross-sector consensus and action; and disclosure of patient involvement in research and its reporting to support transparency and encourage a research ecosystem attuned to patient centricity. We call on all stakeholders to realise the Open Pharma Vision and achieve an open and trusted future for company research publications that will ultimately advance patient care and improve global health.
    Keywords:  Clinical Trial; ETHICS (see Medical Ethics); Patient Participation
    DOI:  https://doi.org/10.1136/bmjopen-2025-104128
  2. Trials. 2026 May 08.
       BACKGROUND: Ideally, evidence-based decisions about healthcare interventions should be informed by access to up-to-date information from all relevant randomised controlled trials (RCTs), making it essential that the reports are published soon after study completion. However, studies have consistently shown that between 25 and 50% of clinical trials remain unpublished or are only published many years after completion. The WHO has noted a slow but steady increase in the number of RCTs since the mid-2000s, particularly in sub-Saharan Africa (SSA). However, the extent of publication bias of SSA RCTs remains unknown. Therefore, our study objectives were to assess (1) the proportion of completed RCTs from SSA that have been published and (2) the time from completion to publication.
    METHODS: This cross-sectional study, consisting of a retrospective analysis of registered SSA RCTs, aims to report the proportion of completed and terminated SSA RCTs registered in ClinicalTrials.gov and the Pan African Clinical Trials Registry (PACTR) and their time to publication.
    RESULTS: Our search yielded 7896 records, of which 3026 RCTs met our inclusion criteria for analysis. We identified journal publications for 1983 (65.5%) RCTs. The overall median time to publication from the primary completion date was 34.2 months (95% CI: 32.4 to 35.5).
    CONCLUSIONS: Overall, we found a substantial proportion (34.5%) of unpublished SSA RCTs. Moreover, the median time to publication from primary completion was 34.2 months. The persistence of publication bias threatens the integrity of evidence-based healthcare practice, particularly given that consumers depend on peer-reviewed journal publications as conventional and trusted sources to stay informed. Our findings underscore the importance of continued research to test and implement preventative strategies to mitigate publication bias.
    Keywords:  Publication bias; Randomised controlled trials; Research waste
    DOI:  https://doi.org/10.1186/s13063-026-09753-w
  3. J Am Geriatr Soc. 2026 May 05.
      
    Keywords:  artificial intelligence disclosure; augmented intelligence; author accountability
    DOI:  https://doi.org/10.1111/jgs.70495
  4. Science. 2026 May 07. 392(6798): 569
      Advanced AI systems are shown to make up data and "p-hack" their results.
    DOI:  https://doi.org/10.1126/science.aei6154
  5. Nature. 2026 May 05.
      
    Keywords:  Computer science; Peer review; Technology
    DOI:  https://doi.org/10.1038/d41586-025-03504-8
  6. Med Teach. 2026 May 04. 1-3
      Health professions education (HPE) research has evolved within an academic ecosystem. This paper argues that generative artificial intelligence (AI) and autonomous AI agents are exposing the underbelly of HPE. Two responses have emerged: disclosure frameworks that report AI use, and formation frameworks that assess researchers' readiness to critically examine evolving knowledge systems being reshaped by AI. The field has emphasized the former while neglecting the latter. We argue that HPE requires more than disclosure policies. Training should assess researcher formation, including the ability to question evidence hierarchies, peer review, technological disruptions, and publication systems, rather than relying on research outputs alone for scholarly development.
    Keywords:  Artificial Intelligence; Attention; Critical Thinking; Evidence-Based Medicine; Medical Education; Research Training
    DOI:  https://doi.org/10.1080/0142159X.2026.2667279
  7. Nature. 2026 May;653(8113): 7
      
    Keywords:  Funding; Machine learning; Policy; Publishing
    DOI:  https://doi.org/10.1038/d41586-026-01422-x
  8. J Pain Symptom Manage. 2026 May 01. pii: S0885-3924(26)00763-3. [Epub ahead of print]
      
    Keywords:  editorial; generative artificial intelligence; palliative care; peer review; psychological burnout
    DOI:  https://doi.org/10.1016/j.jpainsymman.2026.04.614
  9. Recenti Prog Med. 2026 May;117(5): 216-218
      We base this commentary on a direct experience: ten months after the initiation of peer review for one of our manuscripts submitted to a first-quartile journal, the process remains ongoing. This situation prompted us to reflect more broadly on a growing systemic problem in scientific publishing: reviewer fatigue and the increasing difficulty editors face in identifying qualified, willing reviewers. While peer review remains the cornerstone of scientific quality assurance, we believe its sustainability is increasingly threatened by an inherently imbalanced system that strongly incentivizes manuscript submission while offering little formal recognition for reviewing activity. In our view, reviewer fatigue is a multifactorial phenomenon. A key driver is the unpaid nature of peer review, which is typically performed during personal time in the context of rising clinical and administrative workloads, particularly in anesthesiology. This burden is compounded by the proliferation of scientific journals and the exponential growth in manuscript submissions, a trend further accelerated by the widespread adoption of artificial intelligence tools that lower barriers to manuscript production. Increasing subspecialization further narrows the pool of eligible reviewers, concentrating the reviewing burden on a limited number of already overextended experts. We also consider insufficient editorial triage an important and often underappreciated contributor. When manuscripts with fundamental methodological or conceptual flaws are routinely sent for external review, reviewer motivation declines and editorial timelines are unnecessarily prolonged. Additional factors - including limited training in peer review, lack of feedback, and absence of academic recognition - further erode the perceived value of reviewing. 
We discuss several potential strategies, including formal recognition systems, targeted use of AI for preliminary manuscript screening, and stricter desk rejection policies. In conclusion, we view reviewer fatigue as a systemic threat to the integrity and efficiency of peer review that demands urgent, balanced, and concrete action by the scholarly community.
    DOI:  https://doi.org/10.1701/4698.47104
  10. J ISAKOS. 2026 Apr 30. pii: S2059-7754(26)00062-3. [Epub ahead of print] 101126
      The integration of Artificial Intelligence (AI) into medicine has progressed from discriminative models to Generative AI (GenAI), which can synthesize novel content. For orthopaedic surgeons, scientific publication remains a vital marker of academic success but is often constrained by clinical workload. This review proposes a structured, practical framework to help orthopaedists effectively harness AI tools, transitioning from opaque, "black box" generation to grounded, verifiable research assistance through Retrieval-Augmented Generation (RAG). A PubMed search was conducted to explore the application of GenAI in the context of orthopaedic scientific research. An interactive review with experts in GenAI was also conducted, from which the proposed structure was developed. From this synthesis, a three-phase workflow is proposed: (1) Evidence selection using semantic discovery systems to identify and map relevant literature beyond keyword matching; (2) Data extraction and synthesis employing RAG-based systems to anchor AI responses to verified PDF sources, thereby minimizing hallucinations; and (3) Drafting and refining using Large Language Models (LLMs) for structured composition, linguistic clarity, and iterative manuscript improvement. The workflow integrates platform features to enhance efficiency, accuracy, and accessibility in orthopaedic research. When applied within a controlled, evidence-grounded environment, these systems can automate literature synthesis, expedite data extraction, and assist with scientific writing, while preserving authorial intent and accountability. However, challenges remain. Risks include algorithmic bias, "hallucinations", privacy concerns, and ethical issues related to authorship. Despite these limitations, AI represents a paradigm shift in orthopaedic scholarship, functioning as a cognitive exoskeleton that augments rather than replaces human expertise. 
With vigilant human oversight and adherence to journal ethics, orthopaedic surgeons can leverage AI to enhance research productivity, reproducibility, and quality while upholding the highest standards of scientific integrity.
    Keywords:  Artificial intelligence; Generative AI; Large language models; Orthopaedic research; Retrieval-Augmented Generation; Scientific writing
    DOI:  https://doi.org/10.1016/j.jisako.2026.101126
  11. Innovation (Camb). 2026 May 04. 7(5): 101270
    Innovation editorial team, editorial-office@the-innovation.org
      
    DOI:  https://doi.org/10.1016/j.xinn.2026.101270
  12. Orthopadie (Heidelb). 2026 May 07.
      With the emergence of generative AI models such as ChatGPT, a new phase of scientific work is also beginning in orthopedics and trauma surgery. As a large language model (LLM) based on deep learning, ChatGPT offers a wide range of possible applications, especially in the creation, translation, and optimization of scientific texts. It supports authors in generating ideas and in linguistic elaboration, and can even be used to check for plagiarism; it is a particularly valuable tool for non-native speakers. However, despite all the opportunities, its use involves considerable risks: studies show a high rate of incorrect or invented references. In addition, journals are sometimes flooded with submissions, as easier text generation enables the mass production of manuscripts. The scientific discourse therefore calls for clear rules on the use of LLMs, particularly with regard to transparency, authorship, and the integrity of scientific work.
    Keywords:  Artificial intelligence; Authorship; Chatbot; Ethics; Publishing
    DOI:  https://doi.org/10.1007/s00132-026-04833-w
  13. Chirurgia (Bucur). 2026 Apr. pii: 7. [Epub ahead of print] 121(2): 182-187
      Background: Despite its importance, the ability to produce high-quality scientific manuscripts is often perceived as the domain of academics and researchers. Traditional medical writing courses often focus on the critical appraisal of existing articles, which may not allow participants to develop a manuscript of their own. Methods: A prospective, non-randomized intervention study was conducted over three annual medical writing courses (2022-2024). Each course included manuscript drafting, peer collaboration, expert review, and a final workshop. Results: All 18 participants contributed to a clinical manuscript, resulting in three peer-reviewed publications following minor revisions, with a mean time to publication of 85 days. Questionnaire responses (10-point Likert scale) showed high satisfaction: overall course rating 9.0 ± 0.8, skill improvement 8.4 ± 1.2, and expert benefit 7.8 ± 1.5. Seventy-two percent of participants reported significant improvement in their writing skills. All participants expressed willingness to attend again and to recommend the course. Conclusions: Hands-on medical writing courses are an effective, replicable model for improving medical writing skills among clinicians.
    Keywords:  education; medical writing; research; surgery
    DOI:  https://doi.org/10.21614/chirurgia.3283
  14. Croat Med J. 2026 Apr 30. 67(2): 66-71
       AIM: To assess whether ChatGPT can autonomously generate and select "human touch" elements (anecdotes, beliefs, and old sayings) and produce writing comparable to human-authored manuscripts.
    METHODS: We composed a disagreement letter and then tasked ChatGPT-5 with writing a new disagreement letter. The model was instructed to select suitable anecdotes from a candidate list and to generate new ones. The two letters were compared, and eight experienced researchers independently assessed whether each was appealing.
    RESULTS: ChatGPT was able to select appropriate elements from the candidate list and, importantly, to generate new ones. The human-generated letter was found more appealing by five of the eight reviewers, and the ChatGPT-generated letter by three. None of the researchers reported finding the use of human touch inappropriate or disruptive.
    CONCLUSION: Although only a single case was studied, these findings may help inform reflection on the use of LLMs in medical writing.
  15. Fam Med. 2026 Feb;58(2): 160-163
      Qualitative methods draw from diverse traditions, from social science to nursing. Heterogeneity in approach and discipline makes qualitative methodologies a vibrant form of scientific inquiry. At the same time, the range of knowledge, familiarity, and comfort with qualitative methods varies. The authors of this piece are social scientists with extensive qualitative writing experience, as well as experience running writing groups, serving as peer reviewers, and serving as a journal editor. This brief article presents useful strategies and actionable tips for developing qualitative articles for peer review and publication. It includes qualitative writing recommendations organized by (a) the common structure of qualitative articles, (b) the writing process, and (c) the end product and the peer review process. The authors' goal is to provide accessible pathways for navigating qualitative article writing and publication for interdisciplinary audiences.
    DOI:  https://doi.org/10.22454/FamMed.2026.972963
  16. J Am Heart Assoc. 2026 May 06. e048584
      Open science practices, including data sharing, open access, and prospective study registration, have been increasingly recognized to improve transparency, reproducibility, and accessibility in research, yet their uptake and implementation by cardiovascular research funders is unclear. We conducted a scoping review of publicly available policies, guidance, and grant instructions from 12 members of the Global Cardiovascular Research Funders Forum to assess expectations, monitoring, and support for open science in cardiovascular research. We included 105 documents from 9 funders; no relevant documents were identified for 3 funders. Data sharing (67%) and open access (58%) were the most common mandates by funders, followed by prospective registration (50%) and patient and public involvement (50%). Requirements for other practices, including code sharing, use of reporting guidelines, preprints, and open peer review, were uncommon. Monitoring compliance was inconsistent, with many funders not specifying any mechanisms, even for widely required practices. Where available, support was most often provided through financial assistance, guidance, or infrastructure, particularly for open access, data sharing, and patient and public involvement. These findings suggest that while cardiovascular funders are engaging with open science, policies remain uneven in scope, monitoring, and support. Coordinated efforts to strengthen and harmonize open science expectations, particularly around compliance monitoring and researcher training, will be essential to realizing the full potential of open science in cardiovascular research.
    Keywords:  data management; health policy; open access publishing; open science; patient participation; policy; reproducibility of results
    DOI:  https://doi.org/10.1161/JAHA.125.048584
  17. Zhonghua Yi Xue Za Zhi. 2026 May 12. 106(17): 1661-1666
      Peer review at academic journals suffers from uneven quality and inconsistent comments, in part because it lacks support as a discipline in its own right. Drawing on long-term experience in peer review and professional journal publishing, the authors put forward suggestions for establishing a discipline of reviewology. Following the requirements for establishing a discipline, this paper proposes a basic framework for the theoretical, methodological, and technical systems of reviewology, offered for academic discussion. The framework is based mainly on the universal rules of academic journal peer review, supplemented by special cases from medical research. The paper also discusses the background of reviewology's emergence, the problems and challenges it faces, its social significance and academic value, and related issues concerning artificial intelligence.
    DOI:  https://doi.org/10.3760/cma.j.cn112137-20251203-03166