bims-skolko Biomed News
on Scholarly communication
Issue of 2026-02-01
sixteen papers selected by
Thomas Krichel, Open Library Society



  1. Nature. 2026 Jan;649(8099): 1099-1101
      
    Keywords:  History; Language; Machine learning; Technology
    DOI:  https://doi.org/10.1038/d41586-026-00245-0
  2. Integr Med Res. 2026 Jun;15(2): 101288
      This article provides an overview of spin in general and of the current state of spin research in acupuncture studies. We introduce the definition and historical background of spin and survey its types with illustrative examples. Spin, the use of reporting strategies that emphasise benefits or distract from non-significant results, may still be unfamiliar to many researchers, despite its critical impact on physicians and decision-makers through distorted interpretation and misrepresented findings. We discuss not only the current state of spin research in the acupuncture field, but also the specific types of spin commonly observed in acupuncture publications. Finally, we issue a call to action for researchers, journal editors, reviewers, and decision-makers to prevent spin in research articles. Reducing spin requires multifaceted efforts, including strict adherence to reporting guidelines, rigorous editorial and peer review processes, and increased awareness and training among researchers. By adopting a multidimensional approach, acupuncture researchers can become more alert to misleading reporting practices and ultimately improve the overall quality of reporting.
    Keywords:  Acupuncture; Narrative bias; Reporting bias; Spin
    DOI:  https://doi.org/10.1016/j.imr.2025.101288
  3. Science. 2026 Jan 29;391(6784): 432-433
      First-time posters to arXiv now need an endorsement from an established author.
    DOI:  https://doi.org/10.1126/science.aef8896
  4. Nature. 2026 Jan 29.
      
    Keywords:  Machine learning; Publishing; Scientific community; Technology
    DOI:  https://doi.org/10.1038/d41586-026-00229-0
  5. J Dent Sci. 2026 Jan;21(1): 679-680
      
    Keywords:  Academic writing; Artificial intelligence; Fabricated references; Large language models
    DOI:  https://doi.org/10.1016/j.jds.2025.10.024
  6. Tomography. 2025 Dec 22;12(1): 1. [Epub ahead of print]
      The scientific publishing crisis represents a complex problem, mainly stemming from the "publish or perish" culture that prioritizes quantity over quality, which leads to the proliferation of low-quality research manuscripts and research misconduct, including data fabrication (making up data or results), falsification (manipulating research materials, equipment, or processes, or changing or omitting data or results such that the research is not accurately represented in the research record), or even plagiarism (the appropriation of another person's ideas, processes, results, or words without giving appropriate credit) [...].
    DOI:  https://doi.org/10.3390/tomography12010001
  7. Nature. 2026 Jan 29.
      
    Keywords:  Peer review; Publishing; Technology
    DOI:  https://doi.org/10.1038/d41586-025-04146-6
  8. Knee Surg Sports Traumatol Arthrosc. 2026 Jan 28.
    ESSKA Artificial Intelligence Working Group
      Research communication is undergoing a paradigm shift. The traditional linear manuscript, foundational for centuries, increasingly reveals limitations in the digital era, struggling with information overload, delayed dissemination, and rigid formats. We propose a transition towards 'living publications': interactive, artificial intelligence (AI)-enhanced platforms that evolve with new evidence. Unlike static papers, these systems utilise large language models (LLMs) and vector databases to interpret context, synthesise real-time findings, and map interdisciplinary connections. This shift promises to democratise knowledge, accelerate validation, and enable dynamic evidence synthesis. However, it necessitates robust frameworks for verification, 'version of record' tracking, and peer review to maintain rigor. Successfully navigating this transition requires balancing technological innovation with preservation of academic values, ensuring that increased speed and accessibility enhance rather than diminish the quality of scientific discourse. As interactive platforms mature, they may potentially reshape how knowledge is shared, discovered, and applied, ideally accelerating scientific advancement through more dynamic, accessible, and collaborative research communication. LEVEL OF EVIDENCE: NA.
    Keywords:  artificial intelligence; digital publishing; knowledge dissemination; natural language processing; scientific communication
    DOI:  https://doi.org/10.1002/ksa.70286
  9. Sex Health. 2026 Jan 27. pii: SH25277. [Epub ahead of print]
    DOI:  https://doi.org/10.1071/SH25277
  10. Front Psychol. 2025;16: 1642718
      As a core link in academic quality control, peer review has been proven to improve the quality of papers in publishing, but this effect is usually examined only at the micro level of individual papers; the macro-level impact of peer review on authors' academic development is not yet clear. This study adopts a longitudinal case design to analyze the publication process of two international journal papers written by the same author 14 years apart. Combining review comments, author responses, and interview reflections, the study systematically examines the development trajectory along three dimensions: paper quality, social skills, and academic participation. The study found that peer review interaction drives deep transformation through a triple mechanism: the internalization of paper standards sparks a shift from mechanical imitation to intentional construction, the practice of responding to comments promotes a shift from unidirectional acceptance to negotiated revision, and the change in academic participation undergoes a shift from external regulation to intrinsic motivation. The results provide an empirical basis for academic writing teaching and a dynamic analysis for the study of occluded genres.
    Keywords:  academic growth; academic writing; novice and expert authors; occluded genres; peer review interaction
    DOI:  https://doi.org/10.3389/fpsyg.2025.1642718
  11. Postgrad Med J. 2026 Jan 27. pii: qgag005. [Epub ahead of print]
       BACKGROUND: Peer review is central to maintaining scientific quality and helps editors make decisions. However, the volume of scientific publications continues to rise, placing pressure on the peer review system. With the rise of generative AI, its role in supporting peer review is gaining attention. This study aims to compare human-written and AI-generated peer review reports.
    METHODS: We analysed 398 peer review reports linked to 119 research articles published in BMJ Open in 2024. Publicly available reports and manuscripts were included. Editorials, corrections, and protocols were excluded. AI reports were generated using ChatGPT. All reports were anonymized and assessed by two independent reviewers. We conducted a hybrid thematic analysis. Frequencies of themes were calculated and compared by reviewer type. For quantitative comparison, we used the Mann-Whitney U test to assess differences in review quality scores and Fisher's exact test to compare the distribution of themes. All analyses were conducted using R software.
    RESULTS: Human reviewers gave more detailed and diverse comments, addressing deeper issues such as interpretation, originality, and applicability. AI reviews covered more sections but focused on routine or structural elements, and slightly outperformed human reviews in format-related domains. Co-occurrence analysis showed that human reviews linked diverse themes, while AI comments were structurally clustered. The Shannon Index confirmed that human reviews were more thematically diverse.
    CONCLUSIONS: AI can support peer review by screening for basic errors. However, it lacks insight, critical judgment, and contextual awareness. Human input remains essential for meaningful review. Review-specific AI tools that preserve confidentiality are needed for future integration.
    Keywords:  generative AI; peer review; review quality; reviewer comparison; scholarly publishing; thematic analysis
    DOI:  https://doi.org/10.1093/postmj/qgag005
  12. Rev Esp Salud Publica. 2026 Jan 28;100: e202601005. [Epub ahead of print]
       OBJECTIVE: The identification of cancer stem cells (CSCs), involved in therapy resistance, tumor progression, and metastasis, has led to exponential growth in cancer research. The aim of this study was to analyze the knowledge, beliefs, and attitudes of Spanish research and clinical staff working on CSC regarding the sharing of research data.
    METHODS: Semi-structured interviews were conducted with a sample consisting of the key actors in CSC research in Spain, based on three variables: type of research (basic, clinical, translational), research experience (junior with less than ten years of experience, and senior with more than ten years), and type of workplace (public and private). The analysis procedure used was based on Grounded Theory. Following a criterion of theoretical saturation, the final sample included sixteen interviewees.
    RESULTS: The interviewees reported having limited knowledge and identified a lack of available training on these topics. They agreed that an obligation to publish raw research data in journals would eventually create a habit, provided certain challenges, such as the cost of Article Processing Charges (APCs), are addressed. They viewed making raw data available to other teams positively, although they identified significant competitiveness in the scientific field, leading to some reluctance to share.
    CONCLUSIONS: The study on data sharing in CSC reveals that knowledge is limited, and beliefs highlight some perceived threats, although attitudes toward sharing research data are generally positive.
    Keywords:  Information dissemination; Neoplastic stem cells; Open Access publishing; Spain
  13. Front Res Metr Anal. 2025;10: 1707881
      Open Science aims to make research more transparent, reusable, and socially valuable, yet adoption may lag where assessment emphasizes journal prestige over openness. This study examines how research-assessment incentives align with Open Science practices in Ecuador and identifies policy levers associated with change. Using a mixed-methods design, we combine a review of national and institutional policies, a bibliometric analysis of Ecuador-affiliated outputs from 2013-2023, and a nationwide researcher survey (n ≈ 418), analyzed with multilevel logistic models, multinomial logit, and negative binomial regressions. Scientific output increased markedly, peaking at 5,070 articles in 2023; 66.7% were open access, predominantly via gold routes. In 2021, 59.3% of citations were self-citations. Despite high familiarity with Open Science (85%), implementation was limited: 22% reported depositing data and 35% publishing via diamond or gold routes. Greater reliance on journal-centric metrics was associated with lower odds of adopting open practices (odds ratio ≈ 0.72), while comprehensive institutional support (repositories with deposit mandates, research-data services, and licensing guidance) was associated with higher odds (odds ratio ≈ 1.65). Sensitivity to article processing charges was associated with shifts toward green and diamond routes. Findings suggest that socio-institutional factors dominate barriers and that aligning rules, services, and responsible assessment may help make openness the default, improving quality, equity, and reuse.
    Keywords:  Ecuador; Open Science; open access; research assessment; science policy
    DOI:  https://doi.org/10.3389/frma.2025.1707881
  14. BMC Biol. 2026 Jan 29.
      Classifying and ranking academic authorship lists is complex in practice, despite existing frameworks, and can lead to conflict. We propose Dragon Kill Points, adapted from multiplayer gaming, to track contributions throughout a project's lifecycle. Dragon Kill Points is based on five principles: granularity, responsibility, equity, autonomy, and transparency (GREAT). These ensure detailed task records, clear criteria, equitable rules, contributor flexibility, and shared documentation. By applying Dragon Kill Points, teams can reduce disputes, promote inclusivity, and recognise all contributions, including middle authorship. This scalable system offers a practical solution for managing authorship in collaborative research.
    Keywords:  Accountability; Coauthorship; Collaborative; Contributorship; Credit; Publishing
    DOI:  https://doi.org/10.1186/s12915-026-02521-x