bims-skolko Biomed News
on Scholarly communication
Issue of 2026-01-18
forty-nine papers selected by
Thomas Krichel, Open Library Society



  1. Nature. 2026 Jan;649(8097): 530
      
    Keywords:  Media; Scientific community; Society
    DOI:  https://doi.org/10.1038/d41586-026-00075-0
  2. PLoS Negl Trop Dis. 2026 Jan 14. 20(1): e0013914
       BACKGROUND: Aedes aegypti mosquitoes transmit multiple arboviruses, including dengue, Zika, chikungunya, and yellow fever, resulting in a large global disease burden. Vector control remains the key strategy to prevent transmission due to the absence of widely available vaccines or treatments. Many studies evaluate control approaches, yet only a subset are published in peer-reviewed journals. One potential contributor to selective reporting, or publication bias, could be a conflict of interest (COI), defined as employment by a for-profit company conducting the trial, or a financial interest tied to the tool's intellectual property.
    METHODOLOGY/PRINCIPAL FINDINGS: We conducted a systematic literature review of Ae. aegypti control trials from 2010 to 2022 to test the hypothesis that published trials with author-declared COI report a higher average level of Ae. aegypti suppression than publications whose authors declare no COI. Inclusion criteria required entomological outcomes (adult abundance or immature indices) with baseline and post-intervention data for both treated and untreated areas. Studies limited to laboratory, semi-field, or virus-only outcomes were excluded. We identified 51 publications that met the inclusion criteria. The studies with declared COI reported a 56.7% reduction in Ae. aegypti population, significantly higher than the 34.5% reduction in studies declaring no COI. The 51 studies were published in 26 different journals and eight (30.7%) did not have standard publishing policies that include the reporting of authors' COI statements in the published articles.
    CONCLUSIONS/SIGNIFICANCE: Our findings suggest that author-reported COI is associated with higher mosquito population suppression. This association may reflect the use of more effective interventions in COI-affiliated studies or publication bias. We also observed inconsistencies in COI policies and the display of COI statements across journals, underscoring the need for standardized and transparent reporting.
    DOI:  https://doi.org/10.1371/journal.pntd.0013914
  3. Infection. 2026 Jan 17.
      Irreproducible and fraudulent research is an enormous problem that decreases the public's trust in biomedical science. Unfortunately, infectious disease (ID) research has not escaped the reproducibility crisis and investigator maleficence. This article describes the scope of the problem, explores some of the reasons why investigators commit research fraud, and discusses the surprising lack of oversight by relevant stakeholders including the National Institutes of Health (NIH), scientific journals, and academic institutions. Finally, a novel solution for tackling fraud in ID research is proposed.
    DOI:  https://doi.org/10.1007/s15010-026-02730-0
  4. Am J Phys Med Rehabil. 2025 Nov 28.
       OBJECTIVES: Systematic reviews (SRs) are crucial for evidence-based medicine, but authors may add spin. This study investigated the prevalence of spin in abstracts and main texts of SRs published in rehabilitation journals and explored associated factors.
    DESIGN: This meta-epidemiological study was a secondary analysis of 200 SRs from rehabilitation journals (2020-2022), focusing on pairwise meta-analyses of health interventions. Two independent reviewers extracted data. Spin was defined as reporting that highlights a beneficial effect greater than that shown by the results. We classified spin into misleading reporting, misleading interpretation, inappropriate extrapolation, and multiple spins based on analysis of the main text and abstract.
    RESULTS: Spin was present in 154 (77.0%) SRs in the main text and 151 (75.5%) in the abstract. Misleading interpretation was the most common category (86.4% in main text, 85.4% in abstract). PRISMA 2020 use was associated with reduced spin (odds ratio [OR] 0.27 [95% CI: 0.13-0.57] for main text; OR 0.39 [95% CI: 0.20-0.76] for abstract).
    CONCLUSION: There is a high prevalence of spin in SRs published in rehabilitation journals. To avoid spin in the SRs of rehabilitation journals, the authors must adhere to guidelines, such as PRISMA 2020.
    Keywords:  Bias; Guideline Adherence; Meta-Research; Rehabilitation
    DOI:  https://doi.org/10.1097/PHM.0000000000002860
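The odds ratios reported above compress a 2x2 table into a single effect estimate. As a minimal sketch of that arithmetic (with made-up counts, not the study's data), the OR and its Wald-type 95% CI can be computed as:

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """OR and 95% CI for a 2x2 table:
    a = exposed with outcome,   b = exposed without outcome,
    c = unexposed with outcome, d = unexposed without outcome.
    """
    or_ = (a * d) / (b * c)
    se = math.sqrt(1/a + 1/b + 1/c + 1/d)  # SE of log(OR)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

# Hypothetical counts: spin in 20/50 PRISMA-adherent SRs vs 60/100 others.
print(odds_ratio_ci(20, 30, 60, 40))
```

The interval is built on the log scale because log(OR) is approximately normal; the formula fails when any cell count is zero.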
  5. JACC Cardiovasc Interv. 2026 Jan 12. pii: S1936-8798(25)02728-1. [Epub ahead of print]19(1): 142-143
      
    DOI:  https://doi.org/10.1016/j.jcin.2025.10.048
  6. Account Res. 2026 Jan 14. 2616765
       BACKGROUND: Failure to declare a conflict of interest (COI) may bias research outcomes and undermine the integrity of readers' decision-making. This study aims to examine common practices in health sciences where COIs were inadequately disclosed.
    METHODS: We identified and analyzed papers with post-publication COI issues by searching PubMed/MEDLINE, Web of Science and Retraction Watch Databases.
    RESULTS: A total of 328 medical papers were identified with COI issues. Among them, 128 (39.0%) articles were retracted, 53 (16.2%) received expressions of concern, and 147 (44.8%) were corrected. Most actions (224, 68.2%) were initiated by editors or publishers. Despite these issues, papers reached a median of 4 post-publication citations. Of 189 papers failing to declare financial COIs, 33.8% were retracted, while 61.2% received corrections or expressions of concern.
    CONCLUSIONS: Journals should adopt more detailed guidelines for COI disclosures and standardize retraction notices to improve transparency. There is an urgent need for robust mechanisms to address potential COI issues effectively and to encourage authors to disclose COIs transparently. Furthermore, to mitigate the risk of retractions, expressions of editorial concern, or corrections, these disclosure protocols must remain enforceable even post-publication.
    Keywords:  Conflict of interest (COI); correction; expression of concern; medical papers; retraction
    DOI:  https://doi.org/10.1080/08989621.2026.2616765
  7. Account Res. 2026 Jan 14. 2614062
       PURPOSE/SIGNIFICANCE: This study investigates the awareness, perceptions, and responses of library and information science (LIS) researchers toward retracted papers, aiming to inform the improvement of research integrity governance.
    METHOD/PROCESS: A questionnaire survey of 280 LIS researchers examined their sources of retraction information, understanding of causes, perceived consequences, and attitudes toward evaluation. The influence of academic background, publication volume, and discipline was also explored.
    RESULT/CONCLUSION: Findings indicate generally low retraction awareness and a primary reliance on informal channels. Critically, the analysis reveals several nuanced patterns: (1) Significant disciplinary differences exist in perceiving retraction causes; (2) Opinions are sharply divided on including retraction records in research evaluation, reflecting concerns about uniform responsibility attribution; (3) A considerable proportion of researchers mistakenly view retraction's impact as reversible. These attitudes are strongly associated with educational background and publication experience. In response, this paper proposes five key recommendations: establishing authoritative retraction platforms, improving journal retraction mechanisms, differentiating retraction types in evaluation, strengthening integrity education, and building a coordinated governance framework. These measures contribute to fostering a more transparent, fair, and sustainable scholarly correction ecosystem.
    Keywords:  Retraction; evaluation; institutional governance; research integrity; scientific research
    DOI:  https://doi.org/10.1080/08989621.2026.2614062
  8. Nature. 2026 Jan 14.
      
    Keywords:  Computer science; Machine learning; Publishing; Scientific community
    DOI:  https://doi.org/10.1038/d41586-025-04092-3
  9. J Craniomaxillofac Surg. 2026 Jan 13. pii: S1010-5182(26)00025-9. [Epub ahead of print]54(3): 104468
       OBJECTIVE: As generative AI tools like ChatGPT-4 gain traction in academic writing, questions arise regarding their credibility, scientific depth, and detectability. This study aimed to evaluate whether experienced oral and maxillofacial surgeons (OMFS) can reliably distinguish between AI- and human-authored manuscripts, and to compare both in terms of coherence, scientific rigor, citation accuracy, and overall quality.
    MATERIALS AND METHODS: Three core OMFS topics-impacted third molar surgery, cyst enucleation, and TMJ arthroscopy-were selected. For each topic, two manuscripts (∼2500 words each) were independently written: one by ChatGPT-4 and one by senior OMFS clinicians. Twenty board-certified OMFS reviewers, blinded to authorship, evaluated these manuscripts using a validated 25-item questionnaire assessing five domains: readability, scientific depth, reference accuracy, writing quality, and methodological rigor. Reviewers also attempted to identify the authorship source. Citation accuracy was verified through manual PubMed cross-checking. Statistical analysis included paired t-tests, chi-square tests, and ANOVA.
    RESULTS: Human-authored manuscripts outperformed AI-generated ones in scientific depth (4.5 ± 0.4 vs. 3.9 ± 0.6, p < 0.01), reference accuracy (4.9 ± 0.1 vs. 4.4 ± 0.7, p < 0.001), and overall writing quality (4.7 ± 0.4 vs. 4.1 ± 0.5, p < 0.01). Coherence and readability scores were comparable (human: 4.8 ± 0.4; AI: 4.6 ± 0.5; p = 0.07). Reviewers correctly identified manuscript authorship only 54% of the time (p = 0.68), suggesting AI-generated texts are often indistinguishable from human ones in surface fluency.
    CONCLUSION: ChatGPT-4 is capable of producing readable and structurally sound OMFS manuscripts. However, deficiencies in scientific reasoning and citation fidelity underscore the need for expert oversight. As AI tools integrate into academic workflows, transparent disclosure and editorial safeguards are imperative to uphold scientific integrity.
    Keywords:  Artificial intelligence; ChatGPT; Double-blind evaluation; Generative AI; Oral and maxillofacial surgery; Scientific writing
    DOI:  https://doi.org/10.1016/j.jcms.2026.104468
  10. eNeuro. 2026 Jan;pii: ENEURO.0470-25.2025. [Epub ahead of print]13(1):
      
    DOI:  https://doi.org/10.1523/ENEURO.0470-25.2025
  11. Nature. 2026 Jan 14.
    Developments in artificial intelligence (AI) have accelerated scientific discovery [1]. Alongside recent AI-oriented Nobel prizes [2-9], these trends establish the role of AI tools in science [10]. This advancement raises questions about the influence of AI tools on scientists and science as a whole, and highlights a potential conflict between individual and collective benefits [11]. To evaluate these questions, we used a pretrained language model to identify AI-augmented research, with an F1-score of 0.875 in validation against expert-labelled data. Using a dataset of 41.3 million research papers across the natural sciences and covering distinct eras of AI, here we show an accelerated adoption of AI tools among scientists and consistent professional advantages associated with AI usage, but a collective narrowing of scientific focus. Scientists who engage in AI-augmented research publish 3.02 times more papers, receive 4.84 times more citations and become research project leaders 1.37 years earlier than those who do not. By contrast, AI adoption shrinks the collective volume of scientific topics studied by 4.63% and decreases scientists' engagement with one another by 22%. As a consequence, adoption of AI in science presents what seems to be a paradox: an expansion of individual scientists' impact but a contraction in collective science's reach, as AI-augmented work moves collectively towards areas richest in data. With reduced follow-on engagement, AI tools seem to automate established fields rather than explore new ones, highlighting a tension between personal advancement and collective scientific progress.
    DOI:  https://doi.org/10.1038/s41586-025-09922-y
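The F1-score used to validate the classifier above is the harmonic mean of precision and recall. A minimal illustration, with hypothetical confusion-matrix counts chosen only to reproduce the same 0.875 value, not taken from the paper:

```python
def f1_score(tp, fp, fn):
    """F1 from true positives, false positives, and false negatives."""
    precision = tp / (tp + fp)  # fraction of flagged items that are correct
    recall = tp / (tp + fn)     # fraction of true items that were flagged
    return 2 * precision * recall / (precision + recall)

# e.g. 70 correct detections, 10 false alarms, 10 misses -> F1 = 0.875
print(f1_score(70, 10, 10))
```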
  12. J Thorac Oncol. 2026 Jan;pii: S1556-0864(25)02885-0. [Epub ahead of print]21(1): 11-13
      
    DOI:  https://doi.org/10.1016/j.jtho.2025.10.015
  13. J Korean Med Sci. 2026 Jan 12. 41(2): e24
       BACKGROUND: The integration of artificial intelligence, specifically large language models, into editorial processes, is gaining interest due to its potential to streamline manuscript assessments, particularly regarding ethical and transparency reporting in public health journals. This study aims to evaluate the capability and limitations of ChatGPT-4.0 in accurately detecting missing ethical and transparency statements in research articles published in high-ranked (Q1) versus low-ranked (Q4) public health journals.
    METHODS: Articles from top-tier (Q1) and low-tier (Q4) public health journals were analyzed using ChatGPT-4.0 for the presence of essential ethical components, including ethics approval, informed consent, animal ethics, conflicts of interest, funding notes, and open data sharing statements. Performance metrics such as sensitivity, recall, and precision were calculated.
    RESULTS: ChatGPT exhibited high sensitivity and recall across all evaluated components, accurately identifying all missing ethics statements. However, precision varied significantly between categories, with notably high precision for data availability statements (0.96) and significantly lower precision for funding statements (0.16). A comparative analysis between Q1 and Q4 journals showed a marked increase in missing ethics statements in the Q4 group, particularly for open data sharing statements (4 vs. 50 cases), ethics approval (2 vs. 5 cases), and informed consent statements (3 vs. 8 cases).
    CONCLUSION: ChatGPT-4.0 in preliminary screening shows considerable promise, providing high accuracy in identifying missing ethics statements. However, limitations regarding precision highlight the necessity for additional human checks. A balanced integration of artificial intelligence and human judgment is recommended to enhance editorial checks and maintain ethical standards in public health publishing.
    Keywords:  Artificial Intelligence; ChatGPT-4.0; Editorial Policies; Ethics; Natural Language Processing; Public Health
    DOI:  https://doi.org/10.3346/jkms.2026.41.e24
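The gap between ChatGPT's high recall and low precision for funding statements reflects ordinary confusion-matrix arithmetic: a screener that flags liberally misses nothing but raises many false alarms. A small sketch with hypothetical counts (not the study's data):

```python
def precision_recall(tp, fp, fn):
    """Precision = TP/(TP+FP); recall (sensitivity) = TP/(TP+FN)."""
    return tp / (tp + fp), tp / (tp + fn)

# Flagging generously catches all 4 true gaps (recall 1.0) but adds
# 21 false alarms, dragging precision down to 0.16.
print(precision_recall(4, 21, 0))
```

This is why the abstract recommends a human check downstream of the AI screen: perfect recall says nothing about how many flags are worth acting on.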
  14. Nature. 2026 Jan;649(8097): 555
      
    Keywords:  Machine learning; Research management; Scientific community
    DOI:  https://doi.org/10.1038/d41586-026-00121-x
  15. Knee Surg Sports Traumatol Arthrosc. 2026 Jan 15.
      The integration of artificial intelligence (AI), the rise of mega-journals, and the manipulation of impact factors present challenges to scientific integrity. These trends threaten the core principles of objectivity, reproducibility, and transparency. This editorial highlights two categories of threats: (1) external pressures, such as AI misuse and metric-driven publishing models, and (2) internal systemic flaws, including the 'publish or perish' culture and methodological fragility. Mega-journals, characterized by high-volume publishing and broad interdisciplinary scopes, improve accessibility and accelerate dissemination. However, the emphasis on publication volume might weaken the rigour of peer review. To navigate these challenges, the authors propose a balanced approach that harnesses innovation without compromising scientific integrity. Proposed solutions include mandating AI transparency through frameworks like Consolidated Standards of Reporting Trials-Artificial Intelligence, and redefining impact metrics to emphasize reproducibility, mentorship, and societal impact alongside citations. Scientific journals should promote career opportunities less on publication quantity and more on quality. Global cooperation, via initiatives like the San Francisco Declaration on Research Assessment and the Committee on Publication Ethics, is essential to standardize ethics and address resource disparities. This editorial proposes solutions for researchers, journals, and policymakers to realign academic incentives and uphold the ethical foundation of science. By fostering transparency, accountability, and equity, the scientific community can preserve its ethical foundations while embracing transformative tools, ultimately advancing knowledge and serving society. LEVEL OF EVIDENCE: Level V.
    Keywords:  artificial intelligence; bibliometrics; ethics in publishing; peer reviews; periodicals as topic
    DOI:  https://doi.org/10.1002/ksa.12717
  16. Dermatology. 2026 Jan 13. 1-7
       INTRODUCTION: The rapid integration of generative artificial intelligence (GenAI) into academic research has prompted ethical and regulatory concerns, particularly regarding its responsible use in scholarly publishing. Despite emerging recommendations from international organizations such as the Committee on Publication Ethics (COPE) and the International Committee of Medical Journal Editors (ICMJE), journal-specific guidance remains inconsistent.
    METHODS: This study evaluated the presence and characteristics of GenAI-related policies across 92 dermatology journals indexed in the 2024 Journal Citation Reports. Four reviewers independently assessed author instructions and publisher policies, collecting journal metrics and applying logistic regression to explore associations with guideline adoption.
    RESULTS: GenAI-specific guidance was found in 82.6% of journals, with 60.5% linking to publisher-level policies. Most journals (90.8%) prohibited GenAI authorship and required author accountability, yet only 2.6% referenced ICMJE guidance. Disclosure of GenAI use was mandated by 98.7%, although only a minority required specification of tool version (28.0%) or manufacturer (17.3%). GenAI image generation was addressed in 55.3% of policies, with ChatGPT mentioned by 46.1% of journals. COPE membership and use of COPE AI guidance were significantly associated with the presence of journal-level GenAI policies. While journals with GenAI guidance exhibited higher impact and citation metrics in univariable analysis, no predictors remained significant in multivariable models.
    CONCLUSION: These findings highlight broad yet uneven adoption of GenAI policies in dermatology publishing. Gaps in specificity, transparency, and alignment with international standards may pose risks to research integrity, emphasizing the need for clearer, standardized, and field-specific editorial guidance on GenAI use.
    DOI:  https://doi.org/10.1159/000550366
  17. PRiMER. 2025 ;9: 59
       Introduction: Securing peer reviewers for scholarly manuscripts is essential to journal operations but has become increasingly challenging. Previous PRiMER data suggested that reviewer responsiveness has been declining, and we examined whether this decline persisted over time.
    Methods: We conducted a retrospective secondary analysis of reviewer invitation outcomes for all research manuscripts submitted to PRiMER from 2017 to July 31, 2025. Only invitations for research briefs (n=2,951) across 459 manuscripts were analyzed. Data were extracted from the ScholarOne editorial database. χ2 tests compared acceptance rates over time. Fixed-effect binary logistic regression controlled for individual reviewer behavior to assess linear trends in invitation acceptance and review completion rates.
    Results: Invitation acceptance peaked at 56.14% in 2020, then declined to 35.71% in 2024 and 38.58% in 2025, the lowest levels since journal inception. Logistic regression revealed a significant negative annual trend (OR=0.656, P<.001). Review completion rates declined from 89.31% in 2021 to 76.19% in 2025 (OR=0.707, P<.001). Late review rates ranged from 11.97% (2021) to 18.09% (2018) with no significant time trend.
    Conclusion: Reviewer responsiveness to PRiMER invitations has continued to decline, both in accepting invitations and completing reviews. If similar patterns exist across journals, innovative strategies to recruit, engage, and incentivize reviewers will be necessary to sustain peer review in its current form.
    DOI:  https://doi.org/10.22454/PRiMER.2025.831615
  18. Brain Commun. 2026 ;8(1): fcaf498
      Our editors discuss the importance of fair and constructive peer review while extending thanks to all those who have contributed their expertise in reviewing manuscripts for Brain Communications.
    DOI:  https://doi.org/10.1093/braincomms/fcaf498
  19. J Diabetes Sci Technol. 2026 Jan 14. 19322968251391819
      Sharing research code in an open access version-controlled repository offers significant benefits for both science as a whole and for individual researchers. In this article, we focus on this practice, which is fully aligned with the NIH's Gold Standard Science (GSS) program as well as FAIR (findable, accessible, interoperable, reusable) and TRUST (transparency, responsibility, user focus, sustainability, technology) principles. Gold Standard Science supports open science by emphasizing transparency, reproducibility, and the use of best practices that enable others to verify and extend research. Pairing a research article's cited data snapshot with a versioned, environment-specific code release, deposited in a companion code repository, ensures that, upon submission to a medical journal, readers and reviewers can directly verify results. An executable and updatable companion code repository complements, rather than replaces, established research data repositories. When code underlying medical research results is made openly available, then other scientists can inspect, run, and validate analyses. These activities enhance reproducibility, which is a core aim of GSS. Shared code also facilitates collaborative innovation by allowing researchers to extend the utility of the code to new datasets and applications. For researchers, code sharing can increase visibility, credibility, and citation impact. Demonstrating transparency through shared executable and updatable code builds trust with journal readers, peer reviewers, funders, and peers. Shared code in an open access repository signals adherence to high standards of scientific integrity and attracts opportunities for collaboration. A researcher who shares code receives recognition as a leader in reproducible, trustworthy research consistent with NIH's GSS principles.
    Keywords:  Gold Standard Science; code; data; diabetes; repository; reproducibility
    DOI:  https://doi.org/10.1177/19322968251391819
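The article's pairing of a cited data snapshot with a versioned code release can be approximated with a small provenance record; the file name and release tag below are hypothetical, for illustration only:

```python
import hashlib
import json

def sha256_of(path):
    """Hex SHA-256 digest of a file, read in chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

def provenance_record(data_path, code_release):
    """Pair a data snapshot's checksum with the code release used on it."""
    return {
        "data_file": data_path,
        "data_sha256": sha256_of(data_path),
        "code_release": code_release,
    }

# Hypothetical snapshot and release tag.
with open("snapshot.csv", "w") as f:
    f.write("id,value\n1,0.5\n")
print(json.dumps(provenance_record("snapshot.csv", "v1.0.0"), indent=2))
```

Depositing such a record alongside the code lets a reviewer verify that the data they downloaded is byte-identical to what the analysis actually consumed.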
  20. J Stomatol Oral Maxillofac Surg. 2026 Jan 13. pii: S2468-7855(26)00009-1. [Epub ahead of print] 102717
       OBJECTIVES: English dominates scientific communication, yet non-native speakers face significant barriers in publishing. Artificial intelligence (AI) translation tools offer a potential solution, but their efficacy requires systematic evaluation. The aim of this paper is to evaluate the performance of generative AI tools with a focus on their suitability for non-native English-speaking researchers.
    MATERIALS AND METHODS: Thirty non-English texts (150-300 words) across technical, academic, and descriptive genres were translated by six AI tools (ChatGPT-4.0, Claude 3.7, Copilot, Gemini 2.0, DeepSeek-V3, Perplexity) using standardized prompts. Translations were assessed via Grammarly® for correctness, clarity, engagement, and delivery. Statistical analysis (ANOVA, Kruskal-Wallis) compared performance.
    RESULTS: DeepSeek achieved the highest overall score (mean=92.9, p<0.001), significantly outperforming Claude (p=0.006) and Copilot (p=0.048), while matching Gemini (p=0.989). Gemini ranked second but frequently declined revisions, citing "already perfect" texts. Correctness varied significantly (p=0.0078), with Copilot excelling, while DeepSeek led in clarity, engagement, and delivery (p<0.01).
    CONCLUSION: DeepSeek emerged as the most robust translator, with Gemini as a close second. AI translation can help reduce barriers but requires transparency and ongoing refinement to balance efficiency with academic rigor.
    Keywords:  Artificial Intelligence; Maxillofacial Injuries; Medical Writing; Orthognathic Surgery; Scientific Writing
    DOI:  https://doi.org/10.1016/j.jormas.2026.102717
  21. J Stomatol Oral Maxillofac Surg. 2026 Jan 08. pii: S2468-7855(26)00005-4. [Epub ahead of print] 102713
       BACKGROUND: Case reports play a critical role in Oral and Maxillofacial Surgery (OMFS) by documenting rare presentations, unexpected complications, and complex intraoperative events. However, contemporary publication patterns suggest a bias toward reporting "ideal" outcomes, potentially limiting opportunities for complication-based learning.
    METHODS: A narrative review of bibliometric studies and large-scale clinical reports published between 2008 and 2025 was conducted using journal archives and database searches (PubMed, Scopus). The review synthesised trends in case report publication across major OMFS journals and compared these with real-world complication rates reported in high-volume clinical studies.
    RESULTS: Case report publications in OMFS journals have progressively declined over the past decade. In BJOMS, complication- or novelty-focused short communications decreased from 52% in 2008-2009 to 35% in 2010-2011. In contrast, large clinical cohorts document substantial adverse event rates, ranging from 3.6% in third molar surgery to 43.7% in orthognathic surgery. This discrepancy highlights a growing gap between clinical reality and published literature.
    CONCLUSION: The underrepresentation of complication-focused case reports skews perceptions of OMFS practice and limits experiential learning. Transparent, structured reporting of complications can enhance patient safety, strengthen surgical education, and better align published evidence with real-world outcomes. Journals should adopt policies that encourage and normalize the publication of clinically meaningful adverse events.
    Keywords:  Case Reports; Complications; Negative Outcomes; OMFS; Publication Bias
    DOI:  https://doi.org/10.1016/j.jormas.2026.102713
  22. ALTEX. 2026 ;43(1): 3-23
    Reporting standards have proliferated across biomedicine, yet incomplete methods reporting remains routine - less because the community doubts the value of transparency than because compliance checking is tedious, inconsistently enforced, and poorly integrated into everyday writing and review. As a sequel to the Good In Vitro Reporting Standards (GIVReSt) argument that better reporting is essential infrastructure, this article explores a pragmatic next step: translating standards from static checklists into interactive, always-on guidance. We describe the development of three specialized "compliance copilots" built as custom GPT-based assistants - one aligned with the emerging GIVReSt, one reflecting the established ToxRTool reliability framework, and one mapped to ARRIVE for animal studies. The tools are designed to point to specific text evidence, flag missing essential information, and provide actionable suggestions while the manuscript is being written. Early benchmarking against expert assessments suggests that this approach can approximate human judgments for many checklist items in a fraction of the time and with high consistency. We also highlight why "strict" versus "lenient" interpretations matter, and why these systems should be framed as decision-support, not decision-makers. The central claim is cultural, not technical: artificial intelligence (AI) will matter most when it makes rigorous reporting the path of least resistance, turning standards into routine practice rather than aspirational add-ons.
    Keywords:  large language models; reporting standards; reproducibility
    DOI:  https://doi.org/10.14573/altex.2601011
  23. Dermatologie (Heidelb). 2026 Jan 12.
      This CME training course focuses on various aspects of how clinical case reports (with or without images) can be published in compliance with data protection regulations-without compromising scientific integrity. In this article, practical guidance for medical specialists who wish to document, publish, or use cases for educational purposes in journals, lectures, or online formats is offered.
    Keywords:  Health data; Image data; Informed consent; Publication; Scientific integrity
    DOI:  https://doi.org/10.1007/s00105-025-05618-6
  24. Am Surg. 2026 Jan 12. 31348261416440
      Single case reports remain a common form of scholarly submission, particularly from residents, students, and clinician-educators. Many are thoughtfully written and describe interesting clinical problems, yet most do not reach publication. The reason is rarely a lack of effort or clinical insight. More often, it is a problem of scope. Over the past several years, The American Surgeon has worked with authors to transform narrowly focused case reports into case-based reviews that contribute meaningfully to the literature. When this succeeds, a clinical observation shifts from description to synthesis, and from recounting an individual event to offering guidance that informs practice and is cited by others. Making this transition requires a deliberate change in framing. A practical approach begins with defining a broader clinical question and grounding it in current literature. A structured review-particularly of articles and reviews published in the last three to five years-helps focus the discussion on contemporary standards and areas of debate. When the literature includes multiple reports or collected series, an updated systematic review may be a more appropriate strategy. Successful reviews are organized around how surgeons approach clinical problems and make decisions. They address a knowledge gap not yet resolved by existing literature and use an illustrative case to anchor a broader discussion. A case report describes one patient. A publishable paper must speak to many surgeons.
    Keywords:  case based review; case report; manuscript reframing; surgical publishing
    DOI:  https://doi.org/10.1177/00031348261416440
  25. Acad Radiol. 2026 Jan 12. pii: S1076-6332(25)01171-7. [Epub ahead of print]
       RATIONALE AND OBJECTIVES: To determine the publication rates and characteristics of oral scientific presentations from the European Society of Gastrointestinal and Abdominal Radiology (ESGAR) meetings held between 2019 and 2022, and to identify factors associated with subsequent publication.
    MATERIALS AND METHODS: This retrospective observational study analyzed 407 oral abstracts from ESGAR meetings (2019-2022). Abstract data were categorized by country, subspecialty, study design, and collaboration type. Publication searches were performed in PubMed. Publication time, journal name, journal impact factor (JIF), and citation counts were recorded. Statistical analyses included chi-square, logistic regression and Kruskal-Wallis tests.
    RESULTS: Of 407 oral presentations, 215 (52.8%) were subsequently published in PubMed-indexed journals, significantly higher than the rate from ESGAR 2000-2001 (39.5%) (P < .001). Median publication time was 11.3 months. Country of origin was significantly associated with publication outcome (P < .001). No significant differences were found in publication rates among subspecialties (P = .577). Prospective studies had higher JIF than retrospective studies (P = .004). International collaborations had higher JIF than local collaborations (P = .027).
    CONCLUSION: More than half of ESGAR oral presentations achieved publication within 3 years, showing a clear increase compared with earlier meetings and reflecting enhanced research productivity and dissemination in gastrointestinal and abdominal radiology.
    Keywords:  Abdominal imaging; Gastrointestinal radiology; Publication outcome; Research productivity; Scientific congress
    DOI:  https://doi.org/10.1016/j.acra.2025.12.036
  26. Turk Patoloji Derg. 2026 ;42(1): 1-6
       OBJECTIVE: Despite the legal requirement to complete a thesis during residency training in Türkiye, the extent to which these theses are translated into high-quality scientific publications remains unclear. Disciplinary differences in research culture, resource availability, and clinical workload may influence these outcomes.
    MATERIAL AND METHODS: This cross-sectional study analyzed 1245 open access residency theses completed between 2018 and 2022 in the fields of pathology (n=344), endocrinology (n=525), and urology (n=376). Theses were retrieved from the National Thesis Center of the Council of Higher Education. Their publication status was identified via searches in PubMed and Google Scholar. Data collected included journal index status (SCI-E, ESCI, ULAKBIM), Journal Impact Factor™ (JIF), citation count, and time to publication. Statistical comparisons were made using chi-squared and Kruskal-Wallis tests with p < 0.05 considered significant.
    RESULTS: Among the 1245 residency theses analyzed, 344 (27.6%) were in pathology, 525 (42.2%) in endocrinology and metabolic diseases, and 376 (30.2%) in urology. The conversion rate to publication significantly differed across specialties (p = 0.0002): 86 of 344 pathology theses (25.0%), 115 of 525 endocrinology theses (21.9%), and 139 of 376 urology theses (37.0%) were published. Urology theses had the highest representation in SCI-E indexed journals (72.7%), while endocrinology demonstrated the highest mean Journal Impact Factor (2.3; p < 0.0001). The average number of citations per publication was also highest in urology (4.5), although this difference was not statistically significant (p = 0.0673). Median time to publication ranged from 2.3 to 2.7 years, with no significant difference between specialties (p = 0.1287). Differences in the distribution of Q2, Q3, and Q4 journal publications were statistically significant between specialties.
    CONCLUSION: Endocrinology had the highest number of theses, whereas urology had the highest publication rate and number of citations per publication.
    DOI:  https://doi.org/10.5146/tjpath.2026.14783
  27. Lancet Planet Health. 2026 Jan 12. pii: S2542-5196(25)00291-8. [Epub ahead of print] 101412
      Randomised clinical trials (RCTs) can contribute substantially to carbon dioxide emissions. In this Viewpoint, we explored the extent to which primary publications of RCTs reported environmental sustainability considerations in their study design (eg, resource use and travel movement) and outcomes (eg, the environmental impact of interventions under study). 252 RCTs published between Oct 17, 2022, and Oct 17, 2023, in five prominent medical journals, The Lancet, The New England Journal of Medicine, Nature Medicine, The British Medical Journal, and PLOS Medicine, were included. Sustainability-related statements were reported in 29 (12%) of 252 RCTs, but only four (1·6%) of 252 explicitly referenced sustainability considerations in their study design or outcome. Thus, environmental sustainability aspects of the study design decisions or outcomes collected seem to be rarely reported in primary publications of RCTs. The findings of this Viewpoint highlight the need for strategies for improved awareness of reporting of environmental sustainability considerations in the context of RCTs.
    DOI:  https://doi.org/10.1016/j.lanplh.2025.101412
  28. Nature. 2026 Jan;649(8097): 527
      
    Keywords:  Authorship; Publishing; Research management
    DOI:  https://doi.org/10.1038/d41586-026-00006-z
  29. Arch Argent Pediatr. 2026 Jan 15. e202510924
      The transmission of knowledge is a fundamental aspect of scientific work. In addition to being published, papers are presented at scientific meetings (conferences, etc.). Given the absence of dedicated content on this activity in most undergraduate and graduate curricula, this article offers recommendations for effective oral presentation, including preparation, the use of visual aids with presentation software, and the delivery of the paper. Recommendations are provided for the preliminary organization of the structure, the narrative sequence, the content organization, the calculation of presentation time, the use of legible, clear slides appropriate to the presentation time, and how to address the audience. All these recommendations can contribute to a better presentation, which is the purpose of this article.
    Keywords:  conferences as a topic; dissemination of information; health communication; medical education; scientific communication and dissemination
    DOI:  https://doi.org/10.5546/aap.2025-10924.eng
  30. Radiol Technol. 2026 Jan-Feb;97(3): 176-181
      
  31. Clin Ter. 2026 Jan-Feb;177(1): 12-22
       Introduction: Predatory journals threaten academic integrity, highlighting the need to educate young researchers on identifying and avoiding them. This study aims to develop and validate an educational video to raise awareness of predatory journals and equip future scholars with essential publishing skills.
    Methodology: Between August and November 2024, two Delphi processes were carried out. The first involved validation of the video script, incorporating feedback from 10 experts in academia and publishing. The second focused on refining the audiovisual components with input from two graphic and communication designers. Consensus was established at a threshold of 100% agreement. Additionally, 15 young researchers participated to ensure the video was tailored to the target audience.
    Results: The final video was produced following a three-round Delphi process to validate the script and a two-round process to finalize the audiovisual features. Validation by the target audience contributed to enhancing the video's quality and ensuring it was well-tailored to the end users. The final video has a duration of 10 minutes and 42 seconds.
    Conclusion: This study developed and validated an educational video to raise awareness of predatory journals. Refined through a rigorous Delphi process and audience feedback, the video meets high standards of clarity and usability, offering a valuable tool for young researchers. Future evaluations will assess its effectiveness.
    Keywords:  Awareness; Delphi validation; Educational Video; Medical education; Predatory Journals
    DOI:  https://doi.org/10.7417/CT.2026.1970
  32. mBio. 2026 Jan 12. e0298925
      The bioRxiv and medRxiv preprint servers brought preprinting to the life sciences and played a critical role in disseminating COVID research during the pandemic. Here, I reflect on the birth of bioRxiv and medRxiv and the crucial role so many members of the community played, our experience during the pandemic, and the launch of the new non-profit organization set up to oversee the servers. The pandemic was a stress test for bioRxiv and medRxiv that demonstrated their value and robustness. Under the umbrella of openRxiv, they are now poised to become long-term infrastructure underpinning a new publishing ecosystem.
    Keywords:  COVID; preprints; publishing
    DOI:  https://doi.org/10.1128/mbio.02989-25
  33. J Obstet Gynaecol Can. 2026 Jan 14. pii: S1701-2163(25)00446-3. [Epub ahead of print] 48(2): 103200
      
    DOI:  https://doi.org/10.1016/j.jogc.2025.103200
  34. Arch Dis Child Educ Pract Ed. 2026 Jan 14. pii: edpract-2025-330124. [Epub ahead of print]
      
    Keywords:  History Of Medicine; Paediatrics
    DOI:  https://doi.org/10.1136/archdischild-2025-330124
  35. Vis Comput Ind Biomed Art. 2026 Jan 12. 9(1): 1
      Efficient and accurate assignment of journal submissions to suitable associate editors (AEs) is critical in maintaining review quality and timeliness, particularly in high-volume, rapidly evolving fields such as medical imaging. This study investigates the feasibility of leveraging large language models for AE-paper matching in IEEE Transactions on Medical Imaging. An AE database was curated from historical AE assignments and AE-authored publications, and six key textual components were extracted from each paper: the title, four categories of structured keywords, and the abstract. ModernBERT was employed locally to generate high-dimensional semantic embeddings, which were then reduced using principal component analysis (PCA) for efficient similarity computation. Keyword similarity, derived from structured domain-specific metadata, and textual similarity from ModernBERT embeddings were combined to rank the candidate AEs. Experiments on internal (historical assignments) and external (AE publications) test sets showed that keyword similarity is the dominant contributor to matching performance. By contrast, textual similarity offers complementary gains, particularly when PCA is applied. Ablation studies confirmed that structured keywords alone provide strong matching accuracy, with titles offering additional benefits and abstracts offering minimal improvements. The proposed approach offers a practical, interpretable, and scalable tool for editorial workflows, reduces manual workload, and supports high-quality peer reviews.
    Keywords:  Associate editor assignment; Large language model; ModernBERT; Semantic similarity; Visualization
    DOI:  https://doi.org/10.1186/s42492-025-00212-y
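    The ranking scheme described in the abstract above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the function name, the use of Jaccard overlap for keyword similarity, and the weighting parameter `alpha` are assumptions introduced here; the paper specifies only that keyword similarity and embedding-based textual similarity are combined to rank candidate AEs.

    ```python
    import numpy as np

    def rank_candidate_aes(paper_keywords, paper_embedding, ae_profiles, alpha=0.7):
        """Rank associate editors by a weighted blend of keyword overlap
        (Jaccard similarity) and cosine similarity between PCA-reduced
        text embeddings. `ae_profiles` maps an AE name to a tuple of
        (keyword set, embedding vector). Higher-ranked AEs come first."""
        def cosine(a, b):
            return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

        scores = {}
        for ae, (keywords, embedding) in ae_profiles.items():
            union = paper_keywords | keywords
            jaccard = len(paper_keywords & keywords) / len(union) if union else 0.0
            # Blend structured-keyword and textual similarity; the abstract
            # reports keyword similarity as the dominant signal, so it gets
            # the larger weight here.
            scores[ae] = alpha * jaccard + (1 - alpha) * cosine(paper_embedding, embedding)
        return sorted(scores, key=scores.get, reverse=True)

    # Toy usage with two hypothetical AE profiles:
    ranking = rank_candidate_aes(
        paper_keywords={"mri", "segmentation"},
        paper_embedding=np.array([1.0, 0.0]),
        ae_profiles={
            "AE_A": ({"mri", "segmentation"}, np.array([1.0, 0.0])),
            "AE_B": ({"ultrasound"}, np.array([0.0, 1.0])),
        },
    )
    ```

    In practice the embeddings would come from ModernBERT after PCA reduction, and the weighting would be tuned against historical assignment data.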
  36. Cancer Epidemiol Biomarkers Prev. 2026 Jan 14. OF1-OF5
      The secondary use of data can accelerate innovation and drive discoveries, tool development, and reproducibility. Though data sharing can be challenging, policies and incentives can help facilitate and reward these efforts. To maximize the value of data produced through NIH awards, the NIH recently set forth comprehensive, modern data sharing requirements for all funded research with the 2023 NIH Data Management and Sharing Policy. In light of this new comprehensive policy, the NCI's Division of Cancer Control and Population Sciences (DCCPS) saw an opportunity to reward exemplary data sharers and add an incentive for future data sharing through the development of an award. In 2024, DCCPS established the "Paul Fearn Award for Excellence in Data Sharing" to promote recognition and celebrate data sharing within the field of population sciences. In its inaugural year, the award was given to researchers from four different projects, each exemplary in the sharing of DCCPS-funded work. Each awardee went above and beyond data sharing requirements to make data reusable for others. Their distinct data sharing achievements included making valuable historical datasets available, preparing accessible and easily downloadable datasets, and developing a novel data enclave and/or researcher data platform.
    DOI:  https://doi.org/10.1158/1055-9965.EPI-25-1482
  37. Health Info Libr J. 2026 Jan 14.
      This editorial explains the history of the Health Information and Libraries Journal from 1984 to 2025. Since its first issue, the Health Information and Libraries Journal has published over 1400 manuscripts, from reviews and original articles to editorials, brief communications, regular features, and obituaries of key members of the health library sector with links to the journal. The contributions of its four Editors-in-Chief are celebrated: Shane Godbolt (1984-1994), Judy Palmer (1999-2002), Graham Walton (2003-2008), and Maria J. Grant (2009-2025).
    Keywords:  librarians, clinical; librarians, embedded; librarians, health science; librarians, international; librarians, medical
    DOI:  https://doi.org/10.1111/hir.70009