bims-skolko Biomed News
on Scholarly communication
Issue of 2026–01–25
25 papers selected by
Thomas Krichel, Open Library Society



  1. PeerJ. 2026;14: e20502
      Researchers who serve on grant review and hiring committees have to make decisions about the intrinsic value of research in short periods of time, and research impact metrics such as the Journal Impact Factor (JIF) exert undue influence on these decisions. Initiatives such as the Coalition for Advancing Research Assessment (CoARA) and the Declaration on Research Assessment (DORA) emphasize responsible use of quantitative metrics and avoidance of journal-based impact metrics for research assessment. Further, our previous qualitative research suggested that assessing the credibility, or trustworthiness, of research is important to researchers not only when they seek to inform their own research but also in the context of research assessment committees. To confirm our findings from previous interviews in quantitative terms, we surveyed 485 biology researchers who have served on committees for grant review or hiring and promotion decisions, to understand how they assess the credibility of research outputs in these contexts. We found that concepts like credibility, trustworthiness, quality, and impact lack consistent definitions and interpretations by researchers, as had already been observed in our interviews. We also found that, in our sample, assessment of credibility is very important to the majority (90%, 95% CI [87-92%]) of researchers serving on these committees, but fewer than half of participants are satisfied with their ability to assess credibility. This gap between the importance of an assessment and satisfaction in the ability to conduct it was reflected in multiple aspects of credibility we tested, and it was greatest for researchers seeking to assess the integrity of research (such as identifying signs of fabrication, falsification, or plagiarism) and the suitability and completeness of research methods. Non-traditional research outputs associated with open science practices (research data, code, protocols, and preprints) are particularly hard for researchers to assess, despite the potential of open science practices to signal trustworthiness. A substantial proportion of participants (57%, 95% CI [52-61%]) report using journal reputation and JIF to assess the credibility of research articles and outputs, despite these being proxies for credibility that rely on characteristics extrinsic, rather than intrinsic, to the output itself. While our results only describe the practices and perspectives of our sample, they may suggest opportunities to develop better guidance and better signals to support the evaluation of research credibility and trustworthiness, and ultimately to support research assessment reform, away from the use of proxies for impact and towards assessing the intrinsic characteristics and values researchers see as important.
    Keywords:  Open science; Research assessment; Scholarly communication; Survey data
    DOI:  https://doi.org/10.7717/peerj.20502
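    A minimal sketch in Python of one standard way the survey confidence intervals reported above can be computed (a Wilson score interval; the paper's actual CI method is not stated in the abstract, and the count 437/485 is a hypothetical reconstruction of the reported 90%):
        # Minimal sketch: 95% Wilson score interval for a survey proportion,
        # e.g. ~90% of 485 respondents rating credibility assessment as very
        # important. The count 437 is back-calculated from the reported
        # percentage; the paper's exact CI method is not stated.
        from math import sqrt

        def wilson_ci(successes, n, z=1.96):
            p = successes / n
            denom = 1 + z**2 / n
            center = (p + z**2 / (2 * n)) / denom
            half = z * sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / denom
            return center - half, center + half

        lo, hi = wilson_ci(437, 485)
        print(f"proportion 90.1%, 95% CI [{lo:.1%}, {hi:.1%}]")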
  2. Acad Med. 2026 Jan 22. pii: wvag005. [Epub ahead of print]
      Like a patient with a serious medical crisis arriving in an emergency department, an article submitted to a journal has a story to tell. Just as a doctor must develop trust with a patient to provide the opportunity for a story to unfold and allow for unexpected twists and turns, an editor of a journal must develop trust to communicate effectively with authors and help them tell their stories. The vital heartbeats of a journal are the ideas that become stories told by authors and are nurtured by reviewers, editors, and staff. As Academic Medicine celebrates its 100-year anniversary, the important stories told by previous authors will continue to resonate like the continued beating of a heart and provide relevance for the current articles that will influence future authors, students, and the health professions education community.
    Keywords:  future of academic medicine; medical education; medical faculty; medical humanities; scholarly publishing
    DOI:  https://doi.org/10.1093/acamed/wvag005
  3. Science. 2026 Jan 22. 391(6783): 332-333
      Thousands have penned more than one-third of a journal issue, raising conflict-of-interest concerns.
    DOI:  https://doi.org/10.1126/science.aef6706
  4. Nature. 2026 Jan 19.
      
    Keywords:  Careers; Lab life; Publishing; Research management
    DOI:  https://doi.org/10.1038/d41586-025-04061-w
  5. Arthritis Care Res (Hoboken). 2026 Jan 19.
       OBJECTIVE: We aimed to describe the trends and main reasons for study retraction in rheumatology literature.
     METHODS: We reviewed the Retraction Watch database to identify retracted articles in rheumatology. We recorded the main study characteristics, authors' countries, reasons for retraction, time from publication to retraction, and trends over time. Reasons for retraction were classified as scientific misconduct, data/figure errors, or other reasons. Main paper features and causes of retraction in rheumatology were compared with those of a sample of papers from other medical specialties.
     RESULTS: A total of 381 rheumatology articles (79.5% original articles) were retracted between 1989 and 2024. Most originated from Asia (68.5%), particularly China (50.7%). Scientific misconduct accounted for 75.3% of retractions, followed by data errors (14.9%) and other reasons (7.6%). Common misconduct types included data fabrication, fake peer review, duplication, and authorship issues. The median time from publication to retraction was 18 months (IQR 9-46), with one-third of papers taking more than 36 months to be retracted. Time to retraction did not improve over time. The number of retractions increased steadily, from 18 in 2000-2009 to 117 in 2010-2019 and 207 in 2020-2023 (P < 0.001). Compared with other medical specialties, rheumatology exhibited similar retraction patterns, differing mainly in geographic distribution.
     CONCLUSIONS: Retractions in rheumatology have risen substantially, largely due to misconduct. This trend may reflect both an increase in questionable research practices and improved detection. Strengthening early-career education, institutional oversight, and ethical research culture is essential to enhance transparency and integrity in the field.
    DOI:  https://doi.org/10.1002/acr.80005
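    A minimal sketch in Python of how the decade-over-decade increase reported above can be tested, assuming scipy and a constant-annual-rate null hypothesis (the abstract does not state which test the authors used):
        # Chi-square goodness-of-fit test for the retraction counts above,
        # with expected counts proportional to the length of each period
        # (2020-2023 covers only four years). Illustrative only; the
        # paper's actual statistical test is not stated in the abstract.
        import numpy as np
        from scipy.stats import chisquare

        counts = np.array([18, 117, 207])   # 2000-2009, 2010-2019, 2020-2023
        years = np.array([10, 10, 4])       # unequal period lengths
        expected = counts.sum() * years / years.sum()
        stat, p = chisquare(counts, f_exp=expected)  # H0: constant annual rate
        print(f"chi2 = {stat:.1f}, p = {p:.2g}")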
  6. Cureus. 2025 Dec;17(12): e99518
      Although submitting identical or substantially overlapping manuscripts to multiple journals constitutes research misconduct, current detection mechanisms rely mostly on chance. This creates a risk-reward landscape in which unethical authors may face minimal consequences even when duplicate submissions are identified, while gaining a considerable advantage when they are not. This case study presents the firsthand discovery of a duplicate submission during peer review and describes major shortcomings in the subsequent editorial handling. On April 28, 2025, while reviewing a manuscript on occupational radiation exposure for an academic journal, a routine literature search showed that a nearly identical article had been published earlier that month in another open-access journal. Despite prompt notification from the editorial office, the journal continued with standard peer review, obtaining four full referee reports before rejecting the submission on May 28, 2025. The decision letter made no mention of an ethics investigation, institutional notification, or any corrective action. The authors appear to have faced no meaningful sanctions, having already published the article in another journal while the same manuscript remained under review elsewhere. This case illustrates how unethical authors can submit the same work to multiple journals simultaneously, wait for the fastest or most favorable review, and abandon other submissions without penalty. At worst, they receive a routine rejection without ethical consequences. These observations suggest that penalties for detected duplicate submissions are minimal, whereas the potential benefits of undetected misconduct remain high. To correct this incentive imbalance, future digital infrastructure initiatives in scholarly publishing should establish robust pre-submission and pre-review screening protocols, standardized investigation procedures for suspected violations, and enforceable sanctions, including institutional notification, time-limited submission bans, and, most importantly, the retraction of published articles found to have been the subject of duplicate submission.
    Keywords:  academic publishing; duplicate submission; peer review integrity; publication ethics; research misconduct; risk-reward asymmetry; simultaneous submission
    DOI:  https://doi.org/10.7759/cureus.99518
  7. J Exp Orthop. 2026 Jan;13(1): e70397
      The integration of artificial intelligence (AI), the rise of mega-journals, and the manipulation of impact factors present challenges to scientific integrity. These trends threaten the core principles of objectivity, reproducibility, and transparency. This editorial highlights two categories of threats: (1) external pressures, such as AI misuse and metric-driven publishing models, and (2) internal systemic flaws, including the 'publish or perish' culture and methodological fragility. Mega-journals, characterized by high-volume publishing and broad interdisciplinary scopes, improve accessibility and accelerate dissemination. However, the emphasis on publication volume might weaken the rigor of peer review. To navigate these challenges, the authors propose a balanced approach that harnesses innovation without compromising scientific integrity. Proposed solutions include mandating AI transparency through frameworks like CONSORT-AI, and redefining impact metrics to emphasize reproducibility, mentorship, and societal impact alongside citations. Scientific journals should base recognition and career advancement less on publication quantity and more on quality. Global cooperation, via initiatives like the San Francisco Declaration on Research Assessment (DORA) and the Committee on Publication Ethics (COPE), is essential to standardize ethics and address resource disparities. This editorial proposes solutions for researchers, journals, and policymakers to realign academic incentives and uphold the ethical foundation of science. By fostering transparency, accountability, and equity, the scientific community can preserve its ethical foundations while embracing transformative tools, ultimately advancing knowledge and serving society.
    Level of Evidence: Level V.
    Keywords:  artificial intelligence; bibliometrics; ethics in publishing; peer reviews; periodicals as topic
    DOI:  https://doi.org/10.1002/jeo2.70397
  8. J Neurosurg. 2025 Dec 19. 1-9
       OBJECTIVE: The rapid development of artificial intelligence (AI) presents an opportunity to streamline the peer-review process and provide key information to guide academic journals, editorial staff, and reviewers, as well as authors. This study aimed to fine-tune several standard large language models (LLMs) and transformer models on the text of peer-reviewer comments and editorial outcome decisions to find text-based associations with journal decisions for acceptance versus rejection.
     METHODS: This study, with participation from the Journal of Neurosurgery Publishing Group (JNSPG), included anonymized final decisions and reviewer comments for all article submissions made to the Journal of Neurosurgery (JNS) and subsidiary journals from 2021 to 2023. All final decisions were grouped as binary (acceptance/revision vs rejection/transfer). Leading words (i.e., "acceptance" or "rejection") were removed from textual reviewer comments, which were then analyzed using various machine learning models and LLMs, including BERT, GPT-2, GPT-3, GPT-4o, and GRU variants, to predict the final manuscript decision outcome. Performance was measured using receiver operating characteristic (ROC) curves. Shapley Additive Explanations (SHAP) analysis was conducted to evaluate the impact of individual words on model predictions.
     RESULTS: In the ROC analysis, the fine-tuned GPT-4o mini and GPT-3 models achieved the highest area under the curve (AUC) values of 0.91, followed by BERT and GPT-2 with AUC values of 0.84. These were followed by bidirectional GRU and GPT-3 (untrained) with AUC values of 0.75 and 0.70, respectively. Unidirectional GRU and GPT-4o (untrained) demonstrated the lowest AUC values of 0.68 and 0.67, respectively. In the SHAP analysis, the logistic regression model identified words like "future," "interesting," and "written" as significant positive predictors of acceptance, whereas "clear," "unclear," and "does" were associated with rejections. The GRU model identified "study," "useful," and "journal" as significant positive predictors, and "unclear," "reading," and "incidence" as negative predictors.
     CONCLUSIONS: This proof-of-concept study demonstrates that fine-tuned AI models, particularly GPT-3, can predict manuscript acceptance with reasonable accuracy using only textual reviewer comments. Emerging themes that influence article outcomes include article clarity, utility, suitability, cohort size, and diligence in addressing reviewer queries. These findings suggest that, when fine-tuned, AI models hold significant potential for assisting and facilitating the peer-review process.
    Keywords:  artificial intelligence; journal; large language model; peer review
    DOI:  https://doi.org/10.3171/2025.8.JNS242667
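    A minimal sketch in Python of the kind of text-based accept/reject prediction, scored by ROC AUC, that the study above describes. It assumes scikit-learn; the toy reviewer comments are hypothetical, and the TF-IDF plus logistic-regression baseline stands in for the paper's fine-tuned LLMs:
        # Predict accept/reject from reviewer-comment text and score with
        # ROC AUC. Toy data; a stand-in for the study's fine-tuned LLMs.
        from sklearn.feature_extraction.text import TfidfVectorizer
        from sklearn.linear_model import LogisticRegression
        from sklearn.metrics import roc_auc_score
        from sklearn.model_selection import train_test_split

        comments = [  # hypothetical reviewer comments
            "interesting, clearly written, and useful for the journal",
            "methods are unclear and the cohort is far too small",
            "well written study addressing an important future direction",
            "does not address prior work; the conclusions remain unclear",
            "useful study with a large cohort and clear figures",
            "unclear reporting; the incidence estimates are not reliable",
        ]
        accepted = [1, 0, 1, 0, 1, 0]  # 1 = accept/revise, 0 = reject/transfer

        X_tr, X_te, y_tr, y_te = train_test_split(
            comments, accepted, test_size=1/3, random_state=0, stratify=accepted)
        vec = TfidfVectorizer()
        clf = LogisticRegression().fit(vec.fit_transform(X_tr), y_tr)
        probs = clf.predict_proba(vec.transform(X_te))[:, 1]
        print("ROC AUC:", roc_auc_score(y_te, probs))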
  9. Int J Low Extrem Wounds. 2026 Jan 22. 15347346261416134
      
    Keywords:  editors; journals; medical writing; publication; publication ethics
    DOI:  https://doi.org/10.1177/15347346261416134
  10. PLoS Biol. 2026 Jan;24(1): e3003574
      Women are underrepresented in academia, especially in STEMM fields, at top institutions, and in senior positions. This is due, at least in part, to the many obstacles that they face compared to their male counterparts. There has been substantial debate as to whether the peer review system is biased against women. Some studies, mostly based on analyses of thousands of economics research articles, have shown that manuscripts authored by women experience longer peer review times (defined as the time from submission to acceptance) than comparable manuscripts authored by men. Other studies, however, have found no effect of author gender on acceptance delays, raising questions about whether the gender gap is specific to certain fields. Biomedical and life scientists produce 36% of the research articles published annually worldwide; therefore, a comprehensive understanding of how women are treated by the peer review system requires a thorough examination of biomedicine and the life sciences. By analyzing all articles indexed in the PubMed database (>36.5 million articles published in >36,000 biomedical and life sciences journals), we show that the median time spent under review is 7.4%-14.6% longer for female-authored articles than for male-authored articles, and that differences remain significant after controlling for several factors. The gender gap is pervasive, affecting most disciplines regardless of how well women are represented in each; in some disciplines, however, the gap is absent or even reversed. We also show that authors based in low-income countries tend to experience longer review times. Our findings contribute to explaining the gender gap in publication rates and representation.
    DOI:  https://doi.org/10.1371/journal.pbio.3003574
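    A minimal sketch in Python of the headline comparison above, medians of review times by author group with a bootstrap interval for their ratio, assuming numpy; the review-time data are synthetic, and the study's covariate adjustments and gender-inference step are not reproduced:
        # Compare median review times between two author groups and
        # bootstrap the ratio of medians. Synthetic data, chosen so the
        # ratio falls near the reported 7.4%-14.6% range.
        import numpy as np

        rng = np.random.default_rng(2)
        days_men = rng.lognormal(mean=4.7, sigma=0.6, size=5000)
        days_women = rng.lognormal(mean=4.8, sigma=0.6, size=5000)

        ratio = np.median(days_women) / np.median(days_men)
        boot = [np.median(rng.choice(days_women, 5000)) /
                np.median(rng.choice(days_men, 5000)) for _ in range(1000)]
        lo, hi = np.percentile(boot, [2.5, 97.5])
        print(f"median ratio = {ratio:.3f} (95% bootstrap CI {lo:.3f}-{hi:.3f})")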
  11. J Hosp Med. 2026 Jan 22.
      Peer review of research products suffers from poor inter-rater reliability. Few studies have examined whether this limitation generalizes to case reports. We conducted a cross-sectional analysis of peer reviews of clinical vignette abstracts submitted to a national hospitalist meeting in 2024 and 2025. Three randomly assigned reviewers scored each vignette on a 1-10 scale. We analyzed variation in scores across abstracts and reviewers and estimated inter-rater reliability via the intraclass correlation coefficient (ICC). Two hundred twenty-one reviewers evaluated 1630 abstracts in 2024-2025. Abstract scores varied substantially: 384/1630 (23.6%) abstracts had a difference of 4 or more points (>2 standard deviations) between the highest and lowest reviewer scores. Scores also varied by reviewer: in 2024, reviewer-level mean scores ranged from 4.27 to 8.47 (standard deviation [SD]: 0.70-2.80); in 2025, they ranged from 4.06 to 8.59 (SD: 0.62-2.69). Inter-rater reliability was poor (ICC: 0.37). Adjusting final scores based on reviewer scoring tendencies changed the accept/reject category for 183 (11.2%) abstracts, suggesting opportunities for quality improvement.
    DOI:  https://doi.org/10.1002/jhm.70261
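    A minimal sketch in Python of the reliability statistic at the heart of the study above, a one-way random-effects ICC computed from ANOVA mean squares on synthetic scores (the abstract does not specify which ICC form the authors used):
        # One-way random-effects ICC(1) for abstracts each scored by k
        # raters. Synthetic scores, with variance components chosen so the
        # result lands near the reported ICC of 0.37.
        import numpy as np

        rng = np.random.default_rng(0)
        n, k = 200, 3                        # k raters per abstract, as in the study
        quality = rng.normal(6, 1.2, size=(n, 1))           # latent abstract quality
        scores = np.clip(quality + rng.normal(0, 1.6, size=(n, k)), 1, 10)

        grand = scores.mean()
        row_means = scores.mean(axis=1, keepdims=True)
        ms_between = k * ((row_means - grand) ** 2).sum() / (n - 1)
        ms_within = ((scores - row_means) ** 2).sum() / (n * (k - 1))
        icc1 = (ms_between - ms_within) / (ms_between + (k - 1) * ms_within)
        print(f"ICC(1) = {icc1:.2f}")        # low values indicate poor agreement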
  12. Cureus. 2025 Dec;17(12): e99399
      Academic medicine increasingly evaluates faculty through quantifiable productivity metrics, including publication counts, citation indices, and grant funding. Within this environment, a small subset of clinicians and physician-scientists produce scholarly output at rates far exceeding disciplinary norms. These highly productive individuals can accelerate innovation, attract trainees, and elevate institutional reputation. Yet extreme productivity also raises legitimate concerns about authorship practices, data oversight, research integrity, and equitable resource allocation. This editorial proposes a balanced framework for understanding academic "outliers," offering an operational definition, examining structural and individual factors that drive exceptional productivity, and outlining institutional safeguards that ensure rigor, transparency, and fairness. Recognizing differentiated forms of excellence (clinical, educational, scientific, and systems improvement) can help academic medicine support high-performing faculty without compromising integrity or equity.
    Keywords:  academic medicine; institutional culture; outliers; physician-scientist; prolific authorship
    DOI:  https://doi.org/10.7759/cureus.99399
  13. Res Integr Peer Rev. 2026 Jan 23. 11(1): 3
       BACKGROUND: Preprints are becoming more common in the health sciences and allow for instant dissemination of research findings, though at the risk of compromising quality and transparency. Peer review potentially improves reporting and reduces errors, although its actual impact is not known. The objective of this scoping review was to synthesize evidence comparing preprints in the health sciences to their peer-reviewed versions and to assess preprint publication rates.
    METHODS: We searched Embase, Medline OVID, Scopus, and Web of Science from inception to July 2024 for studies comparing preprints with their peer-reviewed versions and/or investigating preprint publication rates. Two reviewers independently conducted screening and extracted data on study characteristics, parameters compared, and preprint publication rates. We conducted a narrative synthesis.
     RESULTS: We included 40 studies (published 2019-2024; 92% peer-reviewed). The median number of studies analyzed per article was 356 (range: 19-73,256). Among the 33 studies that reported publication rates, a median of 42% (IQR: 22%-67%) of preprints were eventually published. Preprint searches typically started on January 1, 2020, with a median search window of 24.3 months and a median difference of 11.5 months between the preprint and peer-reviewed search end dates. Commonly compared parameters were primary outcomes/endpoints (37%) and sample size (30%), with peer-reviewed articles showing improved reporting for funding (13%) and conflicts of interest (13%).
    CONCLUSION: While peer review enhances transparency and methodological reporting (e.g., funding, conflicts of interest), the content, outcomes, and conclusions of health-related preprints remain largely consistent with their peer-reviewed versions. Preprints facilitate rapid knowledge dissemination but may benefit from stricter reporting standards to improve credibility. Future efforts should focus on standardizing preprint policies to bridge quality gaps without delaying access.
    Keywords:  Health research; Peer reviewed; Preprint; Publication rates; Reporting quality
    DOI:  https://doi.org/10.1186/s41073-026-00189-z
  14. J Educ Eval Health Prof. 2026;23: 2
      Reference management software (RMS) represents a cornerstone of modern academic writing and publishing. For decades, programs such as EndNote, Zotero, and Mendeley have played central roles in facilitating citation organization, bibliography formatting, and collaborative scholarship. Although each platform has introduced unique innovations, persistent limitations remain, particularly with respect to usability, accessibility, and accuracy. In parallel, the rise of generative artificial intelligence (AI) has introduced an unprecedented challenge: fabricated or incorrect references inadvertently incorporated into manuscripts. This phenomenon has exposed a critical limitation of traditional RMS platforms, namely their inability to verify reference authenticity. Against this backdrop, new solutions have emerged. One such example is CiteWell (https://citewell.org/), an AI-era RMS that introduces several notable innovations, including PubMed-integrated verification, an intuitive interface for new users, customizable journal-specific styles, and multilingual accessibility. This review provides a comprehensive historical overview of RMS, evaluates the strengths and weaknesses of major platforms, and positions emerging AI-based tools as a new paradigm that combines traditional reference management with essential safeguards for contemporary academic challenges.
    Keywords:  AI hallucinations; Academic integrity; CiteWell; PubMed validation; Reference management software
    DOI:  https://doi.org/10.3352/jeehp.2026.23.2
  15. J Eval Clin Pract. 2026 Feb;32(1): e70367
      
    Keywords:  indexing; journal publication; medical students; predatory journals; quality indicator
    DOI:  https://doi.org/10.1111/jep.70367
  16. Rev Cient Odontol (Lima). 2025 Oct-Dec;14(1): e275
       Objective: This cross-sectional study evaluated the frequency of adherence to sharing dental research data over the last ten years (2013-2023).
     Methods: Data were obtained by searching articles published in the five high-impact-factor multidisciplinary journals in Dentistry, Oral Surgery & Medicine. A total of 300 dental articles from each of three time periods (2013, 2018, and 2023) were randomly selected (n=900). Two researchers performed the study selection and extracted the data. The main outcome was data sharing (yes/no). The distribution of data sharing was compared with the Chi-square test, and the contribution of variables to data sharing was assessed with adjusted logistic regression.
     Results: Of the total studies included (n=900), only 20 records reported data sharing practices (data sharing prevalence: 2.2%). A significantly higher prevalence of data sharing was identified among studies published as "open access" [odds ratio: 2.97; 95% confidence interval: 1.10-8.02] than among those published in subscription format.
     Conclusion: Low adherence to data sharing practices was identified in the multidisciplinary dental literature. The results indicated that the type of publication was associated with the outcome, but other aspects, such as the year of publication, continent, and number of citations, were not associated with the practice of data sharing.
    Keywords:  data sharing; dentistry; open science; research integrity; transparency
    DOI:  https://doi.org/10.21142/2523-2754-1401-2026-275
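    A minimal sketch in Python of how an odds ratio like the one reported above can be obtained from a logistic regression, assuming statsmodels; the data are synthetic, and the study's full covariate set (year, continent, citations) is omitted:
        # Odds ratio for data sharing by access type from a logistic
        # regression, as exp(coefficient). Synthetic data, calibrated so
        # overall sharing prevalence is a few percent, as in the study.
        import numpy as np
        import statsmodels.api as sm

        rng = np.random.default_rng(1)
        n = 900
        open_access = rng.integers(0, 2, size=n)        # 1 = open access
        logit_p = -4.2 + np.log(3.0) * open_access      # true OR ~ 3
        shared = rng.random(n) < 1 / (1 + np.exp(-logit_p))

        X = sm.add_constant(open_access.astype(float))
        fit = sm.Logit(shared.astype(float), X).fit(disp=0)
        or_hat = np.exp(fit.params[1])
        lo, hi = np.exp(fit.conf_int()[1])
        print(f"OR = {or_hat:.2f} (95% CI {lo:.2f}-{hi:.2f})")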
  17. Am Surg. 2026 Jan 18. 31348261416462
      Requests for major revision generate more anxiety than almost any other editorial decision, in part because authors struggle to interpret what the journal is signaling. Some view major revision as near acceptance and rush to make changes, while others interpret it as a softened rejection and respond incompletely. Both approaches miss the central purpose of major revision. A request for major revision represents a conditional investment by editors and reviewers. The topic is relevant and the question appropriate for the journal, but the manuscript is not yet ready for publication. This editorial provides practical guidance on how authors should respond, emphasizing judgment over persistence. Key principles include reading reviews with distance, understanding the structural issues underlying reviewer comments, and avoiding a checklist mentality. The editorial highlights the importance of using the response-to-reviewers form correctly, making revisions easy to identify, and respecting the significant time reviewers devote to thoughtful critique. Guidance is provided on responding without defensiveness, prioritizing core concerns related to framing and contribution, and reassessing whether the manuscript truly advances the field or has become redundant. Situations in which authors may reasonably decline to pursue revision, as well as how to disagree productively with reviewers, are also addressed. Major revision is neither a promise nor a rejection. When approached as collaboration rather than negotiation, it often results in a manuscript that is clearer, stronger, and more valuable to practicing surgeons.
    Keywords:  editorial decision making; major revision; manuscript revision; peer review process; scholarly publishing
    DOI:  https://doi.org/10.1177/00031348261416462
  18. J Dent Res. 2026 Jan 21. 220345251406260
      
    Keywords:  Open Science; data management; editorial policies; peer review; research; science
    DOI:  https://doi.org/10.1177/00220345251406260
  19. Sch Psychol. 2026 Jan;41(1): 1-4
      In this introductory editorial, I outline the primary focus of School Psychology in advancing equity, diversity, and inclusion through rigorous science. I reflect on the seminal mark made by my predecessors and my plan to continue the important work of those who came before me. The incoming 2026 editorial leadership team is introduced. Equity, diversity, and inclusion action steps are outlined. Information related to journal types and the submission process is provided. Updates related to journal actions in open science and a joint special series are provided.
    DOI:  https://doi.org/10.1037/spq0000736