bims-skolko Biomed News
on Scholarly communication
Issue of 2025-11-30
25 papers selected by
Thomas Krichel, Open Library Society



  1. PLoS One. 2025;20(11): e0336492
      This study examines the impact of methodological variations in publication-based rankings on the evaluation of individual research performance in business administration. Drawing on a unique dataset comprising complete personal publication lists of 233 professors from Austrian public universities (2009-2018), we apply ten distinct ranking variants that differ in their treatment of data sources, co-authorship, publication languages, article lengths, and journal qualities. These variants are categorized into purely quantity-focused and predominantly quality-focused rankings. Our results demonstrate that researcher rankings are susceptible to specification choices. While quantity-focused rankings produce relatively small performance differentials and high variability, quality-focused variants consistently identify a stable group of leading researchers. These scholars publish more frequently in English, in journals indexed by Web of Science (WoS), and in top-tier outlets according to the JOURQUAL ranking. Notably, leading researchers publish over twice as many articles in high-ranking journals as their peers. The findings underscore the significant implications of ranking design for career advancement and research strategy. For early-career researchers, aligning publication efforts with the logic of quality-focused rankings (favoring English-language publications in highly ranked, peer-reviewed journals) is crucial for enhancing academic visibility and competitiveness. Moreover, our study offers a methodological stress test for ranking systems, revealing the extent to which technical design influences outcomes. By leveraging comprehensive and multilingual publication data and systematically comparing multiple ranking methodologies, this study contributes to both the academic evaluation literature and practical guidance for researchers navigating the demands of a metric-driven academic environment.
    DOI:  https://doi.org/10.1371/journal.pone.0336492
  2. Clin Dermatol. 2025 Nov 24. pii: S0738-081X(25)00319-0. [Epub ahead of print]
      Dissemination of research findings is a crucial part of academic progress; however, many barriers to publishing in dermatology may prevent equitable researcher involvement and representation. The term Independent Journal (IJ) is not yet widely established in academic literature. IJs refer to self-initiated and potentially self-sponsored journals. IJs are typically initiated by individual clinicians or small groups rather than well-established organizations or commercial entities and have the goal of disseminating new research ideas or filling a current gap in the literature. While many IJs in dermatology have gained respect and contribute meaningfully to the field, there are ethical concerns regarding potential founding-sponsorship bias, peer-review protocols, and the resources used to ensure academic quality. To avoid potential ethical concerns regarding IJs' credibility, the goals of the journal and the peer-review processes should be clearly delineated. IJs have the opportunity to make research more accessible and represent an opportunity to transform the publishing industry. Herein, we define IJs, delineate them from other journals, and explore ethical issues and directions for clinicians to consider when studying or creating journals.
    Keywords:  Ethics; International Committee of Medical Journal Editors (ICMJE); conflict of interest; dermatology; equity; independent journals; journalology; justice; research; reviewers; traditional publishing; truthfulness
    DOI:  https://doi.org/10.1016/j.clindermatol.2025.11.003
  3. Int J Med Inform. 2025 Nov 20. pii: S1386-5056(25)00407-1. [Epub ahead of print] 207: 106190
       BACKGROUND: Despite the increasing use of AI tools like ChatGPT, Claude, and Gemini in scientific writing, concerns remain about their ability to generate accurate, high-quality, and consistent abstracts for research publications. The reliability of AI-generated abstracts in dental research is questionable when compared to human-written counterparts. This study aimed to develop a framework for evaluating AI-generated abstracts and compare the performance of ChatGPT, Claude, and Gemini against human-written abstracts in dental research.
    METHODS: The DAISY framework was developed to evaluate AI-generated abstracts across five domains: Data accuracy (D), Abstract quality (A), Integrity and consistency (I), Syntax and fluency (S), and Yield of human likelihood (Y). Reliability of the framework was assessed using Cohen's kappa (κ = 0.85) and Pearson's correlation coefficient (0.92) for inter- and intra-expert reliability and was found to be satisfactory. This study adopted a comparative observational study design. Eight research articles belonging to structured (n = 4) and unstructured (n = 4) categories were selected from reputable journals. Researchers trained in scientific writing wrote abstracts for these articles, while AI-generated abstracts were obtained using specific prompts. Ten dental experts evaluated the abstracts using this framework. Statistical analysis was performed using ANOVA and Tukey's post-hoc test.
    RESULTS: Human-written abstracts consistently outperformed AI-generated ones across all DAISY framework domains. Among AI tools, ChatGPT scored highest in all DAISY framework domains, followed by Gemini and Claude. Human-written abstracts achieved the highest human likelihood score (90.25 ± 4.68), while AI-generated abstracts scored below 50%, with Gemini scoring lowest (3.25 ± 1.75). The differences between the groups were statistically significant (P ≤ 0.05).
    CONCLUSION: The DAISY framework proved reliable for evaluating AI-generated abstracts. While ChatGPT performed better than other AI tools, none matched the quality of human-written abstracts. This indicates that AI tools, though valuable, remain limited in producing credible scientific writing in dental research.
    Keywords:  Artificial Intelligence; ChatGPT; Claude; DAISY Framework; Dental Research; Gemini; Scientific Writing
    DOI:  https://doi.org/10.1016/j.ijmedinf.2025.106190
  4. Am J Orthod Dentofacial Orthop. 2025 Dec;168(6): 657-658. pii: S0889-5406(25)00378-6.
      
    DOI:  https://doi.org/10.1016/j.ajodo.2025.09.005
  5. Clin Imaging. 2025 Nov 20. pii: S0899-7071(25)00279-7. [Epub ahead of print] 129: 110679
      
    Keywords:  Ethics in AI; Generative AI; Large language models; Radiology publishing
    DOI:  https://doi.org/10.1016/j.clinimag.2025.110679
  6. Tomography. 2025 Oct 30;11(11): 123. [Epub ahead of print]
      This editorial provides insights on AI-written scientific manuscripts which represent an increasingly frequent phenomenon that must be managed by authors, reviewers and journal editors [...].
    DOI:  https://doi.org/10.3390/tomography11110123
  7. J Clin Epidemiol. 2025 Nov 20. pii: S0895-4356(25)00417-2. [Epub ahead of print] 112084
       OBJECTIVE: Systematic reviews (SRs) are pivotal to evidence-based medicine. Structured tools exist to guide their reporting and appraisal, such as PRISMA and AMSTAR. However, there is limited data on whether peer reviewers of SRs use such tools when assessing manuscripts. This study aimed to investigate the use of structured tools by peer reviewers when assessing SRs of interventions, identify which tools are used, and explore perceived needs for structured tools to support the peer-review process.
    STUDY DESIGN AND SETTING: In 2025, we conducted a cross-sectional study targeting individuals who peer-reviewed at least one SR of interventions in the past year. The online survey collected data on demographics, use of and familiarity with structured tools, and open-ended responses on potential needs.
    RESULTS: A total of 217 peer reviewers took part in the study. PRISMA was the most familiar tool (99% familiar or very familiar) and most frequently used during peer review (53% always used). The use of other tools such as AMSTAR, PRESS, ROBIS, and JBI was infrequent. 17% reported using other structured tools beyond those listed. Most participants indicated that journals rarely required use of structured tools, except PRISMA. A notable proportion (55%) expressed concerns about time constraints, and 25% noted the lack of a comprehensive tool. Nearly half (45%) expressed a need for a dedicated structured tool for SR peer review, with checklists in PDF or embedded formats preferred. Participants expressed both advantages and concerns related to such tools.
    CONCLUSIONS: Most peer reviewers used PRISMA when assessing systematic reviews, while other structured tools were seldom applied. Only a few journals provided or required such tools, revealing inconsistent editorial practices. Participants reported barriers, including time constraints and a lack of suitable instruments. These findings highlight the need for a practical, validated tool, built upon existing instruments and integrated into editorial workflows. Such a tool could make peer review of systematic reviews more consistent and transparent.
    Keywords:  Checklist; Evidence-Based Practice; Peer Review; Quality Assurance; Reporting Guidelines; Systematic Reviews
    DOI:  https://doi.org/10.1016/j.jclinepi.2025.112084
  8. Epilepsia Open. 2025 Nov 27.
       OBJECTIVE: The integration of neurotechnology and artificial intelligence (AI) in epilepsy research has led to significant advancements in diagnosis, monitoring, and treatment. However, the impact of these innovations is often diminished by inadequate and inaccurate reporting, limiting their reproducibility and implementation. This study aimed to identify common peer review concerns and develop reporting recommendations specific to neurotechnology and AI studies in epilepsy.
    METHODS: We conducted a qualitative analysis of peer review comments from original research article submissions to Epilepsia Open over a 2-year period (September 2021-August 2023). We selected manuscripts that focused on neurotechnology or AI applications in epilepsy, excluding those using standard clinical technologies or conventional statistical analyses. Reviewer comments were classified using a validated checklist, categorizing issues into themes and subthemes. Based on recurrent peer review concerns, we developed a set of reporting recommendations for neurotechnology and AI studies.
    RESULTS: Among 329 manuscripts sent for peer review, 67 were classified as neurotechnology or AI studies and included in the analysis. These studies predominantly involved advanced neuroimaging analysis, advanced electroencephalography (EEG) analysis, and neuromodulation systems. Reviewer comments were primarily focused on study methodology (37%), manuscript presentation (19%), discussion (17%), and results (12%). Based on peer review comments, we formulated reporting recommendations, hoping to enhance study transparency, methodological rigor, and reproducibility.
    SIGNIFICANCE: Our reporting recommendations address key concerns raised during peer review, providing guidance to authors and reviewers to improve the quality and clarity of neurotechnology and AI research in epilepsy. These recommendations complement existing reporting standards and contribute to the advancement of robust and impactful research in the field.
    PLAIN LANGUAGE SUMMARY: We studied how researchers report studies on neurotechnology and AI in epilepsy. Many studies face problems during peer review, such as unclear methods, weak study rationale, and errors in statistics or citations. We analyzed reviewer feedback and created recommendations to improve how these studies are reported. Our goal is to help researchers develop and present their work more clearly and accurately, making it easier for others to understand and build upon their findings. This can lead to better use of AI and neurotechnology in epilepsy research and care.
    Keywords:  artificial intelligence; epilepsy; machine learning; neurotechnology; standards
    DOI:  https://doi.org/10.1002/epi4.70194
  9. Cureus. 2025 Oct;17(10): e95357
      Peer review has long stood as the principal safeguard for scientific credibility, yet much of its authority rests on tradition rather than empirical proof of efficacy. In recent years, persistent vulnerabilities, ranging from bias and inconsistency to opaque procedures and protracted delays, have eroded trust in the peer review system. Rising submission volumes, mounting commercial influences, and dwindling reviewer engagement have amplified the strain. Problems span structural and individual levels: an overburdened reviewer base, lack of standardized practices, unclear decision-making, slow turnaround times, and limited diversity in evaluation panels, together with personal pitfalls such as unconscious bias, conflicts of interest, poor accountability, inadequate training, and breaches of confidentiality. This editorial explores practical and ethical reforms to strengthen the process, including elevating reviewing to a recognized profession, introducing meaningful incentives, incorporating artificial intelligence judiciously, embracing transparent yet protective models, expanding reviewer diversity, and streamlining editorial workflows.
    Keywords:  artificial intelligence; bias; health research; peer review; publication; research review; review; review system; reviewer; scientific article
    DOI:  https://doi.org/10.7759/cureus.95357
  10. bioRxiv. 2025 Oct 14. pii: 2025.10.10.681750. [Epub ahead of print]
      Scientific publications have become the backbone of scientific communication since their foundation in 1665. The three main models for publishing are Traditional (or subscription-based), Open Access (OA), and Hybrid. As of July 1, 2025, the NIH requires that Author Accepted Manuscripts resulting from NIH-funded research be immediately publicly available. To comply with this new requirement, authors may be forced to pay an Article Processing Charge (APC) to publish Open Access, ranging from ~$2000 to ~$13,000 per article. With this change to the scientific publishing landscape, publishing costs shift from subscribers to authors, prompting authors to re-evaluate how they choose which journal to publish in. Here we analyze 75 popular biomedical journals to evaluate the publishing costs compared to the scientific impact (i.e., Impact Factor, CiteScore, SNIP) illustrated by three different Cost-Impact Effectiveness (CIE) metrics (APC/IF, APC/CS, and APC/SNIP). To complement the new open access policy, our goal is to provide a resource to help the scientific community evaluate the impact-based cost effectiveness of different Open Access options during their journal selection process.
    DOI:  https://doi.org/10.1101/2025.10.10.681750
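      The CIE metrics described in the preceding abstract are simple ratios of cost to impact, so the comparison can be sketched directly. The snippet below is an illustrative reading of that idea, not the preprint's actual code or data; the journal names, APCs, and impact factors are hypothetical placeholders.
      ```python
      # Cost-Impact Effectiveness (CIE) as the abstract describes it:
      # the APC divided by an impact metric (here, Impact Factor).
      # All journal names and figures below are hypothetical placeholders.
      journals = {
          "Journal A": {"apc_usd": 2000, "impact_factor": 4.0},
          "Journal B": {"apc_usd": 13000, "impact_factor": 50.0},
      }

      def cie(apc_usd, impact):
          """Dollars of APC paid per unit of impact; lower means more cost-effective."""
          return apc_usd / impact

      # Rank candidate journals from most to least cost-effective.
      ranked = sorted(journals, key=lambda name: cie(**{
          "apc_usd": journals[name]["apc_usd"],
          "impact": journals[name]["impact_factor"],
      }))
      for name in ranked:
          j = journals[name]
          print(f"{name}: APC/IF = {cie(j['apc_usd'], j['impact_factor']):.0f} USD per IF point")
      ```
      Under these made-up numbers the higher-APC journal can still be the more cost-effective choice per impact point, which is the kind of trade-off the APC/IF, APC/CS, and APC/SNIP ratios are meant to surface.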
  11. Int J Integr Care. 2025 Oct-Dec;25(4): 15
      
    Keywords:  Co-production; journey mapping; lived experience; publication process; research design
    DOI:  https://doi.org/10.5334/ijic.10240
  12. Science. 2025 Nov 27. 390(6776): 891-893
      A new dataset highlights distinctive contributions of scientists who both publish and patent their research.
    DOI:  https://doi.org/10.1126/science.adx3736
  13. Clin Transl Sci. 2025 Dec;18(12): e70436
      
    Keywords:  author; author order; director; last author; paper
    DOI:  https://doi.org/10.1111/cts.70436