Digit Health. 2026 Jan-Dec;12:20552076261430065
Objective: The increasing use of large language models (LLMs) for manuscript preparation and content generation presents both opportunities and risks, creating an urgent need for clear guidance. While many journals have introduced directives, their consistency and scope remain unclear. This study aimed to assess the prevalence and nature of LLM use guidance in emergency medicine publishing.
Methods: We conducted a cross-sectional analysis of emergency medicine journals, reviewing journal websites for directives on LLM use by authors and on AI use by editors and reviewers in the peer review process. Data were extracted on the existence of guidance, stakeholder-specific requirements, publisher adoption, and associations with journal metrics.
Results: Of the 56 journals, 38 (68%) provided a directive on LLM use. While all 38 (100%) permitted LLM use for writing, guidance for authors on image generation was conflicting: 32% permitted it, while 40% explicitly prohibited it. Directives for editors were similarly contradictory, with 24% prohibiting LLM use and one journal (3%) permitting it. For reviewers, 47% prohibited LLM use, while one journal (3%) permitted it. Publisher-driven fragmentation was profound, with adoption rates ranging from 18% to 100% across publishers. Notably, no statistically significant associations were detected between the presence of a directive and journal quality metrics (P > .05).
Conclusions: Emergency medicine publishing exhibits substantial variation and conflicting guidance in its governance of LLM use. Existing directives present contradictory rules for authors, editors, and reviewers on key issues such as image generation and use in peer review. To close this critical guidance gap, a comprehensive, standardized framework is urgently needed to resolve these conflicts and foster the responsible integration of digital technologies into scholarly publishing.
Keywords: Large language model; editorial guidance; emergency medicine; journal metrics; publication integrity