JMIR Res Protoc. 2026 May 14;15:e90588
Background: Artificial intelligence (AI), including large language models (LLMs), is increasingly integrated into systematic review (SR) workflows. AI tools may accelerate searching, screening, data extraction, and reporting, but their effects on methodological quality, reporting completeness, transparency, and reproducibility remain uncertain. Existing evaluations largely examine isolated tasks, and inconsistent disclosure of AI use limits reproducibility and oversight.
Objective: This 4-phase mixed methods meta-research study will (1) compare the methodological quality of AI-assisted versus traditional SRs; (2) refine, finalize, and apply a preliminary AI Transparency and Disclosure Index (AITDI); (3) evaluate reproducibility by comparing outputs across repeated runs of the same AI model, across different AI models, and between AI models and human reviewers at multiple SR stages; and (4) explore knowledge user perspectives on rigor, transparency, and trust in AI-assisted SRs.
Methods: We will conduct a matched cohort analysis of SRs published from 2023 to 2025 in biomedical journals. Each AI-assisted SR will be matched 1:2 with traditional SRs by publication year, clinical domain, review type, and meta-analysis status. Two independent reviewers will apply A Measurement Tool to Assess Systematic Reviews, version 2 (AMSTAR 2; methodological quality), PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) 2020 (reporting completeness), and, when applicable, Risk of Bias in Systematic Reviews (ROBIS; risk of bias). A preliminary AITDI will be refined and then applied to all AI-assisted SRs. Reproducibility will be assessed using SR-derived task sets to compare outputs across repeated runs of the same model, across different models, and between AI and human reviewers at key SR stages. Semistructured interviews with authors, editors, clinicians, policymakers, and patient partners will be analyzed using reflexive thematic analysis.
Results: As of December 2025, the study has been preregistered on the Open Science Framework (OSF; DOI: 10.17605/OSF.IO/Q5JRW), the search strategy has been finalized, and title/abstract screening has begun. Data extraction is planned for March-May 2026, followed by AITDI refinement and reproducibility testing from May 2026 to October 2026. Qualitative interviews are anticipated from October 2026 to February 2027, with final analyses by April 2027 and dissemination planned for mid-2027.
Conclusions: This study will provide one of the first empirical comparisons of methodological quality, transparency, and reproducibility of AI-assisted versus traditional SRs in the LLM era. Findings will inform expectations for responsible AI integration and support refinement of reporting and methodological best practices, including future development of AI-specific reporting and appraisal extensions (eg, PRISMA-LLM [Preferred Reporting Items for Systematic Reviews and Meta-Analyses-large language model] and AMSTAR-LLM [A Measurement Tool to Assess Systematic Reviews-large language model]).
Keywords: AMSTAR-2; PRISMA 2020; artificial intelligence; evidence synthesis; large language models; meta-research; reproducibility; systematic review; transparency