bims-aimedu Biomed News
on AI in medical education
Issue of 2026-03-22
six papers selected by
Angela Spencer, Saint Louis University



  1. Eur J Dent Educ. 2026 Mar 16.
      This perspective highlights the urgent need to rethink how academia in health professions education (HPE) engages with artificial intelligence (AI), moving away from a reactive, enforcement-driven mindset toward a more forward-looking, educationally sound approach. The current culture, shaped by suspicion, moral panic, and unreliable detection technologies, risks undermining fairness, student trust, and meaningful learning. Assessment models built for a pre-AI world may no longer be fit for purpose, and universities must redesign assessments to prioritise real-time demonstration of competence, higher-order skills, and authentic learning experiences that AI cannot replicate. Central to this shift is the development of robust AI literacy for both students and faculty, to promote responsible and authentic use of AI, enable learners to critique and verify its outputs, and show them how to integrate it as a legitimate learning partner rather than a prohibited shortcut. By embracing AI with clarity and purpose, HPE can move from policing to empowering, ensuring that assessments remain credible and learning remains relevant in an AI-driven future clinical environment.
    DOI:  https://doi.org/10.1111/eje.70144
  2. MedEdPublish (2016). 2025;15:226.
      Background: Artificial Intelligence (AI) is reshaping healthcare and medical education, with growing calls to embed it in medical curricula. However, evidence on first-year medical students' perceived benefits and limitations of AI, and their views on ethics and professionalism, is limited. Methods: A qualitative study was conducted using semi-structured interviews to explore the experiences, attitudes, and perceptions of first-year students regarding AI. Convenience sampling yielded the participant cohort. Recruitment and analysis continued until thematic saturation was achieved. Transcripts were coded iteratively using NVivo software, and a reflexive thematic analysis was undertaken. Results: Twenty participants were interviewed; 18 were AI users, to varying degrees, and two were non-users. Seven themes emerged: How AI is used; Benefits; Concerns and limitations; Ethical considerations; Advice for peers and professors; Attitudes toward and understanding of AI; and Participation in the project. AI users cited motivations such as efficiency, personalization, and support. Benefits included faster access to information, organized content, and tailored explanations. Concerns included AI reliability, over-reliance, and ethical misuse, such as plagiarism. Most supported the inclusion of AI literacy in curricula for responsible, practical, and critical use of AI. Participants with AI literacy demonstrated a deeper understanding of AI. Conclusions: We found that students in the medical school we studied are early adopters of AI, using it in various ways, and wish to utilize it effectively and ethically. The findings align with studies in other jurisdictions that call for early AI literacy.
    Keywords:  AI Literacy; Co-design; Competency Framework; Curriculum Development; Ethical Considerations; Student Perspectives; Thematic Analysis; Undergraduate Medical Education
    DOI:  https://doi.org/10.12688/mep.21319.2
  3. Nurse Educ Today. 2026 Mar 13;162:107076. pii: S0260-6917(26)00104-8. [Epub ahead of print]
      The rapid integration of generative artificial intelligence (GenAI) into undergraduate nursing education has prompted significant debate regarding its impact on the development of critical reasoning, inquiry skills, and clinical judgement. While some scholars argue that reliance on GenAI may undermine independent thinking, contextual decision-making, and autonomous judgement, emerging perspectives suggest that GenAI has the potential to enhance rather than erode these foundational competencies. This commentary examines the evolving role of GenAI in nursing education and argues that its thoughtful integration can strengthen students' preparedness for increasingly complex, technology-rich clinical environments. Clinical judgement is central to safe nursing practice and is shaped by the nurse's interpretation of patient needs, contextual factors, and professional reasoning. While GenAI can synthesize large amounts of information efficiently, it does not replace human judgement; instead, it provides data that students must interpret within ethical, relational, and contextual dimensions of care. Integrating GenAI into educational contexts allows students to engage with realistic, data-driven scenarios that mirror contemporary practice environments, supporting deeper analytical thinking and the ability to critique algorithmic outputs and biases. At the same time, the use of GenAI raises epistemological tensions between nursing's humanistic ways of knowing and AI's computational logic. These tensions underscore concerns that tacit knowledge, ethical reasoning, and patient-centered judgement may be marginalized if GenAI tools are used uncritically. Addressing this challenge requires adapting nursing theory and curriculum to incorporate digital epistemologies while maintaining the profession's ethical and relational foundations. This commentary concludes that rather than discouraging GenAI use, nursing education must embrace it deliberately and ethically. Through intentional curriculum design, faculty development, and emphasis on AI literacy, educators can ensure that nursing students emerge as competent, reflective practitioners capable of navigating GenAI-enabled healthcare environments with confidence and integrity.
    Keywords:  Artificial intelligence; Clinical judgement; Critical thinking; Nursing education; Transition to practice
    DOI:  https://doi.org/10.1016/j.nedt.2026.107076
  4. Front Psychiatry. 2026;17:1741240.
      Dissociative Identity Disorder (DID) remains one of psychiatry's most doubted diagnoses, where patients' accounts are dismissed and their experiences forced into ill-fitting diagnostic categories. This article examines how testimonial and hermeneutical injustices manifest in clinical practice, from skepticism about the disorder's validity to documentation that renders patients' trauma histories incoherent. These failures delay accurate diagnosis, erode therapeutic alliances, and create clinical records that now train artificial intelligence systems. As AI tools increasingly shape psychiatric decision-making, we face an urgent reality: if clinicians cannot recognize or document complex trauma accurately, automated systems will scale these failures exponentially. Drawing on DID research and epistemic justice frameworks, I argue for immediate reforms in clinical documentation, psychiatric training, and data governance to prevent algorithmic amplification of longstanding harms.
    Keywords:  artificial intelligence in psychiatry; clinical documentation; diagnostic bias; dissociative identity disorder; electronic health records; epistemic injustice and psychiatry; trauma-informed care
    DOI:  https://doi.org/10.3389/fpsyt.2026.1741240
  5. JMIR Med Educ. 2026 Mar 12;12:e85228.
       Background: Advancements in artificial intelligence (AI) are transforming health care, particularly through AI-driven clinical decision support systems (AI-CDSS) that aid in predicting disease progression and personalizing treatment. Despite their potential, adoption remains limited due to clinician concerns about algorithm misuse, misinterpretation, and lack of transparency.
    Objective: This qualitative study explores clinicians' informational needs and preferences for understanding and appropriately using AI-CDSS in decision-making. In parallel, it explores AI experts' perspectives on what information should be communicated to enable safe and appropriate use of AI-CDSS.
    Methods: A qualitative description design study was conducted using semistructured interviews with 16 participants (8 clinicians and 8 AI experts). Discussions focused on experiences with AI, informational needs, and feedback on existing reporting standards, including Model Cards, Model Facts, and the Transparent Reporting of a multivariable prediction model for Individual Prognosis Or Diagnosis-Artificial Intelligence (TRIPOD-AI) checklist. The transcripts were analyzed through codebook thematic analysis.
    Results: Four key themes were identified: (1) clinicians need clear information on training data (origin, size, and inclusion and exclusion criteria) to judge model applicability; (2) performance metrics must go beyond the area under the curve (AUC) and be clinically relevant to support informed decisions; (3) limitations and warnings about inappropriate use should be specific and clearly communicated to prevent misuse; and (4) information should be presented in layered, customizable formats within existing clinical software, avoiding unnecessary jargon and allowing optional deeper explanations (a schematic sketch of one such layered report follows this entry). While each of the reviewed reporting standards offered strengths, none was considered sufficient alone. Participants recommended a combined, clinician-centered approach to information delivery. Alignment of reporting standards with clinical workflows and decision thresholds was thought to be crucial to bridge the usability gap.
    Conclusions: To improve AI-CDSS adoption in clinical practice, reporting standards must be designed for better clinician comprehension and usability. Enhancing transparency, particularly regarding training data and performance, can likely help clinicians assess AI-CDSS more effectively. Information should be delivered in an accessible, layered format, fitting clinical workflows. Co-creation with clinicians throughout AI-CDSS development was a cross-cutting theme, highlighting its importance in ensuring tools are not only technically sound but also practically usable. Future research should explore how to structurally report on performance and validation metrics for clinician understanding and assess the impact of information provision on AI-CDSS adoption.
    Keywords:  AI implementation; artificial intelligence; co-creation; delivery of health care; informational needs; reporting standard; transparency
    DOI:  https://doi.org/10.2196/85228
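      A minimal, hypothetical sketch of the layered delivery described in theme (4): a clinician-facing top layer carrying the fields participants asked for (training data provenance, clinically relevant performance, explicit misuse warnings) over an optional deeper technical layer. All class names, field names, and values below are illustrative assumptions, not drawn from Model Cards, Model Facts, or TRIPOD-AI.

# Hypothetical layered AI-CDSS report; all names and values are illustrative.
from dataclasses import dataclass, field

@dataclass
class TrainingDataSummary:
    origin: str                # theme (1): where the data came from
    size: int                  # theme (1): number of patients or records
    inclusion_criteria: str
    exclusion_criteria: str

@dataclass
class LayeredModelReport:
    intended_use: str
    training_data: TrainingDataSummary
    performance: dict          # theme (2): clinically relevant metrics, not AUC alone
    warnings: list             # theme (3): specific inappropriate-use cases
    technical_detail: dict = field(default_factory=dict)  # optional deeper layer

    def summary_card(self) -> str:
        """Top layer: what a clinician sees first inside the clinical software."""
        lines = [
            f"Intended use: {self.intended_use}",
            f"Trained on: {self.training_data.origin} (n={self.training_data.size})",
        ]
        lines += [f"{name}: {value}" for name, value in self.performance.items()]
        lines += [f"WARNING: {w}" for w in self.warnings]
        return "\n".join(lines)

# Example with a fictitious sepsis-risk model.
report = LayeredModelReport(
    intended_use="Flag adult ICU patients at elevated sepsis risk",
    training_data=TrainingDataSummary(
        origin="3 academic hospitals (illustrative)", size=41250,
        inclusion_criteria="Adults admitted to ICU",
        exclusion_criteria="Pediatric and pregnant patients"),
    performance={"Sensitivity at alert threshold": 0.87, "PPV": 0.31},
    warnings=["Not validated for emergency-department triage"])
print(report.summary_card())

      The point is the shape rather than the exact fields: the top layer stays short and jargon-free, while deeper validation detail remains one step away, matching the layered, customizable presentation the participants described.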
  6. Account Res. 2026 Mar 15:2645390.
      In this article, we discuss the growing problem of hallucinated citations produced by Generative Artificial Intelligence (GenAI) in scholarly research and writing. We argue that GenAI hallucinated citations might qualify as a provable instance of research misconduct under the U.S. federal regulations when a) the researcher uses a GenAI tool to produce hallucinated (i.e., nonexistent) citations for a research document; b) the citations function as data because they directly support research findings, as in, for example, review articles or bibliometric studies; and c) the researcher demonstrates indifference to the risk of fabrication of the data (i.e., citations) because they did not check the GenAI's output for veracity and accuracy (a minimal DOI-verification sketch follows this entry). Other types of problematic citations, such as bibliometrically incorrect or contextually inaccurate citations, are indicative of poor scholarship and irresponsible behavior but do not qualify as research misconduct. Recognizing that GenAI hallucinated citations could be regarded as research misconduct in certain cases will hopefully encourage researchers to take this problem more seriously than they do now. In partnership with scientific institutions, funders, and professional societies, the scholarly community should work on establishing, promoting, and enforcing standards for responsible use of AI in research, including standards pertaining to citation practices.
    Keywords:  Hallucinated citations; fabrication; generative artificial intelligence; publication ethics; research misconduct
    DOI:  https://doi.org/10.1080/08989621.2026.2645390
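      As a practical complement to the article above: one way to catch the "nonexistent citation" failure mode it describes is to resolve every DOI in a draft against the public Crossref REST API (https://api.crossref.org/works/{doi}) before submission. The sketch below shows the idea; the DOI list is illustrative, and a DOI that resolves can still be bibliometrically or contextually wrong, so this check is necessary but not sufficient.

# Minimal sketch: flag DOIs that do not resolve in Crossref as candidate
# hallucinations. Existence is necessary but not sufficient for a sound citation.
import urllib.error
import urllib.parse
import urllib.request

def doi_exists(doi: str) -> bool:
    """Return True if Crossref has a record for this DOI."""
    url = "https://api.crossref.org/works/" + urllib.parse.quote(doi)
    try:
        with urllib.request.urlopen(url, timeout=10) as resp:
            return resp.status == 200
    except urllib.error.HTTPError as err:
        if err.code == 404:          # no record registered for this DOI
            return False
        raise                        # rate limits, outages: needs human review

if __name__ == "__main__":
    candidate_dois = [               # illustrative list, e.g. from a GenAI draft
        "10.1080/08989621.2026.2645390",
        "10.9999/fabricated.example.citation",
    ]
    for doi in candidate_dois:
        verdict = "found" if doi_exists(doi) else "NOT FOUND - verify manually"
        print(f"{doi}: {verdict}")

      Matching the metadata Crossref returns (title, authors, journal) against the reference string would also catch the bibliometrically incorrect citations the authors distinguish from outright fabrication, but that requires fuzzy matching beyond this sketch.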