Capturing the Potential of Generative AI’s Use in Health and Medicine Requires Collaboration and Oversight, Consideration of Risks, Says NAM Special Publication
News Release
By Dana Korsen
Last update April 10, 2025
WASHINGTON — While the integration of generative AI in health care holds significant potential to transform the practice of medicine and the health and well-being of patients, successful, ethical, and equitable implementation of generative AI requires careful consideration of the associated risks, says a new special publication from the National Academy of Medicine. Collaboration among health care providers, patients, policymakers, ethicists, and researchers, along with a cross-sector commitment to maximizing the benefit of generative AI while minimizing the risks, is essential for navigating these complexities.
Artificial intelligence encompasses a broad range of technologies designed to perform complex tasks typically associated with human intelligence, such as reasoning, learning, and problem-solving. Generative AI is a type of artificial intelligence that focuses on creating new content by learning patterns from existing data, and it produces various types of content, including text, imagery, and audio. Large language models are a subset of generative AI that specializes in interpreting and generating human language to create text. Users of generative AI guide the creation of text using prompts and post-processing actions to further refine it and, if needed, correct errors, omissions, and fabrications.
Generative AI has demonstrated potential for reducing clinician burden and delays in care, as well as for supporting biomedical discovery in areas such as drug development, diagnostics, and clinical trial management, the publication says. It can also transform complex medical information into understandable formats to help patients and their support networks comprehend diagnoses and treatment plans, leading to better-informed and more engaged patients.
The primary risks that use of generative AI in health care poses include data privacy and security concerns, bias, output limitations, algorithmic brittleness, and hallucinations — which are when a tool produces information that is incorrect, misleading, or not based on fact despite seeming plausible. To mitigate these risks, the publication says, it is important for stakeholders to work together to establish guidelines and regulations that protect patient data, ensure fairness and transparency, and evaluate the effectiveness and safety of generative AI applications in health care.
“The path forward for integrating generative AI into health care should involve collaboration to ensure that its deployment and use are intentional, coordinated, and ethically sound,” said Thomas M. Maddox, chair of the authoring group, professor of cardiology at Washington University School of Medicine, and executive director of the Healthcare Innovation Lab, a joint effort of the School of Medicine and BJC HealthCare. “As these technologies evolve, the policies and best practices that guide trustworthy and responsible use of generative AI tools in health care should as well.”
Accountability mechanisms could include creating and implementing a governance framework across health systems and organizations; enhancing standards for testing and training large language models and generative AI on diverse datasets; and requiring health care providers to have training and certification on use of large language models and generative AI in clinical settings. Furthermore, periodic local testing and oversight of generative AI in real-world clinical settings will be essential, the publication says.
“These innovations show promise for improving health care delivery, advancing medical research, and augmenting the capacity of clinicians to provide personalized care at an unprecedented scale, yet they also raise concerns about privacy, bias, and the role of human judgment in clinical decision-making,” said Victor J. Dzau, president of the National Academy of Medicine. “By enhancing professional education, addressing workforce needs, and reimagining patient engagement, this publication provides a road map for ensuring that the integration of generative AI in health and medicine benefits everyone — patients, providers, and the broader health ecosystem.”
This special publication was completed with support from the Doris Duke Foundation and the Gordon and Betty Moore Foundation. The special publication was authored by experts who were assembled under the charge of the National Academy of Medicine. The views presented in the publication are those of individual contributors and do not represent formal consensus positions of the sponsoring organizations, authors’ organizations, the National Academy of Medicine, or the National Academies of Sciences, Engineering, and Medicine.
The National Academy of Medicine, established in 1970 as the Institute of Medicine, is an independent organization of eminent professionals from diverse fields including health and medicine; the natural, social, and behavioral sciences; and beyond. It serves alongside the National Academy of Sciences and the National Academy of Engineering as an adviser to the nation and the international community. Through its domestic and global initiatives, the NAM works to address critical issues in health, medicine, and related policy and inspire positive action across sectors. The NAM collaborates closely with its peer academies and other divisions within the National Academies of Sciences, Engineering, and Medicine.
Contact:
Dana Korsen, Director of Media Relations
Office of News and Public Information
202-334-2138; email news@nas.edu