As artificial intelligence (AI) becomes more publicly available, it may be embedded more commonly into workflows across sectors and may make increasingly autonomous decisions. In health care, researchers and clinicians are beginning to explore the role of AI in clinical diagnosis, disease monitoring, and predicting treatment outcomes. Decisions made by AI in clinical settings can have life-changing consequences, such as when screening tissue samples for breast cancer (McKinney et al., 2020) or selecting patients for kidney transplantation (Deshpande, 2024). In response to increasing AI use, regulators are likely to create guidance for the safe, responsible, and equitable use of AI (Levin and Downes, 2023). In 2021, the National Academies of Sciences, Engineering, and Medicine held a workshop on AI trustworthiness, at which panelists discussed concerns regarding the accuracy, trustworthiness, and fairness of decisions made by AI systems (NASEM, 2021). The transparency of algorithms’ decision-making processes, considerations for user interfaces, stakeholder involvement, and personal data safety and privacy are all active topics of ethical debate (Sankaran et al., 2021).
___________________
1 The planning committee’s role was limited to planning the workshop, and the Proceedings of a Workshop has been prepared by the workshop rapporteur as a factual summary of what occurred at the workshop. Statements, recommendations, and opinions expressed are those of individual presenters and participants and are not necessarily endorsed or verified by the National Academies of Sciences, Engineering, and Medicine, and they should not be construed as reflecting any group consensus.
According to the World Health Organization (WHO), conditions affecting the brain and nervous system are the leading cause of illness and disability globally (Steinmetz et al., 2024). Because of that prevalence, it may be crucial to consider the effects of AI guidelines and policies on neurological clinical care and neuroscience research, given the uniquely intertwined relationship between AI and neuroscience. Cognitive neuroscience discoveries were the foundation for the deep learning models that drive AI today (Hassabis et al., 2017). The earliest artificial neural networks were directly influenced by the brain’s neural networks and learning mechanisms (Hebb, 1949; McCulloch and Pitts, 1943; Rosenblatt, 1958). Parallel distributed processing (PDP), which laid the foundation for current state-of-the-art neural network models, was also inspired by dynamic interactions between neurons in the brain (Rumelhart et al., 1986). Reinforcement learning (RL), a widely used class of algorithms that trains AI models to optimize decisions, was originally developed based on studies of animal learning behavior (Barto and Sutton, 1997).
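To make the early learning mechanisms referenced above more concrete, the brief Python sketch below illustrates a Hebbian-style weight update; the network size, inputs, and learning rate are hypothetical and do not reproduce any specific historical model.

```python
import numpy as np

# Minimal Hebbian-style learning sketch: the weight between an input and the
# output unit strengthens when their activities co-occur ("fire together,
# wire together"). All values here are illustrative only.
rng = np.random.default_rng(0)
n_inputs = 4
weights = rng.normal(scale=0.1, size=n_inputs)  # small random initial weights
eta = 0.01                                       # learning rate

for _ in range(100):
    x = rng.random(n_inputs)        # presynaptic (input) activity
    y = float(weights @ x)          # postsynaptic (output) activity, linear unit
    weights += eta * y * x          # Hebb's rule: delta_w = eta * x * y

# Note: the unmodified Hebbian rule lets weights grow without bound; later
# variants (e.g., Oja's rule) add normalization to stabilize learning.
print(weights)
```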
In turn, machine learning tools are increasingly used in neuroscience research. AI techniques are powerful tools for data analysis and can also model brain functions and help generate new testable hypotheses (Kohoutová et al., 2020). For instance, neuroscientists were able to map the activity of dopaminergic neurons in the midbrain to RL algorithms and have since continued to use RL as a computational model of human learning and decision making (Subramanian et al., 2022).
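As a rough illustration of how RL can serve as a computational model of learning, the minimal temporal-difference (TD) sketch below, written in Python, computes the reward prediction error that is often compared to phasic dopaminergic activity; the task structure, states, and parameters are assumptions for illustration and are not drawn from the studies cited above.

```python
import numpy as np

# Minimal temporal-difference (TD) learning sketch: the prediction error
# "delta" plays the role often compared to phasic dopaminergic activity.
# The task, states, and parameters are illustrative only.
n_states = 5                   # hypothetical chain of 5 sequential states
values = np.zeros(n_states)    # learned value estimates V(s)
alpha, gamma = 0.1, 0.95       # learning rate and discount factor

def td_update(state, next_state, reward):
    """Apply one TD(0) update and return the reward prediction error."""
    next_value = 0.0 if next_state is None else values[next_state]
    delta = reward + gamma * next_value - values[state]   # prediction error
    values[state] += alpha * delta
    return delta

# Repeated trials through the chain; only reaching the final state is rewarded.
for trial in range(200):
    for s in range(n_states):
        nxt = s + 1 if s + 1 < n_states else None
        r = 1.0 if nxt is None else 0.0
        td_update(s, nxt, r)

# Earlier states acquire value as the reward prediction propagates backward.
print(values)
```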
Given the intertwined nature of AI and neuroscience, the National Academies’ Forum on Neuroscience and Nervous System Disorders hosted a workshop on March 25–26, 2024,2 to examine challenges and opportunities in interdisciplinary research, building community trust, and policy development across neuroscience and AI.
This workshop convened experts, thought leaders, and individuals with lived experience with central nervous system (CNS) disorders to explore how neuroscience has shaped AI development, the application of AI in neuroscience research, regulatory concerns surrounding the use of AI for brain health, and strategies for increasing public understanding of AI (see Box 1-1). By exploring how the brain has historically inspired AI advancements and the potential of AI models to push the field of neuroscience forward, the workshop considered opportunities for future interdisciplinary research and clinical application.
___________________
2 For more information on the workshop, see https://www.nationalacademies.org/event/41351_03-2024_exploring-the-bidirectional-relationship-between-artificial-intelligence-and-neuroscience-a-workshop (accessed June 11, 2024).
The workshop also discussed ethical and regulatory considerations surrounding neurotechnology development and how to ensure the responsible use of AI while continuing to encourage scientific innovation. Participants considered areas where AI could be integrated into medical and clinical environments and how best to serve diverse populations in that process. Sessions explored how to involve partners more effectively across sectors—especially people with lived and living experience of a CNS disorder—in technology development and how to communicate effectively with the public about AI. Finally, workshop participants discussed how neuroscientists and computer scientists can aid policymakers in making key decisions about the responsible use of AI moving forward.
Frances Jensen, professor and chair of neurology at the Perelman School of Medicine, University of Pennsylvania, and codirector of the Penn Translational Neuroscience Center, and John Krystal, Robert L. McNeil Jr. Professor of Translational Research, chair of the Department of Psychiatry at the Yale University School of Medicine, and chief of psychiatry at Yale New Haven Hospital, outlined the workshop’s main goals—to better understand how neuroscience informs AI, where AI can in turn inform neuroscience, and the emerging role of AI in brain health care. Jensen emphasized that neuroscientists are uniquely situated to help expand the appropriate use of AI in brain health as the field of AI continues to rapidly develop.
Magali Haas, founder and board chair of Cohen Veterans Bioscience, said that, for the purposes of the workshop, neuroscience is defined broadly to include the study of basic neural mechanisms, cognitive and behavioral neuroscience, and psychology. Artificial intelligence, in turn, encompasses the machine learning techniques and statistical approaches used to build intelligent machines. Many fundamental concepts in AI, like RL (Sutton and Barto, 1998) and modularity (Amer and Maul, 2019), are borrowed from the brain, which has been a model system since the earliest days of machine learning. In benchmark tests, AI systems are often measured against human performance. “Many aspects of brain function are still a black box to us,” Haas said, so “the frontier of neuroscience has become the frontier of artificial intelligence.”
Simulations run by artificial neural networks can test, validate, and even generate neuroscience hypotheses. Haas recalled presenting this idea at a membership meeting of the National Academies’ Forum on Neuroscience and Nervous System Disorders 14 years ago—at the time, it seemed “very far-fetched,” but today, the field has progressed in building systems models of the brain so that AI can meaningfully contribute to neuroscience research.
Haas underscored the importance of harnessing AI to solve big problems in neuroscience. The WHO predicts that the global toll of brain disorders will exceed that of all other diseases by 2030 (Steinmetz et al., 2024), and Haas said that health care systems need to be capable of handling that burden. Integrating AI into health care, Haas said, can enhance operational efficiencies, reduce costs, improve diagnostics, uncover new therapeutics, and allow direct engagement with patients. However, as AI begins to more closely approximate and model the brain, it raises new questions around regulation, policy, and neuroethics. This workshop, she concluded, aimed to address these topics through a series of discussions.
To contextualize the workshop in the broader history of neuroscience and AI, Terrence Sejnowski, Francis Crick Professor at the Salk Institute for Biological Studies, codirector of the Institute for Neural Computation at the University of California San Diego, and president of the Neural Information Processing Systems Foundation, highlighted deep learning and RL as two machine learning approaches with strong parallels to the architecture of the brain. These approaches are now ubiquitous in the world of AI and have contributed to its dramatic advancement in the last two decades. For example, convolutional neural networks, which use deep learning, dramatically outperformed prior image recognition systems in 2012 (Krizhevsky et al., 2012). The hierarchical structure of these models mirrors that of the primate visual processing system, as studied by Jim DiCarlo, professor of systems and computational neuroscience at the Massachusetts Institute of Technology (MIT) and director of MIT’s Quest for Intelligence initiative, and his research group (Cadieu et al., 2014).
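For readers less familiar with convolutional neural networks, the small Python (PyTorch) sketch below stacks convolutional stages into the kind of hierarchy described above; the layer sizes and input dimensions are arbitrary assumptions and are not drawn from any model discussed at the workshop.

```python
import torch
import torch.nn as nn

# Minimal convolutional hierarchy: each stage aggregates over a progressively
# larger region of the image, loosely analogous to successive stages of the
# primate ventral visual stream. Layer sizes are illustrative only.
class TinyConvNet(nn.Module):
    def __init__(self, n_classes: int = 10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),   # local, early features
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),  # intermediate features
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),  # global, object-level features
        )
        self.classifier = nn.Linear(64, n_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        h = self.features(x)
        return self.classifier(h.flatten(1))

model = TinyConvNet()
out = model(torch.randn(1, 3, 64, 64))  # one dummy RGB image
print(out.shape)                        # torch.Size([1, 10])
```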
While deep learning reflects the cerebral cortex, Sejnowski said that RL models capture dopamine pathways in the basal ganglia (Montague et al., 1996). Combined, these two systems can perform at human levels (or better) at challenging tasks. “If you put together those two algorithms—the cortex and the basal ganglia—you could create a computer program that could beat the world champion in Go,” he said. More recently, the technological advancements of AlphaFold and ChatGPT have revolutionized AI and shifted the field’s expectations of what is possible for the future of machine learning (Callaway, 2020). In fact, he noted that, given the rapid pace of advancement, the field cannot predict what the next transformative application will be.
Despite impressive recent progress, AI is still in its early stages, and Sejnowski said he believes it will take decades of additional research before researchers have a clear understanding of how artificial neural networks operate. That said, Sejnowski urged participants to avoid the term “black box” in reference to the hidden inner workings of machine learning models:
These neural networks are not black boxes. You’ve heard this over and over again in the press. They’re not—they’re completely open. You can look into them. You can see every single connection, every single activity for every single input. There is no excuse. The only thing that’s a black box about it is our brains aren’t able to understand what’s in the box. But that’s about to change, because of the fact that mathematicians are getting interested.
He concluded by expressing optimism about the exciting AI advancements to come in the next decade and underscored the workshop’s overarching aim to explore how AI can be used to help people.
This Proceedings of a Workshop summarizes the presentations and discussions from the workshop (see Appendix B for the workshop agenda). Chapter 2 focuses on the unique role of neuroscience in the past, present, and future of AI; harnessing AI for neuroscientific discovery; and navigating the intersection of AI and neuroscience. Chapters 3–6 provide speaker perspectives on research and development considerations for neuroscience and AI (Chapter 3); integrating AI into health care (Chapter 4); AI regulation and public engagement (Chapter 5); and AI regulation and policy advocacy (Chapter 6). The workshop concluded with a panel discussion synthesizing the previous sessions and highlighting potential opportunities for moving the field forward (Chapter 7). The references are listed in Appendix A.
A planning committee of the National Academies of Sciences, Engineering, and Medicine will host a 1.5-day public workshop to explore the application of artificial intelligence (AI) in neuroscience research and discuss how neuroscientific discoveries have aided in the development and advancement of AI technologies. This workshop will convene a diverse group of leaders and experts across sectors within the neuroscience and AI ecosystems to further the conversation on current and potential uses of AI in neuroscience and strategies to enhance public and regulatory understanding of the implications of AI utilization.
The invited presentations and discussions may
A planning committee developed the agenda for the workshop, selected and invited speakers and discussants, and moderated the discussions. Following the workshop, proceedings of the presentations and discussions will be prepared by a designated rapporteur in accordance with institutional guidelines.