In the final discussion, panelists considered how to create infrastructure that will sustain collaborative research across neuroscience and artificial intelligence into the future.
Cognitive science and neuroscience research can inform the development of artificial intelligence (AI) algorithms. Through their hypotheses and experiments, cognitive scientists and neuroscience researchers can deepen understanding of natural intelligence, human behavior, biases, and constraints within the brain, all of which can inform AI development. Realizing this potential may require a culture that facilitates and prioritizes interdisciplinary collaboration among cognitive science, neuroscience, computer science, and AI. (Patel, Pavlick, Sejnowski)
Public education will be important for expanding AI literacy. Developers can reach community members where they are to ensure that they have enough understanding of AI tools to provide informed consent to emerging technologies. (Gonzales, Saha, Sejnowski, Wilbanks)
Greater attention to the design of human–AI user interfaces may be critical to generating trust and embedding AI in society. Whether AI is interpretable, ethical, and trusted depends significantly on how humans interact with it, just as a GPS screen that provides clear directions is as important as a good mapping algorithm. Achieving the goals of AI adoption may require combining algorithms and user interfaces as an integrated system. (Mantas, Miller, Schaich Borg)
Academic researchers would benefit from shared resources and a unified vision. Panelists suggested that funders prioritize sustainable, long-term infrastructure for data management and sharing. By pooling resources and homing in on a clear scientific mission, academic researchers may be able to operate with a level of efficiency and computational power that parallels industry leaders. (Anderson, Fang, Hill, Jirsa, Litt, Pavlick, Weicken)
Developing tools alongside the communities they aim to serve can make those tools more effective. Several panelists suggested centering the wants and needs of target users from the beginning of the design process for new technologies to help establish trust and increase the likelihood of public acceptance. (French, Gonzales, Guggemos, Hoque, Mantas, Wilbanks)
The neuroscience and AI research communities may want to consider speaking directly to regulators and policymakers. Emerging technology has the potential to transform brain science and health care, and neuroscientists and AI researchers are well positioned to share what they think regulators and policymakers should prioritize. (Cohen, Haas, Jensen, Shen)
__________________
NOTE: This list is the rapporteurs’ summary of points made by the individual speakers identified, and the statements have not been endorsed or verified by the National Academies of Sciences, Engineering, and Medicine. They are not intended to reflect a consensus among workshop participants.
Sejnowski opened the discussion by asking whether the existence of countries without formal artificial intelligence (AI) regulation posed an “existential threat” to humanity. Hill responded that many experts believe artificial general intelligence,1 if developed, will indeed pose an existential threat. He noted that society is already experiencing the effects of generative AI, including the spread of disinformation, fake images and videos, and job loss. Hill emphasized that this risk increases as AI technology advances.
In the past, the Nuclear Test Ban Treaty2 served as one way to unify countries around regulating a shared concern, Sejnowski said. Given the number of AI researchers who are worried about AI superintelligence, he suggested that people should begin thinking about the risks these systems pose and maintain an off switch that humans could use if necessary.
___________________
1 Artificial general intelligence refers to the idea of AI systems that can use human-like intellect to solve complex tasks. For more information and additional references, see https://www.sciencedirect.com/topics/computer-science/artificial-general-intelligence (accessed July 9, 2024).
2 For more information on the Nuclear Test Ban Treaty, see https://www.archives.gov/milestone-documents/test-ban-treaty (accessed June 11, 2024).
Building on Sejnowski’s suggestion for future AI development and risk prevention, Haas asked panelists to highlight the most memorable points from the workshop and what should be kept in mind as AI continues to develop (see the Highlights box at the beginning of this chapter). Hill emphasized the importance of integrating perspectives and recognizing opportunities for people with lived experience, such as those with central nervous system disorders and their caregivers. Jesús Mantas, independent director and chair of the Compensation and Management Development Committee on the board of Biogen and global managing partner at IBM Consulting, added that while developers often focus on the algorithm powering an application, the design of the interface between humans and the application is usually more important than the algorithm itself. He suggested that spending more time on design, and potentially applying neuroscience principles in the design process, is critical to achieving the types of systematic improvements in business and society that are sought with AI.
Kevin Miller, research scientist on the neuroscience team at Google DeepMind and honorary lecturer at University College London, was struck by the importance of interpretability, both for highly trained experts interpreting the inner workings of AI and for users hoping to understand the basics of an algorithm’s decision-making process. He also pointed out that a vast amount of brain data is available and that interpretable AI can assist with generating insights that are understandable to humans. Therefore, Miller said, AI systems could serve as hypotheses themselves.
Education is a particularly promising area for AI application, Sejnowski said. AI tutors can provide children with one-on-one instruction, and he said this could help bring more educational opportunities to a wide range of traditionally underserved people. This is not necessarily a new concept, Cohen pointed out; computer programs and cognitive models based on them have been used in the classroom for many years, and cognitive science shows that students can learn well this way.
Anindita Saha, associate director for strategic initiatives for the Digital Health Center of Excellence at the Food and Drug Administration, agreed that different people need different levels of explanation and that communication efforts should be tailored to their intended audience, especially when working with multidisciplinary teams. She proposed that bringing accurate representations of AI into the public domain could help to demystify it and increase public trust in emerging technology. Hill emphasized the importance of educating people about the strengths and limitations of AI, as well as promoting literacy in data-oriented thinking, to establish a baseline public understanding of what the technology can and cannot do. He highlighted that certain aspects of AI models like generative pretrained transformers, such as their infinite patience and ability to keep track of progress, make them excellent tutors. However, he pointed out that there is still a lack of understanding about AI’s limits. Hill suggested that to advance the use of AI in education, educators need to adapt their courses to incorporate AI tools and make sense of their limits. This represents an opportunity to set a new baseline of expectations while innovating to encourage deeper thinking.
Panelists discussed how to launch a large-scale AI initiative to address the need for shared computing resources, shared data, auditing, and interdisciplinary research. Hill emphasized the importance of multiscale data integration, bringing together diverse data produced by programs like the Brain Research Through Advancing Innovative Neurotechnologies® (BRAIN) Initiative to build brain-inspired foundation models (BRAIN Working Group, 2014). Saha proposed creating a digital marketplace for shared data to allow people to search for and apply subsets of data toward their own projects “rather than starting from scratch.”
One challenge, Haas said, is that traditional funding mechanisms are typically project-driven, with grants awarded for specific, time-bound research projects. This structure does not readily translate into long-lasting infrastructure, so the panel discussed alternative funding strategies. Sejnowski mentioned the Manhattan Project as one historical example of getting many different people across sectors to collaborate and invest time toward solving a large, urgent problem.3 Jirsa said that in Europe, countries collaborate to fund digital infrastructure like EBRAINS (Schirner et al., 2022), which coordinates an international network of neuroscience data, models, and tools. The intention to integrate across data scales must be set from the outset, he said, so coordinating systems can be organized and scaled up effectively.
Mantas said that trust is required for people to adopt new technology, but earning the public’s trust, especially in neurotechnology, is an ongoing challenge. He shared a finding from the 2024 Edelman Trust Barometer Report (Edelman, 2024): When asked who they trust the most to integrate innovation into society, members of the public by far trusted businesses over the government. Sejnowski countered that trust in large tech companies is declining (Kates et al., 2023). Several panelists suggested that academic researchers and regulatory bodies need to collaborate with the private sector to bring emerging technologies to end users.
___________________
3 For more information on the Manhattan Project, see https://www.nps.gov/mapr/learn/manhattan-project.htm (accessed June 11, 2024).
Mantas also noted that, as a comparative example, GPT-3 has roughly a thousand times fewer parameters than the human brain has synapses (treating the brain’s roughly 100 trillion synaptic connections as a simplified analog of the parameters in a large language model), yet it consumes a million times more energy. The growing carbon footprint of manufacturing AI hardware and training models presents a challenge that neuroscience might be able to address (Patterson et al., 2021). He suggested that neuromorphic computing, or creating hardware that functions more like the brain, could point to ways of addressing the growing AI energy crunch. Sejnowski agreed and added that this technology already exists (e.g., prototypes developed by the Intel Neuromorphic Research Community) but has not been commercialized. Gordon also commented that the concentration of health data by corporate interests presents a major threat to the safety of AI technology in mental health spaces.
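To put the scale of this comparison in perspective, the short sketch below works through the parameter arithmetic. It is a minimal illustration, assuming GPT-3’s publicly reported count of about 175 billion parameters (a figure not stated at the workshop) alongside the roughly 100 trillion synaptic connections cited above.

```python
# Back-of-envelope sketch of the parameter comparison described by Mantas.
# Assumptions: GPT-3 has ~175 billion parameters (publicly reported estimate,
# not a workshop figure); the human brain has ~100 trillion synaptic
# connections, each treated as a rough analog of one model parameter.

GPT3_PARAMETERS = 175e9   # ~175 billion
BRAIN_SYNAPSES = 100e12   # ~100 trillion

ratio = BRAIN_SYNAPSES / GPT3_PARAMETERS
print(f"Brain synapses per GPT-3 parameter: ~{ratio:.0f}x")
# Prints ~571x, i.e., on the order of a thousand times more synapses than
# parameters, consistent with "roughly a thousand times fewer parameters."
# An analogous energy comparison would require further assumptions, such as
# the energy of a full training run versus the brain's ~20-watt draw over a
# chosen time window.
```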
As discussed in Chapter 1, the workshop focused on how neuroscience and artificial intelligence can continue to push each other forward and what the implications might be for research, clinical care, public trust, ethics, and regulation. In her concluding remarks, Haas highlighted opportunities to build national infrastructure for data and software management, to reinvent the health care system, and to redefine brain disorders with insights enabled by AI. She closed the workshop by sharing a “very emphatic call to action,” suggesting that workshop participants reach out to regulators, policymakers, and politicians to inform them of research and development considerations in AI and neuroscience moving forward. “We have a fundamental role in doing so,” she said. “So please go forward and take action.”