For over a century, advances in science and technology have raised concerns spanning ethical, legal, safety, security, and environmental risks, often shaped by the type products, materials, knowledge, or technologies involved. Although many of these concerns are not exclusive to the life sciences and biotechnology, certain issues are uniquely tied to these fields. For instance, the development of techniques to cut and paste DNA between organisms raised significant ethical and safety questions regarding the potential risks of engineered organisms (Berg, 2004, 2008). Currently, a tapestry of policies governs these and other advances, several of which aim to reduce ethical, environmental, safety, and security risks. Some of these policies are described Chapter 2. Within this complex landscape sits advances in biotechnology and biomanufacturing. The 2022 Executive Order on Advancing Biotechnology and Biomanufacturing Innovation for a Sustainable, Safe, and Secure American Bioeconomy1 calls for biotechnology research and development to be conducted in a safe and secure manner and for the benefit of all Americans and the global community. The 2023 Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence2 recognizes that “harnessing AI for good and realizing its myriad of benefits requires mitigating its substantial risks” and includes a request to the National Academies to evaluate biosecurity harms, risk mitigation strategies, and benefits for biodefense and preparedness, which are the subjects of an ongoing study3 that is expected to be completed in January 2025. Despite these calls to balance safety and security risks with benefits, few, if any, methodologies or conceptual frameworks exist to assess these risks and benefits in a consistent, defensible, and objective manner; manage these risks to safeguard scientific progress and benefit to various sectors including defense and intelligence sectors; and account for the diversity of risk that may be relevant to the biotechnology. Therefore, the 2022 CHIPS and Science Act requested a National Academies’ study on the ethical, legal, environmental, safety, security, and other issues associated with engineering biology, which was recently initiated (Office of the Federal Register, 2022).
Discussion of the relevance of, scope of, and improvements to existing biotechnology policy frameworks (e.g., the 2018 framework in Biodefense in an Age of Synthetic Biology, which integrated actor and technology assessments to inform risk) will continue as new technologies and techniques to improve manipulation of living systems are developed. However, research involving the use of biological materials, data, and knowledge in ways that are not intended for agricultural or human health applications raise
___________________
1 See Exec. Order No. 14081, 87 C.F.R. 56849 (2022), https://www.federalregister.gov/d/2022-20167 (accessed August 22, 2024).
2 See Exec. Order No. 14110, 88 C.F.R. 75191 (2023), https://www.federalregister.gov/d/2023-24283 (accessed August 22, 2024).
3 See https://www.nationalacademies.org/our-work/assessing-and-navigating-biosecurity-concerns-and-benefits-of-artificial-intelligence-use-in-the-life-sciences (accessed August 22, 2024).
entirely new technical and policy questions. The research environment needed to innovate with biology is accompanied by critical questions about ethics, risk and regulatory uncertainty (NASEM, 2020). Are the risks associated with these advances and products that integrate biological and non-biological components (i.e., biohybrid materials) treated the same as those for engineered organisms? How are the risks and benefits of these advances assessed? Beyond assessing risk from a regulatory perspective, how do these advances relate to existing legal and ethical frameworks, including the U.S. implementation legislation for the Biological and Toxins Weapons Convention?
Failing to ensure that the uses of artificial intelligence/machine learning (AI/ML) with biotechnologies are deployed in a manner that is ethical and socially responsible would damage U.S. strategic national security interests and public support. Ethical issues associated with such concepts as moral agency, human enhancement, or health equity are not new, either in civilian or military contexts. But the intersection of biotechnology with emerging applications of AI/ML and automated experimentation has magnified these issues to the point of needing fresh examination. Ethical use of AI in biotechnology, whether for medical, agricultural, or other uses, will frequently require attention to data integrity and bias; data source ownership and privacy; and equitable access to the benefits of the innovations, regardless of whether the data are held in local, national, or international repositories. In the context of national security, additional ethics concerns revolve around warfighter applications and personal autonomy; the dual-use dilemma; the prospect of an AI-fueled biotechnology arms race; loss of human moral agency over the processes, products, and uses of AI-enabled biotechnology; and the challenge of ensuring trust over processes often obscured from human observation.
Biotechnology is largely regulated with respect to many of its uses, particularly those associated with medical and agricultural products (briefly referenced in the Policy and Governance section of Chapter 3). In addition to those areas, biotechnology is the subject of widely varying ethical norms, which may be further complicated in research using AI/ML with biotechnologies because of separate efforts to define guidelines for ethical use of AI. Several international entities have developed guidance for ethical use of AI, such as UNESCO,4 individual companies,5 nonprofits such as AlgorithmWatch6 and AI Now Institute,7 and academic centers such as CHAI: The Center for Human-Compatible Artificial Intelligence.8 Within the U.S. government, for example, the Defense Advanced Research Projects Agency has placed a new emphasis on ethical, legal, and social implications of emerging science, including AI.9 Oversight and/or incentives to develop and deploy AI ethically and carefully is a central goal of many ongoing efforts to define responsible AI practices. Within this plethora of efforts, certain principles recur, including the following:
___________________
4 See https://www.unesco.org/en/artificial-intelligence/recommendation-ethics (accessed September 27, 2024).
5 See, e.g., https://www.continental.com/en/press/studies-publications/other-publications/code-of-ethics-for-artificial-intelligence/ (accessed September 27, 2024).
6 See https://algorithmwatch.org/en/ (accessed September 27, 2024).
7 See https://ainowinstitute.org (accessed September 27, 2024).
8 See https://humancompatible.ai (accessed September 27, 2024).
9 See https://www.darpa.mil/our-research/ethical-legal-societal-implications-of-research (accessed September 27, 2024).
In addition to these key principles, the use of AI/ML with biological data can challenge personal autonomy and moral agency, transparency, and trust. In the specific context of military applications, attention has already been paid to the challenges of maintaining a sense of moral agency when people are removed from the methods and consequences of action (e.g., a pilot who cannot see the victims of a bombing, a warfighter who is remotely operating a drone or weapon hundreds of miles away (Horowitz, 2016), or a programmer working on algorithms that can be used to more quickly analyze and synthesize novel biologics with dangerous potentials (Loução, 2024)). To help address this challenge, developers or end-users of biotechnology-associated AI-systems that are deployed within a national security context could receive training that sensitizes them to the delicate balance between their professional duties and their personal moral agency, be given the information and tools to evaluate their roles critically and to detect limitations and inaccuracies in the AI-based algorithms that they use, and be granted the opportunity to advocate for uses that are consistent with national and personal values. Generative AI complicates this problem further by its lack of transparency, complicating efforts to trace choice of action to consequences and trust in the system (Marr, 2024).
Trust is key in a field such as biotechnology, in which advances could be used for benefit or harm depending on the actor’s intentions and the research needed for defensive uses can be difficult to distinguish from, or difficult to prevent being diverted to, offensive uses (Urbina et al., 2022). In this context, developing trust likely will require multinational cooperation aimed at maximizing transparency over data and algorithms, well-defined export controls and lists or criteria for problematic areas of research, and education of the humans integral to these systems so that everyone feels personally responsible for the consequences of their contributions to the biotechnology products and uses that emerge from incorporating AI (Lentzos, 2020).
Recommendation 6: For the BioCATALYST network, the U.S. Department of Defense and the Office of the Director of National Intelligence should establish a research program to identify ethical, legal, social, environmental, economic, and other concerns related to research involving artificial intelligence/machine learning, automated experimentation, and biotechnology in the context of national security. They should
- Ensure that the results of such research are appropriately disseminated;
- Require that interdisciplinary artificial intelligence/biotechnology research centers described above conduct activities that address these concerns;
- Develop training materials for developers and end-users to sensitize them to the moral agency challenges;
- Provide for outreach by convening regular and ongoing public discussions;
- Develop evaluation, reporting, and accountability mechanisms that are participatory and transparent.
Recommendation 7: The U.S. Department of Defense and the Office of the Director of National Intelligence should allot a percentage of annual appropriations for the BioCATALYST network to support activities to assessing and addressing ethical, legal, social, environmental, economic, and other concerns related to research involving artificial intelligence/machine learning, automated experimentation, and biotechnology in the context of national security. By dedicating funds for responsible innovation as part of research investment, the BioCATALYST network can ensure that its work does not have any adverse effects on the United States, its people, and its equities.
This recommendation is similar to existing allocations by the National Institutes of Health to support ethical, legal, and societal implications (ELSI) efforts related to biomedical research. Since the mid-1990s, the National Human Genome Research Institute (NHGRI) has allocated 5 percent of its annual extramural budget to “anticipate and address the ethical, legal, and social implications of genetic and genomic research” (Kaufman, 2020). This support was initiated in response to Congressional concern about the social, ethical, and legal issues related to the development of genetic technologies (Cook-Deegan, 1994; U.S. Congress, House of Representatives, 1991).
One approach to reducing the harms from misuse of AI/ML and automation in the life sciences is to shape their development to reduce the risks of deliberate harmful use, while jointly maximizing their utility for beneficial (including defensive) applications. Because these risks exist at the emerging intersection of two distinct disciplines, best practices from both life sciences research oversight and computer science product development may be repurposed. However, the fit between legacy practices and policies is imperfect where the intersection of AI and biotechnologies creates novel challenges, some of which are opportunities to reduce deliberate harmful risks through the applications of AI-enabled biotechnologies themselves.
In response to the 2023 Executive Order on Artificial Intelligence10 and the 2023 UK Government’s request for guidance on AI Safety policies ahead of the UK AI Safety Summit (Department for Science, Innovation and Technology and Donelan, 2023), both frontier AI laboratories in industry and academia released frameworks seeking to reduce the risk of misuse of their models through systems of pre-release capability evaluations to inform mitigations, such as release decisions (Abramson, 2024; Anthropic, 2023; Carter et al., 2024; Christiano et al., 2023; Dragan et al., 2024; EvolutionaryScale, 2024; Inflection AI, 2023; Meta, 2023; Nguyen et al., 2024; OpenAI, 2023). In theory, pre-release evaluations of AI capabilities for unacceptable model behaviors could lead to mitigations through modification of the model and risk-informed release and access decisions. In practice, social, economic, and technical challenges, which themselves are an opportunity for further innovative applications of AI and automation, complicate the respon-
___________________
10 See Exec. Order No. 14110, 88 C.F.R. 75191 (2023), https://www.federalregister.gov/d/2023-24283 (accessed August 22, 2024).
sible development of these frameworks for use across the lifecycle of model creation, access, and ongoing use. Pre-release evaluation of AI model capability is a new science, and methodological limitations may lead to both false-negative and false-positive assessments of AI risk, particularly as AI systems and behaviors become more complex, and therefore, more costly and challenging to systematically interrogate (Chang et al., 2023; Feffer et al., 2024; Ibrahim et al., 2024; Kapoor et al., 2024).
This challenge is heightened for models whose outputs are biological, which unlike digital language or image outputs, cannot be directly assessed by humans for feedback and may require laboratory validation to measure veracity or harmfulness of outputs (Sandbrink, 2023). AI safeguards intrinsic to the model itself, through modifications to pre-training data (Nguyen et al., 2024) or post-training refinements to reduce the capability (Li et al., 2024) or willingness of models to generate risky outputs (Inan et al., 2023), remain in their infancy and brittle, especially when AI model weights are available to would-be malicious actors (Arditi et al., 2024; Pannu et al., 2024; Volkov, 2024). Finally, managing access to AI models is in conflict with academic norms of openness and the economics of inference computation (Callaway, 2024; Leech et al., 2024).
The interface between potential bad actors and the acquisition and use of powerful technologies is an attractive point of control for reducing risk, and a variety of legacy and emerging policy and technical approaches can be leveraged (Carter et al., 2023). If AI model weights are retained, structured access enables identification and refusal of risky behaviors and post-hoc actions to revise models, revoke access, or punish misuse (Shevlane, 2022; Solaiman, 2023). Technology proliferation policies, such as export control regimes, can be updated to manage access of emerging dual-use technologies to potential adversaries (Pannu et al., 2024). However, the considerations for assessing what to add to export control lists can be challenging as experienced following passage of the Foreign Investment Risk Review Modernization Act in 2018 (Office of the Federal Register, 2018).
Recommendation 8: To manage the ongoing risks to U.S. interests and U.S. Department of Defense biotechnology efforts posed by misuse of artificial intelligence models and data, the BioCATALYST network should use evaluations of AI models prior to their release and data risk assessments to inform structured access arrangements, built upon “know-your customer” diligence and AI-enhanced nucleic acid synthesis screening, with the goal of increasing accessibility of tools and data for trusted actors while decreasing the probability of undetected attempts at misuse. The BioCATALYST network should build on efforts by the National Institute of Standards and Technology AI Safety Institutes; private-sector companies developing and deploying AI/machine learning models, particularly those used in biotechnology research and development; and other similar entities working to improve safety and security of AI algorithms. As part of the risk assessments, the BioCATALYST network should evaluate the benefits and trade-offs of using algorithms that are associated with higher risks in a restricted or classified environment.
In response to the 2023 Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence, the DoD sponsored a National Academies’ consensus study on Assessing and Navigating Biosecurity Concerns and Benefits of Artificial Intelligence Use in the Life Sciences, which involves evaluating the potential of biological design tools and foundation models used with biology to increase biosecurity risks, possible mitigation strategies for identified risks, and ways in which use of AI with biology can help the United States achieve its biodefense mission (“Assessing and Navigating Biosecurity Concerns and Benefits of Artificial Intelligence Use in the Life Sciences,” n.d.). Following the release of the 2024 U.S. Government Policy for Oversight of Dual Use Research of Concern and Pathogens of Enhanced Pandemic Potential, the National Science Foundation requested the National Academies to conduct a workshop focusing on engaging publishers and editors to discuss approaches for safeguarding scientific progress and benefits and reducing risks of biological research involving in silico models and computational approaches
at the publication stage. Both projects will provide additional information about risks of using AI with biology and measures for reducing risks to ensure maximization of benefits.
Unlike risks stemming from human error, technology, or evolution, deliberate harmful use (often, stated as misuse) implies an intentional actor, often an adversary (U.S. AI Safety Institute, 2024). Various conceptual frameworks exist to systematically map opportunities for interventions that might reduce misuse risks by decreasing the probability or severity of potential harms. Deterrence frameworks focus on reducing risk by decreasing the probability of misuse occurring either by imposing costs on bad nation-state actors (deterrence by punishment) or by decreasing the potential gains that might accrue to such actors through bad behavior (deterrence by denial of benefit) (Mazarr, 2018). Although deterrence focuses on reducing risks from deliberate harmful use of existing capabilities, dissuasion strategies seek to reduce the development, expansion, or diffusion of emerging capabilities of concern (Krepinevich and Martinage, 2008). Finally, “pathway defeat” approaches entail a detailed causal breakdown of the key steps between a bad actor with an intention to create harm, realize the harmful effects in the world, and guide operational or tactical interventions to prevent attacks from succeeding, including those with potential strategic consequences (United States Special Operations Command, 2019). Compartmentalization will support the efficacy of pathway defeat strategies by preventing the aggregation of sensitive information.
For deterrence to be credible, detection, attribution, and punishment for harmful use must be sufficiently timely and certain. Achieving this goal for biological misuse is challenging, particularly in the context of emerging AI-enabled capabilities and uncertainty about the malicious actor’s intent. Timely detection of emerging biological threats is a perennial challenge. Furthermore, timely punishment of would-be malicious users of AI-enabled biotechnology is further complicated by the shrinking expertise and resource footprint necessary for attempting attack, including of “lone wolf” actors who are difficult to identify regardless of threat.
Recommendation 9: To manage the ongoing risks to U.S. interests posed by misuse of artificial intelligence models and data, the U.S. Department of Defense, U.S. Department of Health and Human Services, U.S. Department of Justice, and the Intelligence Community should conduct joint exercises at least annually, coordinated by the National Security Council, Office of Pandemic Preparedness and Response, and Office of Science and Technology Policy, to assess end-to-end bioincident attribution capabilities and to inform annual budget requests. An annual report on these capabilities should be provided to the House Armed Services Committee, Senate Armed Services Committee, House Permanent Select Committee on Intelligence, and Senate Select Committee on Intelligence.
Unlike some other security-relevant technologies, the difference between offensive and defensive or innocuous biological activities can be subtle, depending on the intentions of actors, which are difficult to infer especially under circumstances of mistrust. This ambiguity of behavior can be, and often is, exploited by U.S. adversaries for misinformation and disinformation purposes (Kupferschmidt, 2018). More gravely, the ease of sincere misperceptions of biological activities and intentions leads to a “security dilemma” whereby state actors infer hostile intentions from defensive or even beneficent activity, leading to unilateral arms-racing behavior, which in the context of biological weapons development poses unacceptable risks to the global population. The gravity of this potential result, and the perniciousness of this dilemma, require
systematic and intentional actions to mitigate. Furthermore, the U.S. government works to counter and mitigate any mis- and disinformation generated and/or spread through social media.11
Recommendation 10: To manage the ongoing risks to U.S. interests and U.S. Department of Defense (DoD) biotechnology efforts posed by misperceptions and mis/disinformation, the DoD through the BioCATALYST network and/or biotechnology efforts should be supported by dedicated transparency measures, in coordination with the U.S. Department of State, to demonstrate compliance with the Biological and Toxin Weapons Convention and related obligations, and to minimize the risks to U.S. strategic interests from misperception, including adversary mis- and disinformation efforts.
- DoD should support research through the BioCATALYST network to understand, assess, and develop effective means for countering misperceptions and mis- and disinformation within legal constraints to enable resiliency among the network and its activities.
Biotechnology encompasses a diverse array of scientific disciplines and applications, several playing a critical role in fortifying national security. Their applications span both biological and non-biological domains, offering new opportunities to enhance existing capabilities, address critical vulnerabilities, and unlock new capabilities for the future. To meet the United States’ current and future needs for national security and defense, the committee provides its vision for a national, strategic resource for advancing U.S. capabilities in AI-enabled biotechnology: Biotechnology Coupled with Artificial Intelligence and Transformative Automation for Laboratory Yielding Strategic Technologies, or BioCATALYST, network. This proposed R&D network is envisioned as an interagency endeavor with involvement of universities and industry and led by DoD and ODNI. By integrating high-performance AI, data analytics, and experimental resources, it aims to overcome significant barriers in national security applications of biotechnology. The network will establish standards and processes for interconnecting AI-experimental workflows, coordinate capabilities across DoD and broader U.S. government initiatives, and foster innovation through AI/ML convergence. Additionally, BioCATALYST would address ethical, legal, social, and perception risks associated with biotechnology and ensure compliance with international obligations through dedicated risk management and research programs. Achieving this vision relies on an integrated approach that prioritizes R&D of biotechnologies, leverages expertise and infrastructure across the United States, navigates the complex landscape that governs biotechnologies, and reduces critical vulnerabilities.
___________________
11 Murthy v. Missouri 603 U.S. (2024).