Toward a Research Agenda for Securing Artificial Intelligence

In progress

The National Academies of Sciences, Engineering, and Medicine will develop an issue paper on AI security, focusing on the unique risks and vulnerabilities of AI systems. The paper will highlight where traditional cybersecurity models fall short, explore new threats and defenses, and identify key research opportunities to better protect AI.

Description

The National Academies of Sciences, Engineering, and Medicine will prepare an issue paper exploring the emerging discipline of artificial intelligence (AI) security, drawing in part on a public discussion organized under the auspices of its Forum on Cyber Resilience.
The issue paper will explore new security threats and risks to AI and AI-enabled systems resulting from their unique attributes and uses. The emphasis will be on securing AI systems rather than the use of AI for offensive or defensive cybersecurity. Potential issues to be considered include the following:
• The applicability and limitations of conventional cybersecurity models in the context of AI systems;
• Novel vulnerabilities, attack surfaces, and threat models specific to AI systems;
• New defensive techniques and approaches for AI systems; and
• Key knowledge gaps in AI security and promising research directions.
The authors will distill insights from the Forum discussions and further explore the challenges of securing AI systems, along with promising paths forward.
