Completed
Developments in artificial intelligence and machine learning have made the question of trustworthiness an urgent one. Together with the National Institute of Standards and Technology (NIST), CSTB is developing a workshop to explore the contributions that NIST in particular, and the measurement sciences more broadly, can make to the analysis and development of trustworthy artificial intelligence. The workshop will consider priority areas in AI, the use of measurement science and standardization to ensure AI trustworthiness, potential approaches and frameworks for applying measurement science to AI trustworthiness, and research opportunities for NIST and other bodies.
Featured publication
Workshop in Brief · 2021
On March 3-4, 2021, the National Academies of Sciences, Engineering, and Medicine held a workshop to explore both current assessments and current approaches to understanding and enhancing trustworthy artificial intelligence (AI) and to identify potential paths to contribute to improved assessments o...
Description
A National Academies workshop will consider opportunities for measurement science research at NIST that would advance the trustworthiness of AI systems. The workshop will engage experts and stakeholders from academia, the developers and users of AI in industry, and standards bodies. They will consider priority areas in AI, relevant dimensions of AI trustworthiness, potential approaches and frameworks, and research opportunities for NIST, other research performers and sponsors, and standards bodies.
Questions the workshop will explore include:
- Which AI techniques and applications are priorities for measurement science research today, and what are likely future opportunities?
- What attributes, including but not limited to security, robustness, accuracy, predictability, fairness, and explainability, characterize trustworthy AI systems? How do their definition, interpretation, and significance vary according to context and application?
- What is presently known about how to measure or evaluate these attributes and what research would advance our ability to measure them?
- How could these attributes and approaches to measuring them be applied in risk management frameworks for AI systems?
- Given large investments in AI by companies and other federal research sponsors, how can NIST best leverage its mission focus and capabilities in measurement science to advance our understanding of AI trustworthiness, and ultimately the trustworthiness of deployed systems?
- How could NIST measurement science work contribute to the advancement of AI capabilities by filling standards and evaluation gaps, such as by developing standard test sets and challenge problems?
A workshop proceedings-in-brief summarizing the discussions will be prepared by an Academies rapporteur and published by the National Academies Press following Academies peer review.
Contributors
Sponsors
Department of Commerce
Staff
Brendan Roach
Lead
Jon Eisenberg
Lead
Shenae Bradley