Human Accountability and Responsibility Needed to Protect Scientific Integrity in an Age of AI, Says New Editorial
News Release
By Dana Korsen
Last update May 21, 2024
WASHINGTON — In an editorial published today in the Proceedings of the National Academy of Sciences, an interdisciplinary group of experts urges the scientific community to follow five principles of human accountability and responsibility when using artificial intelligence in research. The authors also call for the establishment of a Strategic Council on the Responsible Use of AI in Science to provide ongoing guidance and oversight on responsibilities and best practices as the technology evolves.
The editorial emphasizes that advances in generative AI represent a transformative moment for science — one that will accelerate scientific discovery but also challenge core norms and values of science, such as accountability, transparency, replicability, and human responsibility.
“We welcome the advances that AI is driving across scientific disciplines, but we also need to be vigilant about upholding long-held scientific norms and values,” said National Academy of Sciences President Marcia McNutt, one of the co-authors of the editorial. “We hope our paper will prompt reflection among researchers and set the stage for concerted efforts to protect the integrity of science as generative AI increasingly is used in the course of research.”
The 24 authors of the editorial were convened by the National Academy of Sciences, the Annenberg Public Policy Center of the University of Pennsylvania, and the Annenberg Foundation Trust at Sunnylands to explore emerging challenges posed by the use of AI in research and to chart a path forward for the scientific community. The group included experts in behavioral and social sciences, ethics, biology, physics, chemistry, mathematics, and computer science, as well as leaders in higher education, law, governance, and science publishing and communication. Its discussions were informed by a set of commissioned papers, published in Issues in Science and Technology, that explored various aspects of AI and its use and governance.
In the editorial, the authors urge the scientific community to adhere to five principles when conducting research that uses AI: transparent disclosure and attribution; verification of AI-generated content and analyses; documentation of AI-generated data; a focus on ethics and equity; and continuous monitoring, oversight, and public engagement.
For each principle, the authors identify specific actions that should be taken by scientists, those who create models that use AI, and others. For example, for researchers, transparent disclosure and attribution includes steps such as clearly disclosing the use of generative AI in research — including the specific tools, algorithms, and settings employed — and accurately attributing the human and AI sources of information or ideas. For model creators and refiners, it means actions such as providing publicly accessible details about models, including the data used to train or refine them.
The authors recommend that the proposed strategic council be established by the National Academies of Sciences, Engineering, and Medicine, and that it coordinate with the scientific community and provide regularly updated guidance on the appropriate uses of AI. The council should study, monitor, and address the evolving use of AI in science; new ethical and societal concerns, including equity; and emerging threats to scientific norms. It should also share its insights across disciplines and develop and refine best practices.
The Sunnylands Retreat was supported by the Ralph J. Cicerone and Carol M. Cicerone Endowment for NAS Missions.
Contact:
Dana Korsen, Director of Media Relations
Office of News and Public Information
National Academies of Sciences, Engineering, and Medicine
202-334-2138; e-mail news@nas.edu