
Artificial Intelligence at the Nexus of Collaboration, Competition, and Change

Proceedings of a Workshop—in Brief


The rapid evolution of artificial intelligence (AI) technology underscores the need for public and private institutions to understand the impact of AI on research and development (R&D), workforce development, and policies and practices in every sector of society. AI offers tremendous benefits and opportunities, but also hazards and challenges, many of which remain unknown.

In recognition of the rapidly changing AI landscape, the Government-University-Industry Research Roundtable (GUIRR) of the National Academies of Sciences, Engineering, and Medicine (the National Academies) convened its membership at a workshop on October 10-11, 2023, in Washington, DC. Guests were invited to discuss the effects of the AI revolution on policy, organizational governance, and strategic cooperation among sectors in the research landscape and workforce development.1

FIRESIDE CHAT: AI “IN THE FIRST INNING”

GUIRR co-chair Darryll Pines (University of Maryland) opened the workshop in conversation with Miriam Vogel (President and CEO, EqualAI; Chair, National AI Advisory Committee) about efforts to achieve responsible and inclusive AI.2 When EqualAI was launched as a nonprofit organization more than five years ago to reduce bias and harms in AI, Vogel observed, most companies that used AI in pivotal ways did not understand that they were in fact “AI companies.” Awareness has grown and national and international standards are becoming clearer, but best practices remain uncertain.

Vogel and Pines agreed about the importance of international collaboration and inclusion. She noted that many federal agencies are involved in international collaborations, working with counterparts in the European Union, as well as G7, G9, and G20 countries. The National AI Advisory Committee (NAIAC), which comprises 26 cross-sectoral experts who advise the Executive Branch, has urged the White House to cooperate with the broad range of countries that are developing and using AI.3

The committee has also prioritized how to involve all Americans through training, inclusive learning, and other efforts. “In the first public meeting, we decided that when we are talking about what AI should aspire to, it would mean AI that is inclusive, effective, responsible, and trustworthy,” she stressed. “We as a country will fall behind if we do not ensure that AI is built by and for a broader cross section of the population. If we are not inclusive in who is developing AI, studies show it will be biased and not as effective.” She also pointed out the array of skills needed to build AI, not just computer science and engineering.

__________________

1 For the workshop agenda, presenters’ biographical sketches, and meeting recordings and presentations, go to https://www.nationalacademies.org/event/778_10-2023_guirr-meeting-artificial-intelligence-at-the-nexus-of-collaboration-competition-and-change.

2 For more information on EqualAI, see https://www.equalai.org/. For more information on the National AI Advisory Committee, see https://ai.gov/naiac/.

3 For a copy of the National Artificial Intelligence Advisory Committee’s Year 1 report, see https://ai.gov/wp-content/uploads/2023/05/NAIAC-Report-Year1.pdf.

When Pines asked whether public interest in generative AI might grow or wane, Vogel replied, “We are not even in the first inning! Our entire world will be exponentially upended by AI. We do not need to fear it, but we need to take ownership.” AI will be part of reality going forward, she added, providing examples of how it can save time and contribute to solutions. If we ensure that there is broad participation from those who have previously not had a voice, she concluded, “Democracy wins.”

Discussion

The discussion began with how to make the goal of inclusion in AI development and implementation a reality. A participant pointed to studies that showed that governments were not meaningfully consulting with the public when developing AI policy. This aspiration has not been met, Vogel said. She underscored the need for people with different skills and education to become involved. She also urged looking at each use case and considering who has not been accounted for and for whom it can fail.

Other areas for further exploration raised by participants included the environmental and health impacts of AI. The need for large data centers and other infrastructure will increase energy and water usage, a participant observed. Vogel agreed that AI will impact climate change and expressed surprise that these issues are not being discussed more, given similar conversations about blockchain. However, she expressed hope that AI can also be part of the climate change solution.

She also agreed with a comment that AI can contribute to mental health stresses and instill fear. Education and awareness are important and there are many questions around how to ensure everyone can adapt to AI. “We are learning as we go. Even those who are creating AI are starting to understand its constraints and abilities.”

Another participant posited that much higher public investment in workforce development, research, and other areas is needed to compete globally. Vogel pointed to the National Artificial Intelligence Research Resource (NAIRR) under consideration by Congress as one mechanism to address these concerns.4 However, even with the billions of dollars recommended for NAIRR and other programs, other countries are expressing stronger commitment to and investing more resources in AI.

A participant asked how NAIAC is advising the administration on balancing beneficial uses of AI with vulnerabilities from foreign entities that seek advantage against the United States. Bias, legal liability, and national security threats come up in almost every conversation of the NAIAC, she noted, and its members are still educating themselves on the benefits and harms of openness in the development of AI models in the context of national security. She highlighted the National Institute of Standards and Technology (NIST) AI Risk Management Framework (AI RMF, discussed later in the workshop) and added that a new EqualAI impact assessment tool is built on the framework.5

When asked how AI will shape the path of early career professionals, Vogel said, “AI will be part of their journey” in the tasks they perform on the job. She identified four reasons that companies must take “fair AI” seriously: its impact on employees, product integrity, consumer base, and liability.

SHAPING POLICY FOR AI

The first panel, moderated by Pines, brought together U.S. Senate staff members involved in AI policymaking and an Argonne National Laboratory senior leader. They discussed evidence-informed policymaking to advance the societal impact of AI, noting that a great deal remains unknown, particularly about generative AI.

Congressional Perspectives

According to Max Katz (office of Sen. Martin Heinrich [D-NM]), AI was a “niche topic” on Capitol Hill until recent advances in technology raised its profile. In his own view, Katz said that Sen. Charles Schumer (D-NY) sees legislation as a priority, but it will be challenging to enact with all the other issues that Congress must consider. He stressed the importance of bipartisanship within the Senate AI Caucus, a group led by Sen. Heinrich and Sen. Mike Rounds (R-SD).

__________________

4 NAIRR is part of the CREATE AI Act now under consideration by Congress to provide AI researchers and students with greater access to the complex resources, data, and tools needed to develop safe and trustworthy artificial intelligence.

5 For more information, see https://www.nist.gov/itl/ai-risk-management-framework.

Katz identified several AI “buckets” that Congress needs to address, including federal R&D, workforce development, and rules of the game. Congress has supported AI-related R&D for years through funding of relevant agencies, he said, but there are new missions to consider. For example, the bipartisan CREATE AI Act would establish a National Artificial Intelligence Research Resource (NAIRR) to make AI available to all. Katz said he and Sen. Heinrich agree with Vogel that AI is in “the first inning,” despite the difficulty in looking ahead 10 years. Congress is used to governing things that evolve slowly, but the assumption must be that AI will create broad change, so Congress needs to prepare for technology development that remains uncertain. Congress and the Executive Branch will need expertise and resources, as well as sustained legislative attention and governance, he said. Workforce issues and the “rules of the road” on how to govern AI are critical, yet, again, unclear and challenging for Congressional action. Katz gave several examples of the broad uses of AI, from drafting emails to autonomous weapons systems. Automation is useful, but humans need to remain in the decision-making loop to avoid harm in the future, he stressed.

Joel Burke (office of Sen. Mike Rounds [R-SD]) echoed the value of bipartisanship on AI issues and the potential for innovation through AI, which he characterized as one of the most important advances in a generation. He hopes that the government can solve problems caused by AI, but also cautioned that policymakers should better understand the technology before regulating it to avoid unintended consequences. Congress tends to be reactive but must look to the future.

Speaking from his personal AI policy principles, Burke said the U.S. should lead in AI research and commercialization. Additionally, AI needs a secure supply chain, an educated workforce, and a supportive regulatory environment. AI policy must be concrete, focused, and pro-innovation, and he warned against the dangers of over-regulation, drawing a comparison with nuclear energy in the U.S. Overregulation drove up the price of nuclear energy, he said, and, unlike its adversaries, the U.S. missed an opportunity to create strategic partnerships by exporting nuclear technology. Like Katz, he welcomed input from a series of Senate-convened AI Insight Forums to determine a potential role for government. He noted that base-level agreement emerged at the Forums on support for basic research, talent, and more equitable access to AI.

Executive Action at Department of Energy

Rick Stevens (Argonne National Laboratory) discussed efforts in the Department of Energy (DOE) to build an AI Initiative for Science, Energy, and Security.6 He noted that AI can be a major force for good in terms of jobs, productivity, and the economy. It can be an amplifier of human skills and a bridge to non-human skills and intent; synthesize knowledge; find patterns that humans cannot see; and accelerate discovery. A recent OECD report showed how AI could boost research productivity.7 But, he stressed, AI must grow in a “smart way” so that it does not just benefit the countries that are early adopters.

Over the past few years, DOE labs have explored how to use AI to advance basic science, energy technology, and national security. Examples include leveraging existing investments such as exascale computers; managing the hundreds of petabytes of data of low commercial value that are still useful to DOE for building AI systems; leveraging tens of thousands of scientists; and providing a way to train, build, and improve models in a secure environment. He suggested AI adoption can occur faster and smarter through public and private investment and through risk management. Do not be afraid of risk, he urged, harkening back to historical fears of internal combustion engines and aircraft. Risks must be watched but must not take over decision making, he concluded.

Discussion

When asked how to review AI benefits and risks, Katz emphasized public-sector horizon-scanning capabilities to develop and monitor benchmarks. Burke mentioned an upcoming Senate panel on AI for medical use. Stevens noted several federal agency workshops and other efforts to accelerate and improve the comparison of models.

Burke underscored that AI impacts will be felt in every industry and sector and that, in developing regulation, the banking sector’s tech-agnostic regulations and use cases are good models.

__________________

6 For more information, see https://www.anl.gov/ai-for-science-report.

7 OECD. 2023. Artificial Intelligence in Science. https://www.oecd.org/publications/artificial-intelligence-in-science-a8d820bd-en.htm.

Katz distinguished between AI regulation in narrow, sectoral use cases and in general-purpose systems. Views on regulation for the narrow cases may reflect one’s broader views about regulation, he suggested, but, related to GPT-4 and other new technologies, not enough is known to have an answer. He pointed to bipartisan agreement that America cannot be second in building these models.

In response to a question about how AI can enhance democracy and humanity, Burke offered personal hope that AI will improve the quality of life for more people. Stevens noted potential uses of AI to lift people out of poverty by creating new career paths and said rapid experimentation is needed to test whether claims that AI will supplant jobs are valid.

Large and small companies have different needs and interests, a participant noted. She urged also listening to smaller, start-up, and innovative companies. Burke said Congress is cognizant of this issue and welcomed perspectives from all companies and individuals. Katz also noted engagement with several trade groups that represent the start-up ecosystem.

A participant commented that millions of Americans do not have access to high-speed internet and could be excluded from many AI benefits. He also noted that unless the government ensures research data is discoverable to train AI tools, U.S. innovation will not reach its full potential. If institutions restrict availability of their data for AI training in the near term, they may not be acting in their own long-term interest, Stevens agreed. He also suggested that agencies that fund the production of data should facilitate long-term storage.

The tension between access and impact on national security was raised. Tools that are broadly accessible to extend equity could be open to black-box and white-box attacks, a participant commented. Stevens noted an evolving idea to move away from watermarking AI-generated content to authenticating real content in a cryptographically secure way. Given the possibility of attacks, more R&D and engagement with academia is needed.
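
Stevens’s idea of cryptographically authenticating genuine content, rather than watermarking AI output, can be illustrated with a minimal digital-signature sketch. This is an illustrative assumption of how such a scheme might look, using the Python cryptography library, not a description of any deployed provenance system:

```python
# A minimal sketch of authenticating genuine content with digital signatures
# instead of watermarking AI output. Illustrative only; real provenance
# systems add metadata, certificate chains, and key management.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# The content producer (a camera, newsroom, or lab) holds the private key;
# anyone can verify with the published public key.
private_key = Ed25519PrivateKey.generate()
public_key = private_key.public_key()

content = b"Original photo bytes or article text"
signature = private_key.sign(content)  # distributed alongside the content

def is_authentic(data: bytes, sig: bytes) -> bool:
    """Return True only if data was signed by the trusted key and is unaltered."""
    try:
        public_key.verify(sig, data)
        return True
    except InvalidSignature:
        return False

print(is_authentic(content, signature))                  # True
print(is_authentic(content + b" tampered", signature))   # False
```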

A participant observed that patents and copyrights incentivize innovation by creating a monopoly for the innovator and asked how that system will work in an AI world. Katz concurred that this is an important issue and that the U.S. Patent and Trademark Office (USPTO) and the Library of Congress’s Copyright Office are carefully thinking through these issues. He suggested that Congress’s role is to watch those processes to ensure they are going in the right direction. Another open question is how interacting with AI, especially generative AI, will influence the human brain, suggested another participant, who urged prioritizing human well-being. She welcomed a recent President’s Council of Advisors on Science and Technology (PCAST) recommendation that federal agencies double down on improving public engagement with science and offered philanthropy as a space that can provide proof of concept about how to create productive dialogues. While this engagement can be time consuming and costly, she acknowledged, it is essential to advance AI in line with societal goals and values. Responding to these comments, Katz hypothesized an example in which AI would be able to produce individually targeted political communications. With examples like this, he stressed the need for humility in understanding what “we do not know.”

EFFECTIVE GOVERNANCE OF AI TECHNOLOGIES

The second panel, also moderated by Pines, focused on AI ethics, regulations, and risk management frameworks. Presenters drew from their experience in federal agencies and academia to highlight their current activities and share guideposts and considerations for others.

NIST AI Risk Management Framework

Elham Tabassi (NIST) elaborated on the AI Risk Management Framework (AI RMF 1.0) mentioned earlier by Miriam Vogel. NIST’s overall mission, she said, is to cultivate trust in technology by advancing measurement science and developing international standards that make technology more reliable. NIST released the AI RMF in January 2023 as a voluntary resource for organizations designing, developing, or using AI systems to manage risks and promote trustworthy and responsible AI. It was developed to be rights-preserving, flexibly applied, and measurable.

Tabassi highlighted the timeline to AI RMF 1.0, an open, transparent process that began in 2021. There was broad engagement not only from the technical community, such as computer scientists, mathematicians, and statisticians, but also from social scientists, psychologists, and philosophers. An important step was to agree on key terminology. Seven characteristics were identified: AI should be valid and reliable; safe; secure and resilient; accountable and transparent; explainable and interpretable; privacy-enhanced; and fair, with harmful bias managed. Risk management requires measurement, tolerance, prioritization, and integration and management. Tradeoffs are necessary, she emphasized.

The AI RMF also presents an AI lifecycle and AI actors (Figure 1). Responsibility for risk management should be shared by all actors throughout the lifecycle and requires continual monitoring, rather than one-time testing.

The RMF provides 72 guidelines to map, measure, manage, and govern AI risks. AI RMF Playbooks provide more background about each guideline and suggest possible actions as part of any AI design, development, and deployment effort to create a culture of trustworthiness. Context is essential, and use-case, temporal, and cross-sectoral profiles are being developed.
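
As a rough illustration of how an organization might organize risks around the RMF’s four functions, the sketch below builds a hypothetical risk register; the field names, metric, and thresholds are illustrative assumptions, not NIST subcategories:

```python
# Hypothetical risk-register entry loosely organized by the AI RMF's four
# functions (govern, map, measure, manage). Fields are illustrative
# assumptions, not NIST subcategory identifiers.
from dataclasses import dataclass, field

@dataclass
class AIRiskEntry:
    use_case: str          # MAP: the context in which the system is used
    harm: str              # MAP: the potential negative impact
    metric: str            # MEASURE: how the risk is quantified
    tolerance: float       # MEASURE: the acceptable threshold
    current_level: float   # MEASURE: the latest measured value
    owner: str             # GOVERN: the accountable role
    mitigations: list = field(default_factory=list)  # MANAGE: planned responses

    def needs_action(self) -> bool:
        """MANAGE: flag risks whose measured level exceeds tolerance."""
        return self.current_level > self.tolerance

register = [
    AIRiskEntry(
        use_case="resume screening",
        harm="disparate impact across demographic groups",
        metric="demographic parity gap",
        tolerance=0.10,
        current_level=0.18,
        owner="HR analytics lead",
        mitigations=["rebalance training data", "human review of rejections"],
    ),
]
for entry in register:
    if entry.needs_action():  # continual monitoring, not one-time testing
        print(f"Escalate: {entry.use_case} ({entry.harm})")
```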

An AI RMF Roadmap sets out 10 actions for NIST to complete, including alignment and crosswalks with other standards. Expanding TEVV (testing, evaluation, verification, validation) efforts is essential for NIST to advance the measurement science around AI and fulfill its mission of ensuring trustworthiness. She highlighted NIST’s AI Resource Center with resources and a platform for engagement.8 A Generative AI Public Working Group has also been set up.

FIGURE 1 AI lifecycle and actors.
SOURCE: Elham Tabassi, Workshop Presentation, October 11, 2023.

AI and the Intelligence Community

Jackie Acker (CIA) recalled the concern about risks from earlier technologies, such as the first automobiles. Acker likened AI governance to brakes, which allowed cars to go faster in a safe way. Knowledgeable and reliable rules of the road and standards can likewise help AI go further, faster.

One size does not fit all for AI governance, Acker stressed. Governance depends on each organization’s risk tolerance, resources, and other characteristics. She noted that an Artificial Intelligence Ethics Framework for the Intelligence Community has been developed that is law-neutral and can be applied across agencies.9 It is built on an unclassified set of principles with the following high-level values: respect the law and act with integrity; transparent and accountable; objective and equitable; human-centered development and use; secure and resilient; and informed by science and technology. It is “Version 1.0” and she welcomed feedback.

Operationalizing principles is the hard part, she acknowledged, which is where a framework comes into play. She recommended organizations write down their own principles and how to operationalize them and assess risks and tradeoffs. More privacy may mean less accuracy, for example. In developing policies and frameworks, she suggested organizations look internally at their stakeholders and, importantly, consider who is not in the room, such as ethicists, humanists, NGOs, or others. In addition to principles and frameworks, it is critical to have a tailored training component for people at all levels, including the leadership. “Step Zero,” she said, is to ensure there is a diverse team to develop or evaluate the technology.

Acker concluded that in developing AI governance, it is essential to make sure the right people are in the room, write down values, have a process to support those values, train, and continuously evaluate to operationalize an ethics implementation plan.

__________________

8 For more information, see https://airc.nist.gov/home.

9 For more information, see https://www.intelligence.gov/artificial-intelligence-ethics-framework-for-the-intelligence-community.

Data Governance Gaps and International Implications

Susan Ariel Aaronson (George Washington University) is co-PI of the NSF-NIST Institute for Trustworthy AI in Law and Society (TRAILS). She described its mission to develop and deploy a new, more accountable, inclusive, and participatory approach to AI. To achieve this goal, TRAILS conducts research and holds convenings and training events.

Aaronson highlighted gaps that she sees in data governance. While data governance is one way to govern AI, she urged thinking about international implications, such as whether firms follow internationally accepted rules around copyright and data protection. In addition, are they using internationally representative datasets that are accurate and complete? What are policymakers trying to govern, and which governance issues require international coordination? Such issues could include risk, accountability, and business practices of firms developing the technology.

The U.S. is now in a dominant position, with a large fraction of the world’s generative AI firms. These firms rely on open-source websites to get data but rely on trade secrets to protect their algorithms and to control use and re-use of data. She questioned whether this dominance leads to the perception that AI markets are unfair.

The generative AI supply chain comprises proprietary data belonging to the firm, as well as data collected or purchased. Questions raised include how web scraping affects copyrighted data, how those who create content are protected, and whether large language models (LLMs) should be open or closed. Governments have responded to generative AI in different ways, but none, in her view, has acted thoughtfully or creatively. The potential to solve problems with generative AI is amazing and may be an indirect incentive for data sharing and open science. But, she warned, generative AI could impose costs on global science, including less willingness to share data without compensation and policymakers having to choose between innovation and other priorities. As an example, China chose censorship over more accurate generative AI based on global datasets, she pointed out, and warned that if the U.S. limits openness, it could stifle innovation.

The Role of Technologists at the Federal Trade Commission

The mission and jurisdiction of the Federal Trade Commission are to determine whether a company’s practices are unfair as defined by its Section 5 authority and other legislation,10 explained Stephanie Nguyen (FTC). She highlighted the work of the Office of Technology, a team of twelve technologists who work alongside attorneys to fulfill this mission. The technologists are integrated directly into cases rather than building software or performing other technical tasks.

Technical shifts have affected consumers over the years, she noted. For example, in 1888, the Kodak camera raised concerns about privacy and unauthorized image usage. Fast forward to May 2023, when FTC Chair Lina Khan noted the need to regulate AI and to act proactively.11 Nguyen said the tech team seeks to understand how harms can manifest in different layers of a product or service, including infrastructure, models, algorithms, front end, and applications. Her team’s activities, including policy, narrative development, coalition building, and institutional change, are all done in the service of enforcement rather than research for its own sake. Nguyen cited FTC Chair Khan, who said rules are most successful when they are clear and simple. Liability should be directed at corporate responsibility and not overburden consumers. The agency looks to upstream enforcement, such as prevention and detection of harms and risks before deployment to millions of users. Transparency and monitoring are fallbacks if prevention fails but should not put more burden on users.

These principles are applied to a range of cases and enforcement, Nguyen continued. For example, an online game may harvest children’s data, opt users into facial recognition technology, or insufficiently limit employees’ access to data. To understand how a technology is built and changes over time, horizon scanning and research are applied to enforcement. For example, the FTC released a request for information on cloud-computing business practices, including single points of failure and security features.12 Additionally, a roundtable with creative professionals was recently convened to discuss AI-related copyright concerns.

__________________

10 For more information, see https://www.ftc.gov/about-ftc/mission/enforcement-authority.

11 Khan, L. We must regulate A.I. Here’s how. The New York Times, May 3, 2023.

12 For more information, see https://www.ftc.gov/policy/advocacy-research/tech-at-ftc/2023/03/inquiry-cloud-computing-business-practices-federal-trade-commission-seeking-public-comments.

Discussion

In response to a concern about open data threatening national security, Acker agreed that existing data governance structures are challenged by AI and that both policy and technical issues come into play. A multidisciplinary team is addressing these concerns within the intelligence community. Aaronson said tension has grown because datasets are huge, quick, and global. She cautioned that too many restrictions mean less potential for use of the data. Nguyen said best practices include multifactorial verifications, data retention schedules, and a focus on larger systems. Tabassi urged technology development that makes it easy to do the right thing, difficult to do the wrong thing, and easy to recover. This principle should inform design and development rather than blaming users, she suggested.

Asked about changes given the rapid advances in AI technology, Nguyen said the biggest change is the scale and speed of deployment. A lot must be understood in real time, because a product may already be used by millions of consumers. The probability of an event versus the magnitude of its consequences must be considered, Tabassi said. Aaronson said the volume and variety of data exceed what is governable. Acker shared a personal observation that users need education and methods such as watermarking to discern the sources of their information.
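
Tabassi’s point, that both the likelihood of an event and the magnitude of its consequences matter, follows the standard expected-risk formulation, risk = probability × magnitude. A minimal sketch, with made-up hazards and numbers, follows:

```python
# Toy risk prioritization: expected risk = probability x magnitude.
# All hazards and numbers below are made up for illustration.
hazards = {
    "chatbot gives wrong medical dosage": (0.01, 9.0),  # (probability, severity 0-10)
    "model output is mildly off-topic":   (0.30, 1.0),
    "training data leaks personal info":  (0.05, 8.0),
}
ranked = sorted(hazards.items(), key=lambda kv: kv[1][0] * kv[1][1], reverse=True)
for name, (p, magnitude) in ranked:
    print(f"{name}: expected risk = {p * magnitude:.2f}")
```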

A participant commented that data sharing in the biomedical field has taken years to be adopted and he would not want to retreat given the discoveries that have resulted. He suggested a nuanced view about protecting different data types in different ways. Aaronson referred to her paper about renting data to share but also protect it.13 She urged civic education on data related to democracy and accountability, citing a free public course on AI that is offered in the EU. Acker pointed to opportunities for policymakers to develop processes to protect data, perhaps by using it in a federated way. She also noted that sector-specific use cases and standards will be necessary since risks are different in every sector.

STRENGTHENING STRATEGIC COOPERATION ON AI-DRIVEN INNOVATION

During a panel moderated by Grace Wang (Worcester Polytechnic Institute), presenters from federal agencies and from the private sector considered ways to strengthen cooperation in AI R&D, as well as how to use the power of partnerships to drive AI innovation, research, and technology development to accelerate rapid scaling and deployment of AI in the marketplace.

View from the National Science Foundation

As Tess deBlanc-Knowles (NSF) noted, new AI capabilities will transform daily lives, contribute to economic growth and U.S. competitiveness, and, most exciting from NSF’s perspective, accelerate science and discovery.14 Public- and private-sector investments and the ability to cooperate across geographic and sectoral boundaries are needed to leverage each sector’s strengths so that all benefit.

Many AI technologies have their roots in NSF funding, dating back decades. NSF’s AI investment now totals more than $700 million annually across the country and the entire pipeline from foundational discoveries to solutions to societal problems. DeBlanc-Knowles highlighted NSF’s role in the U.S. AI ecosystem in four areas. First, to support the next generation of technology, NSF funds more than 500 new foundational, curiosity-driven research projects annually. Second, to advance trustworthy AI, NSF supports projects to assess current and future risks, including launching the Safe Learning-Enabled Systems program with Open Philanthropy and Good Ventures and the AI Safety Institute with NIST, which focuses on ethical and societal considerations. Third, to apply AI to solve societal challenges, NSF supports the application of AI to new fields and big problems, including through a network of 25 National AI Research Institutes in such areas as weather, public health, and agriculture. In addition, the NSF Engines program made 15 awards in 2023 that support the application of AI to a wide range of societal and economic challenges. Fourth, to translate research breakthroughs, NSF’s lab-to-market platform includes Partnerships for Innovation, Innovation Corps, and the Small Business Innovation Research/Technology Transfer programs.

__________________

13 Aaronson, S. A. 2022. Wicked problems might inspire greater data sharing. Working Papers 2022-09, The George Washington University, Institute for International Economic Policy.

14 For more information on AI at NSF, see https://www.nsf.gov/cise/ai.jsp.

Machine learning has highlighted a growing divide based on access to computational resources and high-quality datasets. NSF has proposed expanding opportunities through NAIRR. NSF is also supporting AI workforce development from K-12 to experiential learning to advanced degrees, especially in communities and regions that have been underserved by federal funding. International collaborations are advancing because the opportunities and challenges are too big for one sector or country, she concluded.

Evaluating AI at the Food and Drug Administration

Ed Margerrison (FDA) focused on the Center for Devices and Radiological Health (CDRH). CDRH regulates 238,000 types of devices such as MRI machines, in vitro diagnostics, and pacemakers. The Office of Science and Engineering Laboratories (OSEL), CDRH’s R&D arm, tries to future-proof through technology forecasting and preparing reviewers to evaluate new products. Some of OSEL’s 20 research programs are product-specific, but others, such as AI, cut across device areas.

Rather than peer-reviewed papers, the desired work outputs are regulatory science tools that can be broadly used to evaluate products and technologies, validated through publications. These tools can make the regulatory process more efficient and help upstream innovators de-risk earlier. As a former CEO of a startup, he underscored the value to entrepreneurs of having validation tools like these when seeking funding. One way to measure the impact of the tools is to see if they are used in premarket applications and for benchmarking early technologies.

OSEL still needs to develop tools to fill six gaps: (1) limited labeled datasets and synthetic data; (2) bias, equity, and generalizability; (3) metrics for performance estimation, reference standards, and uncertainty; (4) evolving algorithms through a Predetermined Change Control Program (PCCP); (5) emerging clinical AI applications; and (6) post-market AI monitoring. For example, metrics are not yet developed to evaluate effectiveness of an algorithm when built into a medical device (Gap 3), and a PCCP (Gap 4) would allow manufacturers to submit an evolution plan that can be validated. To handle the enormous number of images needed, he noted, OSEL is investigating the idea of a federated model in which institutes hold their images but allow them to be used through a federated marketplace.
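
Margerrison’s federated idea, in which institutions retain their images and share only model updates, resembles federated averaging. The sketch below is a toy illustration under that assumption; the update rule and size-based weighting are stand-ins, not FDA’s design:

```python
# Toy federated-averaging sketch: each institution "trains" locally on data
# that never leaves the site, and only parameter updates are aggregated.
# Real systems add secure aggregation and privacy protections.
import numpy as np

rng = np.random.default_rng(seed=0)

def local_update(global_weights, local_data, lr=0.1):
    """Stand-in for one round of local training at one institution."""
    pseudo_gradient = local_data.mean(axis=0) - global_weights
    return global_weights + lr * pseudo_gradient

# Three institutions with private datasets (e.g., image features).
institutions = [rng.normal(loc=i, size=(100, 4)) for i in range(3)]
sizes = [len(d) for d in institutions]

weights = np.zeros(4)
for _ in range(20):
    updates = [local_update(weights, data) for data in institutions]
    weights = np.average(updates, axis=0, weights=sizes)  # FedAvg-style aggregation

print("aggregated model parameters:", weights.round(2))
```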

AI Recommendations from the U.S. Chamber of Commerce

Michael Richards (U.S. Chamber of Commerce) highlighted the AI Working Group within the Chamber’s Technology Engagement Center, composed of 160 members across 32 sectors. He agreed with earlier comments on the value of the business community working alongside the government to drive AI innovation. Benefits include sharing costs; building on the strengths of each partner; economies of scale; internalizing knowledge spillover and overcoming informational and behavioral barriers; securing higher-quality contributions from the private sector; and increasing opportunities for commercialization.

To strengthen domestic research and development cooperation, the Chamber recommended that Congress authorize NAIRR as one of nine AI policy recommendations in a September 2023 letter to Congress.15 Richards noted that a recent report by the OSTP/NSF-led NAIRR Task Force could inspire increased diversity and talent, improved capacity, and greater trust in AI. In a public poll conducted by the Chamber in 2022, 80 percent of respondents who said they understand AI well agreed it will help society. In contrast, only 40 percent of respondents who said they do not know much about AI said it will benefit the public. This shows the importance of education, Richards said.

Another area of strategic cooperation is open data, because the lifeline of AI is good, solid data. The Chamber supports the OPEN Government Data Act, signed into law in 2019, and asks policymakers to build on this law through funding, oversight, and expansion to other datasets. The Chamber also supports appropriation of the $200 billion in the CHIPS and Science Act for AI R&D, quantum, and robotics as key investments to drive innovation.

Leading globally is also critical for consistent rules of the road. A domestic workforce needs to be developed, as highlighted in the Chamber’s AI Commission Report and its recommendations to Congress, such as through K-12 education and granting more H-1B visas for skilled workers. To drive international innovation and collaboration, Richards stressed the importance of transcontinental data flows, rather than government silos and restrictions on data; stronger partnerships with allies; and international AI sandbox tools, which provide a controlled environment to test new approaches outside of existing regulatory structures.

__________________

15 For the text of the letter to Congress articulating the Chamber of Commerce’s nine AI priorities, see https://www.uschamber.com/technology/u-s-chamber-letter-on-artificial-intelligence-priorities-2.

“Violet Teaming” for AI-Driven Innovation

Alexander Titus (In Vivo Group) noted that policies must address the risks for “mal-innovation.” He presented a technical paradigm to consider the opportunities for positive and negative cooperation. On the one hand, a solution for a UK woman’s seemingly incurable infection was found using a large-scale phage dataset that included key data contributed by a South African teen.16 In contrast, researchers reconstituted an extinct poxvirus, similar to smallpox, which spotlights biosecurity concerns.17 Similarly, a biotech company came out with an automated scientist to engineer microbial strains, which can accelerate pharmaceutical development but also may create areas of concern.18 In addition to intentional misuse of AI, unintentional risks can occur.

There is not a good way to balance tradeoffs, Titus said. The tech sector employs red (thinking like a bad actor), blue (how to stop a bad actor), and purple (collaborative) teaming, but this approach only looks at the technology itself without considering wider implications. Titus cited the concept of Violet Teaming to bring institutions and the public good into the equation.19 In addition to identifying how a system might cause harm post hoc, he suggested the system could be developed to prevent or minimize risks—for example, to engineer viruses for phage therapies while also ensuring models cannot generate viruses with lethal properties. These tradeoffs could perhaps be encoded into the training mechanism that develops algorithms to learn, he suggested, but research is needed across sectors.
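
One reading of Titus’s suggestion that tradeoffs could be encoded into the training mechanism is a penalized objective: a task loss plus a weighted penalty from a harm estimator. The sketch below is a toy PyTorch illustration of that assumption, with a stand-in harm scorer, not an established violet-teaming method:

```python
# Toy sketch: encode a safety tradeoff into the training objective as
# total loss = task loss + lambda * estimated harm. The harm scorer here is
# a stand-in for a learned classifier that flags dangerous outputs.
import torch
import torch.nn.functional as F

def harm_penalty(outputs: torch.Tensor) -> torch.Tensor:
    """Stand-in harm estimator: penalize outputs whose norm exceeds 1.0."""
    return torch.relu(outputs.norm(dim=-1) - 1.0).mean()

def violet_loss(outputs, targets, lam=0.5):
    """Trade off task performance against estimated harm."""
    return F.mse_loss(outputs, targets) + lam * harm_penalty(outputs)

outputs = torch.randn(8, 4, requires_grad=True)
targets = torch.zeros(8, 4)
loss = violet_loss(outputs, targets)
loss.backward()  # gradients now balance task fit against the harm penalty
print(float(loss))
```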

Analytical metrics for risks must be developed, accompanied by public dialogue. Most important, he urged, is the need to train the next generation of experts who can think across boundaries, such as those between academic fields. Everyone needs education about AI because it is impacting every field, he posited. A new violet-teaming paradigm, along with research, analytical tools, discussion, and training programs, might help maximize benefits while minimizing risks.

Discussion

Margerrison clarified that the algorithms developed at FDA are publicly available and can be adapted and developed by others. In rare situations, patent protections are needed so an innovator has the incentive to bring a life-saving technology to market. Expanding on the concept of a federated model, he said it is important to determine quality control of datasets. Encouraging a federated model or diversified ownership of information through a rental or pay-for-play model could be explored. DeBlanc-Knowles agreed that, despite the shared goals to test and verify, validation techniques remain immature while things are moving quickly. Titus said that datasets can undergo quality control (QC), but QC assumes that people are properly trained to evaluate the quality of the data. Margerrison provided several examples that showed the value of an incremental approach to QC. A participant commented that validation and verification methods may not or do not yet exist and asked how to ensure the methods developed will work on future iterations of a technology. Titus urged flexibility in test ranges as they will change fast. DeBlanc-Knowles added that NSF is looking into AI-ready testbeds.

To develop the workforce, deBlanc-Knowles underscored pathways at different points of careers and occupational categories, train-the-trainer models, and experiential learning. Margerrison pointed out the need for, yet the difficulty of, recruiting AI and other technical experts into the government, as well as the importance of a better educated public. A participant also stressed the need for communications, critical thinking, analytical writing, and other skills for STEM students entering the workforce. Titus added that scientists must be able to communicate what they do in terms understandable to lay audiences.

Whether the country is investing enough in AI was raised. DeBlanc-Knowles noted that the ecosystem relies on government investments in innovation, research, and workforce development, and that government investment in R&D must grow in parallel.

__________________

16 Yong, E. 2019. A dying teenager’s recovery started in the dirt. The Atlantic. https://www.theatlantic.com/science/archive/2019/05/engineered-viruses-cured-a-dying-girls-infections/589075/

17 Kupferschmidt, K. 2017. How Canadian researchers reconstituted an extinct poxvirus for $100,000 using mail-order DNA. Science Insider. Science. https://www.science.org/content/article/how-canadian-researchers-reconstituted-extinct-poxvirus-100000-using-mail-order-dna.

18 Singh, A. H. et al. 2023. An automated scientist to design and optimize microbial strains for the industrial production of small molecules. bioRxiv. https://www.biorxiv.org/content/10.1101/2023.01.03.521657v1.

19 Ovadya, A. 2023. Red teaming improved GPT-4. Violet teaming goes even further. Wired. https://www.wired.com/story/red-teaming-gpt-4-was-valuable-violet-teaming-will-make-it-better/.

Margerrison noted that it is a difficult question to answer because AI is competing with other priorities for limited resources. Titus noted that AI is so interconnected with other disciplines that investments in those disciplines are often also investments in AI.

Another participant commented on the need to leverage traditional library science (i.e., how people seek information) to create knowledge-oriented workers. Current faculty members also need training in how to create the disciplines of the future, she suggested. Titus said prompt engineering, built on skills to extract and vet information, is foundational. DeBlanc-Knowles invited industry and other partners to consider involvement in NSF Regional Innovation Engines.20

AI AND THE FUTURE OF WORK

Randy Ozden (Streamlyne) moderated the final panel that considered the impacts of recent AI-driven changes on the U.S. workforce, with a discussion of the skills needed, programs to share those skills, and impact on jobs at different levels.

New Skills for Generative AI

Edward Preble (RTI International) offered his perspective as a data scientist in a research organization that works with a range of government clients and complex problems. RTI must assemble diverse, well-integrated teams of subject matter experts with adjacent AI knowledge and AI experts with adjacent subject matter knowledge to achieve a multi-domain AI perspective, he commented. Unlike previous image- and text-processing AI breakthroughs, which were niche in terms of what each individual model could do, recent LLMs are broad in their scope, flexibility, and capabilities. It is not possible to compete with broad-scope AI using manual labor or dozens of niche models, he said. Using these tools will be critical to prevent organizations and individuals from being left behind.

Generative AI offers threats and opportunities, which Preble looked at through a data provenance lens. As threats, AI increases the need to understand who (or what) generated Internet-sourced data. It requires knowing if and how subcontractors have used AI in their deliverables or how interviewees have used it in the hiring process. But as opportunities, it can contribute to the hiring process, streamline long-form language tasks, and improve image generation.

The skillset required for data science has shifted for AI. Adapting a diagram developed in 2010 that depicts the subject matter, math and statistics, and hacking skills needed for data science, Preble offered a version of the skills needed for AI (Figure 2). Subject matter expertise is still needed, but knowledge of math and statistics is not enough to validate output. Hacking skills have given way to the need to produce and test reliable code and data. Few individuals will have all these skills, he added, but the team as a whole must.
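
Preble’s shift from “hacking skills” toward producing and testing reliable code and data might, in practice, look like validating model output before it enters a pipeline. The sketch below is illustrative; the schema and field names are assumptions:

```python
# Minimal sketch of validating LLM-generated records instead of trusting them.
# The schema and field names are illustrative assumptions.
import json

REQUIRED_FIELDS = {"title": str, "summary": str, "confidence": float}

def validate_record(raw: str) -> dict:
    """Parse model output and reject malformed or out-of-range records."""
    record = json.loads(raw)  # raises an error on non-JSON output
    for name, expected_type in REQUIRED_FIELDS.items():
        if not isinstance(record.get(name), expected_type):
            raise ValueError(f"field {name!r} missing or not {expected_type.__name__}")
    if not 0.0 <= record["confidence"] <= 1.0:
        raise ValueError("confidence out of range")
    return record

good = '{"title": "AI report", "summary": "Key findings...", "confidence": 0.9}'
print(validate_record(good)["title"])  # passes validation
```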

Intel Digital Readiness Programs

Anshul Sonak (Intel) stressed the need to bring the power of technology to the “average person” through education and skilling. He shared key assumptions: AI is everywhere but AI skills are not. Work and learning are human-centric social experiences; if work is changing with AI, the learning experience must also evolve. Although the big policy picture is important, the small picture—what individuals can do to make a difference now—must also be considered.

Jobs increasingly will need digital skills, green (sustainability) skills, and digital trust and ethics skills; semiconductor skills are the foundation for new competitiveness. Only 0.2 percent of the global workforce is in the tech industry, but the entire workforce needs new digital readiness skills, he said.

FIGURE 2 Data science (and AI) skills shift, 2010 to the 2020s.
SOURCE: Edward Preble, workshop presentation, October 11, 2023, based on Conway (2010).21

__________________

20 For more information, see https://new.nsf.gov/funding/initiatives/regional-innovation-engines.

21 For more information, see http://drewconway.com/zia/2013/3/26/the-data-science-venn-diagram.

Intel conducted a study that looked at five types of AI impacts, finding that AI can replace, displace, complement, augment, or elevate, depending on the occupation. Certain jobs can only be done by humans, some only by machines, and others involve some combination. The big question is what it means for specific jobs and industries. Sonak noted that McKinsey research has found that AI will impact workers earning less than $38,000 per year 14 times more than those earning $58,000 or more, with a disproportionate impact on women. While AI researchers, engineers, and developers are needed, he concurred with previous speakers on the need for AI solution builders, power users, and responsible consumers. Deeper inequalities and gaps may result between those who know and responsibly use AI and those who do not.

Intel has public-private partnership programs in 28 countries to reach non-technical audiences.22 Built on a high-tech, high-touch, open, shared-value model, they are designed for children, vocational and college students, the current workforce, government leaders, and others. Most of Intel’s emphasis in the United States has been on community colleges. Sonak shared the components of its AI for Workforce curriculum.23 Shorter modules have also been developed, such as applied mathematics for AI, as well as three new courses: AI for Manufacturing, AI for Sustainability, and AI for Agriculture, with more topics to be added. Use cases are based on input from manufacturers, higher education leaders, and other stakeholders. As an example of the impact of these programs, outcomes for Houston Community College’s Intel-based AI degrees included employment, advanced degrees, apprenticeships, internships, and even three students winning Intel Global and Country competitions. He urged more public-private partnerships to multiply the impact and an embrace of a new mindset of lifelong learning, reskilling, and upskilling for human-machine collaboration.

Generative AI: The Next Productivity Frontier

Navjot Singh (McKinsey) spoke from a business lens about generative AI. While supportive of existing efforts, he advocated for increases and acceleration, based on McKinsey research on the topic.24

AI seems like a big disruption, but “since the Industrial Revolution, innovation has fueled economic growth.” GDP goes up with each innovation, although he acknowledged that not everyone benefits. Generative AI represents a natural evolution of analytical AI, but it has captured the public imagination. In the span of a few months, clients went from unfamiliarity with generative AI to asking how to learn about and use it. McKinsey research has shown that the scale of the opportunity is trillions of dollars and that no industry will remain untouched, but capturing the opportunity requires investment at scale.

McKinsey analyzed more than 60 use cases across functions and sectors and found four functional areas that particularly benefit from generative AI: customer journeys (e.g., customer service chatbots); creative content (e.g., personalized marketing communications); concision (e.g., creating research reports or analyses); and coding (e.g., code generation and user testing). Further, customer operations, marketing and sales, software engineering, and R&D drive 75 percent of the value from generative AI. Looking across job categories, generative AI will increase the demand for STEM specialists, managers, and creatives, while office support and customer service jobs could decline, although AI will also transform all jobs in some way. It is important to prepare students and professionals for these jobs. At most, 1.5 percent of workers have AI-related roles or skills, but this percentage must be higher, he asserted.

Singh offered several ideas to stimulate thinking on building the workforce. First, enable higher skill mobility through harmonized skills taxonomies, national skills councils, and skills passports and badges. Second, increase education through new curricula, lifelong training benefits, and training of educators. Third, unlock economic drivers through tax benefits for investments in human capital, social benefits to individuals, and national recognition or awards for companies.

__________________

22 For more information, see https://www.intel.com/content/www/us/en/corporate/artificial-intelligence/digital-readiness-home.html.

23 For more information on AI for Workforce, see https://www.intel.com/content/www/us/en/corporate/artificial-intelligence/ai-for-workforce-us.html.

24 McKinsey findings in Singh’s presentation can be found in The Economic Potential of Generative AI, the Next Productivity Frontier, https://www.mckinsey.com/capabilities/mckinsey-digital/our-insights/the-economic-potential-of-generative-ai-the-next-productivity-frontier; Generative AI and the Future of Work in America, https://www.mckinsey.com/mgi/our-research/generative-ai-and-the-future-of-work-in-america.

Discussion

Singh agreed that educators need to be trained at least as AI users but suggested that, like spreadsheets, AI applications will become easier to learn and use. Sonak noted educator preparedness affects all countries. Solutions he has seen include re-skilling current educators, intergenerational learning, and industry involvement in faculty development and exchange. He also stressed the need to reimagine the preservice educator pipeline.

Preble recommended starting small with easy wins to build confidence and trust for teams with differing levels of comfort with technology. Sonak acknowledged the tension when people are afraid to break current patterns that seem to work. He suggested showcasing successes and working with and recognizing role models. Singh emphasized the role of career counselors who can guide students to these opportunities based on their competencies. Sonak noted AI is a new frontier and all countries are having the same conversations on workforce development.

CLOSING COMMENTS

GUIRR Director Michael Nestor thanked participants and expressed hope that they had been challenged and inspired and had learned something new. He suggested as a challenge that AI regulations not overburden government, education, or industry. As an inspiration, he pointed to the potential of AI to enhance democracies and increase participation. As a new learning, he recalled the message from the legislative branch representatives and urged participants to communicate their ideas to policymakers.

DISCLAIMER This Proceedings of a Workshop—in Brief was prepared by Paula Whitacre as a factual summary of what occurred at the workshop. The planning committee’s role was limited to planning the workshop. The statements made are those of the rapporteur or individual workshop participants and do not necessarily represent the views of all workshop participants; the planning committee; or the National Academies of Sciences, Engineering, and Medicine.

REVIEWERS To ensure that it meets institutional standards for quality and objectivity, this Proceedings of a Workshop Series—in Brief was reviewed by Jon Eisenberg, National Academies of Sciences, Engineering, and Medicine; William Scherlis, Carnegie Mellon University; and Ceren Susut, U.S. Department of Energy. Marilyn Baker, National Academies of Sciences, Engineering, and Medicine, served as the review coordinator.

PLANNING COMMITTEE Grace Wang (Chair), Worcester Polytechnic Institute; Jeffrey Welser, IBM Research; Pamela Norris, George Washington University; and Robert J. Lawton, Office of the Director of National Intelligence.

STAFF Michael Nestor, Director, GUIRR; Komal Syed, Program Officer; Christa Nairn, Senior Program Assistant; Clara Harvey-Savage, Senior Finance Business Partner; and Cyril Lee, Financial Assistant.

SPONSOR This workshop was supported by the Government-University-Industry Research Roundtable membership, National Institutes of Health (HHSN263201800029I/75N98021F00017), Office of Naval Research, and the United States Department of Agriculture.

For additional information regarding the workshop, visit: www.nas.edu/guirr.

SUGGESTED CITATION National Academies of Sciences, Engineering, and Medicine. 2024. Artificial Intelligence at the Nexus of Collaboration, Competition, and Change: Proceedings of a Workshop—in Brief. Washington, DC: The National Academies Press. https://doi.org/10.17226/27793.

Policy and Global Affairs

Copyright 2024 by the National Academy of Sciences. All rights reserved.
