Scientists first warned in 1957 that human activities were changing the global climate (Revelle and Suess, 1957). In the seven decades since, climate science has greatly expanded understanding of the scope, pace, and impacts of climate change on human societies and the natural world. It is now possible to obtain detailed projections of changes in climatic conditions and rising sea levels, build models to estimate economic damages and risks, predict altered flow in major river basins, and analyze health risks due to climate-related hazards over the coming three months (USGCRP, 2024), among a host of ways that science can inform decision-making. The scientific basis of this knowledge is validated by peer review, tests of consistency, and transparency of data, modeling methods, and reporting procedures, contributing over time to a robust consensus. The salience of climate change as a public issue has also risen (Brimicombe, 2022; Tyson and Kennedy, 2023), contributing to a widening range of decisions made possible by the growing scientific understanding of climate change.
Along with scientific consensus, collaboration has increased both nationally and internationally to model and understand the changing climate, collect data on climate changes and their impacts, and develop policies for mitigating and adapting to climate change. Following the creation of the United Nations Framework Convention on Climate Change, the parties to the treaty (Conference of the Parties [COP]) have met regularly; the most recent meeting is known as COP28. At the same time, global climate scientific consensus efforts, most notably culminating in the Intergovernmental Panel on Climate Change (IPCC) Assessment Reports, have produced assessments of the state of the climate, global contributions to climate change, and standardized future policy and emission pathways.
Within the United States, there is a similar need to coordinate and synthesize the growing body of climate science, both to build scientific consensus and to make it accessible to broader audiences. In 1990, Congress enacted and President George H. W. Bush signed into law the Global Change Research Act (GCRA).1 This statute created the U.S. Global Change Research Program (USGCRP, or the Program), an interagency unit in the White House Office of Science and Technology Policy,2 coordinating and synthesizing climate research across 15 federal agencies. The GCRA mandated that USGCRP organize an authoritative statement of knowledge of global environmental changes, particularly the changing climate. Such a statement, the National Climate Assessment (NCA), has been released five times, most recently in 2023. The NCA has grown greatly over time, starting at roughly 100 pages and growing to about 2,000 pages, accompanied by supplemental specialized reports and tools. From the outset, USGCRP has developed the NCA with inclusiveness and transparency as core values. By engaging with sources of knowledge across a wide spectrum and sharing sources, data, and modeling assumptions, this approach can enhance the legitimacy of the findings of climate science among those using the NCA.
___________________
1 Global Change Research Act of 1990, 15 U.S.C. Chapter 56, Public Law 101-606, 104 Stat. 3096-3104.
2 The International Climate Councils Network (ICCN) was launched in 2021 as a forum for climate councils around the world. The United States is not a party to ICCN, but USGCRP functions as a climate council to facilitate coordination and provide data, and the Program works with other nations and their climate councils in various settings.
Although the understanding of climate change has grown in both the natural and social sciences, there is only a limited grasp of how this knowledge is informing decision-making (see Moss et al., 2019)—the purpose set out in the GCRA. At the request of USGCRP, this committee was created and charged with developing a framework for evaluating the use of the NCA and related products to inform decision-making. To be clear, this committee is not evaluating the NCA but providing advice to USGCRP, and the evaluators it works with, on how to approach the task of evaluating the use of USGCRP products. Designing an evaluation in the current context is complex, with different audiences needing different types of information, and with climate change–related information being disseminated through many sources. Creating an evaluation design specifically to measure the impact of the NCA—its outcomes in terms of who uses it, how they use it, and how it could be more useful—requires considerable thought. This report, accordingly, aims to inform how USGCRP can prepare evaluations directed at improving future national assessments to make them more useful to the very wide audience of people and organizations affected by and responding to a changing climate.
This section discusses the benefits of evaluations and why this evaluation requires special care.
Program evaluations seek to provide systematic and objective assessments of the effectiveness, efficiency, and impact of programs or interventions (Vedung, 2017). Evaluation provides insights and data to inform decision-making, program improvement, and future planning, as it aims to determine whether and how programs are achieving their intended outcomes. Program evaluation has been used to identify best practices, areas of improvement, and areas where resources could be better allocated; demonstrate accountability to intended beneficiaries, other participants, and funders; and ensure transparency in program implementation.
The use of evaluation in federal climate change–related programs has been modest compared with other fields (e.g., health care), but it can play an important role in helping organizations and policymakers assess programs’ effectiveness and impact, including conducting scientific research, collaborating with participants, and providing information to target audiences. Evaluation can also help identify and prioritize gaps in existing research and highlight opportunities to address those gaps to better meet the needs of policymakers and other organizations addressing climate change. Global, scientific, consensus-based efforts, such as the IPCC Assessment Reports,3 have been the subject of a range of formal and informal evaluation efforts, mostly focused on the processes leading up to the production of assessment reports, as well as their effects on climate science (O’Reilly et al., 2024; Schulte-Uebbing et al., 2015; Vasileiadou et al., 2011). These evaluations have helped shape improvements in process, presentation, and communication, contributing to the provision of accessible and widely used reports and other products. The NCA has also been the subject of a number of informal evaluations (Jacobs et al., 2016; Meyer, 2011; Morgan et al., 2005; Moser, 2005; NRC, 2007; Parson et al., 2003) and one formal effort (Dantzker et al., 2016); these have provided suggestions for improving the NCA process and products. As reliable knowledge of a changing climate has become increasingly relevant, continuous improvement in the NCA process is increasingly important. At the same time, a targeted effort to understand outcomes and utilization of the NCA among various audiences can support the more effective allocation of resources by USGCRP and its member agencies, and a better understanding of how to serve the needs of the Program’s priority audiences and participants.
___________________
3 Through USGCRP and its member agencies, the United States contributes to the worldwide research effort that supports the periodic global assessments of the IPCC (2023; see also NRC, 2007).
U.S. activities to address climate change are both extensive and diverse, although much of this effort ranges beyond climate science. The president’s budget for 2023 proposed over $5 billion for climate science research, and over $18 billion for climate resilience and adaptation programs (OMB, 2022). In a report examining federal funding from 2010 through 2017, the Government Accountability Office (GAO, 2018) identified 18 programs whose primary purpose is to address climate change and 515 additional programs that included other program goals in addition to addressing climate change. For fiscal year 2017, the programs were spread across 19 agencies. These statistics understate the diversity of federally related activities concerning climate change, since, for example, a single grant program may encompass a variety of grants, each with a particular research design and goal. Additionally, climate change–related activities in the United States are not limited to the federal government, but include state, tribal, local, nonprofit, and business activities. To investigate whether and how the NCA is informing these broad investments and efforts, an evaluation would need to consider a wide range of actors, the actions they are undertaking, and the climate information that informs their activities.
Just as many actors are involved in addressing climate change, their roles vary greatly. For example, a federal agency might set national policy, perform or fund research, disseminate information, or carry out activities—such as water management—that rely on climate science information. A nongovernmental organization (NGO) might be involved in disseminating information; it might be taking actions such as planting trees or helping disadvantaged populations; or it might be advocating for policy change. Local city and town officials might be preparing for extreme weather events through better stormwater management or erosion control.
The climate information needs and knowledge levels of these actors vary. They may access different sources of information and their intended uses range from increasing awareness to targeted and specific use of climate projections and data. Their interactions with other actors addressing climate change may range from being highly integrated to being relatively isolated. They may or may not be organized to work together and may be centrally located or highly dispersed.
These variations have implications for how evaluations may be performed. Some groups will be easier to reach than others, and topics that are highly salient to one group may be irrelevant to another. In light of the variety of users and uses of climate information, USGCRP needs to clarify its objectives and the audiences and participants it seeks to reach, both directly and indirectly. This enables an evaluator, working with the Program, to design an evaluation that takes into account the priorities of the Program and provides useful insights into the outcomes of the NCA. Guidance to inform this design is the objective of this report.
The purpose of the NCA is to provide information to support the needs of a wide range of decision-makers, needs that might include developing policy, mitigating climate change, or improving resilience on local to national scales. Measuring the impact of providing information for such a wide range of uses is difficult. There is no practical way of measuring changes in knowledge of climate change across the vast array of NCA audiences. Moreover, different users will need different kinds of information, so a test that is appropriate for one group might be inappropriate for another. Perhaps the most straightforward quantitative measure of outcomes would be a count of citations of the NCA, but people might use NCA information without citing it, or may even use NCA information without knowing that the NCA was the original source. As such, it will be important to define the scope of the audiences broadly, not limiting the study to those who are known users of the NCA. Only by including potential nonusers and indirect users can one measure the extent of use of the NCA, and it may be that the data from nonusers are the most important for determining barriers to use.
Except for bibliographic searches, an evaluation of the uses of the NCA will largely depend on users’ perceptions and self-reports. Evaluation measures might address topics such as people’s knowledge of the NCA, their ability to access the information they are seeking, and their success in applying the information. Ideally, such measures will capture both direct and indirect use of the NCA (indirect use is when NCA information is transmitted and perhaps modified by intermediaries before reaching the ultimate users). The data will be subjective but still informative, more likely to be helpful for program improvement purposes (e.g., in making the NCA more accessible and identifying unreached groups) than for measuring impact. This would be consistent with USGCRP’s interest in continuous improvement. Chapter 3 provides further discussion on different aims of evaluation and concludes that, based on the Statement of Task, USGCRP is most interested in an outcome evaluation rather than an impact evaluation.
The process of generating the NCA has been, from the outset, a conversation between the federal government and the scientific community. In that role, USGCRP has both created and facilitated a network of diverse actors and organizations, which in turn has facilitated information exchanges between researchers and decision-makers, and promoted communication that evolves with each iteration of the NCA. This study is intended to support that continuing conversation. USGCRP asked the National Academies of Sciences, Engineering, and Medicine to convene an expert committee to prepare an evaluation strategy for examining the uses of the NCA. The Statement of Task (Box 1-1) for this study is an articulation by USGCRP of important issues that could, if addressed, improve the usefulness of future USGCRP products, as well as illuminate the contributions and challenges of the most recent NCA. The committee sought to respond to each element of the questions listed in the Statement of Task, as spelled out in Chapter 3. We have articulated an approach to evaluating the NCA that aims to capture its dynamism and complexity, as the nation responds to the challenges of climate change.
The committee anticipates that implementing the approach articulated in this report would require working with the Program to help it conduct a self-examination, which is outside of the committee’s Statement of Task. Based on the committee’s current, limited understanding, USGCRP has developed programming to address many and varied demands for information on climate change, and these have not been prioritized systematically. Over the years, USGCRP has expanded the focus and anticipated audiences of the NCA, so that the original legislative mandate reflects only a small part of what the NCA now delivers.
Following are some topics to be examined by USGCRP that constitute necessary groundwork for creating a detailed evaluation design. Some of these might be addressed before hiring an evaluator, and others might be examined with the help of the evaluator. As discussed later in this report, creating a logic model could be helpful in making the Program’s priorities explicit, providing a strong foundation for an evaluation design. In the absence of such groundwork, this report focuses on the Statement of Task’s request for a strategy that might be used to develop a logic model and evaluation design.
At the request of the U.S. Global Change Research Program (USGCRP), the National Academies of Sciences, Engineering, and Medicine (the National Academies) will establish an ad hoc committee to develop a strategy for evaluating stakeholder use of the National Climate Assessment (NCA) and selected other USGCRP products. The committee will develop criteria to prioritize the stakeholder groups that should be involved in such an evaluation, a conceptual and methodological framework and design for an evaluation, and plans for data collection and other information-gathering activities. The evaluation strategy will be designed to help determine to what extent these products meet decision and informational needs of selected stakeholder groups. Diversity, equity, inclusion, and justice (DEIJ) principles will be considered and incorporated in the evaluation strategy. The committee will not provide a technical review of the assessments.
The evaluation strategy will address the following questions:
Answering these questions will help support the evaluation design in many ways, including helping to establish which audiences to examine, indicating the extent to which data on other sources are needed, determining what kinds of data are needed and what questions to ask audiences, and prioritizing audiences to determine the most appropriate methodologies.
Given these ambiguities and unanswered questions, this report does not provide a finalized evaluation design. Instead, consistent with the committee’s charge, it provides a framework (or strategy, as called for in the Statement of Task) that can be used by USGCRP to work with its evaluator in creating an evaluation design. In addition (as discussed in Chapter 7), different evaluation techniques will have very different costs and timelines. It is not feasible to develop an evaluation design without understanding the resources that will be allocated to the evaluation.
This report focuses particularly on a tool often used in evaluation, the logic model: a systematic account of how an organization, program, or project seeks to affect its audiences and the decisions they make. Although not specifically requested in the Statement of Task, the committee feels that a logic model can be a key component in developing and interpreting the evaluation. The logic model spells out the intentions guiding the program and forms the basis of an evaluation. An important stage of the proposed strategy, then, is for USGCRP, working with its evaluator, to develop a logic model, followed by the development of a set of questions to be investigated in an evaluation. Guided by the illustrative logic model it developed, the committee proposes—again, only as an example—a set of overarching evaluation questions, which guide the methods and examples developed in succeeding chapters. Appendix C spells out the relationship between the evaluation questions in this illustration and the questions posed in the Statement of Task.
Several challenges have important implications for how an evaluation can be conducted and what information it might provide:
Given these complexities, the committee determined that a helpful way to study the spread and use of information by multiple audiences is to think of them as a network of networks (Castells, 1996), as discussed in Chapter 4. That is, there are multiple individual networks, such as urban planners or health professionals, that exist across levels of government, NGOs, media, and individuals. These networks communicate information, plans, and strategies to their members. USGCRP both intersects with these networks, such as when they distribute information from the NCA, and seeks to increase its collaboration with them by involving multiple audiences in writing and reviewing the report. Through these networks, the NCA may reach some audiences indirectly without being recognized as a source, such as when a professional organization customizes NCA information to serve its members. Viewing the various audiences as a network of networks helps to frame the study design and provide a method for analyzing the data.
___________________
4 The Statement of Task uses the term stakeholders for these audiences. The committee interpreted this word to refer to audiences and participants with which USGCRP works and seeks to share knowledge. Although commonly used as a synonym, to some audiences the word carries connotations that conflict with the inclusiveness to which USGCRP is committed (CDC, 2022; Reed et al., 2024). The committee prefers to use the term audience in this report, except where stakeholder appears in a direct quote. Other words whose ordinary language meaning overlaps with stakeholder are also used in context, including participant, user, contributor, and partner.
Networks and audiences vary greatly in their characteristics, and thus in the feasibility of identifying and engaging them in an evaluation. Some audiences are relatively well-defined and identifiable, such as members of Congress or their staff, or climate programs within federal agencies; others are more difficult to define; still others, such as those not making any use of the NCA, may not be readily identifiable in any systematic manner. For some audiences, case studies may provide instructive data about how they interact with climate science information, which could lead to important insights on how USGCRP products could be improved to increase effectiveness. Such studies might also identify unanticipated uses of climate information that require further exploration. Chapter 6 discusses methodologies for evaluating how this might be done with different audiences and the uses that they make of information from the NCA. In sum, potential audiences of the NCA may need to be prioritized based on their importance to USGCRP’s goals and on their accessibility, and often will need to be approached through multiple stages of evaluation as information is gathered.
The committee discussed at the outset what information it needed to complete its task. Collectively, the committee members brought experience in the development process for NCA; communications, dissemination, and audience engagement; the design and implementation of evaluations; and the use of climate assessments. The committee decided it needed more information on how the previous NCA evaluation was performed, on federal perspectives on the NCA, on NGOs’ use of NCA, and on historically underrepresented groups and the NCA. To obtain such information, the committee heard presentations from and interacted with the following speakers in public sessions:
To address the Statement of Task, the committee conducted both meetings that were open to the public and closed meetings consisting of committee and National Academies staff only. The first meeting, which was closed, was devoted to discussing both the Statement of Task and the composition and balance of the committee. The second meeting, also closed, examined what is already known about the outcomes of NCA, who the audiences are, what the committee needed to know about the various audiences’ needs, the organization of the final report, and the priorities for information gathering. The third meeting was hybrid, conducted primarily in person but with some virtual participants, and it had a mixture of open and closed segments. The open part of the meeting was devoted to information gathering, with presentations on the previous evaluation of the NCA, federal perspectives on the NCA, and audience perspectives among historically underrepresented groups. The closed portion was devoted to discussions of the preceding presentations and to organizing writing teams and the structure of the report. Between the third and fourth meetings, subgroups of the committee met in closed sessions to work on each chapter. Each committee member participated in multiple subgroups, and all subgroup decisions and processes were shared with the full committee for feedback and to support coordination across the chapters. The fourth, fifth, and sixth meetings were closed, consisting of reviewing drafts of the report, with the sixth meeting devoted to discussing comments received from the external reviewers.
In addressing its charge to “[design] a strategy for evaluating stakeholder use of the National Climate Assessment and select other USGCRP products designed to help determine the extent to which these products meet the needs of stakeholders to support decision-making” (see Box 1-1), the committee considered the needs and insights provided by USGCRP and other experts in its open session, along with the aims expressed in the Statement of Task. The Statement of Task focuses on the users of the NCA, their awareness of and engagement with the NCA products, and the usefulness of the information contained in the NCA for decision-making; this focus critically informed the committee’s recommended evaluation strategy.
In addition to recommending an approach to evaluation design, this report identifies critical questions and decisions to be addressed by USGCRP, in order to obtain an evaluation that meets its highest-priority needs for enhancing the accessibility, usability, and appropriateness of the NCA, given its wide range of audiences and users and its goal of informing decision-making. This focus on enabling decision-making was central to the committee’s work, as making information available is not the same as enabling decisions. Decision-making requires useful information, including at the appropriate spatial and temporal scales, in forms that are accessible, usable, and understandable to users; decisions also have important social dimensions that depend on networks of relationships, using shared information to address shared challenges.
In the following chapters, this report provides the committee’s approach, aims, and recommendations for a strategy for evaluating the NCA to glean insights and guidance on the users and uses of the current NCA, and to inform development and use of future assessments and other USGCRP products, as part of a cycle of continuous improvement.
Chapter 2 provides the overall context for the evaluation, describing the goals of the NCA, how it was created, how it operates, what audiences it serves, how it has grown and evolved over time, and what previous evaluations of NCA have determined. Each NCA is a statement emerging from an ongoing process. The history of the NCA and USGCRP matters: each NCA is an assessment of the current state of climate change, interpreted in light of the evolving priorities of USGCRP and the wide array of participants enlisted by the Program in assembling the NCA. One notable component of that history, and a fundamental theme in this evaluation strategy, is that the audiences making use of NCA have multiplied. Any evaluation of NCA must take account of its expanded mission and consider whether and how it is reaching wider audiences, and whether those audiences find it useful. Furthermore, given that the needs of audiences continuously change, a process of ongoing evaluation may be most appropriate to support continuous improvement in the delivery of timely and relevant information to users.
Chapter 3 discusses the goals of the evaluation in terms of what types of questions should be answered, presenting an illustrative logic model for identifying the most important factors and how they are intended to produce change, what interconnections appear among these factors, and what outcomes are intended and among what populations. For example, what knowledge is the Program trying to share? With whom does it seek to work, both in designing and assembling the NCA and in sharing the assessment? How does the Program believe that users respond to the information in the assessment—do they find it salient, accessible, and useful in practice for making decisions? The model offered here is meant to be suggestive but not prescriptive—first, because USGCRP may develop a different logic model, and second, because the data collection and other inputs may suggest other important lines of analysis that were not anticipated when the logic model was initially designed. Based on the logic model, Chapter 3 examines what might be learned from an evaluation, and suggests key overarching questions that, when reviewed and revised in light of a refined logic model, could be the basis for a planned evaluation. As these questions were developed based on the illustrative logic model, they have a somewhat different structure than the questions in the Statement of Task; a crosswalk between the two sets of questions is provided in Appendix C.
Chapter 4 discusses how network analysis can be used to advance the study. As discussed earlier in this chapter, one can picture the audiences for climate science as being diverse in their orientations and needs, with many different research and policy concerns. Collectively, they can be thought of as a network of networks. The science of networks is an evolving branch of applied mathematics that looks at the connections between nodes and how the nodes affect each other. In this way, network analysis can provide a framework in which a wide range of evaluation methods—including, among others, citations of scientific literature, internet-based queries, focus groups, case studies, and surveys—can be brought together to illuminate the networks through which the climate science in the NCA is disseminated and used. Understanding networks is also necessary for understanding the full impact of the NCA, since networks often act as intermediaries in disseminating and customizing NCA information to the needs of the various audiences. Determining how networks operate and how they shape the knowledge shared through them may be important in measuring the impacts of the NCA.
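To make the network-of-networks idea concrete, the following sketch (in Python, using entirely hypothetical audience names rather than real USGCRP data) represents two overlapping subnetworks as sets of information-sharing links and identifies the nodes belonging to more than one subnetwork, the candidate intermediaries through which NCA information may travel indirectly:

```python
from collections import defaultdict

# Hypothetical subnetworks: each maps a named community to the pairs of
# actors that share climate information within it. Names are illustrative.
subnetworks = {
    "urban_planners": {("city_A", "city_B"), ("city_B", "planning_assoc")},
    "health_professionals": {("hospital_X", "health_assoc"), ("health_assoc", "city_B")},
}

# Record which subnetworks each actor participates in.
membership = defaultdict(set)
for name, links in subnetworks.items():
    for a, b in links:
        membership[a].add(name)
        membership[b].add(name)

# "Bridge" actors belong to more than one subnetwork; they are candidate
# intermediaries that may relay (and customize) NCA information between groups.
bridges = sorted(n for n, nets in membership.items() if len(nets) > 1)
print(bridges)  # prints ['city_B']
```

In an actual evaluation, the links would come from data such as citation patterns, survey responses, or interview reports rather than being hand-specified, but the analytic move is the same: locating the actors who connect otherwise separate audiences.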
Chapter 5 builds on the earlier discussion of diverse audiences for USGCRP products in order to articulate criteria for determining which audiences should be prioritized. Given that USGCRP is unlikely to have the knowledge or resources required to examine all potential audiences of the NCA, the Program will need to prioritize those audiences for evaluation, based either on their importance for policymaking or on their ability to provide useful information.
After identifying priority audiences, an evaluator must determine how to collect data from or about those audiences. Chapter 6 provides guidance for selecting appropriate methods to do so, as well as illustrative examples of how a particular audience might be examined using a suite of methods tailored for that audience. It also illustrates how the overarching evaluation questions might be translated into granular questions that could be used in data collection instruments (i.e., overarching evaluation questions spell out what an evaluation as a whole seeks to learn, while much greater specificity might appear on a survey or interview guide).
Chapter 7 discusses some of the practical considerations that arise when designing and implementing an evaluation, as well as the use of evaluation results to guide improvements in the NCA going forward. Chapter 8 brings together the committee’s conclusions and recommendations, as developed in the preceding chapters, in a succinct strategy that USGCRP may follow in designing and implementing an evaluation of the NCA.
Evaluating the NCA is complicated because of the many audiences that the assessment can serve in informing decision-making. The guidance developed in this report will require substantial effort and resources from USGCRP, commensurate with its influence and the need for authoritative and reliable climate information as the nation navigates the impacts of a changing climate. Investing in the evaluation of these critical products may hold substantial benefits. The concepts discussed in the chapters that follow can increase the understanding of how the NCA is used, provide information needed to make future assessments more useful, and aid USGCRP in prioritizing assessment-related activities. The committee is grateful to USGCRP for recognizing the need for evaluation by requesting this report, and the recommendations advanced below seek to respond to that need.