Suggested Citation: "8 Putting It All Together." National Academies of Sciences, Engineering, and Medicine. 2024. Developing a Strategy to Evaluate the National Climate Assessment. Washington, DC: The National Academies Press. doi: 10.17226/27923.

8

Putting It All Together

The National Climate Assessment (NCA) incorporates an extensive process to verify the scientific information contained within it. What has largely been missing, however, is an evaluation of how well the NCA works in supporting those who need its information. The evaluation strategy discussed here is intended to meet that second goal: it is directed at providing both measures of what the NCA has accomplished and information on how the NCA can be improved.

This report presents the key ingredients to be considered when the U.S. Global Change Research Program (USGCRP) designs evaluations, by offering at least a preliminary understanding of what might be learned through evaluation (overarching evaluation questions and logic model), who the Program wants to learn from (audiences), how the Program can learn from them (methods), and how to approach the process. Not everything can be done at once, and sometimes exploratory work will be needed to collect the information needed for later research. This chapter suggests a multistep approach to design, implement, and learn from evaluation, and offers a strategy for USGCRP to consider as it undertakes evaluation design.

ASKING THE RIGHT QUESTIONS

The answers that an evaluation can provide depend greatly on the questions asked. For this reason, the initial stage of designing evaluation questions is a key component of evaluation studies. Sometimes as an evaluation progresses, new questions are suggested and modifications can be appropriate. But because the initial stage is so important, the committee recommends creating a logic model to help generate the evaluation questions. Chapter 3 develops an illustrative logic model to serve as a foundation.

Often in program evaluations, one can identify highly specific outcomes of interest, such as an increase in graduation rates or a decrease in crime rates. The NCA is different: its charge is to provide information, not to take specific actions on climate change. One way of defining its goal is to say that the NCA exists so that decision-makers and policymakers have the information they need to make good decisions. That goal might be broken into three parts, as shown in Figure 8-1: (1) decision-makers need to have positive feelings about the NCA (e.g., they trust it and feel that it meets their needs), (2) they need to gain something from it (e.g., knowledge, connections, or solutions), and (3) they need to be able to act on it (e.g., through their decisions). At the same time, decision-makers operate in a larger context, with multiple pressures to address climate change and multiple sources of information. By thinking about what each part involves, how the parts are connected, and how they fit into that broader context, one can develop a set of overarching evaluation questions, as listed in Table 3-1. These overarching questions can in turn be adapted and made more specific for particular contexts and used, for example, in survey questionnaires or in-depth interviews. They also might inform other methods, such as citation searches.

FIGURE 8-1 Illustrative logic model for an evaluation of the National Climate Assessment (NCA), shown in matrix form with examples for each logic model component. Entries corresponding to requirements in the Global Change Research Act of 1990 are highlighted.
NOTE: GCRA = Global Change Research Act; NGO = nongovernmental organization.
SOURCE: Generated by the committee, adapted from Morton and Cook, 2023.

Recommendation 3-3: The U.S. Global Change Research Program should develop a logic model to describe how its products, including the National Climate Assessment, are hypothesized to achieve their intended outcomes.

Recommendation 3-4: The U.S. Global Change Research Program should use a logic model developed for the evaluation to generate a set of overarching evaluation questions and should consult with partners and selected evaluation users to ascertain whether answering those evaluation questions will meet evaluation users’ needs.

DEALING WITH MULTIPLE AUDIENCES

A complicating factor, recognized in the Statement of Task, is that multiple audiences are dealing with and affected by climate change. Audiences differ in their needs, their levels of information, their goals, and the kinds of actions they take. A question that is appropriate for one audience may not be meaningful to another. Thus, any evaluation will require multiple data collections, and USGCRP will face the task of deciding which audiences to focus on.

There is no single correct solution; rather, many solutions could work. Some audiences are so important that they need to be included (e.g., the legislation creating USGCRP specifies the president and Congress as the primary audiences of the NCA), and some audiences are less important in terms of making decisions related to climate change. What may be more important is how a set of audiences works as a collectivity—in other words, that each audience queried in the evaluation fills an important gap in knowledge of the NCA’s outcomes.

One way to deal with multiple audiences is to perform network analysis—that is, to look at how different audiences are connected to each other and to the NCA. This provides insights into how information is disseminated and may indicate that certain audiences act as key nodes through which others are informed. Chapter 4 discusses some of the techniques that could be used. An advantage of this approach is that it provides a more comprehensive view of the entire network of networks concerned with climate change than would studies of individual audiences. It also may provide guidance as to which audiences are most suitable for more detailed examination.

Recommendation 4-1: In designing an evaluation of the National Climate Assessment or related products, the U.S. Global Change Research Program should make use of network analysis as a tool for addressing the evaluation questions related to understanding who key actors are, how information is transmitted across multiple entities, which entities serve as key nodes for disseminating information, and how the network of networks supports that flow of information.
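The kind of network analysis described above can be sketched with a small example. The following is a minimal, self-contained illustration, not part of the report: the audience names and edges are invented, and the two centrality proxies (out-degree and downstream reach) stand in for the richer measures a real analysis would use. It builds a directed information-flow graph and ranks candidate "key nodes" through which NCA information might be disseminated.

```python
# Illustrative sketch (hypothetical data): ranking candidate "key nodes" in an
# information-dissemination network. All audiences and edges are invented.
from collections import deque

# Hypothetical directed edges: who passes NCA-derived information to whom.
edges = [
    ("USGCRP", "federal agencies"),
    ("USGCRP", "media"),
    ("federal agencies", "state planners"),
    ("media", "general public"),
    ("state planners", "local governments"),
    ("media", "educators"),
    ("educators", "general public"),
]

# Build an adjacency list, making sure sink nodes also appear as keys.
graph = {}
for src, dst in edges:
    graph.setdefault(src, []).append(dst)
    graph.setdefault(dst, [])

def out_degree(g):
    """Simple centrality proxy: how many audiences each node informs directly."""
    return {node: len(nbrs) for node, nbrs in g.items()}

def reach(g, start):
    """Number of distinct audiences reachable downstream of a node (BFS)."""
    seen, queue = {start}, deque([start])
    while queue:
        for nbr in g[queue.popleft()]:
            if nbr not in seen:
                seen.add(nbr)
                queue.append(nbr)
    return len(seen) - 1  # exclude the start node itself

degrees = out_degree(graph)
reaches = {node: reach(graph, node) for node in graph}
# Nodes with high out-degree or high downstream reach are candidate key nodes.
key_nodes = sorted(graph, key=lambda n: (degrees[n], reaches[n]), reverse=True)
```

In a fuller analysis, a graph library with betweenness or eigenvector centrality would replace these proxies, but the logic is the same: the ranking suggests which audiences are worth more detailed study because other audiences are informed through them.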

Another approach is to establish criteria for choosing audiences for more extensive study. In Chapter 5, the committee suggests two broad categories of criteria: those that meet USGCRP’s information needs, and those that are most feasible.

Recommendation 5-1: In choosing which groups to study as part of an evaluation, the U.S. Global Change Research Program should seek diversity (including a focus on marginalized populations) similar to that of the participants and audiences with which the Program seeks to engage.

Recommendation 5-2: The U.S. Global Change Research Program (USGCRP) should select audiences to include in evaluation based on the following criteria: importance in USGCRP’s logic model, including (1) the role of an audience in climate-related decision-making; (2) the role of an audience in the transmission of climate information for decision-making; (3) the generalizability of results from an audience to other populations; and (4) feasibility, diversity, and suitability for the evaluation question and method used. A targeted audience does not need to meet all of these criteria, but the audiences prioritized in an evaluation should meet these criteria collectively.

Recommendation 5-3: The U.S. Global Change Research Program (USGCRP) should continue to pay attention to how climate change affects historically marginalized communities and underrepresented audiences by continuing to provide pathways for including them as part of the development process of the NCA—making sure that they are heard throughout the development process—and more broadly by sustaining efforts to provide information about the climate-associated needs of those groups. The evaluators engaged by the Program should include persons whose backgrounds and lived experiences afford them understanding of underserved communities, including those exposed to the impacts of climate change.

Recommendation 5-4: Guided by its logic model, the U.S. Global Change Research Program (USGCRP) should progressively develop a roadmap of the network of networks in which the NCA and its related products are used. This roadmap would show the nodes that are hypothesized to play key roles in the diffusion of usable knowledge in the NCA and related products. The results should be considered when selecting which audiences to target in the evaluation.

A difficult issue is identifying and examining the underserved, particularly those who make no use of the NCA. If such audiences are involved in transmitting climate-related information, then a media search might identify them (depending on which media are included). If the audiences are users of climate-related information but not disseminators, then they may be hard to find (e.g., the NCA does not monitor who accesses its information, although such monitoring could be established). If they are neither disseminators nor users, even if their decisions might be improved substantially by using reliable information about climate change, then they are especially hard to identify and to get access to. One might possibly be able to contact individuals within these audiences and gain useful information, even without attempting to construct a statistically representative sample. The goal is to gain insight into what information is useful, even to those not now using knowledge of climate change to make decisions, and into better ways to build trust in the reliable science provided in the NCA.

CHOOSING APPROPRIATE METHODOLOGIES

Only after the audiences are chosen can the appropriate methodologies be determined. Some audiences might be examined using survey questionnaires; this works best when the members of an audience can be identified readily (e.g., using a membership list) and the questions to be asked are generally close-ended in nature (e.g., yes/no questions or multiple choice). Sometimes strategies for survey research are available even without a membership list (e.g., there is no comprehensive list of K–12 educators, but one could first sample schools and then teachers within schools), although cost would also be a consideration. If a more extended response is desired, then qualitative tools such as in-depth interviews and focus groups can be useful. Sometimes, these qualitative tools are used first in exploratory research to establish a general understanding of the issues, after which surveys might be used. Analysis of secondary (preexisting) data can also be useful without contacting any members of the audience, such as in a citation analysis. Case studies can be especially valuable for providing deeper insights and for exploratory research, focusing on a particular situation in depth rather than taking a broader view. Case studies can potentially use a mixture of tools, including surveys, in-depth interviews, focus groups, and analysis of secondary data.
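The two-stage strategy mentioned above, sampling schools first and then teachers within each sampled school, can be sketched briefly. Everything below is hypothetical: the school names, roster sizes, and sample sizes are invented for illustration, and a real survey design would also add probability weights, stratification, and nonresponse handling.

```python
# Illustrative sketch (hypothetical data): two-stage sampling when no
# comprehensive list of individuals exists. Stage 1 samples schools;
# stage 2 samples teachers within each sampled school.
import random

random.seed(42)  # reproducible illustration

# A hypothetical frame of 200 schools, each with a teacher roster of 10-40.
schools = {
    f"school_{i:03d}": [f"teacher_{i:03d}_{j}" for j in range(random.randint(10, 40))]
    for i in range(200)
}

def two_stage_sample(frame, n_schools, n_teachers_per_school):
    """Draw a simple random sample of schools, then of teachers within each."""
    sampled_schools = random.sample(sorted(frame), n_schools)
    sample = []
    for school in sampled_schools:
        roster = frame[school]
        k = min(n_teachers_per_school, len(roster))  # cap at roster size
        sample.extend(random.sample(roster, k))
    return sampled_schools, sample

sampled_schools, teacher_sample = two_stage_sample(
    schools, n_schools=20, n_teachers_per_school=5
)
```

The design choice this illustrates is the one the text describes: when the population of interest has no master list, a list of clusters (schools) can substitute for one, at the cost of a more complex design and, typically, a higher cost per completed response.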

Chapter 6 provides additional information about these methodologies, and then looks at five different audiences and how the evaluation approach might be customized for each audience.

Recommendation 6-1: The U.S. Global Change Research Program should design its evaluation and data collection plan so that the methods used for priority audiences can answer the overall evaluation questions identified in the logic model. The methods and approach chosen should be tailored to the audience and the evaluation question being investigated.

CONTINUOUS EVALUATION AND IMPROVEMENT

For several reasons, the committee suggests that USGCRP adopt a policy of continuous evaluation and improvement rather than performing a one-time study. First, USGCRP has expressed an interest in using evaluation results to inform improvement, and in making evaluation a regular feature of its work. Second, USGCRP is operating in a rapidly changing landscape of climate services, and evaluation results can help to guide that evolution. Third, some aspects of effective evaluation for such a dynamic and cyclical program will benefit from conducting multiple iterations, such as an exploratory study followed by a survey, or different surveys to widen the range of audiences included.

The ability to support continuous improvement depends on what data are collected. For example, simply finding out what percentage of an audience makes use of the NCA provides little direct information toward program improvement, although it may indicate where there are gaps in reaching audiences. If the evaluation looks further into the reasons an audience has not used the NCA (e.g., lack of knowledge of the NCA, difficulties in finding information), then such information may help in knowing where and how to direct program improvement efforts. Iterative evaluations, then, can help determine whether program changes have had a positive effect.

Multiple evaluation iterations would help in other ways as well. USGCRP may initially have too little knowledge about some audiences to conduct anything more than exploratory research, perhaps using a case study. Based on the knowledge gained from the exploratory research, later evaluations may be able to examine those audiences more systematically. And, possibly, if some audiences cannot be examined because of practical limitations, later evaluations may be able to include them in place of audiences well studied through previous evaluations. Also, for audiences such as NGOs, findings from a case study may lead to ways to better identify other NGOs from whom information might be obtained.

Recommendation 7-2: The U.S. Global Change Research Program should sequence evaluation into manageable components, allowing for iterative testing and learning about how to best pursue evaluation over time. Sequenced components may include conducting evaluability assessments, piloting focused on certain agencies or chapters of the National Climate Assessment, picking low-hanging fruit first, or developing case studies.

Particularly if there are multiple evaluations, the value of obtaining feedback is multiplied. Such feedback can help in understanding previous findings and in refining the methodology to be used in the future.

Recommendation 7-3: In communication about evaluation efforts, the U.S. Global Change Research Program (USGCRP) should aim for active two-way communication with users. Communication mechanisms may include ongoing feedback, interim findings, meetings to tailor the communication of evaluation findings to particular situations, and communication about how input was used that helps connect evaluation efforts with USGCRP’s objectives.

MULTISTEP APPROACH TO EVALUATION

To ensure that an evaluation addresses priority needs and opportunities for the Program and the wide range of audiences and decision-makers who use the NCA, engagement in and buy-in to the evaluation by USGCRP leadership are critical for determining the evaluation’s scope, the team implementing it, and its budget.

Recommendation 3-1: The leadership of the U.S. Global Change Research Program should engage from the start in defining the evaluation scope and should ensure that the leadership perspective, as well as the necessary evaluation expertise, is incorporated throughout the design and implementation of the evaluation.


Recommendation 7-4: The U.S. Global Change Research Program should consider bringing in outside expertise and research capabilities—such as through contractors, consultants, grantees, or interagency agreements—to assist in designing and implementing the evaluation.

This report outlines a multistep approach that USGCRP can follow to develop an evaluation of the NCA:

  • USGCRP engages an evaluator. The evaluator should be experienced in designing and conducting large-scale evaluations and proposing methods to support continuous improvement.
  • Working with the evaluator, USGCRP develops a logic model, spelling out its objectives for the NCA (see Chapter 3) and identifying important participants and audiences (see Chapters 4 and 5) for the information in the NCA and associated products; spelling out causal assumptions about the use of NCA information; and specifying the causal assumptions to be investigated in an evaluation (see Chapter 3). The development of a logic model and evaluation questions requires substantial involvement by USGCRP leadership.
  • Working with important participants, including its federal agency members, USGCRP shares its logic model, and modifies it and the evaluation questions in light of comments from its partners (see Chapter 3).
  • USGCRP’s evaluator proposes an evaluation design: a process that engages selected audiences to provide data and identifies existing sources of information that will collectively provide insights to answer the evaluation questions (see Chapter 6). The design is likely to include a portfolio of methods, aimed at mapping the networks through which climate information travels and identifying groups to be investigated using evaluation instruments aimed at answering more specific questions.
  • USGCRP incorporates information collection for evaluation purposes in its process of developing the sixth NCA. The Program also develops a process for continuous improvement, supported by ongoing information gathering and evaluation (see Chapter 7).
  • USGCRP’s evaluator collects new data and analyzes existing data. As the evaluation progresses, the evaluator consults with USGCRP on modifications to the evaluation design in light of initial findings. The data collection is likely to involve multiple phases, sometimes starting with exploratory analysis, and sometimes possibly requiring a long-term approach that extends beyond the development of a single NCA report. The evaluator prepares a summary of information and the answers to the evaluation questions.
  • USGCRP drafts a response to the evaluation and engages participants and audiences via a public process, seeking comment on the evaluation findings and the Program’s draft response. A final response is provided to the Office of Science and Technology Policy and, if requested, to Congress.

Evaluating the NCA is complex because of the many audiences that the assessment can serve in informing decision-making. The guidance developed in this report will require substantial effort and resources from USGCRP, commensurate with its influence and the need for authoritative and reliable information as the nation navigates the impacts of a changing climate. Investing in the evaluation of these critical products may hold substantial benefits. The concepts discussed here can increase the understanding of how the NCA is used, provide information needed to make future assessments more useful, and aid USGCRP in prioritizing assessment-related activities. The committee commends USGCRP for recognizing the need for evaluation by requesting this report; the recommendations advanced seek to respond to that need.
