Evaluation of the Every Day Counts Program (2025)


CHAPTER 2

Evaluation Method

The design and execution of this evaluation conformed to the standards and guidance set by authorities in program evaluation, including the American Evaluation Association, the U.S. Government Accountability Office (U.S. GAO), and the Organisation for Economic Co-operation and Development (OECD). The evaluation adhered to the following principles to ensure its integrity:

  • The evaluation was theory driven, grounding its analysis in the literature on public-sector innovation and on innovation diffusion.
  • The evaluation was informed by an analytical program logic model (described in this chapter) that established the primary and secondary evaluation questions (EQs) for the study.
  • The evaluation generated data and evidence to reinforce the objectivity and rigor of its analysis and conclusions.
  • The evaluation design allowed for ambiguity and flexibility, as the evaluation team faced uncertainties about the availability of critical data and difficulties in establishing the Program’s outcomes and the attribution of outcomes to program activities (as noted in Chapter 1).

These principles helped to ensure that the evaluation was conducted with integrity and consistency.

2.1 Evaluation Scope

The evaluation team first established the scope of the evaluation by defining clear parameters (e.g., the timeframe for the activities covered, the range of program activities analyzed, and the nature of the evidence to be collected and analyzed). Practical considerations, such as data availability and access, also shaped the evaluation scope. The scope was defined to ensure that the overall evaluation was comprehensive and balanced, given the limitations discussed below.

The evaluation considered the 53 unique Innovations covered in EDC Round 1 through Round 6 (several Innovations were promoted in more than one Round). During Round 1, however, the EDC Program was still developing the documentation and processes that became more systematic in later Rounds, so Round 1 Innovations were largely excluded from the evaluation’s analysis. Because Round 1 nonetheless yielded important lessons that informed the Program’s later practices, its Innovations were examined to provide context for assessing the Program’s processes and strategy. Round 7 Innovations were also excluded because they were announced in December 2022 and were still being implemented while this evaluation was conducted. The evaluation therefore focused on the 42 unique Innovations from EDC Round 2 through Round 6.
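
Because several Innovations recur across Rounds, the unique counts above follow from de-duplicating the Round rosters: 53 unique Innovations across Rounds 1 through 6 and 42 across Rounds 2 through 6 imply that 11 appeared only in Round 1. The short Python sketch below illustrates the set arithmetic; the rosters shown are hypothetical placeholders, not the actual EDC Innovation lists.

    # Hypothetical placeholder rosters; each set holds the Innovations
    # promoted in that Round (an Innovation may repeat across Rounds).
    rounds = {
        1: {"innovation_a", "innovation_b", "innovation_c"},
        2: {"innovation_b", "innovation_d"},   # repeat promotion across Rounds
        3: {"innovation_d", "innovation_e"},
        4: {"innovation_f"},
        5: {"innovation_e", "innovation_g"},
        6: {"innovation_h"},
    }

    # Unique counts come from set unions, not sums of per-Round counts.
    unique_r1_r6 = set().union(*(rounds[r] for r in range(1, 7)))
    unique_r2_r6 = set().union(*(rounds[r] for r in range(2, 7)))
    round1_only = unique_r1_r6 - unique_r2_r6

    # With these placeholders: 8 unique overall, 6 in scope, 2 Round-1-only.
    # The report's actual figures are 53, 42, and 11, respectively.
    print(len(unique_r1_r6), len(unique_r2_r6), len(round1_only))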

As a program-level evaluation, this effort did not focus on the value or impacts generated by each individual EDC Innovation. The Innovations are highly diverse and cover many aspects of surface transportation; a complete accounting of all their intended and realized outcomes would be impossible to compile. Instead, the evaluation was scoped to use examples of Innovations to illustrate the overall outcomes of the EDC Program relative to the FHWA mission and to the Program’s aim of enabling broader and faster deployment of underutilized innovations. The central questions addressed by this evaluation (discussed in Section 2.3) address program-level activities and outcomes rather than innovation-specific activities and results.

The evaluation encompassed EDC Program activities related directly to program management and to supporting the deployed Innovations, including the full life cycle of program planning, execution, and assessment. The evaluation addressed the major activity areas identified in the logic model (Section 2.2):

  • Overall program management;
  • Selecting and supporting EDC Innovations;
  • Generating and disseminating knowledge about EDC Innovations;
  • Tracking and reporting implementation progress; and
  • Sharing lessons learned and best practices.

2.2 EDC Program Logic Model

To identify how the EDC Program generates impact, the evaluation team developed the program logic model shown in Figure 3. The logic model is a simplified view of the EDC Program that highlights its key elements and connects them to the anticipated intermediate outcomes expected to translate into the overall program impact.

Figure 3. Program logic model of the EDC Program and its potential impact.

The logic model specified the key resources and commitments needed to make the EDC Program successful (inputs), the activities undertaken within the program and the key organizations (participants) involved, the immediate results of those activities (outputs), and the likely evidence showing downstream benefits, changes, and other medium- to long-term outcomes that may be attributable to the EDC Program. Using the model, the evaluation team determined the data collection methods appropriate for gathering the evidence related to model elements, identified from whom and where the data would be gathered, and designed the quantitative and qualitative analyses to investigate all aspects of the program and to answer the evaluation questions.
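
As one illustration of how the logic model’s elements can be made operational for evidence mapping, the sketch below encodes them as a simple data structure. The class and field names are illustrative assumptions, not part of the evaluation itself; the activity entries are taken from Section 2.1, and the participants reflect the organizations interviewed (Section 2.3).

    from dataclasses import dataclass, field

    @dataclass
    class LogicModel:
        inputs: list[str] = field(default_factory=list)        # resources and commitments
        activities: list[str] = field(default_factory=list)    # what the program does
        participants: list[str] = field(default_factory=list)  # key organizations involved
        outputs: list[str] = field(default_factory=list)       # immediate results of activities
        outcomes: list[str] = field(default_factory=list)      # medium- to long-term changes

    # Partial example instance; inputs, outputs, and outcomes omitted here.
    edc = LogicModel(
        activities=[
            "Overall program management",
            "Selecting and supporting EDC Innovations",
            "Generating and disseminating knowledge about EDC Innovations",
            "Tracking and reporting implementation progress",
            "Sharing lessons learned and best practices",
        ],
        participants=["FHWA headquarters and division offices", "State DOTs"],
    )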

The logic model also clarifies the EDC Program’s “theory of change”—the rationale for determining and designing the Program’s activities to achieve its desired outcomes. The EDC Program creates change on two levels: (1) the level of the Innovations within each 2-year Round and (2) the long-term organizational level (over the course of multiple Rounds). Within each Round, the program is designed to accelerate the deployment of individual Innovations by addressing four conditions affecting the diffusion of innovations among state DOTs:

  • Lack of awareness. DOTs may not be aware of innovations proven to generate important benefits to surface transportation. The EDC Program builds awareness about available innovations by selecting, defining, and publicizing the Innovations in each EDC Round.
  • Uncertainty about value. DOTs may be uncertain about the value of innovations because they lack the ability to weigh the benefits those innovations generate against their costs and risks. Through educational resources outlined in the implementation plans, the EDC Program helps DOTs assess the risks and benefits of each innovation.
  • Skepticism about value. DOTs may be skeptical about innovations because they undervalue the potential benefits of implementation. The EDC Program demonstrates the value of each innovation through case studies, data sets showing the results of past implementations, and decision support tools.
  • Lack of capabilities, resources, or skills. DOTs may be deterred from deploying innovations because they believe that they lack key capabilities, resources, or skills required for implementation. The EDC Program facilitates information exchanges where DOTs can learn from their peers in other states about strategies and approaches leading to successful innovation implementation.

At the organizational level, iterative support by the EDC Program to encourage the deployment of valuable but underutilized innovations across multiple Rounds enhances the innovation capacity at state DOTs, which in turn engenders an innovation-oriented culture across the U.S. surface transportation community.

2.3 Evaluation Questions and Approach to Evidence-Building

To address the four aspects of the EDC Program covered by this evaluation (listed in Section 1.2), the evaluation focused on four major evaluation questions:

  1. Strategy and processes question: How is the EDC Program designed, structured, and operated to support state-led deployment of underused innovative technologies and project delivery approaches?
  2. Outcomes question: What are potential and realized outcomes of EDC-funded, state-led deployment projects, and how did EDC Program activities contribute to those outcomes?
  3. Effectiveness question: How have the EDC Program’s outcomes contributed to achieving programmatic and FHWA strategic objectives, including catalyzing efficiencies that support deployment of innovations in the highway ecosystem?
  4. Lessons learned question: What factors influenced the effectiveness and outcomes of the EDC Program, and what might be done differently in the future to improve program results?

One key finding of the Scoping Report was that the EDC Program supports a diverse assortment of innovations spanning many aspects of surface transportation, from highway construction to safety to road operations. The EDC Innovations are not easily comparable in terms of their complexity, the organizations involved, or their intended outcomes. Recognizing this, the evaluation team concluded that quantitative approaches alone would fail to capture the full effects of the EDC Program; the evaluation instead adopted a mixed-methods design that integrated qualitative and quantitative data using multiple techniques for collection and analysis. The primary data sources were Program-related documents (published by the FHWA and by state DOTs and their contractors) and key informant interviews. The documents provided qualitative, descriptive data about the EDC Program and its Innovations and yielded some quantitative data about activities and outputs. Interviews were conducted with staff from FHWA headquarters and division offices, representatives of state DOTs, and other organizations, such as trade and professional associations in the surface transportation sector. The evaluation team also conducted a literature review on the dynamics of innovation in the public sector (and especially in state DOTs) and analyzed data on activities provided by the EDC Program.
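
To illustrate how a mixed-methods design of this kind can be managed in practice, the sketch below tags each evidence item with its data source and the evaluation question (EQ) it addresses, then checks that each question is corroborated by more than one source. The records and the triangulation rule are illustrative assumptions, not a description of the evaluation team’s actual tooling.

    from collections import defaultdict

    # Illustrative placeholder records, not actual evaluation data; source
    # types mirror those named in this section.
    evidence = [
        {"source": "document review", "eq": "EQ1", "kind": "qualitative"},
        {"source": "document review", "eq": "EQ2", "kind": "quantitative"},
        {"source": "interview",       "eq": "EQ2", "kind": "qualitative"},
        {"source": "interview",       "eq": "EQ4", "kind": "qualitative"},
        {"source": "portfolio data",  "eq": "EQ3", "kind": "quantitative"},
    ]

    # Cross-tabulate sources per question; flag any EQ supported by only
    # a single source (i.e., lacking triangulation).
    coverage = defaultdict(set)
    for item in evidence:
        coverage[item["eq"]].add(item["source"])

    for eq in ("EQ1", "EQ2", "EQ3", "EQ4"):
        sources = coverage.get(eq, set())
        flag = "" if len(sources) > 1 else "  <- single-source, needs corroboration"
        print(eq, sorted(sources), flag)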

Each method and technique used in this evaluation followed procedures described in the evaluation literature, such as Ruegg and Jordan (2007). The matrix presented in Table 1 connects the four aspects of the evaluation described in Section 2.3 to the key elements in the program logic model. (See Appendix C for the full Evaluation Planning Matrix.) To integrate all elements of the logic model and characterize program-level outcomes, the evaluation also used the case study method to illustrate the cumulative outcomes of multiple Innovations (Yin 1992). This approach is described in Section 3.4.

Table 1. Condensed evaluation method matrix.

Evaluation Aspect | Logic Model Elements | Data Sources | Data Analysis
Process Evaluation | Inputs; Activities; Outputs; Short-term outcomes | Document review; Portfolio data; Stakeholder data; Literature review; Exploratory interviews | Quantitative analysis; Qualitative analysis
Outcome Evaluation | Short-term outcomes; Longer-term outcomes and impacts | Document review; Portfolio data; Interviews | Quantitative analysis; Qualitative analysis; Case studies
Effectiveness Evaluation | Alignment of short-term and long-term outcomes to program and FHWA strategic goals | Interviews; Stakeholder data; Portfolio data | Quantitative analysis; Qualitative analysis; Case studies
Lessons Learned | All program elements | All of the above | Integrative analysis
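
For readers who want to trace the matrix programmatically, the sketch below encodes Table 1 as a simple mapping and runs a completeness check. The encoding and key names are illustrative assumptions; the cell contents are taken directly from Table 1.

    # Table 1 as a dictionary keyed by evaluation aspect.
    matrix = {
        "Process Evaluation": {
            "logic_model_elements": ["Inputs", "Activities", "Outputs",
                                     "Short-term outcomes"],
            "data_sources": ["Document review", "Portfolio data", "Stakeholder data",
                             "Literature review", "Exploratory interviews"],
            "data_analysis": ["Quantitative analysis", "Qualitative analysis"],
        },
        "Outcome Evaluation": {
            "logic_model_elements": ["Short-term outcomes",
                                     "Longer-term outcomes and impacts"],
            "data_sources": ["Document review", "Portfolio data", "Interviews"],
            "data_analysis": ["Quantitative analysis", "Qualitative analysis",
                              "Case studies"],
        },
        "Effectiveness Evaluation": {
            "logic_model_elements": ["Alignment of short-term and long-term outcomes "
                                     "to program and FHWA strategic goals"],
            "data_sources": ["Interviews", "Stakeholder data", "Portfolio data"],
            "data_analysis": ["Quantitative analysis", "Qualitative analysis",
                              "Case studies"],
        },
        "Lessons Learned": {
            "logic_model_elements": ["All program elements"],
            "data_sources": ["All of the above"],
            "data_analysis": ["Integrative analysis"],
        },
    }

    # Completeness check: every aspect names at least one source and analysis.
    for aspect, row in matrix.items():
        assert row["data_sources"] and row["data_analysis"], aspect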