The design and execution of this evaluation project conform to the standards and guidance set by authorities in program evaluation, including the American Evaluation Association, the U.S. Government Accountability Office (U.S. GAO), and the OECD. The evaluation adhered to the following principles:
These principles helped to ensure that the evaluation was conducted with integrity and consistency.
The evaluation team first established the scope of the evaluation by defining clear parameters (e.g., the timeframe for the activities covered, the range of program activities analyzed, and the nature of the evidence to be collected and analyzed). Practical considerations, such as data availability and access, also shaped the evaluation scope. The scope was defined to ensure that the overall evaluation was comprehensive and balanced, given the limitations discussed below.
The evaluation covered the 53 unique Innovations promoted in EDC Round 1 through Round 6 (several Innovations were promoted across more than one Round). During Round 1, however, the EDC Program was still developing the documentation and processes that became more systematic in later Rounds, so Round 1 Innovations were largely excluded from the evaluation’s analysis. Because Round 1 nonetheless yielded important lessons that informed the later development of the Program’s practices, its Innovations were examined to provide context for assessing the Program’s processes and strategy. Round 7 Innovations were also largely excluded because they were announced in December 2022 and were still being implemented while this evaluation was conducted. The evaluation therefore focused on the 42 unique Innovations from EDC Round 2 through Round 6.
As a program-level evaluation, this effort did not focus on the value or impacts generated by each individual EDC Innovation. The Innovations are highly diverse and cover many aspects of surface transportation; a complete accounting of all their intended and realized outcomes would be impossible to compile. Instead, the evaluation was scoped to use examples of Innovations to illustrate the overall outcomes of the EDC Program relative to the FHWA mission and to the Program’s aim of enabling broader and faster deployment of underutilized innovations. The central questions guiding this evaluation (discussed in Section 2.4) concern program-level activities and outcomes rather than innovation-specific activities and results.
The evaluation encompassed EDC Program activities directly related to supporting the deployed Innovations and to program management, including the full life cycle of program planning, execution, and assessment. The evaluation addressed the major activity areas identified in the logic model (Section 2.3):
To identify how the EDC Program generates impact, the evaluation team developed the program logic model shown in Figure 3. The logic model is a simplified view of the EDC Program that highlights its key elements and connects them to anticipated intermediate outcomes expected to translate into the overall program impact.
The logic model specified the key resources and commitments needed to make the EDC Program successful (inputs), the activities undertaken within the program and the key organizations (participants) involved, the immediate results of those activities (outputs), and the likely evidence showing downstream benefits, changes, and other medium- to long-term outcomes that may be attributable to the EDC Program. Using the model, the evaluation team determined the data collection methods appropriate for gathering the evidence related to model elements, identified from whom and where the data would be gathered, and designed the quantitative and qualitative analyses to investigate all aspects of the program and to answer the evaluation questions.
The logic model also clarifies the EDC Program’s “theory of change,” the rationale for designing the Program’s activities to achieve its desired outcomes. The EDC Program creates change on two levels: (1) the level of the individual Innovations within each 2-year Round and (2) the long-term organizational level, over the course of multiple Rounds. Within each Round, the Program is designed to accelerate the deployment of individual Innovations by addressing four conditions affecting the diffusion of innovations among state DOTs:
At the organizational level, the EDC Program’s iterative support across multiple Rounds for deploying valuable but underutilized innovations enhances the innovation capacity of state DOTs, which in turn engenders an innovation-oriented culture across the U.S. surface transportation community.
To match the four aspects of the EDC Program addressed by this evaluation (listed in Section 1.2), the evaluation focused on four major evaluation questions:
One key finding of the Scoping Report was that the EDC Program supports a diverse assortment of innovations spanning many aspects of surface transportation, from highway construction to safety to road operations. The EDC Innovations are not easily comparable in terms of their complexity, the organizations involved, or their intended outcomes. Given this diversity, quantitative approaches alone would fail to capture the full effects of the EDC Program. Instead, this evaluation adopted a “mixed-methods” design that integrated qualitative and quantitative data using multiple techniques for collection and analysis. The primary data sources were Program-related documents (published by the FHWA and by state DOTs and their contractors) and key informant interviews. The documents provided qualitative, descriptive data about the EDC Program and its Innovations and generated some quantitative data about activities and outputs. Interviews were conducted with staff from FHWA headquarters and division offices, representatives from state DOTs, and representatives of other organizations, such as trade and professional associations in the surface transportation sector. The evaluation team also conducted a literature review on the dynamics of innovation in the public and private sectors (and especially in state DOTs) and analyzed data on activities provided by the EDC Program.
Each method and technique used in this evaluation followed procedures described in the evaluation literature, such as Ruegg and Jordan (2007). The matrix presented in Table 1 connects the four aspects of the evaluation described in Section 2.3 to the key elements in the program logic model. (See Appendix C for the full Evaluation Planning Matrix.) To integrate all elements of the logic model and characterize program-level outcomes, the evaluation also used the case study method to illustrate the cumulative outcomes of multiple Innovations (Yin 1992). This approach is described in Section 3.4.
Table 1. Condensed evaluation method matrix.
| Evaluation Aspect | Logic Model Elements | Data Sources | Data Analysis |
|---|---|---|---|
| Process Evaluation | Inputs; Activities; Outputs; Short-term outcomes | Document review; Portfolio data; Stakeholder data; Literature review; Exploratory interviews | Quantitative analysis; Qualitative analysis |
| Outcome Evaluation | Short-term outcomes; Longer-term outcomes and impacts | Document review; Portfolio data; Interviews | Quantitative analysis; Qualitative analysis; Case studies |
| Effectiveness Evaluation | Alignment of short-term and long-term outcomes to program and FHWA strategic goals | Interviews; Stakeholder data; Portfolio data | Quantitative analysis; Qualitative analysis; Case studies |
| Lessons Learned | All program elements | All the above | Integrative analysis |