The purpose of this chapter is to provide a framework for understanding the multifaceted nature of the NSF SBIR/STTR programs and the different types of public benefits they provide, based on both the programs’ stated legislative objectives and relevant economic theory. The chapter begins by briefly reviewing the scope of research funded by NSF and the broad mandate for the NSF SBIR/STTR programs. It then reviews the literature examining both direct and indirect impacts of the programs. The challenges entailed in evaluating the programs are then considered. For instance, it is challenging, if not impossible, to define a perfect counterfactual with which to prove unequivocally that the numerous benefits of SBIR/STTR, and of NSF funding in general, would not have occurred without the agency’s investment. The chapter concludes by describing the committee’s approach to dealing with these evaluation challenges, in light of the programs’ stated objectives and economic rationale and the best available data for evaluation.
Before turning to the specific role of the SBIR/STTR programs at NSF, it is useful to consider NSF’s distinctive mandate and organization for research funding.
NSF was established in 1950 as an independent federal agency, with a mission “to promote the progress of science; to advance the national health, prosperity, and welfare; to secure the national defense; and for other purposes.”2
___________________
1 Material in this chapter is similar to that in Chapter 2 of the National Academies of Sciences, Engineering, and Medicine report Review of the SBIR and STTR Programs at the Department of Energy (NASEM, 2020) and Chapter 2 of the National Academies report Assessment of the SBIR and STTR Programs at the National Institutes of Health (NASEM, 2022).
2 U.S. Congress. P.L. 81-507, National Science Foundation Act of 1950 (May 10, 1950).
In fiscal year (FY) 2022 it had a budget of $8.8 billion (in discretionary appropriations), evaluated 39,000 proposals through a competitive merit review process, and made 11,000 awards, which included funding for 1,800 colleges, universities, and other institutions (NSF, 2022a).3 It is the second largest source of federal funding for basic research after the Department of Health and Human Services, and “the top source of federal funding for basic research in the fields of computer science and mathematics, environmental sciences, and social sciences” (CRS, 2021, p. 5). NSF supports 23 percent of all federally sponsored basic research at U.S. colleges and universities (NSF, 2022a).
NSF’s establishment is often associated with the 1945 report Science: The Endless Frontier, prepared by Vannevar Bush, who served as President Franklin Delano Roosevelt’s science advisor and director of the Office of Scientific Research and Development during World War II. This report laid out a vision for U.S. postwar research infrastructure and talent development and called for the establishment of a science agency (Bush, 1945). However, NSF’s origins can be found earlier in congressional debate dating back to at least 1942.
NSF operates as an independent agency, created to operate largely free of political influence while broadly subject to controls by the executive and legislative branches of government. It is governed by a 24-member board and a director, each of whom is appointed by the President for a 6-year term (with one-third of the board being appointed every 2 years). Congress exerts its influence through the annual budget and appropriations processes, the authorization process, and its oversight authority. By law, nominees to serve on the board “shall be eminent in the fields of the basic sciences, medical science, engineering, agriculture, education, or public affairs” and “selected solely on the basis of established records of distinguished service.”4 The NSF director manages the day-to-day functioning of the agency, and the board, known as the National Science Board, “establishes agency policies, identifies issues critical to NSF’s future, approves the agency’s strategic budget direction, approves annual budget submissions to the Office of Management and Budget, and approves new major programs and awards,” in addition to serving as a body of advisors to Congress and the President (CRS, 2021, p. 2).
Administratively, NSF is divided into seven long-standing research directorates, organized mainly by scientific discipline; a new Directorate for Technology, Innovation and Partnerships (TIP), which “advances use-inspired and translational research” (NSF, 2022a) and houses, among other things, the NSF SBIR and STTR programs; and other offices that administer agency-wide programs. The agency’s annual discretionary appropriations are organized in a series of accounts. In FY2022, 79 percent of NSF’s $8.8 billion discretionary budget fell under the Research and Related Activities (RRA) account, and an additional 13 percent fell under the Education and Human Resources (EHR) account (NSF, 2022a). The RRA and EHR accounts
___________________
3 Figures are rounded.
4 U.S. Congress. P.L. 81-507, National Science Foundation Act of 1950 (May 10, 1950).
form the funding base for NSF’s directorates, and NSF generally has broad discretion to manage the funds within these accounts, giving it great flexibility in managing its programs (CRS, 2021).
NSF does not conduct its own research, but instead funds researchers and educators in their own institutions and entities, mainly through grants but also through contracts and cooperative agreements. In FY2022, 79 percent of the agency’s obligations for research and education programs went to colleges, universities, and academic consortia; 12 percent went to private industry (including nonprofits); and the remainder went to entities such as federally funded research and development (R&D) centers and federal, state, and local governments (NSF, 2022a). Most NSF support for research is awarded through a competitive process with rigorous external peer review at its core.
As described in Chapter 1, the SBIR program had its start at NSF. Roland Tibbetts, a senior program officer at NSF, advanced the need for early-stage financial support for high-risk technologies with commercial potential, and NSF began its pilot program in 1977. In 1982, Congress passed the Small Business Innovation Development Act, which expanded the program to multiple federal agencies, and in 1992, it passed the Small Business Technology Transfer Act to create the STTR program. NSF granted 42 awards as a result of its first SBIR solicitation (NSF, n.d.). For FY2023, NSF expects to fund approximately 250–360 SBIR/STTR Phase I awards and 100–110 SBIR/STTR Phase II awards, with approximately $195 million in funding available.5
The NSF SBIR/STTR programs stand out from their counterparts at other agencies on even cursory inspection. First, the program staff operate as a centralized and independent group within NSF, recently finding a new home within the TIP directorate. Second, relative to the rest of NSF, the SBIR/STTR programs have historically prioritized recruiting program directors with meaningful experience in innovation and technology commercialization beyond the realm of “basic science.” Third, although other agencies tend to link their SBIR/STTR programs closely to their specific missions, the NSF programs have historically been proactive in encouraging a broad range of proposals across a variety of technology and industry domains. Finally, over the past decade, NSF’s SBIR/STTR programs have proactively pursued an approach that gives priority not simply to funding small private-sector firms but to attracting “start-up” firms that are both young and new to the process of federal research funding. The committee kept these distinctive features of the operation and management of NSF’s SBIR/STTR programs in view as it conducted its overall assessment.
Over the past decade, moreover, NSF has augmented and modified its programs, most recently requiring that Phase I applicants first submit a “Project
___________________
5 NSF Small Business Innovation Research/Small Business Technology Transfer Program Phase I (SBIR/STTR Phase I), Program Solicitation NSF 23-515, and NSF Small Business Innovation Research/Small Business Technology Transfer Programs Phase II (SBIR/STTR Phase II), Program Solicitation NSF 23-516.
Pitch” and be invited to submit a proposal based on that pitch, sparing companies from investing significant resources in applications for technologies that have no chance of receiving an award. Another NSF initiative, although one not unique to SBIR/STTR, is the agency’s Innovation Corps (I-Corps) program, which provides early-stage technology commercialization assistance to NSF grantees and other researchers. NSF’s SBIR/STTR processes and procedures are described more fully in Chapter 3, and the implications of the I-Corps program for NSF’s SBIR/STTR programs are explored in Chapter 4.
Pinpointing the appropriate objectives against which to evaluate the NSF SBIR/STTR programs is not a simple task. Consistent with NSF’s broad mandate to promote the advancement of science, the topics of the research funded by SBIR/STTR are driven by investigators. Because no specific agency mission drives detailed funding solicitations, the knowledge spillovers from NSF’s SBIR/STTR funding are difficult to identify and measure.
Over time, the policy and academic discourse around the SBIR and STTR programs has evolved. At the inception of the SBIR program, there were concerns about American competitiveness and the need to incentivize commercial applications of American science. Over the nearly 40 years of the program’s existence, common understandings of what SBIR should be doing have shifted in subtle ways, often reflecting policy imperatives that were salient at particular times or interpretations advanced by academic studies that capitalized on newly available data. These objectives do not always align, creating confusion about the program and its impact on the American economy.
A key target of the SBIR/STTR programs is the realization of innovation by small firms. Innovation—the creation of new products, more efficient processes, and better-organized firms and networks of firms—is critical for U.S. economic growth and development and international competitiveness (Feldman et al., 2016). Small firms have an essential role to play in the creation of innovation by conducting research and pursuing ideas that have transformative potential. A core premise of the programs is that government investment is required for early-stage development of ideas because market failures lead private firms to underinvest. Once a technological innovation has been developed, its commercialization requires a supporting system of private firms, both suppliers and customers; follow-on investors; and developed product markets (Nelson, 1993). Thus the SBIR/STTR programs are an important component of the U.S. system of innovation, focusing on high-risk research and seeding small firms that set the system in motion.
While the barriers and transaction costs facing small businesses are well understood as justifications for government intervention in the early stages of innovation, it has become clear that younger small businesses are the dominant drivers of traditional metrics of economic growth (Haltiwanger et al., 2013). Firm age, therefore, is an important moderating variable in assessment of any program
that aims to support small firms. A review of the literature on SBIR/STTR would be remiss not to recognize this caveat before interpreting the results of existing assessments, particularly those that do not account for firm age.
This systems-based view of the role of government in financing R&D is critical for understanding the full impact of and rationale for SBIR/STTR. Routine conceptions of government performance in R&D funding often focus exclusively on volume of output per unit of investment. The committee hopes that the results and discussion in the following chapters make clear that such a narrow conception ignores the indispensable role of SBIR/STTR within a much larger ecosystem of innovation. The value of these programs is much deeper and broader than their ability to correct the market failure associated with the undersupply of innovation. Thus the committee assesses SBIR/STTR according to four key dimensions, or potential sources of value, at the core of the SBIR program as originally conceived:

1. stimulating technological innovation and facilitating the commercialization of new technologies;
2. directing resources toward socially desirable mission areas and technologies that may be mission-critical to other federal agencies;
3. attracting ideas from a larger and more diverse population of entrepreneurs; and
4. alleviating capital market imperfections by providing seed funding to capital-constrained small businesses.
For NSF, the SBIR/STTR programs stimulate technological innovation and facilitate the commercialization of groundbreaking technologies in myriad ways: advancing science, generating patents, producing collaborative partnerships that result in technology transfer, broadening the geographic scope of NSF’s research activities, and regularly identifying and supporting technological and commercial breakthroughs that attract follow-on funding. And yet none of these impacts is amenable to a simple input–output analysis. The web of interconnected partnerships resulting from awards, for example, can span numerous entities, including the firm; other federal agencies; and other collaborators, contractors, and university partners. The value of SBIR/STTR to each of these entities is practically impossible to measure. Similarly, a single firm might use an award to generate a product whose societal value justifies a decade’s worth of expenditure on the programs, but this impact will not manifest on the margin when one calculates an average total effect across all participants. Chapter 5 is devoted to the evaluation of SBIR/STTR’s innovation outputs, and includes measures of innovation outcomes.
The SBIR/STTR programs represent a key lever with which NSF can direct resources strategically toward socially desirable mission areas and provide initial funding for small businesses to advance technologies that may be mission-critical to other federal agencies. Beyond promoting indirect innovative benefits through technology spillovers, setting the nation’s technological agendas, and integrating resources into broader innovation ecosystems, NSF uses SBIR/STTR to meet broader societal objectives. It has funded such companies as BioMason,
which creates sustainable cement for the construction industry, and Ginkgo Bioworks, which helps provide specialty chemicals that do not rely on fossil fuels as a feedstock. Chapter 4 describes NSF’s role in spurring small businesses that may ultimately provide technologies for other federal agencies.
Another rationale for SBIR/STTR lies in its capacity to attract the best ideas from a larger and more diverse population of entrepreneurs, many of whom face inequitable barriers to market entry in the absence of government involvement. NSF’s success in attracting and funding a broader set of entrepreneurs is discussed in Chapters 4 and 5. Collaboration among awardees, universities, federal laboratories, and other commercial and academic partners represents an additional overarching consideration in assessing the programs. These partnerships are a direct requirement of the STTR program and influence each of the four dimensions listed above. Collaboration outcomes are considered in Chapter 4.
Finally, the committee examined the ability of the NSF SBIR/STTR programs to alleviate capital market imperfections by acting as a source of seed funding for small start-ups. The branding of the programs as America’s Seed Fund reflects the core program objective of helping competitive but capital-constrained small businesses weather the “valley of death”—a term referring to the phenomenon whereby smaller and younger firms lack access to the credit and capital needed to survive past the early stages of the innovation process. One way of understanding the programs’ effectiveness, therefore, is to assess whether participating firms are achieving outcomes, such as follow-on funding, that reflect present or forthcoming commercial success (see Chapter 5).
The committee stresses the importance of using these lenses in concert when considering the evidence for SBIR/STTR’s effectiveness. Doing so reflects sensitivity to the potential trade-offs faced by any program seeking to promote positive firm outcomes along these four dimensions, as well as better reflects the realities of how government funds science. These trade-offs, along with several additional assessment challenges, are discussed below.
The variety of theoretical lenses required for a comprehensive assessment of SBIR/STTR suggests multiple outcome measures, and some of these outcomes are better represented in the existing literature than others. While not dominant in the literature, for example, employment outcomes associated with the SBIR/STTR programs are examined in a few studies (Howell, 2019; Howell and Brown, 2019; Lanahan et al., 2021; Lerner, 2000; Siegel and Wessner, 2012; Wallsten, 2000). Other studies have explored SBIR/STTR’s impact on sales (Siegel and Wessner, 2012), follow-on private financing (Howell, 2017; Lanahan and Armanios, 2018; Toole and Czarnitzki, 2010; Wallsten, 2000), patents (Howell, 2017; Siegel and Wessner, 2012), and copyrights and trademarks (Siegel and Wessner, 2012). Scholars have used the SBIR/STTR programs to study
single, narrow outcomes, often conflating the objectives of the programs with their own topics of study.
This literature provides evidence for numerous benefits of the programs, although some studies present mixed results, suggesting limitations of the research and opportunities for improvement. Lerner (2000) found evidence that SBIR awards boosted sales and employment, but that these gains were concentrated in geographic regions with high levels of venture capital activity and were greatest in high-technology industries. A subsequent study found similar positive effects of the program on commercialization and sales (Audretsch et al., 2002). Other studies provide evidence that SBIR/STTR leads to increased follow-on financing from the private sector (Howell, 2017; Toole and Czarnitzki, 2007) and the creation of intellectual property (Howell, 2017; Siegel and Wessner, 2012). And still other scholars have shown that the programs create knowledge spillovers beyond the firms that receive the awards (Myers and Lanahan, 2022; NASEM, 2020, 2022).
Some empirical work indicates possible limitations of SBIR/STTR and areas for further evaluation and improvement. Wallsten (2000) presents evidence that the programs crowd out private financing, but also highlights the difficulty of determining the direction of causality in evaluations of R&D funding programs. Indeed, this is perhaps the chief empirical challenge facing program evaluators: Is it that SBIR/STTR awards are helping firms succeed where they otherwise might not? Or are the awards going to firms that would have achieved positive outcomes anyway? It is almost certain that both circumstances are at play. Nonetheless, finding causal evidence of program impacts requires understanding which firms are capturing program benefits beyond what they would have gleaned otherwise.
The present study builds on and improves the methodology of prior studies of NSF’s SBIR/STTR programs carried out by the National Academies (NASEM, 2015; NRC, 2008b). Those earlier studies took a survey-based approach. Since then, researchers have begun performing more rigorous evaluations that use externally collected data from a range of sources to gather more objective evidence. Chapter 3 of this report presents a qualitative assessment of SBIR/STTR organizational practices; Chapter 4 provides a descriptive analysis of the landscape of NSF SBIR/STTR awardees; and Chapter 5 presents econometric evidence based on data gathered from multiple sources.
The SBIR/STTR programs have multiple goals that suggest different outcomes for evaluation. Not all awardees seek private financing (Howell, 2017), for example, and many are focused on innovation goals specific to agency needs or have private goals that require long time horizons for commercialization (Lanahan and Armanios, 2018). The varied nature of program goals complicates interpretation of the results of single studies. For instance, it is tempting to interpret Lerner’s (2000) finding that multiple awards do not increase performance as evidence that multiple-award winners may be problematic. On the other hand, additional certification, in the form of additional SBIR/STTR awards
from a single agency or across agencies, may indicate a firm’s willingness to experiment with technologies that may yield greater or broader performance benefits if successful (Lanahan et al., 2022).
Howell’s (2017) study and the National Academies’ recent assessment of the National Institutes of Health (NIH) (NASEM, 2022) are the only SBIR/STTR studies to make use of applicant data. These studies have a notable advantage over others: the ability to compare winners with losers in order to estimate the additional impact of an SBIR/STTR award. In assessing outcomes, Chapter 5 addresses this challenge by matching winners to nonwinners across a nationally representative sample of U.S. firms that were likely to apply to the SBIR/STTR programs.
An additional body of work has developed around factors moderating the effectiveness of SBIR/STTR. Joshi and colleagues (2018) examined workforce diversity within granting agencies and found a strong link between the representation of women and underrepresented minorities in an agency’s workforce and the successful conversion of Phase I to Phase II grants among awardees from those same groups. Agencies with lower representation of these groups in their workforce saw lower conversion rates for woman- and minority-owned awardees. Other work has shown that program performance varies with geography, with awardees doing better in areas with greater access to resources and higher concentrations of high-tech industry activity (Gans and Stern, 2003; Lerner, 2000).
Collectively, the above studies provide quantitative evidence that the SBIR/STTR programs (at least in some agencies) have a salutary impact on the principal goal of encouraging innovation and attracting follow-on funding. However, simply achieving those objectives does not imply that the SBIR/STTR programs accomplish other goals, such as encouraging a higher rate of participation by women and underrepresented minority applicants. It is important to note, moreover, that there are multiple pathways along which firms can evolve through the programs and multiple institutional forces at play in shaping program outcomes. For instance, some firms receive a Phase I award and move quickly to private funding sources, while others proceed to Phase II, perhaps generating new SBIR/STTR Phase I applications and ultimately procurement relationships. Conventional empirical approaches tend to miss the qualitative differences among these experiences. Further, myriad institutions other than the federal government play a role in determining the effectiveness of the SBIR/STTR programs. Seventeen U.S. states have noncompetitive state matching programs for SBIR/STTR awardees, and 45 states offer some form of Phase 0 support (Lanahan and Feldman, 2015).
Lanahan and Feldman’s (2018) study of noncompetitive state matching programs encourages evaluation of SBIR/STTR not within a silo of direct inputs and outputs but within a broader context of policies and institutions. The study results reveal that state programs increase the likelihood of Phase I applications, as well as the successful conversion of Phase I to Phase II awards. Moreover, the study found that additional public funds increase the quality of SBIR/STTR Phase II applications by increasing the size of the potential applicant pool and raising the overall quality of funded proposals.
Additional influences on the effectiveness of the SBIR/STTR programs include regional institutions and collaborative patterns among awardees, universities, and federal laboratories. The value of these collaborations for innovative activity more generally is clear. A robust literature supports the finding that technology transfer programs in universities of high research quality induce commercialization (Lockett and Wright, 2005; Siegel et al., 2003).
Haeussler and Colyvas (2011) surveyed more than 2,000 life scientists in the United Kingdom and Germany and identified numerous drivers of commercial activity among academics and universities. They found that publications and views of the importance of patents are positive predictors of commercial activity and consulting, and that these effects are most pronounced in fields such as engineering and clinical medicine, in which commercial applications are particularly viable.
Procurement and commercialization linkages represent another major source of value for SBIR/STTR. A previous SBIR assessment (NRC, 2008a) described the program as a path to procurement, especially for the Department of Defense. While no studies have specifically analyzed the causal effects of SBIR/STTR on procurement, substantial evidence demonstrates that government procurement in general has produced important commercial applications, notably including weapons systems (Sherwin and Isenson, 1967) and the iPhone (Mazzucato, 2013).
With the above framework for the rationale for SBIR/STTR in place, this section identifies empirical challenges in assessing the program. Underlying these challenges are a number of questions: What types of outcomes should evaluators observe? How should they weigh each of those outcomes in determining the overall effectiveness of the programs? Are the most important outcomes even observable? Which ones are not? What types of evidence matter most (quantitative versus qualitative, “extreme” successes versus average effects versus notable failures), and across what time scales and domains? Answers to these questions will likely depend on the specific aspect of SBIR/STTR under consideration and the associated assessment lens from the four discussed above. Chapter 1 of this report identifies the central challenges of assessment for programs such as SBIR/STTR. This section examines those challenges in greater depth, highlighting attempts to deal with them in the current SBIR/STTR literature.
One reason assessment of SBIR/STTR is challenging is the difficulty of observing and measuring certain outcomes related to program success, including
the direct production of innovation. Ideally, the committee would have liked to have data on innovative new products introduced to the market, which would enable an evaluation of the commercialization pathway associated with NSF-sponsored science. Yet those ideal data do not exist. Tracing commercialization is complicated further by the programs’ focus on business-to-business markets rather than the more publicly visible introduction of new products for consumer markets. NSF SBIR/STTR projects often produce innovations, such as technologies for biorenewable products, that provide public goods and are difficult to price. Realizing the full potential of NSF SBIR/STTR investment requires long time horizons, cumulative activity, and large infrastructure investment.
Another assessment challenge lies in measuring economic outcomes. Program evaluations, by their very nature, focus on marginal gains for the marginal firms served by the program, with program impacts reported in terms of average effects. As illustrated in Chapter 5, however, many of SBIR/STTR’s economic impacts derive in large part from a small subset of firms that achieve outstanding results. And while the programs are often expected to create jobs, it is unreasonable to expect many young firms to do so in the short term, given SBIR/STTR’s explicit incentives for firms to spend award money on partnerships with experts at research institutions who are already employed (Lanahan et al., 2021).
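To see why averages can mislead here, consider the toy illustration below; the numbers and variable names are invented solely for illustration and do not come from this study’s data.

```python
# Toy illustration with invented numbers: when a few firms account for most of
# the gains, the mean effect and the typical awardee's experience diverge.
import numpy as np

# Hypothetical follow-on revenue gains ($ millions) for ten awardees:
# nine modest outcomes and one breakout success.
gains = np.array([0.0, 0.0, 0.1, 0.2, 0.2, 0.3, 0.5, 0.5, 1.0, 120.0])

print(f"mean gain:   ${gains.mean():.2f}M")      # pulled up by the outlier
print(f"median gain: ${np.median(gains):.2f}M")  # the typical awardee
print(f"top firm's share of total gains: {gains.max() / gains.sum():.0%}")
```

In this contrived example the mean gain is more than $12 million while the median awardee gains a quarter of that amount per firm times fifty; nearly all of the measured total comes from one firm, which an average-effects evaluation would treat as noise rather than as the program’s central success.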
In general, some degree of uncertainty is inevitable in direct attribution of program effects across all relevant aspects of the systems of innovation in which SBIR/STTR is embedded. It is unrealistic to expect that all program impacts can be precisely measured. Mazzucato (2015) highlights both the uncertainty surrounding technological innovation and the interrelationships and feedback between innovation and the market structures in which firms operate, which together make a comprehensive assessment of SBIR/STTR on all relevant dimensions impossible.
A priority reflected in the current SBIR/STTR literature is determining additionality in estimating program outcomes. In other words, do evaluations reflect a plausible comparison between what happened under SBIR/STTR and what would have happened had the programs not been implemented? The literature offers numerous examples of attempts to evaluate SBIR/STTR by identifying this counterfactual condition and using it as a benchmark for measuring program success. Typically, such efforts involve one of three primary approaches to inferring the programs’ causal impacts.
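In standard potential-outcomes notation (a textbook formulation rather than notation drawn from the studies cited here), additionality corresponds to the average treatment effect on the treated:

```latex
% Y_i(1): firm i's outcome with an award; Y_i(0): the same firm's outcome
% without one; D_i = 1 indicates that firm i received an award.
\[
\mathrm{ATT} \;=\; \mathbb{E}\bigl[\, Y_i(1) - Y_i(0) \,\big|\, D_i = 1 \,\bigr]
\]
```

The counterfactual term, the expected outcome awardees would have realized without an award, is never observed directly; each of the three approaches below is a strategy for approximating it.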
Randomized experiments represent the ideal approach because they virtually guarantee that any observed impact is “additional” in the sense that it would not have occurred without the programs. But since SBIR/STTR awards are based on application quality, the process that separates recipients from nonrecipients is inherently nonrandom.
A second, quasi-experimental approach can often approximate random assignment. Howell’s (2017) study, for example, exploits a merit-based assignment threshold to compare program outcomes for applicants just above and just below the cutoff. This approach is celebrated in the economics literature for its ability to uncover causal effects. A downside of such methods, however, is that they are amenable only to a narrow set of outcomes and often allow inference for only a small subset of the population of actual and potential awardees. Moreover, these types of studies are possible only if an agency provides access to all applicant data, including data on applicants not selected for awards.
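A minimal sketch of this threshold comparison follows, assuming hypothetical applicant-level data; the file name, column names, cutoff, bandwidth, and outcome variable are placeholders and do not reflect the design details of Howell’s (2017) study.

```python
# Minimal sketch of a regression discontinuity comparison around a merit-score
# cutoff. The data file, column names, cutoff, and bandwidth are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("applicants.csv")        # hypothetical applicant-level data
CUTOFF = 0.0                              # assumed award threshold on the merit score
df["score"] = df["merit_score"] - CUTOFF  # center the running variable at the cutoff
df["award"] = (df["score"] >= 0).astype(int)

# Restrict to applicants near the threshold so the comparison is local.
BANDWIDTH = 2.0
local = df[df["score"].abs() <= BANDWIDTH]

# Local linear regression with separate slopes on each side of the cutoff;
# the coefficient on `award` estimates the jump in the outcome at the threshold.
fit = smf.ols("followon_vc ~ award + score + award:score", data=local).fit(
    cov_type="HC1"  # heteroskedasticity-robust standard errors
)
print(fit.summary().tables[1])
```

The local comparison is what buys credibility: applicants scoring just above and just below the cutoff are plausibly similar on unobservables, so the estimated jump at the threshold can be read as the award’s causal effect for those marginal applicants only.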
A third approach is nonexperimental. Such studies may attempt to infer additionality by creating matched “twins” for winners (e.g., Lanahan et al., 2021; Lerner, 2000) from among a population of nonwinners based on a number of observable firm characteristics. Others control for those characteristics statistically to rule out competing explanations for what appear to be the effects of the programs (e.g., Siegel and Wessner, 2012). Still other studies acknowledge that certain important outcomes are not measurable within a quantitative causal inference framework, and instead use qualitative data to uncover mechanisms and process details that are critical to understanding program outputs.
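As a deliberately simplified sketch of the matched-twins logic, the code below pairs each winner with its nearest nonwinner on a few observable characteristics; the data source, covariates, and outcome are hypothetical, and published studies use far richer matching procedures and robustness checks.

```python
# Simplified sketch of constructing matched "twins" for award winners from a
# pool of nonwinners. All file names, covariates, and outcomes are hypothetical.
import pandas as pd
from sklearn.preprocessing import StandardScaler
from sklearn.neighbors import NearestNeighbors

firms = pd.read_csv("firms.csv")  # hypothetical firm-level data
covariates = ["firm_age", "employees", "prior_patents", "log_revenue"]

winners = firms[firms["sbir_award"] == 1]
pool = firms[firms["sbir_award"] == 0]

# Standardize covariates so no single characteristic dominates the distance.
scaler = StandardScaler().fit(firms[covariates])
nn = NearestNeighbors(n_neighbors=1).fit(scaler.transform(pool[covariates]))

# For each winner, find the closest nonwinner on observables (its "twin").
_, idx = nn.kneighbors(scaler.transform(winners[covariates]))
twins = pool.iloc[idx.ravel()]

# Naive winner-minus-twin difference in a hypothetical outcome; real studies
# also match exactly within industry and year and probe unobserved differences.
diff = winners["employment_growth"].mean() - twins["employment_growth"].mean()
print(f"winner - matched-twin difference in employment growth: {diff:.3f}")
```

The central limitation, of course, is that matching can balance only what is observed; any unmeasured difference between winners and their twins (such as founder quality) remains a competing explanation for the estimated difference.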
Qualitative, process-oriented studies are the least well represented in the SBIR/STTR literature, perhaps because they compare poorly with the other two approaches in the ability to meet strict empirical requirements for additionality. Nonetheless, such studies are indispensable in addressing some of the assessment challenges identified above. They can better account for multiple and conflicting outcomes at once, and they are likely the best means of elucidating causal mechanisms that explain the details of program impacts and clarify opportunities for process improvements.
The committee’s assessment of SBIR/STTR employs the four-part rationale for the programs discussed above, as well as multiple outcomes to account for different program objectives. In selecting those outcomes, the committee looked to its formal statement of task while also considering gaps in the existing literature, including the need for qualitative data on process, attention to extreme outliers, and outcomes such as procurement and basic research, which may compete with more straightforward economic outputs such as job growth and product commercialization.
The committee also placed emphasis on empirical approaches that explicitly address additionality concerns. Chapter 5 presents results of statistical models designed to approximate the counterfactual of what would have happened to awardees in the absence of SBIR/STTR. However, these approaches are insufficient, on their own, for a comprehensive assessment of SBIR/STTR along the dimensions requested in the statement of task. Accordingly, the committee used qualitative interviews to assess outcomes and processes not amenable to quantitative and experimental research designs. This approach also considers the
importance of good descriptive data, especially at the far right and left ends of the distribution of performance outcomes.
Finally, the report emphasizes that individual estimates of program performance should not be interpreted in isolation. Rather, these results are part of SBIR/STTR’s performance within a complex innovation system. Strong performance in certain areas may mean corresponding deficits in others, and individual firm performance will not capture the full value of return on SBIR/STTR investment given spillovers and unobservable influences within these systems. Explaining these dynamics requires in-depth interviews and detailed description of patterns and findings at multiple levels of SBIR/STTR processes.
Using the approach outlined above, the committee conducted its evidence-based assessment of SBIR/STTR processes and procedures at NSF.