This report on the Small Business Innovation Research (SBIR) and Small Business Technology Transfer (STTR) programs at the Department of Energy (DoE) is a part of a series of reports on SBIR and STTR at the National Institutes of Health (NIH), Department of Defense (DoD), NASA, and National Science Foundation (NSF). Collectively, they complement an earlier assessment of the SBIR program by the National Academies of Sciences, Engineering, and Medicine, completed in 2009.1
The first-round assessment of SBIR, conducted under a separate ad hoc committee, resulted in a series of reports released from 2004 to 2009, including a framework methodology for that study on which the current methodology builds.2 Thus, as in the first-round study, the objective of this second-round study is “not to consider if SBIR should exist or not”—Congress has already decided affirmatively on this question, most recently in the 2011 reauthorization of the program.3 Rather, we are charged with “providing assessment-based findings of the benefits and costs of SBIR . . . to improve public understanding of the program, as well as recommendations to improve the program’s effectiveness.” As with the first round, this study “will not seek to compare the value of one area with other areas; this task is the prerogative of the Congress and the Administration acting through the agencies. Instead, the study is concerned with the effective review of each area.”
___________________
1 Effective July 1, 2015, the institution is called the National Academies of Sciences, Engineering, and Medicine. References in this report to the National Research Council or NRC are used in an historic context identifying programs prior to July 1, 2015.
2 National Research Council, An Assessment of the Small Business Innovation Research Program: Project Methodology, Washington, DC: The National Academies Press, 2004.
3 National Defense Authorization Act for Fiscal Year 2012 (NDAA), H.R. 1540, Title LI.
These areas refer to the four legislative objectives of the SBIR program:4
The parallel language for STTR from the SBA’s STTR Policy Directive is as follows:
(c) The statutory purpose of the STTR Program is to stimulate a partnership of ideas and technologies between innovative small business concerns (SBCs) and Research Institutions through Federally-funded research or research and development (R/R&D). By providing awards to SBCs for cooperative R/R&D efforts with Research Institutions, the STTR Program assists the small business and research communities by commercializing innovative technologies.5
The SBIR/STTR programs, on the basis of highly competitive solicitations, provide modest initial funding for selected Phase I projects (up to $150,000) for feasibility testing, and further Phase II funding (up to $1 million) for about one-half of Phase I projects.
From a methodology perspective, assessing these programs presents formidable challenges. Among the more difficult are the following:
___________________
4 The most current description of these legislative objectives is in the Policy Guidance provided by the Small Business Administration (SBA) to the agencies. SBA, SBIR Policy Directive, Section 1(c), October 18, 2012, p. 3.
5 Small Business Administration, Office of Investment and Innovation, “Small Business Technology Transfer (STTR) Program— Policy Guidance,” updated February 24, 2014.
biggest impacts take many years to peak even after products have reached markets.
The methodology utilized in this study of the SBIR/STTR programs builds on the methodology established by the committee that completed the first-round study of the SBIR program.
The committee that undertook the first-round study and the agencies under study formally acknowledged the difficulties involved in assessing SBIR programs. Accordingly, that study began with development of the formal volume on methodology, which was published in 2004 after completing the standard National Academies peer-review process.6
The established methodology stressed the importance of adopting a broad range of tools, based on prior work in this area. The committee concluded that appropriate methodological approaches
build from the precedents established in several key studies already undertaken to evaluate various aspects of the SBIR. These studies have been successful because they identified the need for utilizing not just a single methodological approach, but rather a broad spectrum of approaches, in order to evaluate the SBIR from a number of different perspectives and criteria.
This diversity and flexibility in methodological approach are particularly appropriate given the heterogeneity of goals and procedures across the five agencies involved in the evaluation. Consequently, this document suggests a broad framework for methodological approaches that can serve to guide the research team when evaluating each particular agency in terms of the four criteria stated above. [Table APP A-1] illustrates some key assessment parameters and related measures to be considered in this study.7
___________________
6 National Research Council, An Assessment of the Small Business Innovation Research Program: Project Methodology, 2.
7 National Research Council, An Assessment of the Small Business Innovation Research Program: Project Methodology, 2.
TABLE APP A-1 Overview of Approach to SBIR Program Assessment
| SBIR Assessment Parameters | Quality of Research | Commercialization of SBIR-Funded Research/Economic and Non-Economic Benefits | Small Business Innovation/Growth | Use of Small Businesses to Advance Agency Missions |
|---|---|---|---|---|
| Questions | How does the quality of SBIR-funded research compare with that of other government-funded R&D? | What is the overall economic impact of SBIR-funded research? What fraction of that impact is attributable to SBIR funding? | How to broaden participation and replenish contractors? What is the link between SBIR and state/regional programs? | How to increase agency uptake while continuing to support high-risk research? |
| Measures | Peer-review scores, publication counts, citation analysis | Sales; follow-up funding; progress; initial public offering | Patent counts and other intellectual property; employment growth; number of new technology firms | Agency procurement of products resulting from SBIR work |
| Tools | Case studies, agency program studies, study of repeat winners, bibliometric analysis | Phase II surveys, program manager surveys, case studies, study of repeat winners | Phase I and Phase II surveys, case studies, study of repeat winners, bibliometric analysis | Program manager surveys, case studies, agency program studies, study of repeat winners |
| Key Research Challenges | Difficulty of measuring quality and of identifying proper reference group | Skew of returns; significant interagency and inter-industry differences | Measures of actual success and failure at the project and firm levels; relationship of federal and state programs in this context | Major interagency differences in use of SBIR to meet agency missions |
NOTE: Supplementary tools may be developed and used as needed. The committee notes that while sales is a legitimate indicator of progress toward commercialization, it is not a reliable measure of whether commercial success has occurred.
SOURCE: National Research Council, An Assessment of the Small Business Innovation Research Program: Project Methodology, Washington, DC: The National Academies Press, 2004, Table 1, p. 3.
Quantitative and qualitative tools being utilized in the current study of the SBIR/STTR programs include the following:
Taken together with our committee deliberations and the expertise brought to bear by individual committee members, these tools provide the primary inputs into the analysis.
We would stress that, for the first-round study and for our current study, multiple research methodologies feed into every finding and recommendation. No findings or recommendations rest solely on data and analysis from the survey; conversely, data from the survey are used to support analysis throughout the report.
Congressional discussions of the SBIR program in the context of the 2011 reauthorization reflected strong interest in the commercialization of technologies funded through SBIR. This enhanced focus is understandable: the investment made should be reflected in outcomes approved by Congress.
However, no simple definition of “commercialization” exists.8 Broadly speaking, in the context of the program it means funding for technology development beyond that provided under Phase II SBIR funding. At DoE, most commercialization occurs outside the agency, mostly in the private sector (as survey results indicate).
In the 2009 report on the DoE SBIR program,9 the committee charged with that assessment held that a binary metric of commercialization was insufficient. It noted that the scale of commercialization is also important and that there are other important milestones, both before and after the first dollar in sales, that should be included in an appropriate approach to measuring commercialization. The committee carrying out the current study further notes that while sales is a legitimate indicator of progress toward commercialization, it is not a reliable measure of whether commercial success has occurred.
Despite substantial efforts at DoE, described below, significant challenges remain in tracking commercialization outcomes for the DoE SBIR/STTR programs. These include the following:
___________________
8 See Chapter 5 (Quantitative Outcomes) for related analysis of commercialization in the SBIR program.
9 National Research Council, An Assessment of the SBIR Program at the Department of Energy, Washington, DC: The National Academies Press, 2009.
Congress often seeks evidence about the effectiveness of programs or indeed about whether they work at all. This interest has in the past helped to drive the development of tools such as the Company Commercialization Record database at DoD. However, in the long term the importance of tracking lies in its use to support program management. By carefully analyzing outcomes and associated program variables, program managers should be able to manage more successfully.
We have seen significant limitations to all of the available data sources. Unfortunately, DoE declined to share its tracking data on privacy grounds; the data were therefore not available for our use, and we are unable to draw conclusions about the quality or extent of DoE data collection.
Although Congressional interest has focused primarily on commercialization in recent years, it remains the case that there are four congressionally mandated objectives for the SBIR program, and that commercialization is only one of them. STTR adds additional objectives beyond commercialization. DoE’s data collection tools focus almost exclusively on commercialization; they appear to have limited capabilities for collecting data about the other three SBIR program objectives described in the introduction to this appendix.
Our analysis of the SBIR and STTR programs at DoE makes extensive use of case studies, interviews, and other qualitative methods of assessment. These sources remain important components of our overall methodology, and Chapter 7 (Insights) is devoted to lessons drawn from case studies and other qualitative sources. But qualitative assessment alone is insufficient.
The survey offers several significant advantages over other data sources, as it
At the same time, however, we are fully cognizant of the limitations of this type of observational survey research in this case. To address these issues while retaining the utility and indeed explanatory power of survey-based methodology, this report contextualizes the data by comparing results to those from the survey conducted as part of the first-round assessment of the SBIR program (referred to below as the “2005 Survey”10). This report also adds transparency by publishing the number of responses for each question and indeed each subgroup, thus allowing readers to draw their own conclusions about utility of the data.
We contracted with Grunwald Associates LLC to administer a survey to DoE award recipients. This 2014 survey is based closely on the 2005 Survey but is also adapted to lessons learned and includes some important changes discussed in detail below. A methodology subgroup of the committee was charged with reviewing the survey and the reported results for best practice and accuracy. The 2014 Survey was carried out simultaneously with a survey focused on the SBIR program at NIH, and followed a survey in 2011 of awardees at NASA, NSF, and DoD.11
The primary objectives of the 2011 and 2014 surveys were as follows:
Box A-1 identifies multiple sources of bias in survey response.
___________________
10 The survey conducted as part of the current, second-round assessment of the SBIR program is referred to below as the “2014 Survey” or simply the “survey.” In general, throughout the report, any survey references are understood to be to the 2014 Survey unless specifically noted otherwise.
11 Delays at NIH and DoE in contracting with the National Academies, combined with the need to complete work contracted with DoD, NSF, and NASA, led the committee to proceed with the survey at three agencies only.
In order to ensure maximum comparability for a time series analysis, the survey for the current assessment was based as closely as possible on previous surveys, including the 2005 Survey and the 1992 GAO survey.
Given the limited population of Phase II awards, the starting point for consideration was to deploy one questionnaire per Phase II award. However, we were also aware that the survey imposes burdens on respondents. Given the detailed and hence time-consuming nature of the survey, it would not be appropriate to over-burden potential recipients, some of whom were responsible for many awards over the years.
An additional consideration was that this survey was intended to add detail on program operations, beyond the original primary focus on program outcomes. Agency clients were especially interested in probing operations more deeply. We decided that it would be more useful and effective to administer the survey to PIs—the lead researcher on each project—rather than to the registered company point of contact (POC), who in many cases would be an administrator rather than a researcher.
The survey was therefore designed to collect the maximum amount of relevant data, consistent with our commitment to minimize the burden on individual respondents and to maintain maximum continuity between surveys. Survey questionnaires were to be sent to PIs of all projects that met selection characteristics, with a maximum of two questionnaires per PI. (The selection procedure is described in the section on “Initial Filters for Potential Recipients”.)
Based on reviewer feedback about the previous round of assessments, we also attempted to develop comparison groups that would provide the basis for further statistical analysis. This effort was eventually abandoned (see the section on “Effort at Comparison Group Analysis”).
Key similarities and differences between the 2005 and 2014 Surveys are captured in Table A-2.
The 2014 Survey included awards made from FY 2001 to FY 2010 inclusive. This end date allowed for completion of Phase II awards (which nominally fund 2 years of research) and provided a further 2 years for commercialization. This time frame was consistent with the 2005 Survey, which surveyed awards from FY 1992 to FY 2001 inclusive. It was also consistent with a previous GAO study, published in 1992, which surveyed awards made through 1987.
The aim of setting the overall time frame at 10 years was to reduce the impact of difficulties generating information about older awards, because some companies and PIs may no longer be in place and because memories fade over time. Reaching back to awards made before FY 2001 would generate few additional responses.
TABLE A-2 Similarities and Differences: 2005 and 2014 Surveys
| Item | 2005 Survey | 2014 Survey |
|---|---|---|
| Respondent selection | | |
| Focus on Phase II winners | | |
| Inclusion of Phase I winners | | |
| All qualifying awards | | |
| Respondent = Principal Investigator (PI) | | |
| Respondent = Point of Contact (POC) | | |
| Max number of questionnaires | <20 | 2 |
| Distribution | | |
| | No | |
| | | |
| Telephone follow-up | | |
| Questionnaire | | |
| Company demographics | Identical | Identical |
| Commercialization outcomes | Identical | Identical |
| IP outcomes | Identical | Identical |
| Women and minority participation | | |
| Additional detail on minorities | | |
| Additional detail on PIs | | |
| New section on agency staff | | |
| New section on company recommendations for SBIR | | |
| New section capturing open-ended responses | | |
Following the precedent set by both the original GAO study and the first-round study of the SBIR program, we differentiated between the total population of awards, the preliminary survey target population of awards, and the effective population of awards for this study.
Two survey response rates were calculated. The first uses the effective survey population of awards as the denominator, and the second uses the preliminary population of awards as the denominator.
After acquiring record-level lists of awards and recipients for the 2014 Survey from the sponsoring agencies (NIH and DoE), we applied initial and secondary filters to reach the preliminary survey population and ultimately the effective survey population. These steps are described below.
From this initial list, determining the preliminary survey population required the following steps:
This process of excluding awards—either because they did not fit the protocol agreed upon by the committee or because the agencies did not provide sufficient or current contact information—reduced the total award list provided by DoE from an initial list of 1,325 to a preliminary survey population of 1,077 Phase II SBIR and STTR awards.
This preliminary population still included many awards for which the PI contact information appeared complete, but for which the PIs were no longer associated with the contact information provided and hence effectively unreachable. This is not surprising given that there is considerable turnover in both the existence of and the personnel working at small businesses and that the survey reached back 13 years to awards made in FY 2001. PIs for awards may have left the company, the company may have ceased to exist or been acquired, or telephone and email contacts may have changed, for example. Consequently, two further filters were utilized to help identify the effective survey population.
There was little variation between agencies or between programs in the quality of the lists provided by the agencies, based on these criteria.12
Following the application of these secondary filters, the effective population of DoE Phase II awardees was 494.
The survey opened on December 3, 2014, and was deployed by email, with voice follow-up support. Up to four emails were sent to the PIs for the effective population of awards (emails were discontinued once responses were received or it was determined that the PI was non-contactable). In addition, two voice mails were delivered to non-responding PIs of awards in the effective population, between the second and third and between the third and fourth rounds of email. In total, up to six efforts were made to reach each PI who was sent an award questionnaire.
After members of the data subgroup of the committee determined that additional efforts to acquire new responses were not likely to be cost effective, the survey was closed on April 7, 2015. The survey was therefore open for a total of 18 weeks.
Standard procedures were followed to conduct the survey. These data collection procedures were designed to increase response to the extent possible within the constraints of a voluntary survey and the survey budget. The population surveyed is a difficult one to contact and obtain responses from, as
___________________
12 The share of preliminary contacts that turned out to be not contactable was higher for this survey than for the 2005 Survey. We believe this is primarily because the company points of contact (POCs) to which the 2005 Survey was sent, often senior company executives, have less churn than do principal investigators (PIs).
evidence from the literature shows.13 Under these circumstances, the inability to contact and obtain responses always raises questions about potential bias of the estimates that cannot be quantified without substantial extra efforts requiring resources beyond those available. (See Box A-1 for a discussion of potential sources of bias.)
The lack of detailed applications data from the agency, beyond the name and address of the company, makes it impossible to estimate the possible impact of non-response bias. We therefore have no evidence either that nonresponse bias exists or that it does not. For the areas where Academy surveys have overlapped with other data sources (notably DoD’s mandatory CCR database), results from the survey and from the DoD data are similar. Table A-3 shows the response rates at DoE, based on both the preliminary study population and the effective study population after all adjustments.
The 2014 Survey primarily reached companies that were still in business: overall, 97 percent of PIs responding for an award in the effective population indicated that the companies were still in business.14
Several readers of the first-round reports on the SBIR program suggested inclusion of comparison groups in the analysis. There is no simple and easy way to acquire a comparison group for Phase II SBIR or STTR awardees, especially at the agency level. These are technology-based companies
TABLE A-3 2014 Survey Response Rates at DoE
| Overall Population of Awards (all awards made) | 1,325 |
| Preliminary Population of Awards | 1,077 |
| Awards for which the PIs Were Not Contactable | |
| No Email | 320 |
| No Phone Contact | 263 |
| Effective Population of Awards | 494 |
| Number of Awards for which Responses Were Received | 269 |
| Response Rate: Percentage of Effective Population of Awards | 54.5 |
| Response Rate: Percentage of Preliminary Population of Awards | 25.0 |
SOURCE: 2014 Survey.
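The two response rates reported in Table A-3 can be reproduced directly from the award counts in the table. A minimal sketch (variable names are ours; the figures are from Table A-3):

```python
# Award counts from Table A-3 (DoE, 2014 Survey).
preliminary_population = 1077   # awards remaining after initial filters
effective_population = 494      # awards whose PIs were reachable
responses = 269                 # awards for which responses were received

# Rate against the effective population (reachable PIs only).
effective_rate = 100 * responses / effective_population
# Rate against the larger preliminary population.
preliminary_rate = 100 * responses / preliminary_population

print(round(effective_rate, 1))    # 54.5
print(round(preliminary_rate, 1))  # 25.0
```

The two denominators bracket the plausible range: the effective-population rate assumes non-contactable PIs are ignorable, while the preliminary-population rate treats every filtered award as a potential respondent.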
___________________
13 Many surveys of entrepreneurial firms have low response rates. For example, Aldrich and Baker (1997) found that nearly one-third of surveys of entrepreneurial firms (whose results were reported in the academic literature) had response rates below 25 percent. See H. E. Aldrich and T. Baker, “Blinded by the Cites? Has There Been Progress in Entrepreneurship Research?” pp. 377-400 in D. L. Sexton and R. W. Smilor (eds.), Entrepreneurship 2000, Chicago: Upstart Publishing Company, 1997.
14 2014 Survey, Question 4A.
at an early stage of company development, which have demonstrated both the capacity to undertake challenging technical research and evidence that they are potentially successful commercializers. Given that the operations of the SBIR/STTR programs are defined in legislation and limited by the Policy Guidance provided by SBA, randomly assigned control groups were not a feasible option.
As part of our 2011 Survey of DoD, NSF, and NASA SBIR and STTR award recipients, efforts were made to identify a pool of SBIR-like companies by contacting the most likely sources (Dun & Bradstreet and Hoovers), but these efforts were not successful, as insufficiently detailed and structured information about companies was available.
In response, we sought to develop a comparison group from among Phase I awardees that had not received a Phase II award from the three agencies surveyed in the 2011 Survey during the award period covered by that survey (FY 1999-2008). After considerable review, however, we concluded that the Phase I-only group was also not appropriate for use as a statistical comparison group.