In keeping with the definitions and concepts in the previous section, the NRC study will identify the measures best suited to expressing results for each of the objectives defined in Section 3, Clarifying Study Objectives. The metrics ultimately used in the study will be selected partly on their theoretical importance in answering critical questions and partly on practical considerations. Here we list a set of draft metrics of clear utility to the study; not all will ultimately be adopted, and others will undoubtedly be developed as additional elements emerge as the research progresses.
Internal measures of research quality—These will be based on comparative survey results from agency managers on the quality of SBIR-funded research versus the quality of other agency research. It is important here to recognize that standards and reviewer biases in the selection of SBIR awards and in the selection of other awards may vary.
External measures of research quality
Peer-reviewed publications
Citations
Technology awards from organizations outside the SBIR agency
Patents
Patent citations
Agency missions vary; for example, procurement will not be relevant to the NSF and NIH (and some DoE) SBIR programs. The value of SBIR to the agency mission can best be addressed through surveys at the sub-unit manager level, following the approach demonstrated in Archibald and Finifter’s (2000) Fast Track study, which provides a useful model in this area.45 These surveys will seek to address:
The alignment between agency SBIR objectives and agency mission
Agency-specific metrics (to be determined)
Procurement:
The rate at which agency procurement from small firms has changed since inception of SBIR;
The change in the time elapsed between a proposal arriving on an agency’s desk and the contract arriving at the small business;
The rate at which SBIR firm involvement in procurement has changed over time;
Comparison of SBIR-related procurement with other procurement emerging from extra-mural agency R&D;
Technology procurement in the agency as a whole;
Agency success metrics – how does the agency assess and reward management performance? Issues include
Time elapsed between a proposal arriving on an agency’s desk and the contract arriving at the small business.
Minimization of lags in converting from SBIR Phase I to Phase II
Parallel data collection across the five agency SBIR programs will compile year-by-year program demographics for approximately the last decade. Data compilation requests will include the number of applications, the number of awards, the ratio of awards to applications, and the total dollars awarded for each phase of the multi-phase program. The collection will cover the geographical distribution of applicants, awards, and success rates; statistics on applications and awards by women-owned and minority-owned companies; statistics on commercialization strategies and outcomes; results of agency-initiated data collection and analysis; and uniform data from a set of case studies for each agency.
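The ratio of awards to applications described above is a straightforward year-by-year tabulation. The sketch below illustrates the computation; the counts are invented placeholders, not actual agency data.

```python
# Hypothetical year-by-year Phase I program demographics
# (placeholders only; real figures would come from the agency databases).
applications = {2000: 3200, 2001: 3450, 2002: 3700}
awards       = {2000: 420,  2001: 445,  2002: 460}

def success_rates(apps, awds):
    """Ratio of awards to applications for each year present in both series."""
    return {year: awds[year] / apps[year] for year in apps if year in awds}

rates = success_rates(applications, awards)
for year, rate in sorted(rates.items()):
    print(f"{year}: {rate:.1%}")
```

The same tabulation would be repeated for each agency and each phase, and broken out by the demographic categories (geography, women-owned and minority-owned firms) listed above.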
The Committee plans to draw on the following data collection instruments:
Phase I recipient survey
Phase II recipient survey
SBIR program manager survey
COTAR (technical point of contact) survey
case data from selected cases
Data collected from these surveys and case studies will be added to existing public sources of data that will be used in the study, such as:
all agency data covering award applications, awards, outcomes, and program management
patent and citation data
venture capital data
census data
Additional data may be collected as a follow-up based on an analysis of responses.
Pending receipt of the agency databases of Phase I and Phase II applications and awards, the study will examine the agency rates of transition between phases.
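Once those databases arrive, the phase-to-phase transition rate reduces to a simple ratio of Phase II awards to Phase I awards, per agency and in aggregate. The sketch below shows the computation with invented counts (the agency names are real; the figures are not).

```python
# Hypothetical Phase I and Phase II award counts per agency
# (placeholders pending receipt of the actual agency databases).
phase1_awards = {"DoD": 900, "NIH": 700, "NASA": 300, "DoE": 250, "NSF": 200}
phase2_awards = {"DoD": 360, "NIH": 310, "NASA": 120, "DoE": 100, "NSF": 85}

def transition_rate(p1, p2):
    """Share of Phase I awards in each agency that proceed to Phase II."""
    return {agency: p2[agency] / p1[agency] for agency in p1 if agency in p2}

def overall_rate(p1, p2):
    """Aggregate Phase I -> Phase II transition rate across all agencies."""
    return sum(p2.values()) / sum(p1.values())

for agency, rate in sorted(transition_rate(phase1_awards, phase2_awards).items()):
    print(f"{agency}: {rate:.0%}")
print(f"Overall: {overall_rate(phase1_awards, phase2_awards):.0%}")
```

Note that a firm-level linkage between the two databases, rather than aggregate counts, would be needed to track which specific projects transition.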
The Phase II survey will gather information on all Phase III activity, including commercial sales, sales to the federal government, export sales, follow-on federal R&D contracts, further investment in the technology from various sources, marketing activities, and identification of commercial products or federal programs that incorporate the products. SBIR program manager surveys and interviews will address federal efforts to carry the results of Phase II SBIR projects into Phase III federal programs.
First order metrics for commercialization revolve around these basic areas:
Sales (firm revenues)
Direct sales in the open market as a percentage of total sales
Indirect sales (e.g., bundled with other products and services) as a percentage of total sales
Licensing or sale of technology
Contracts relating to products
Contracts relating to the means of production or delivery—processes
SBIR-related products, services, and processes procured by government agencies.
Spin-off of firms
The issue of commercial success goes beyond whether project awards go to firms that then succeed in the market. These firms may well have succeeded anyway, or they may simply have displaced other firms that would have succeeded had their rival not received a subsidy. The real issue is whether SBIR increases the number of small businesses that succeed in the market. If the data permit, the study team may try to emulate the research of Feldman and Kelley to test the hypothesis that SBIR increases the number of small businesses that pursue their research projects or achieve other goals.46
Support for firm development, which may include:
Creation of a firm (i.e., has SBIR led to the creation of a firm that otherwise would not have been founded)
Survival
Growth in size (employment, revenues)
Merger activity
Reputation
Increase in stock value/IPO, etc.47
Formation of collaborative arrangements to pursue commercialization, including pre-competitive R&D or a place in the supply chain
Investment in plant (production capacity)
Other pre-revenues activities aimed at commercialization, such as entry into regulatory pipeline and development of prototypes
Access to capital
Private capital
From angel investors
From venture capitalists
Banks and commercial lenders
Capital contributions from other firms
Stock issue of the SBIR-recipient firm, e.g., initial public offerings (IPO)
Subsequent (non-SBIR) funding procurement from government agencies
Enhanced research efficiency
Outcomes from SBIR vs. non-SBIR research
Agency manager attitudes toward SBIR
Social returns include private returns, agency returns, and spillover effects from research, development, and commercialization of new products, processes, and services associated with SBIR projects. It is difficult, if not impossible, to capture social returns fully, but an attempt will be made to capture at least part of the effects beyond those identified above including the following:
Evidence of spillover effects
Small business support:
Small business share of agency R&D funding
Survival rates for SBIR supported firms
Growth and success measures for SBIR vs. non-SBIR firms
Training:
SBIR impact on entrepreneurial activity among scientists and engineers
Management advice from Venture Capital firms
Other training effects.
47 The web site inknowvation.com has a data set on publicly traded SBIR firms.
Intellectual property
Patents filed and granted
Patent citations
Litigation
Non-intellectual property
Journal articles and citations
Human capital measures
Given the complexity of the NRC study, the Committee is unlikely to devote substantial resources to this area. However, some evidence about other non-economic benefits (e.g., environmental or safety impacts) may emerge from the case studies and interviews.
Absolute SBIR funding levels
SBIR vs. other agency extra-mural research funding received by small businesses
Agency funding for small business relative to overall sources of funding in the US economy
It will be important to analyze the categories below with respect to the size of the firm.
Recipient views on process
Management views on process
Flexibility of process, e.g., award size
Timeliness of application decision process
Management actions on troubled projects
For all of the outcome metrics listed above, it will be important to capture a range of demographic variables that could become independent variables in empirical analyses.
What is the best way of assessing SBIR? One approach—utilized by many agencies when examining their SBIR programs—has been to highlight successful firms. Another approach has been to survey firms that have been funded under the SBIR program, asking such questions as whether the technologies funded were ever commercialized, the extent to which their development would have occurred without the public award, and how firms assessed their experiences with the program more generally. It is important to recognize and account for the biases that arise with these and other approaches. Some possible sources of bias are noted below48:
Response bias—1: Many awardees may have a stake in the programs that have funded them, and consequently feel inclined to give favorable answers (i.e., that they have received benefits from the program and that commercialization would not have taken place without the awards). This may be a particular problem in the case of the SBIR initiative, since many small high-technology company executives have organized to lobby for its renewal.
Response bias—2: Some firms may be unwilling to acknowledge that they received important benefits from participating in public programs, lest they attract unwelcome attention.
Measurement bias: It may simply be very difficult to identify the marginal contribution of an SBIR award, which may be one of many sources of financing that a firm employed to develop a given technology.
Selection bias: This source of bias concerns whether SBIR awards go to firms that already have the characteristics needed for higher growth and survival, although the extent of this bias is likely overdrawn, since an important role of SBIR is to telegraph information about firms to markets operating under conditions of imperfect information.49
Management bias: Information from agency managers, who must defend their SBIR management before the Congress, may be subject to bias in different ways.
Size bias: The relationship between firm size and innovative activity is not clear from the academic literature.50 It is possible that some indexes will show large firms as more successful (publications and total patents, for example), while others will show small firms as more successful (patents per employee, for example).
A complementary set of approaches will be developed to address the issue of bias. In addition to surveying program managers, we intend to interview firms as well as agency officials, employ a range of metrics, and use a variety of methodologies.
The Committee is aware of the multiple challenges in reviewing “the value to the Federal research agencies of the research projects being conducted under the SBIR program…” [H.R. 5667, sec. 108]. These challenges stem from the fact that
the agencies differ significantly by mission, R&D management structures (e.g., degree of centralization), and manner in which SBIR is employed (e.g., administration as grants vs. contracts); and
different individuals within agencies have different perspectives regarding both the goals and the merits of SBIR-funded research.
The Committee proposes multiple approaches to assessing the contributions of the program to agency mission, in light of the complicating factors mentioned above:
A planned survey of all individuals within the studied agencies who have SBIR program management responsibilities (that is, going beyond the single "Program Manager" in a given agency). The survey will be designed and implemented with the objective of minimizing framing bias. We will reduce sampling bias by soliciting responses both from R&D managers with direct SBIR responsibilities and from those without them. Important areas of inquiry include the process by which topics are defined, solicitations developed, projects scored, and award selections made.
Systematic gathering and critical analysis of the agencies' own data concerning take-up of the products of SBIR funded research.
Study of the role of multiple-award-winning firms in performing agency-relevant research.