The National Academies of Sciences, Engineering, and Medicine
Office of Congressional and Government Affairs
At A Glance
Title: Small Businesses and Federal Contracting and Innovation Research
Date: 07/12/2006
Session: 109th Congress (Second Session)
Witness: Charles W. Wessner

Study Director, Committee on Capitalizing on Science, Technology, and Innovation: An Assessment of the Small Business Innovation Research Program, and Program Director, Board on Science, Technology, and Economic Policy, Policy and Global Affairs Division, National Research Council, The National Academies

Chamber: Senate
Committee: Small Business and Entrepreneurship Committee

Testimony of

Charles W. Wessner
Study Director
Committee on Capitalizing on Science, Technology, and Innovation: An Assessment of the Small Business Innovation Research Program
Program Director
Board on Science, Technology, and Economic Policy
Policy and Global Affairs Division
National Research Council
The National Academies

Before the

Small Business and Entrepreneurship Committee
U.S. Senate

On

The Small Business Innovation Research Program

July 12, 2006

Good morning Senator Snowe and members of the Committee. My name is Charles Wessner, and I work at the National Research Council’s Board on Science, Technology, and Economic Policy. The National Research Council is the operating arm of The National Academies, which consists of the National Academy of Sciences, the National Academy of Engineering, and the Institute of Medicine. Currently, the National Research Council is conducting a major assessment of the Small Business Innovation Research (SBIR) program at the request of Congress. My remarks today do not focus on findings and recommendations of that study, which is now in its final stages, but will instead draw on a conference report that addresses the role of SBIR as a source of early-stage finance in the U.S. innovation system.1


Created in 1982 through the Small Business Innovation Development Act, SBIR offers competition-based awards to stimulate technological innovation among small private-sector businesses while providing government agencies new technical and scientific solutions to help agencies achieve mission objectives. SBIR also encourages small businesses to commercialize innovative technologies in the private sector, helping to stimulate U.S. economic growth and international competitiveness.

As conceived in the 1982 Act, SBIR’s grant-making process is structured in three phases:

• Phase I is essentially a feasibility study in which award winners undertake a limited amount of research aimed at establishing an idea’s scientific and commercial promise. Today, the legislation anticipates Phase I grants as high as $100,000.2

• Phase II grants are larger – normally $750,000 – and fund more extensive R&D to further develop the scientific and technical merit and the feasibility of research ideas.

• Phase III normally does not involve SBIR funds; it is the stage at which grant recipients are expected to obtain additional funds either from a procurement program at the agency that made the award, from private investors, or from the capital markets. The objective of this phase is to move the technology to the prototype stage and into the marketplace.

Phase III of the program is often fraught with difficulty for new firms. In practice, agencies have developed different approaches to facilitating this transition to commercial viability; not least among them is the use of additional SBIR awards.3 Some firms with more experience with the program have become skilled in obtaining additional awards.

Previous NRC research showed that different firms have quite different objectives in applying to the program. Some seek to demonstrate the potential of promising research. Others seek to fulfill agency research requirements on a cost-effective basis. Still others seek a certification of quality (and the investments that can come from such recognition) as they push science-based products towards commercialization.4

Features that make SBIR grants attractive from the firm’s perspective include the fact that they require no dilution of ownership or repayment. Importantly, grant recipients retain rights to intellectual property developed using the SBIR award, with no royalties owed to the government, though the government retains royalty-free use for a period. Selection to receive an SBIR grant also confers a certification effect—acting as a signal to private investors of the technical and commercial promise of the technology.5

From the perspective of the government, the SBIR program helps achieve agency missions as well as encourage knowledge-based economic growth.6 By providing a bridge between small companies and the federal agencies, especially for procurement, SBIR serves as a catalyst for the development of new ideas and new technologies to meet federal missions in health, transport, the environment, and defense.7 It also provides a bridge between universities and the marketplace, thereby encouraging local and regional growth.8 Finally, by addressing gaps in early-stage funding for promising technologies, the program helps the nation capitalize on its substantial investments in research and development.9 While SBIR operations and accomplishments are sometimes discussed in general terms, the actual implementation of the program is carried out in agencies with quite distinct missions and interests. There is, therefore, significant variation in program objectives and procedures.


The NRC assessment represents a significant opportunity to gain a better understanding of one of the largest of the nation’s early-stage finance programs. Despite its size and 20-year history, the SBIR program has not been comprehensively examined. There have been some previous studies focusing on specific aspects or components of the program—notably by the General Accounting Office and the Small Business Administration.10 There are, as well, a limited number of internal assessments of agency programs.11 The academic literature on SBIR is also limited.12

To help fill this assessment gap, and to learn about a large, relatively under-evaluated program, the National Academies’ Committee for Government-Industry Partnerships for the Development of New Technologies was asked to review the SBIR program, its operation, and current challenges. Under its chairman, Gordon Moore, the Committee convened government policymakers, academic researchers, and representatives of small business for the first comprehensive discussion of the SBIR program’s history and rationale, to review existing research, and to identify areas for further research and program improvements.13

The Moore Committee reported that:

• SBIR enjoyed strong support in parts of the federal government as well as in the country at large.

• At the same time, the size and significance of SBIR underscored the need for more research on how well it is working and how its operations might be optimized.

• There should be additional clarification about the primary emphasis on commercialization within SBIR, and about how commercialization is defined.

• There should also be clarification on how to evaluate SBIR as a single program that is applied by different agencies in different ways.14

Subsequently, at the request of the Department of Defense, the Moore Committee was asked to review the operation of the SBIR program at Defense, and in particular the role played by the Fast Track Initiative. This resulted in the largest and most thorough review of any SBIR program to date. The study found that the SBIR program at Defense was contributing to the achievement of mission goals—funding valuable innovative projects—and that a significant portion of these projects would not have been undertaken in the absence of the SBIR funding.15 The Moore Committee’s assessment also found that the Fast Track Program increases the efficiency of the Department of Defense SBIR program by encouraging the commercialization of new technologies and the entry of new firms to the program.16

More broadly, the Moore Committee found that SBIR facilitates the development and utilization of human capital and technological knowledge.17 Case studies have shown that the knowledge and human capital generated by the SBIR program has economic value, and can be applied by other firms.18 And through the certification function, which generates additional information for investors, SBIR awards encourage further private sector investment in the firm’s technology.

Based on this and other assessments of public-private partnerships, the Moore Committee’s Summary Report on U.S. Government-Industry Partnerships recommended that “regular and rigorous program-based evaluations and feedback is essential for effective partnerships and should be a standard feature,” adding that “greater policy attention and resources to the systematic evaluation of U.S. and foreign partnerships should be encouraged.”19

The legislation mandating the current assessment of the nation’s SBIR program focuses on the five agencies that account for 96 percent of program expenditures, although the National Research Council is seeking to learn about the views and practices of other agencies administering the program as well. The mandated agencies, in order of program size, are the Department of Defense, the National Institutes of Health, the National Aeronautics and Space Administration, the Department of Energy, and the National Science Foundation.

Logic of the Current NRC Study

The current NRC assessment has been structured in three phases, with the first phase focused on fact-finding. A launch conference was a key element in this first study phase in that it:

• Provided agencies an opportunity to describe program operations, challenges, and accomplishments;

• Highlighted the important differences in agency goals, practices, and evaluations carried out within the common framework of the program; and

• Described the evaluation challenges arising from the diversity in program objectives and practice.

This first phase also developed a study methodology comprising a complement of evaluation tools and research strategies. Following review and approval of this methodology by an independent panel of National Academies experts, the second phase implemented the research methodology, gathered the results of surveys and case studies, and developed recommendations and findings.20

The third and final phase involves the preparation of reports on the various agency programs and the dissemination of the findings. Thus, in addition to its initial conference report, the NRC Committee expects to publish reports evaluating SBIR at each of the five mandated agencies listed above. An additional final report will include the Committee’s overall findings and recommendations for the program, as well as a summary of the main points from the individual agency reports.


In the United States today, the beneficial effects of science-based innovations are apparent in almost every arena—from health care and communications to leisure and defense applications. Given that many of these visible successes are products grounded in government-funded research and procurement, there is an understandable desire to ensure that federal policies smooth the path toward commercialization.21

This federal role is important, especially as it affects potential investors’ perceptions of risk, keeping in mind that commercializing science-based innovations is inherently a high-risk endeavor.22 One source of risk is the lack of sufficient public information for potential investors about technologies developed by small firms.23 Potential investors seek to learn about the growth potential of small firms, yet in many cases the entrepreneur – especially in a high-technology startup – is likely to understand the technology, and to foresee its probable applications, better than potential investors. And even this understanding may not include a competent assessment of commercial potential.24

A second related hurdle is the leakage of new knowledge that escapes the boundaries of firms and intellectual property protection. The creator of new knowledge can seldom fully capture the economic value of that knowledge for his or her own firm. This spillover can inhibit investment in promising technologies for large and small firms—though it is especially important for small firms focused on a promising product or process.25

The challenge of incomplete and insufficient information for investors and the problem for entrepreneurs of moving quickly enough to capture a sufficient return on “leaky” investments pose substantial obstacles for new firms seeking capital. The difficulty of attracting investors to support an imperfectly understood, as-yet-undeveloped innovation is especially daunting. Indeed, the term “Valley of Death” has come to describe the period of transition when a developing technology is deemed promising, but too new to validate its commercial potential and thereby attract the capital necessary for its development.26


Despite these challenges, some firms do find their way through this Valley of Death with financing from wealthy individual investors (business “angels”) or, later in the development cycle, from venture capital firms. Recognizing the important role played by these business angels and venture capital firms, academic researchers and others have initiated new research on their impact.27 In this regard, one recent study found that while the ratio of funding provided by venture capital groups to the total funding for R&D has averaged less than 3 percent in recent years, venture capital accounts for about 15 percent of industrial innovations.28

Within the last decade, the number of venture capital firms that invest primarily in small business tripled, and their total investments rose eight-fold.29 This was followed by a sharp contraction in 2000 in the venture capital market, especially for new start-ups as low valuations and a contraction in IPO activity concentrated fund managers’ attention on existing investments.

Although business angels and venture capital firms, along with industry, state governments, and universities, provide funding for early-stage technology development, the federal role may well be larger than is generally thought. Recent research by Branscomb and Auerswald estimated that the federal government provides between 20 and 25 percent of all funds for early-stage technology development—a substantial role by any measure.30 (See Figure 1.)


Figure 1. Small Business Innovation Research Program

This contribution is made more significant in that the government awards address segments of the innovation cycle that private investors often find too risky. Because technology-based firms are a significant source of innovation and competitive advantage for the United States, it is important to improve our understanding of the role that public-private partnership policies—in this case, innovation awards—can play in encouraging small-firm growth.31

The Role of Government Partnerships

Partnerships in general are cooperative relationships involving government, industry, laboratories, and (increasingly) universities, organized to encourage innovation and commercialization. The long-term goal of these public-private partnerships is to develop industrial processes, products, and services, and thereby apply new knowledge to government missions such as improved health, environmental protection, and national security. 32

Overcoming Investment Barriers

A key purpose of public-private partnerships is to help entrepreneurs overcome the financial and other obstacles they face in developing new technologies for the market.33 In the case of a research consortium, the government can facilitate cooperation among firms in developing pre-competitive platform technologies by providing, for example, matching funds and selective exemptions to antitrust laws.

Innovation awards—another important type of government-industry partnership—are intended to encourage the development of promising technologies that might otherwise be perceived to be too financially risky. As noted above, even the largest firms may not be able to recapture an investment in a technology that “leaks” too soon to too many users.34 Recent assessments of innovation award programs support the view that these government-industry partnerships can help firms overcome barriers to investment for promising, high-spillover technologies.35

Indeed, the National Academies’ Moore Committee found that such public-private partnerships “can play an instrumental role in accelerating the development of new technologies from idea to market.”36 It further identified several broad conditions contributing to successful partnerships. As applied to SBIR, these include:

• Industry Initiation: Individual researchers and firms develop proposals in response to government solicitations that are fairly broad, or, in some cases, purely at their own initiative. This bottom-up, self-selection approach is a source of strength for award programs, allowing great flexibility and encouraging diversity.

• Competitive Selection Mechanisms: The SBIR program, while relatively large, remains highly competitive.37 Normally, under 15 percent of Phase I candidates are successful.

• Shared Cost Commitments: SBIR awards can encourage innovation, leverage company investments, attract other sources of capital, and ensure management commitment because awardees retain control of the intellectual property.

• Objective and Ongoing Assessments: Regular evaluations of the partnership programs at the operational and policy levels are needed to ensure effective alignment of the program with current agency goals and needs.

Capitalizing on National Investments in Research

Reaching similar conclusions, a study by the National Academies’ Committee on Science, Engineering, and Public Policy found partnerships to be an essential tool in the mix of policies needed to capitalize on the nation’s investments in scientific research.38 It observed that partnerships contribute to a relatively open flow from fundamental breakthroughs to first demonstrations to product applications. This openness was seen as a particular strength of the U.S. innovation system. Citing the development of monoclonal antibodies, and the semiconductor technologies underlying personal computers and the Internet as examples, the report identified four conditions favorable for effective commercialization of the fruits of research. These are the presence of:

• Mechanisms for research and capitalization that support cooperation between the academic, industry, and government sectors;

• A strong, diverse national portfolio of science and technology investments;

• A favorable environment for capitalizing on research investments, characterized by strong incentives for innovation and free movement of ideas and people; and

• A skilled, flexible science and engineering human resource base.

The report further noted that nearly all the successful examples of capitalization examined depended on the collaboration of scientists and engineers who had diverse perspectives, time frames, and talents, drawn from the whole web of public, private, and educational institutions. This web of institutions, it said, had become far more complex in recent years, as many large corporations reached outside the firm to rely on universities, suppliers, and subcontractors as sources of research. Similarly, technology-oriented start-ups too small to support basic research programs often depended on close contacts with university researchers.

The report concluded that governments, industries, and universities should continue to experiment with partnerships and consortia, with the goals of conducting mutually beneficial research, invigorating education, and capitalizing on research for the benefit of society. During the partnership phase, industry should share costs and take the initiative in research directions—criteria met by the SBIR program.


As noted earlier, the SBIR program has not been comprehensively assessed to date, despite its size and twenty-year history. Even so, there are numerous views of the program that have developed in the absence of credible data and analysis. The current NRC assessment has the potential to contribute to a greater understanding of the program by improving knowledge about its operations, achievements, potential, and constraints based on the evidence it has collected. This knowledge may help illuminate some commonly held opinions about SBIR as well as suggest ways to improve the operation and impact of the program.

Some Contrasting Views of the Program

Some commentators have suggested that the failure rate of SBIR awards is too high, which implies, in turn, that the program funds R&D of marginal value. This is a challenging point. Measuring the impact and results of an R&D program is intrinsically difficult.39 What constitutes an acceptable failure rate for a program designed to make high-risk, potentially high-payoff investments is, of course, a central question—one that is especially difficult for those with a fiduciary responsibility for public funds. High-risk R&D investments are, indeed, high-risk—project failures in such initiatives are inevitable and not necessarily indicative of program failure.

Still, the question of what constitutes an appropriate return on investment in new technologies remains. One benchmark may be the venture capital market, where only about 10 percent of investments in new firms succeed. A key question in assessing SBIR is whether this comparison is appropriate.40 Another recurrent question is whether a project or firm failure is indicative of a complete loss on federal investment—as it sometimes is—or if the loss is mitigated by knowledge generated by the SBIR grant that is then transmitted through less direct ways to the overall benefit of society. This second scenario takes into account potential indirect knowledge spillovers that were not a part of the original research design or intent. Consider, for example, the case of a principal investigator who takes the knowledge gained from work at a “failed” firm and uses it at a new firm to guide product development in an entirely new market.41

An additional concern is that SBIR awards might “crowd out” or replace private capital. While theoretically possible, recent work by Bronwyn Hall, Paul David, and Andrew Toole suggests that the overall empirical evidence for “crowding out” is at least equivocal.42 Interestingly, there is some positive evidence that programs like SBIR can prompt “crowding in” of private capital. Awards have a “halo effect” that attracts private investors, who see the awards as a certification of technical quality, reducing the uncertainty inherent in early-stage investment.43

Finally, some object to the SBIR program more broadly as an unwarranted and unnecessary intervention in capital markets.44 Yet, as noted above, it is widely recognized that capital markets are imperfect with significant gaps (or asymmetries) in information between the potential investor and the prospective entrepreneur.45 Venture capital markets, in particular, tend to focus on later stages of technology development than SBIR, and venture funds in the aggregate seem to be prone to herding tendencies. The attention of private investors does not necessarily extend to all areas of socially valuable innovation.46

Perhaps the most significant point to retain from these various perspectives on SBIR is how much uncertainty surrounds early-stage finance in the U.S. economy. As noted, some recent work suggests that the federal role in early-stage firm development is more significant than commonly believed, while also affirming the analytical uncertainty surrounding the funding and development of early-stage firms. Strong affirmations about the “appropriate” role of government support for innovation are not borne out by the history of innovation and industrial development in the United States or, indeed, recent experience.47

The Challenge of Establishing Causality

The issue of causality is complex. Awards are given to firms for projects. Yet specific projects funded by SBIR can be difficult to track even within a company. Firms that are granted SBIR awards can merge, fail, and change their name before a product reaches the market. In addition, key individuals can change firms, carrying their knowledge of the project with them. In this way, an SBIR investment in one firm may translate into a valuable product from another firm. Especially when the process from discovery to market is long, as is the case for drug development, these transitions are difficult to track.

One way to measure the value of SBIR is to track awards to see if they have resulted (variously) in publications, citations, patents, products, licensing, sales, and increased employment for firms receiving awards. While data relating to these metrics appear to promise information of some significance on SBIR’s impact, accurate data on commercial successes can be difficult to gather for a variety of reasons, not least definitional.

The Challenge of Gauging Commercial Success

Given the diversity of agency missions and objectives, commercial success can vary substantially. For example, gauging success in public procurement of SBIR technologies can depend on the nature of the product, the type of research, and its utility for the agency mission. In some cases, the appropriate metric is likely to vary with the specific mission of the agency or sub-unit. For some research questions, the project report itself may constitute the product. For the Department of Defense, one way of measuring commercialization success would be to count the products procured by the agency—although large procurements from major suppliers are more easily tracked than products from small suppliers such as SBIR firms. However, successful development of a technology or product does not always translate into successful “uptake” by the procuring agency, often for reasons having little to do with product quality or its potential contribution to an agency’s mission.

Even promising research does not always move toward the market in a linear fashion. Sometimes research enabled by a particular SBIR award takes on commercial relevance in new, unanticipated contexts. Duncan Moore of the University of Rochester recounts, for example, that his SBIR-funded research in gradient index optics was initially a commercial failure when an anticipated market for its application (in picture phones) did not emerge. However, the technology later found substantial commercial success in the borescope, a device used to look inside materials and structures. This story illustrates that today’s commercial dead end could well be a key to tomorrow’s market success.

While the concept of “commercial” at the Department of Defense most often relates to the use of a new product or process by the government, the concept more conventionally refers to the means by which a new product or process—provided by a viable business enterprise—enters the market on an independent, third party, competitive basis. These differing interpretations also reveal the differing pathways to commercialization. For some products, this path is akin to a long, complex, winding, and uncertain road. For others, the pathway is more immediate with visible linkages to mission, industrial, and commercial applications.

The Need for Realistic Expectations

An assessment of SBIR must be based on an understanding of the realities of the distribution of successes and failures in early-stage finance. As with most early-stage finance, SBIR awards are characterized by a highly skewed distribution of successes. This includes a few genuinely large successes that generate returns that would cover, in themselves, the cost of the entire program. For example, Science Research Laboratory, Inc., has reported over a billion dollars in sales from a new technology that increased the number of circuits on a computer chip by thirty percent. Similarly, Digital System Resources’ technology significantly improved the computing power of sonar technology, leading to its adoption across the U.S. submarine fleet. Below these star performers are a number of more modest successes, followed by a large number of awards that produced few or no results. Given this skew, a purely random sample of individual project outcomes is likely to yield an imbalanced assessment of the SBIR program.

Compared to What?

Appropriate expectations for success, or indeed failure, are sometimes overlooked in assessing SBIR. Senior executives from pharmaceutical companies have observed that the failure rate for the biotechnology industry, from target identification to product launch, is about 90 percent. This holds true even in the best of circumstances and even for large companies that have invested billions of dollars in research and development.48 Venture capital firms, which typically fund more articulated and market-proximate proposals than SBIR, report similar rates of failure. It is against these benchmarks that SBIR’s awards to new firms should be compared.

In short, when we set metrics for SBIR projects, it is important to have realistic expectations as to what would constitute success and what rate of success we should expect. Some commentators criticize SBIR for not having enough successes. Yet high success rates could imply that the SBIR program does not have a sufficiently risky portfolio: Using success rates alone can be a dangerous metric for assessing SBIR because it can encourage little or no risk taking, obviating, in turn, the advantage of small awards to explore promising ideas—a key feature of the SBIR program.

Concluding the Assessment

The NRC study of SBIR, led by Jack Gansler and carried out by a distinguished committee, continues to progress. The research phase is complete, and the Committee is now assessing the data it has collected and its meaning in terms of program achievements and challenges. One encouraging development to which we believe the NRC study has contributed is the emergence of an assessment culture among the agencies’ SBIR programs. During our study, agencies that had previously not conducted such assessments have devoted considerably more attention to program refinement and to efforts to assess outcomes. Partly as a result of these initiatives, the SBIR program continues to evolve and contribute to agency missions. We look forward to bringing you the results of our surveys, case studies, and analysis in the not-too-distant future.



1. This testimony is drawn from the National Research Council report, The Small Business Innovation Research Program: Program Diversity and Assessment Challenges, Report of a Symposium, Washington, D.C.: National Academies Press, 2004.

2. With the accord of the Small Business Administration, which plays an oversight role for the program, this amount can be higher in certain circumstances; e.g., drug development at NIH, and is often lower with smaller SBIR programs, e.g., EPA or the Department of Agriculture.

3. NSF, for example, has what is called a Phase II-B program that allocates additional funding to help potentially promising technology develop further and attract private matching funds. As with venture-funded firms, Phase III is likely to include some mix of economically viable and non-viable products, ultimately to be determined by the relevant agency mission requirements or private markets.

4. See Reid Cramer, “Patterns of Firm Participation in the Small Business Innovation Research Program in Southwestern and Mountain States,” in National Research Council, The Small Business Innovation Research Program, An Assessment of the Department of Defense Fast Track Initiative, C. Wessner, ed., Washington, D.C.: National Academy Press, 2000.

5. This certification effect was initially described by Josh Lerner, “Public Venture Capital,” in National Research Council, The Small Business Innovation Research Program: Challenges and Opportunities, C. Wessner, ed., Washington, D.C.: National Academy Press, 1999.

6. See the presentation of Robert Norwood of NASA in National Research Council, The Small Business Innovation Research Program: Program Diversity and Assessment Challenges, op. cit., pp. 95-100.

7. See the presentation of Kenneth Flamm in National Research Council, The Small Business Innovation Research Program: Program Diversity and Assessment Challenges, op. cit., pp. 63-67.

8. See the presentation of Christina Gabriel in National Research Council, The Small Business Innovation Research Program: Program Diversity and Assessment Challenges, op. cit., pp. 131-133.

9. See the presentation by Joseph Bordogna in National Research Council, The Small Business Innovation Research Program: Program Diversity and Assessment Challenges, op. cit., pp. 123-128.

10. See, for example, GAO, “Federal Research: Small Business Innovation Research Shows Success but Can Be Strengthened,” GAO/RCED-92-37, 1992; GAO, “Federal Research: DOD's Small Business Innovation Research Program,” GAO/RCED-97-122, 1997; GAO, “Federal Research: Evaluation of Small Business Innovation Research Can Be Strengthened,” GAO/RCED-99-114, 1999; GAO, “Survey of Companies Receiving Small Business Technology Transfer (STTR) Phase II Awards Fiscal Years 1995-1997,” GAO-01-766R, 2001; and GAO, “Small Business Innovation Research: Information on Awards Made by DoD in Fiscal Years 2001 Through 2004,” GAO-06-565, 2006.

11. Agency reports include an unpublished 1997 DoD study on the commercialization of DoD SBIR. NASA has also completed several reports on its SBIR program. Following the authorizing legislation for the NRC study, NIH launched a major review of the achievements of its SBIR program.

12. Writing in the 1990s, Joshua Lerner positively assessed the program, finding “that SBIR awardees grew significantly faster than a matched set of firms over a ten-year period.” See Josh Lerner, “The Government as Venture Capitalist: The Long-Term Effects of the SBIR Program,” Journal of Business 72(2), July 1999. Underscoring the importance of local infrastructure and cluster activity, Lerner’s work also argued that the “positive effects of SBIR awards were confined to firms based in zip codes with substantial venture capital activity.” These findings were consistent with both the corporate finance literature on capital constraints and the growth literature on the importance of localization effects. For example, see Michael Porter, “Clusters and Competition: New Agendas for Government and Institutions,” in On Competition, Boston: Harvard Business School Press, 1998.

13. See National Research Council, The Small Business Innovation Research Program: Challenges and Opportunities, op. cit.

14. Ibid.

15. National Research Council, The Small Business Innovation Research Program: An Assessment of the Department of Defense Fast Track Initiative, op. cit. See Section III, Recommendations and Findings, p. 32.

16. Ibid., p. 33.

17. Ibid., p. 33.

18. Ibid., p. 33.

19. See National Research Council, Government-Industry Partnerships for the Development of New Technologies, Summary Report, C. Wessner, ed., Washington, D.C.: National Academy Press, 2002, p. 30.

20. For a description of the current NRC program assessment, see National Research Council, Capitalizing on Science, Technology, and Innovation: An Assessment of the Small Business Innovation Research Program, Project Methodology. The NRC analysis draws on existing reports and data sources, as well as on newly commissioned surveys of award recipients and program managers and extensive case studies.

21. For an overview of the importance of federal contributions to technology development, see Vernon Ruttan, Technology, Growth and Development: An Induced Innovation Perspective, New York: Cambridge University Press, 2001. See also David Audretsch et al., “The Economics of Science and Technology,” Journal of Technology Transfer, 27:155-203.

22. See, for example, Lewis M. Branscomb, Kenneth P. Morse, Michael J. Roberts, and Darin Boville, Managing Technical Risk: Understanding Private Sector Decision Making on Early Stage Technology Based Projects, Washington, D.C.: Department of Commerce/National Institute of Standards and Technology, 2000.

23. Joshua Lerner, “Evaluating the Small Business Innovation Research Program: A Literature Review,” in National Research Council, The Small Business Innovation Research Program: An Assessment of the Department of Defense Fast Track Initiative, op. cit. For a seminal analysis on information asymmetries in markets and the importance of signaling, see Michael Spence, Market Signaling: Informational Transfer in Hiring and Related Processes, Cambridge: Harvard University Press, 1974.

24. Joshua Lerner, “Public Venture Capital: Rationale and Evaluation,” in National Research Council, The Small Business Innovation Research Program: Challenges and Opportunities, op. cit.

25. Edwin Mansfield, “How Fast Does New Industrial Technology Leak Out?” Journal of Industrial Economics, Vol. 34, No. 2, 1985, pp. 217-224.

26. See Vernon J. Ehlers, Unlocking Our Future: Toward a New National Science Policy, A Report to Congress by the House Committee on Science, Washington, D.C.: Government Printing Office, 1998.

27. See Jeffrey Sohl’s 1999 article in Venture Capital, Vol. 1, No. 2, pp. 101-120. Dr. Sohl estimates that business angels and venture capital funds each invest approximately the same annual amount in small firms ($30-40 million), but the funds of business angels are spread over some 50,000 firms, while those of venture capital groups are concentrated in some 4,000 firms. The typical “deal size” is approximately $50,000-$1 million for angels and $8-9 million for venture capital firms. See also Jeffrey Sohl, John Freear, and W. E. Wetzel, Jr., “Angles on Angels: Financing Technology-Based Ventures - An Historical Perspective,” Venture Capital: An International Journal of Entrepreneurial Finance, 4(4): 275-287, 2002.

28. Samuel Kortum and Josh Lerner, “Does Venture Capital Spur Innovation?” NBER Working Paper No. 6846, National Bureau of Economic Research, 1998.

29. Jeffrey Sohl,

30. The authors stress the “limitations inherent in the data and the magnitude of the extrapolations…” and urge that the findings be interpreted with caution. They note further that while the funding range presented for each category is large, these approximate estimates, nonetheless, provide “valuable insight into the overall scale and composition of early-stage technology development funding patterns and allow at least a preliminary comparison of the relative level of federal, state, and private investments.” For further discussion of the approach and its limitations, see Lewis M. Branscomb and Philip E. Auerswald, Between Invention and Innovation, An Analysis of Funding for Early-Stage Technology Development, Gaithersburg, MD: NIST GCR 02–841, November 2002, pp. 20-24.

31. See National Research Council, Government-Industry Partnerships for the Development of New Technologies, Summary Report, op. cit., passim.

32. Ibid.

33. Lewis M. Branscomb and Philip E. Auerswald, Taking Technical Risks: How Innovators, Managers, and Investors Manage Risk in High-Tech Innovations, Cambridge, MA: MIT Press, 2001.

34. Technological knowledge that can be replicated and distributed at low marginal cost may have a gross social benefit that exceeds private benefit—and in such cases is considered by many as prone to be undersupplied relative to some social optimum. See Richard N. Langlois and Paul L. Robertson, “Stop Crying over Spilt Knowledge: A Critical Look at the Theory of Spillovers and Technical Change,” paper prepared for the MERIT Conference on Innovation, Evolution, and Technology, August 25-27, 1996, Maastricht, Netherlands.

35. See Albert N. Link, “Enhanced R&D Efficiency in an ATP-funded Joint Venture,” in National Research Council, The Advanced Technology Program: Assessing Outcomes, C. Wessner, ed., Washington, D.C.: National Academy Press, 2001. For a review of why firms might under-invest in R&D, see Albert N. Link, “Public/Private Partnerships as a Tool in Support of Industrial R&D: Experiences in the United States,” Final Report to the Working Group on Innovations and Technology Policy of the OECD Committee for Scientific and Technology Policy, January 1999. For specific reviews of programs such as SBIR, ATP, and SEMATECH, see National Research Council, The Small Business Innovation Research Program: An Assessment of the Department of Defense Fast Track Initiative, op. cit.; National Research Council, The Advanced Technology Program: Assessing Outcomes, op. cit.; and National Research Council, Securing the Future: Regional and National Programs to Support the Semiconductor Industry, C. Wessner, ed., Washington, D.C.: National Academies Press, 2003.

36. See National Research Council, Government-Industry Partnerships for the Development of New Technologies: Summary Report, op. cit.

37. The SBIR program now disburses $2.05 billion in awards annually.

38. The analysis was carried out by the NRC’s Committee on Science, Engineering, and Public Policy (COSEPUP). See National Research Council, Capitalizing on the Results of Scientific Research, Washington, D.C.: National Academy Press, 1999.

39. See National Research Council, Capitalizing on the Results of Scientific Research, Washington, D.C.: National Academy Press, 1999.

40. Despite the growing popularity of the idea of “public venture capital” programs, SBIR cannot be considered a venture capital program because awards do not involve equity ownership, management input, or an exit strategy involving sale of the firm. For a description of a public venture initiative, see the presentation of the CIA’s In-Q-Tel by Gilman G. Louie, “In-Q-Tel: A ‘Nonprofit Venture Capital Fund,’” in National Research Council, A Review of the New Initiatives at the NASA Ames Research Center, C. Wessner, ed., Washington, D.C.: National Academy Press, 2001.

41. Relatedly, see the description by Duncan Moore of his experience in SBIR-financed work on gradient optics that was not initially commercially successful but that later led to the invention and successful commercialization of the borescope. National Research Council, The Small Business Innovation Research Program: Program Diversity and Assessment Challenges, op. cit., pp. 93-94.

42. See Paul A. David, Bronwyn H. Hall, and Andrew A. Toole, “Is Public R&D a Complement or Substitute for Private R&D? A Review of the Econometric Evidence,” NBER Working Paper No. 7373, 1999.

43. See Maryann P. Feldman and Maryellen R. Kelley, “Leveraging Research and Development: The Impact of the Advanced Technology Program,” in National Research Council, The Advanced Technology Program: Assessing Outcomes, op. cit.

44. See, for example, Scott Wallsten, “Rethinking the Small Business Innovation Research Program,” in L. M. Branscomb and J. Keller, eds., Investing in Innovation: Creating a Research and Innovation Policy, Cambridge, MA: The MIT Press, 1998, pp. 194-220.

45. See Michael Spence, Market Signaling: Informational Transfer in Hiring and Related Processes, op. cit.

46. See case studies in National Research Council, The Small Business Innovation Research Program: An Assessment of the Department of Defense Fast Track Initiative, op. cit.

47. See the discussion of this question in the Introduction to the review of the Advanced Technology Program in National Research Council, The Advanced Technology Program: Assessing Outcomes, op. cit., and National Research Council, The Small Business Innovation Research Program: An Assessment of the Department of Defense Fast Track Initiative, op. cit.

48. See the conference presentation of Gail Cassell in National Research Council, The Small Business Innovation Research Program: Program Diversity and Assessment Challenges, op. cit.