On March 14–15, 2023, the Board on Science, Technology, and Economic Policy (STEP) of the National Academies of Sciences, Engineering, and Medicine held a 2-day workshop, “Experimentation in Federal Funding,” in Washington, DC. The purpose of the workshop was to explore the use of data, research, and experiments to improve the processes for and outcomes of federal funding of scientific research.
As delineated in the statement of task, shown in Box I-1, the current system for making funding decisions raises a number of concerns—including, for example, potential biases that may favor more established researchers, less risky proposals, and a narrow applicant pool—and the use of experimentation may help guide any reform of the system. However, because research on the use of experiments in funding research, particularly within government, is limited, the committee charged with planning the workshop (see p. v) decided that it was important to bring together speakers with a wide range of backgrounds and experiences to inform the discussion. The workshop included leading researchers in the burgeoning area of the science of science funding, as well as practitioners from government and the private sector with experience supporting or carrying out experimentation and evaluation. Participants explored the potential of and opportunities for using experimental design to test new federal funding policies and practices, discussed illustrative examples of the use of experimentation from the United States and abroad, considered methods of evaluation, and fostered relationships for future experimentation.
Public funding of scientific research has generated enormous scientific, technological, economic, and social benefits, but the core mechanisms by which funding decisions are made can still largely be traced to those implemented when the National Institutes of Health (NIH) was expanded in the late 1940s and the National Science Foundation (NSF) was created in the early 1950s. Numerous concerns have been raised about how the current systems operate, including the potential biases noted above that may favor more established researchers and less risky proposals and that may narrow the applicant pool.
Changes in the way that agencies structure grants and solicit and evaluate proposals have the potential to respond to these concerns, and agencies have undertaken some new programs and program modifications intended to address some of these issues. There is, however, an inadequate evidentiary base from which to determine how specific program elements affect specific program goals and outcomes. The burgeoning science of science research has provided many insights, but its capacity to explain how specific program elements affect particular aspects of program outcomes has been constrained by an overwhelming reliance on nonexperimental data. Using experiments, accompanied by evaluations of their impact, could provide the evidence needed to systematically improve the achievement of program objectives and address the types of concerns noted above.
As detailed in Azoulay and Li (2020), while economists have long recognized the key role of science funding in innovation, until recently little empirical work had explored the relationship between funding mechanisms (and the selection and evaluation criteria used) and scientific discoveries. This is true despite a body of research on the detrimental impact of current funding selection processes on early-career researchers, the reproducibility of results, and the novelty of ideas (Alberts et al., 2014). Recent work offers notable exceptions that provide experimental evidence on funding outcomes. Teplitskiy et al. (2021) ran an experiment at Harvard Medical School examining how peer-review scores relate to the novelty of a proposal (measured as the percentage of keywords not previously used). The authors found that the more novel the proposal, the lower its score, a result driven by proposals with particularly high levels of novelty. In a related experiment examining the role of information sharing during peer review, Lane et al. (2021) found that reviewers lowered their scores when confronted with scores lower than their own but did not raise them when confronted with higher scores. Additionally, while few such experiments are being conducted in the United States, there has been substantial European interest in conducting them (e.g., Heyard et al., 2021; Mengel, 2021; Bendiscioli et al., 2021; Cuello, 2021).
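As a purely illustrative aside (not drawn from the study's materials), the novelty measure described above can be thought of as the share of a proposal's keywords that have not appeared in earlier proposals. A minimal sketch in Python follows; the function name, keyword sets, and sample data are hypothetical and are used only to make the arithmetic concrete.

    # Illustrative sketch (not from Teplitskiy et al., 2021): novelty as the share of a
    # proposal's keywords that do not appear in any previously observed proposal.
    def novelty_score(proposal_keywords, prior_keywords):
        """Return the fraction of the proposal's keywords not used before."""
        if not proposal_keywords:
            return 0.0
        new_terms = [kw for kw in proposal_keywords if kw not in prior_keywords]
        return len(new_terms) / len(proposal_keywords)

    # Hypothetical example: two of four keywords are new, so the score is 0.5.
    prior = {"gene expression", "mouse model", "protein folding"}
    proposal = ["gene expression", "mouse model", "crispr screening", "organoid atlas"]
    print(novelty_score(proposal, prior))  # 0.5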
Historically, there has been reluctance to “experiment” with public money, but the ongoing concerns about the current system made a National Academies workshop on this topic timely. Indeed, the workshop was held in the context of government-wide emphasis on the use of evidence and data in policymaking, notably the Foundations for Evidence-Based Policymaking Act of 2018 (P.L. 115-435), commonly known as the Evidence Act, which requires, among other things, agency evidence-building and evaluation plans. The workshop speakers, with their wide-ranging backgrounds and experiences, were well qualified to discuss ongoing areas of research on the use of experiments in a variety of contexts, the government context for experimentation and the use of evidence, and concerns about and the potential for the use of experimentation to improve processes for government funding of research.
As STEP chair and planning committee member Adam Jaffe (Brandeis University) said in his opening remarks:
This is an exciting but challenging time to be involved in science and innovation policy and science and innovation research. By planning in advance, you can try things out but do it in a systematic way that ensures—or at least maximizes the chances—that you can learn what works or what doesn’t. . . . Everyone has creative ideas about how to do things differently or how to do things that have never been done before.
Jaffe observed that the science of science funding poses many challenges. Data may be difficult to gather, standard procedures may constrain experiments, or insufficient attention to the establishment of control groups or randomization may limit what can be learned. “You have to figure out how to navigate those issues,” he said.
Workshop presenters and other participants discussed a wide array of topics, as summarized in this proceedings, including examples of experiments by U.S. agencies (Chapter 2), the international context (Chapter 3), nongovernmental perspectives (Chapter 4), challenges with experimentation (Chapter 5), how to structure support for experimentation (Chapter 6), and models for institutionalizing experimentation (Chapter 7). The workshop concluded with a roundtable discussion of key ideas (Chapter 8). Keynote addresses opened the workshop each day and are summarized together in Chapter 1. The workshop represents an important first step in raising awareness of how experimentation may be used to improve the processes and outcomes of federal funding of research.
This Proceedings of a Workshop was prepared by a workshop rapporteur as a factual summary of what was presented and discussed at the workshop. The planning committee’s role was limited to organizing the workshop. The statements made are those of the rapporteur and do not necessarily represent positions of the workshop participants as a whole; the planning committee; or the National Academies of Sciences, Engineering, and Medicine.
Alberts, B., M. W. Kirschner, S. Tilghman, and H. Varmus. 2014. Rescuing US biomedical research from its systemic flaws. Proceedings of the National Academy of Sciences of the United States of America 111(16):5773–5777. https://doi.org/10.1073/pnas.1404402111
Azoulay, P., and D. Li. 2020. Scientific grant funding. NBER Working Paper 26889. Cambridge, MA: National Bureau of Economic Research. https://doi.org/10.3386/w26889
Bendiscioli, S., T. Firpo, A. Bravo-Biosca, E. Czibor, M. Garfinkel, T. Stafford, J. Wilsdon, and H. Buckley Woods. 2021. The experimental research funder’s handbook. RoRI Working Paper No. 6. Research on Research Institute. https://doi.org/10.6084/m9.figshare.17102426.v1
Cuello, H. 2021. Boosting experimental innovation policy in Europe: How innovation agencies are embracing randomized experimentation. Innovation Growth Lab Working Paper. https://www.innovationgrowthlab.org/sites/default/files/IGL001_IGLReport_v8_020321.pdf
Heyard, R., M. Ott, G. Salanti, and M. Egger. 2021. Rethinking the funding line at the Swiss National Science Foundation: Bayesian ranking and lottery. https://arxiv.org/abs/2102.09958
Lane, J. N., M. Teplitskiy, G. Gray, H. Ranu, M. Menietti, E. C. Guinan, and K. R. Lakhani. 2021. Conservatism gets funded? A field experiment on the role of negative information in novel project evaluation. Management Science. https://doi.org/10.1287/mnsc.2021.4107
Mengel, F. 2021. Gender bias in opinion aggregation. International Economic Review 62(3):1055–1080. http://dx.doi.org/10.2139/ssrn.3572594
Teplitskiy, M., H. Peng, A. Blasco, and K. R. Lakhani. 2021. Is novel research worth doing? Evidence from journal peer review. SSRN. https://ssrn.com/abstract=3920711 or http://dx.doi.org/10.2139/ssrn.3920711