Experimental Approaches to Improving Research Funding Programs: Proceedings of a Workshop (2024)

Suggested Citation: "5 Challenges with Experimentation in U.S. Federal Agencies." National Academies of Sciences, Engineering, and Medicine. 2024. Experimental Approaches to Improving Research Funding Programs: Proceedings of a Workshop. Washington, DC: The National Academies Press. doi: 10.17226/27244.

5

Challenges with Experimentation in U.S. Federal Agencies

As moderator Heidi Williams (Stanford University) explained, the fourth panel provided perspectives on experimentation from representatives of several federal agencies. Both the National Science Foundation (NSF) and the National Institutes of Health (NIH) were represented, as was a presenter with experience at several advanced research projects agencies.

THE MAXIMIZING INVESTIGATORS’ RESEARCH AWARD PROGRAM

Jon Lorsch (NIH) described a new funding mechanism at the National Institute of General Medical Sciences (NIGMS), which is an experiment in the broad sense that it is testing a new model. However, the experiment changed many variables at the same time, so the comparison is between one model and another, rather than between a treatment group and a control group.

The problem being addressed relates to the traditional project-based funding mechanism in which investigators are asked to detail what they will be doing for as many as 5 years into the future. “That’s not really how science works,” said Lorsch. “If you can tell me exactly what you’re going to be doing 4 years from now, let alone 4 months from now, . . . it’s probably a reiteration of things that have already been done.” When experiments turn up something unexpected, following that unexpected finding is where major discoveries, and not merely incremental advances, are likely to lie. “Examples are things like RNAi and CRISPR, where it was not something someone predicted in advance,” he said.

Stability of funding is another issue with the traditional funding model, Lorsch explained. The prospect of losing funding in 1–2 years diminishes a researcher’s ability to take risks. Instead of seeking major payoffs, investigators tend to do safer things that they know will generate papers and renewed funding. They also tend to write multiple grants, imposing an administrative burden that can detract from their ability to do science and to mentor students and other trainees.

Finally, the traditional funding model has had the effect of concentrating large amounts of NIH’s funds in a relatively small number of investigators. Roughly 10 percent of grantees receive half the research funding distributed, which is “not an efficient way of distributing funds,” said Lorsch. “Something that spreads the funds out amongst many more individuals, with many different research questions, different backgrounds, and from different regions of the country, would probably produce a better outcome.” Early-stage investigators, in particular, have to compete with scientists who have been running a laboratory for decades, he said, “and whether that’s desirable is something that should be debated.”

To address these issues, Lorsch continued, NIGMS created the Maximizing Investigators’ Research Award (MIRA) program. It allows an investigator to hold just one grant from NIGMS, although investigators can still seek grants from other institutes or agencies. To further reduce the administrative burden on investigators, MIRA grants are for 5 years rather than the 4-year average of the R01 grants traditionally used at NIH. Investigators are no longer asked for specific goals and are given freedom to change research directions so long as they stay within the mission of NIGMS. Funding stability is increased by keeping renewal rates high—above 80 percent, roughly twice the rate for R01s.

The MIRA program has two parts, one for early-stage investigators and one for established investigators. The two are reviewed by different panels, which has been “very well received by the early-stage investigators,” said Lorsch. To pay for the program, established investigators in the program with more than $400,000 in direct costs from NIGMS received an average cut of 12 percent, which generated objections but allowed for greater funding of early-stage investigators.

People who received MIRA grants could be compared with those who received R01 grants, and people who transitioned to MIRA grants could be compared before and after that point. Funding started in 2016 and has grown to about half of the R01 pool, “so this is a very large experiment,” Lorsch said. More than 4,000 competing and noncompeting grants are funded each year, and about 80 percent of early-stage investigators have been funded through MIRA over the past 3 years, which has contributed to an almost three-fold increase in the number of applications from early-stage investigators.

Before the program began, people were concerned that it would disadvantage certain groups, including women. But funding disparities relative to the applicant pool have not emerged for underrepresented groups or women, “so that looks promising,” Lorsch said. Moreover, he continued, the applicant pool for early-stage investigators is significantly more diverse than for established investigators, although it is not as diverse as the college-age population or the U.S. population in general. In contrast, the established-investigator pool still has the disparities seen in the R01 pool. However, Lorsch noted, women in the established-investigator pool are doing significantly better statistically than men in the pool. “We don’t know why,” he said. “It’s the opposite of what a lot of people predicted.” Another unexpected development is that the people in the early-investigator pool have been getting progressively younger, both when they apply and when they get funded. “It’s more appealing for people to apply early to MIRA, and they are getting funded earlier, and that’s been consistent every year in the program so far,” reported Lorsch.

It has been difficult to assess the scientific outcomes of the program after only 6 years of funding, but MIRA investigators are doing slightly better than matched R01 investigators in field-normalized citations. “Some people were worried they were going to do worse, since we gave them so much freedom,” Lorsch stated, “but they’re certainly not doing worse, and early indications are they may be doing slightly better.”
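Field normalization is a standard bibliometric adjustment, although the workshop did not describe the specific method or matching procedure NIGMS used. As a purely illustrative sketch, one common approach divides each paper’s citation count by the average citation count of papers from the same field and publication year and then averages those ratios within each investigator group; the records below are invented.

```python
# Illustrative only: a simple field-normalized citation comparison. Each paper's
# citations are divided by the mean citations of papers from the same field and
# year, and the ratios are averaged per investigator group. The records below
# are invented; the actual NIGMS matching and normalization were not described.

from collections import defaultdict
from statistics import mean

# Hypothetical records: (investigator_group, field, pub_year, citations)
papers = [
    ("MIRA", "biochemistry", 2018, 42),
    ("MIRA", "cell biology", 2019, 15),
    ("R01", "biochemistry", 2018, 30),
    ("R01", "cell biology", 2019, 22),
    # ...many more papers in a real analysis
]

# Baseline: mean citations per (field, year) cell across all papers.
cells = defaultdict(list)
for _, field, year, cites in papers:
    cells[(field, year)].append(cites)
baseline = {cell: mean(vals) for cell, vals in cells.items()}

# Field-normalized score per paper, averaged within each group.
groups = defaultdict(list)
for group, field, year, cites in papers:
    groups[group].append(cites / baseline[(field, year)])

for group, scores in groups.items():
    print(f"{group}: mean field-normalized citation score = {mean(scores):.2f}")
```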

At the beginning of the Q&A session, moderator Williams asked Lorsch how he would react to a hypothetical experiment in which a philanthropist proposed to supplement the salaries of postdoctoral fellows to gauge the impacts on retention in science, research impact, or some other measure. Raising the salary of all postdocs would require an enormous amount of money, Lorsch observed. A more realistic experiment might be to support a limited number of postdoctoral fellows to do an independent project within a particular time limit. The experiment would be risky, he pointed out, in that the rest of the postdoctoral system would push against such a change because of factors such as publication times or expectations from hiring committees. But such an experiment might reveal whether people could be successful in a fairly different system, and that information could be used to change the overall system. “It would have to be something different enough from the current status quo that you might get a useful answer out of it,” Lorsch said.

Lorsch was also asked about the possibility of randomly assigning investigators to either project- or person-based grants. To some extent that is already happening, he said. For example, early-stage investigators can apply for both kinds of grants at the same time, although they are not randomly assigned to one or the other.

AN ARPA TOUR

Adam Russell (University of Maryland1) has been on what he described as an “ARPA tour,” referring to the advanced research projects activities and agencies. He was a program manager at the Intelligence Advanced Research Projects Activity (IARPA) during its earliest days, went to the Defense Advanced Research Projects Agency (DARPA), and then helped set up the Advanced Research Projects Agency for Health (ARPA-H). Although they are designed to pursue high-risk, high-reward research, these agencies actually have much in common with the research ecosystem of which they are a part, Russell said, in that they are seeking to solve a signal-detection problem—the signal being the projects that need to be funded. And their success at finding those projects is improved by experimenting with business models, attracting new talent, and other approaches.

___________________

1 Since the time of the event, Russell has moved to the University of Southern California.

At IARPA, for example, one problem revolved around collective intelligence, the idea “that all of us somehow know more than any of us,” stated Russell. An answer to this problem was the Intelligence Community Prediction Market, which “radically enhanced geopolitical forecasting accuracy,” he said. Another problem was to aggregate information to predict whether someone could be trusted, “because in the intelligence community that’s very important.” Russell stood up the first incentive challenge at IARPA to address this problem, which again resulted in significant improvements over the state of the art at the time.

Another forecasting pilot effort involved asking proposal evaluators to provide a probabilistic forecast of whether a proposed project would be successful and what its impact would be. This task raised the issue of precision, Russell said: asking people to provide a forecast forces them to be exact about their predictions. “If you’re telling me you think it’s a strong proposal because you give it an 85 percent chance that it’s going to hit this milestone, or it’s going to have a 5 percent chance of getting a milestone but a 90 percent chance of having this radical consequence if it is successful, now I can work with you.” People tend not to want to be that specific, he said, but such estimates provide valuable information. For example, they can indicate the variance in the evaluation of a proposal, which, as other speakers observed, can point toward innovative ideas.
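As a hypothetical illustration of that point (no actual forecast data or scoring method were presented at the workshop), the spread of evaluators’ probability estimates for a proposal can be computed directly, and proposals with high disagreement can be flagged for a closer look.

```python
# Hypothetical illustration (no real forecast data were presented): if each
# evaluator gives a probability that a proposal will hit a milestone, the
# spread of those forecasts is itself informative. High variance flags the
# proposals reviewers disagree about, which can point toward innovative ideas.

from statistics import mean, pvariance

reviewer_forecasts = {
    "proposal_A": [0.85, 0.80, 0.90],  # reviewers agree it is a safe bet
    "proposal_B": [0.05, 0.90, 0.40],  # sharp disagreement
    "proposal_C": [0.10, 0.15, 0.05],  # agreed long shot
}

for proposal, probs in reviewer_forecasts.items():
    print(f"{proposal}: mean = {mean(probs):.2f}, variance = {pvariance(probs):.3f}")

# Sorting by variance surfaces the most contested proposals for closer review.
most_contested = max(reviewer_forecasts, key=lambda p: pvariance(reviewer_forecasts[p]))
print("Most contested:", most_contested)
```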

This kind of expertise develops through education, Russell said. Sometimes education is seen as a distraction from what program managers are supposed to be doing, but it helps build an understanding of how to invest in high-risk, high-reward research. One way to build this understanding is to create feedback loops in programs, he stated, “but that means you have to be okay with being wrong.” People do not like to be wrong, even though the point of science is to resolve uncertainty, which means being wrong some of the time. “You should be willing to be wrong because you care,” Russell said, “and that’s going to help, ultimately, all of us.”

Finally, Russell observed that it can be difficult to conduct experiments with ARPAs, because program managers and program management are constantly changing. But experiments need to be done to improve, he insisted, which requires the backing and support of leadership. “Can I trust the people I’m doing an experiment with to understand that this is not a failure, that we will know more at the end of this than we knew before?” Russell asked.

Williams asked Russell how it might be determined a decade from now whether ARPA-H was more successful than the rest of NIH in funding research. Russell began by guaranteeing that 10 years from now any organization will be able to point to its successes, but he also mentioned some intriguing possibilities about how such a question might be answered, including modified funding lotteries or simulations that make it possible to do counterfactual reasoning, such as asking: What could have happened if things had been done differently? In particular, counterfactual simulations may provide a “better grasp of complexity,” said Russell. “We’re going to need machines’ help. We can’t do this intuitively.”

EXPERIMENTATION AT THE TECHNOLOGY, INNOVATION AND PARTNERSHIPS DIRECTORATE

Erwin Gianchandani (NSF) said that the Directorate for Technology, Innovation and Partnerships (TIP)—announced by NSF in March 2022 and authorized by the CHIPS and Science Act (P.L. 117-167) passed later that year—is itself an experiment, but its goal is not to be another ARPA. Rather, it seeks to support use-inspired research and to accelerate its progress toward impact. TIP does not have DARPA’s association with the Department of Defense or ARPA-H’s association with NIH. Instead, NSF supports the full breadth of science and engineering from fundamental research to translational work. He noted that TIP builds on the foundational research that NSF has been funding since 1950 and the translational research that it pioneered in the late 1970s when it launched the first Small Business Innovation Research program. TIP is positioned within this spectrum of work, he said. “We want to imbue some of the values and perspectives that have made NSF so successful and impactful for 70-plus years while at the same time . . . accelerating technology development and technology to impact.”

The language of the CHIPS and Science Act authorized the TIP Directorate to think about and experiment with the merit-review process, Gianchandani said. For example, the science press has reported that NSF is contemplating the golden ticket methodology, an experiment that Gianchandani considers interesting. Innovations in the merit-review process are part of TIP’s broader goal of transforming the nation’s research and innovation enterprise. He hears, sometimes on the same day, arguments that (1) TIP is moving too rapidly in support of its goals, to the detriment of NSF’s traditional mission, and (2) conversely, that NSF cannot move fast enough for TIP to enable the transformation described above. That he hears these extremes of the debate, he said, “probably indicates that we’re headed in the right direction.”

Gianchandani also pointed out that the directorate is not starting its examination of merit review from scratch. NSF has been doing experiments in this area for a long time. For example, the agency gives reviewers the ability to say that they particularly like a specific proposal and that it should be advanced. Also, program officers at NSF are highly empowered and are subject-matter experts in their own right. They do not follow the outputs of review panels in lockstep. Rather, they bring their subject-matter expertise into the review process, make final decisions about proposals, and are empowered to create well-balanced portfolios. “That sort of autonomy and flexibility, in many ways, is characteristic of the golden ticket methodology already in practice.”

NSF also has a history of conducting experiments and innovations on a pilot basis across a range of programs. For example, it has run prize challenges with specific goals that it is trying to achieve and, in response, has received proposals from outside the traditional NSF research community. It has piloted asynchronous review processes and opportunities for applicants to receive feedback from reviewers and then respond to it, with program officers adjudicating the responses in light of that feedback. It has sought to promote and advance transparency to the communities that it is serving, both applicants and reviewers. And it has made efforts to measure the progress of pilot programs with quantitative rigor, whether to demonstrate success or the lack of a tangible impact.

NSF has an interest in not disrupting the integrity of the processes on which it is based, said Gianchandani. Rather, it wants to experiment in more nuanced ways to understand the implications of interventions without torquing a system that is working well. It wants to experiment while “ensuring that the outputs that you have are going to have the greatest possible impact that they can,” he explained.

Williams asked Gianchandani to say more about the golden ticket proposal in the context of looking retrospectively at what happened with such projects, while acknowledging that retrospective analyses can give misleading results if changing to a new model changes who applies for a program. Gianchandani replied that there are various ways to design an experiment in which golden tickets are involved. For example, prospective experiments would be possible, although NSF would want to be transparent about its intent to run that experiment in a particular program in a particular context on a pilot basis. The problem with prospective experiments, he continued, is that it can take years to draw conclusions about projects that are funded compared with those that are not. “Maybe not 15 years but some number of months or years as well,” he said. “There are trade-offs that we should all be thinking about as we think about this type of experimentation and what is the goal that we want.”

PANEL DISCUSSION

A workshop participant noted that a common theme among the three presenters was the emphasis on empowering program managers with experience and good judgment; the participant asked how to evaluate whether that discretion is being exercised judiciously. Lorsch pointed out that different parts of NIH use a range of funding strategies, from strict pay-line cutoffs dependent on peer review to systems that provide much more flexibility. NIGMS is on the more flexible side of the spectrum, particularly with the MIRA program. Studying the benefits of this flexibility would be interesting, he acknowledged, although “you would have to be very careful about how you did it.”

Gianchandani said that his directorate has been having conversations within the foundation about the flexibility given to program officers—a key strength for the agency. Relevant natural experiments are also possible: for example, a past translation program transitioned at one point to encouraging program officers to exercise more autonomy with the projects in their portfolios, and the impacts of projects could be studied before and after the transition.

Russell said he would commit the entire federal government to studying such questions, but the ARPA model poses some complications. Analyses of outcomes take time, and part of the ARPA model is to move fast. The model also places value on what he terms “organizational amnesia”—the ability to forget things that did not work in the past because they might work in the present—which can complicate the study of past programs and outcomes.

Asked about the scale at which experiments are most promising, Russell noted that program efforts at an ARPA are not small. At the point that a program is organized, it has been deemed important enough, following some smaller-scale investments, to warrant substantial support: “You don’t want to fail in that area just because you ran out of resources,” he said. If a program does fail, the goal is to have an “intelligible failure” so that the cause of the failure is understood.

In contrast, Lorsch described staged experiments that start out as small pilot programs and then grow if the pilots are successful. “You’re balancing the do-no-harm aspect of things with trying to gain some insight into one model versus another and which works better without betting the whole store right from the beginning.”

Gianchandani also pointed to the value of pilot programs. “You have a hypothesis that you’re trying to test, but you can’t perturb the whole system,” he said, “so you start with a pilot and expand from there.”

In response to a question from Williams about why the MIRA model has not been more widely adopted at NIH, Lorsch said that the institutes and centers have different histories, subject matters, cultures, and philosophies. For example, the basic research funded by NIGMS differs from the clinical research funded by other institutes. Other parts of NIH are experimenting with the structure of programs, but they have done different kinds of experiments, such as trying to replicate the model of the Howard Hughes Medical Institute.

Despite the difficulties of replicating successes among organizations, Russell called attention to the power of social learning, including the learning that occurs through social networks at federal agencies like NIH or NSF. Gianchandani agreed, pointing out that NSF is a relatively small organization of just 2,000 people. However, he noted, it can sometimes be difficult to implement new ideas, given the pressure of running programs and funding research.

In response to a question about how to mitigate biases of various types among peer reviewers and program managers, Gianchandani said that “we can always do better in this space.” Part of the rationale behind the establishment of the TIP Directorate was to engage a more diverse group of people and institutions to accelerate technology to impact. In addition, as noted during his presentation, program managers often work in clusters to manage portfolios of research, a dynamic that helps offset the biases of any single individual. Other ways of reducing the effects of bias include the careful construction of review panels and outreach in soliciting proposals.

Lorsch pointed to layered decision making as a way of reducing bias. At NIH, program officers make recommendations rather than funding decisions, and these recommendations go up the chain of command for final decisions. In fact, the MIRA program, recognizing that early-stage Asian applicants were more likely to have applications denied than other demographic groups, put in place a layered decision-making process that eventually eliminated the disparity.

Russell speculated about the use of machine learning to recognize and help counter biases, since “it’s not obvious to me that you’re going to solve bias by trying to debias people.”

Williams cited the scouts program mentioned by Tilghman (see Chapter 1) as a complement to these other efforts: “Scouts can intentionally seed across fields, across geographies, across whatever dimension where you’re worried that you’re missing things.”

In response to a question about the roles nonfederal government organizations can play in experimentation, Gianchandani expressed great enthusiasm about the opportunities for states, municipalities, civil society organizations, tribal governments, and other groups to shed light on research questions. In addition, he said, these groups should be involved in shaping the research directions that others pursue, which means having them work closely with federal agencies and federally supported researchers. He asked: “What is the hypothesis that you’re testing? What’s the research activity that you’re trying to pursue? How are you going to pilot that prototype in that same city or community or with that same civil society organization?”

In that regard, Lorsch mentioned the Institutional Development Award (IDeA) Networks of Biomedical Research Excellence, which support research and capacity-building in states that historically have not received much NIH funding. NIH also has started a second branch of its medical scientist training programs targeted at IDeA state institutions, historically Black colleges and universities, and tribal colleges and universities, to produce MD-PhD students at a much broader range of institutions.

“We need all hands on deck to find the signal,” said Russell. “You will not get it if you are sitting in a single space talking to people who are just like you.”

In response to a question about how best to construct portfolios of research projects and how to analyze variations in portfolios, Gianchandani said that “the notion of portfolio development and balance is very explicit and ingrained in the ethos of how we think about our programmatic investments across NSF.” For example, in some NSF directorates, program officers work in groups to review projects that have come through the merit review process and then develop portfolios that meet particular criteria, such as having a range of risks. They may even decide not to fund a set of proposals that fared well in the review process, although it is important to explain why those projects are not being funded. “Portfolio construction is ultimately tied to the goals of the program,” he said. Thus, a portfolio for basic research in biology might look very different from a portfolio for a regional innovation program.
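The workshop did not describe any specific algorithm for this, but a minimal sketch of the idea, using invented proposals and a simple quota rule for risk balance, might look like the following. Note how a lower-scoring high-risk proposal can displace a higher-scoring low-risk one, which mirrors the point about declining to fund some well-reviewed proposals in order to balance the portfolio.

```python
# A minimal, invented sketch of risk-balanced portfolio construction: fund the
# highest-rated proposals while reserving slots for each risk category so the
# final portfolio spans a range of risks rather than only the top scores.
# All proposal data and quota values here are hypothetical.

# Hypothetical proposals: (id, merit_score, risk_category)
proposals = [
    ("P1", 4.8, "low"), ("P2", 4.7, "low"), ("P3", 4.5, "medium"),
    ("P4", 4.4, "low"), ("P5", 4.2, "high"), ("P6", 4.0, "high"),
    ("P7", 3.9, "medium"), ("P8", 3.7, "high"),
]

budget_slots = 5
min_per_risk = {"low": 1, "medium": 1, "high": 2}  # program-defined balance targets

by_merit = sorted(proposals, key=lambda p: -p[1])

# First pass: satisfy the minimum quota in each risk category, best score first.
portfolio = []
for risk, quota in min_per_risk.items():
    portfolio.extend([p for p in by_merit if p[2] == risk][:quota])

# Second pass: fill any remaining slots purely by merit score.
remaining = [p for p in by_merit if p not in portfolio]
portfolio = (portfolio + remaining)[:budget_slots]

for pid, score, risk in sorted(portfolio, key=lambda p: -p[1]):
    print(pid, score, risk)
```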

Lorsch agreed that portfolios are a good way to organize research, but he also worries that they may overlook new areas that do not fit into a framework. He also noted that, in analyzing portfolios, NIH has found striking differences among them, such as a preponderance of established rather than early-career researchers. However, what to do with that information is not clear. “Does it tell you anything?” he asked.

Portfolios are also very much part of the mentality in the ARPAs, said Russell, even if the term is not necessarily used. But, again, the ARPA model is distinct in that it tries to move on quickly from unsuccessful projects, so the mix of projects within a portfolio can change rapidly. On that point, Gianchandani said that “a failure shouldn’t be defined as a failure. There’s a lot to be learned from being in a failure mode.” For example, some of the 10-year programs being planned by TIP may not be funded for the entire 10 years. “We have to reorient the definition of success in some respects,” stated Gianchandani.

In response to a question about whether metrics can be identified that would recognize and quantify high-risk, high-reward research, Lorsch pointed out the timescale involved in realizing benefits from basic research: the period between a fundamental discovery and its application is often extended, which can make identifying such metrics difficult.

Reflecting on ongoing outside discussions about establishing a national network for critical technology assessment, which would try to identify areas ripe for investment to sustain U.S. leadership, Gianchandani noted in response to a question that bibliometric data (publications and conference proceedings) alone are not the answer, since in some cases the ultimate measure of success is not only publications but also societal and economic impacts. Although foundational work that results in publications is “the lifeblood of use-inspired research,” different kinds of metrics are needed to gauge the potential of different kinds of research. Other examples of metrics might include disclosures filed, patents awarded, and start-up companies established and acquired.

ARPAs have more specific goals that relate to their customers, said Russell. For example, DARPA is trying to increase defense capabilities, as is IARPA in the area of intelligence. Russell also mentioned a series of questions developed by former DARPA director George Heilmeier to help evaluate proposed research programs:2

  • What are you trying to do? Articulate your objectives using absolutely no jargon.
  • How is it done today, and what are the limits of current practice?
  • What is new in your approach and why do you think it will be successful?
  • Who cares? If you are successful, what difference will it make?
  • What are the risks?
  • How much will it cost?
  • How long will it take?
  • What are the mid-term and final “exams” to check for success?

Perhaps these questions suggest metrics that can be applied more broadly, he said.

___________________

2 See https://www.darpa.mil/work-with-us/heilmeier-catechism
