Experimental Approaches to Improving Research Funding Programs: Proceedings of a Workshop (2024)

Suggested Citation: "7 Models for Institutionalizing Experimentation in U.S. Federal Agencies." National Academies of Sciences, Engineering, and Medicine. 2024. Experimental Approaches to Improving Research Funding Programs: Proceedings of a Workshop. Washington, DC: The National Academies Press. doi: 10.17226/27244.

7

Models for Institutionalizing Experimentation in U.S. Federal Agencies

The sixth and final workshop panel was on models for institutionalizing experimentation in U.S. federal agencies. Caleb Watney (Institute for Progress) pointed out that the core issues include how to make scientific funding more experimental, how to fund younger researchers, and how to improve the productivity of the research ecosystem. In thinking about models for institutionalizing experimentation, he said, an important point to keep in mind is that the incentives and the kinds of infrastructure needed for experimentation differ considerably depending on the goals.

One-off experiments can rely on a single willing program officer working on low-profile questions that avoid scrutiny from the public or Congress. But the actual goal is to run many experiments in a consistent, iterative way to build information on the productivity of the enterprise. That requires an infrastructure to conduct experiments and to scale interventions that work, which in turn requires thinking carefully about such issues as:

  • fashioning responsive hiring and promotion models,
  • creating buy-in among the rank and file of an organization,
  • communicating findings to the public,
  • integrating individuals with specialized skill sets into the design of experiments,
  • soliciting and prioritizing the types of questions to be investigated,
  • securing funding streams and institutional autonomy to keep experiments going, and
  • scaling experiments over time.

“The correct model for institutionalizing experimentation is going to vary depending on the specific organization, its characteristics, its congressional mandate, the budget, and the talent structure that they have,” Watney said. “But there’s a lot we can learn from the kinds of innovation or experimentation centers that do exist within the government, or from people within the federal government who have been trying to do this on a day-to-day basis.”

OPPORTUNITIES FOR EXPERIMENTATION AT THE U.S. PATENT AND TRADEMARK OFFICE

The U.S. Patent and Trademark Office (USPTO) does many pilots and experiments, said Martin Rater (USPTO) (for an example, see Chapter 2). But the experiments often encounter the problem of attacking complex problems with a single idea. “We’re looking to optimize time, quality, cost,” he said, “and we’re trying to do all three of those things at the same time.” The result is a lack of return on investments in experimentation and less ability to control experiments that are underway. So many initiatives may be going on at the same time, said Rater, that their quality and timeliness go down and their costs increase.

Yet the USPTO remains fertile ground for experimentation, Rater insisted. It has probably 10,000 people involved with patents, including engineers, scientists, and statisticians who know how to experiment. It has 8,300 patent examiners doing essentially the same job, making it easy to create treatment and control groups, although different groups are focused on different technologies. It also deals with some 50,000 attorneys and other agents representing their clients’ interests.

As an example, Rater briefly described an experiment involving 36 patent examiners with expertise in many different technology areas who have been combined into a group through which experiments and pilot programs can be carried out. These examiners can be compared among themselves and with examiners who are not in the group. “I can find a doppelganger for every one of those 36 and we can monitor that group even though they’re not part of our unit,” Rater said. Ultimately, similar models could be applied elsewhere in the patent office or with select groups of applicants to learn more about potential improvements.
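Rater’s “doppelganger” idea is, in essence, matching each examiner in the experimental group to the most similar examiner outside it. A minimal sketch of nearest-neighbor matching on standardized observable characteristics follows; the examiner records, feature names, and values are entirely hypothetical, not USPTO data.

```python
import math

def nearest_match(treated, pool, features):
    """For each examiner in the treated group, find the closest unmatched
    examiner in the pool by squared Euclidean distance over standardized
    features (matching without replacement)."""
    # Standardize each feature across the pool so no single scale dominates.
    stats = {}
    for f in features:
        vals = [p[f] for p in pool]
        mean = sum(vals) / len(vals)
        sd = math.sqrt(sum((v - mean) ** 2 for v in vals) / len(vals)) or 1.0
        stats[f] = (mean, sd)

    def z(rec, f):
        mean, sd = stats[f]
        return (rec[f] - mean) / sd

    matches, used = {}, set()
    for t in treated:
        best, best_d = None, float("inf")
        for c in pool:
            if c["id"] in used:
                continue
            d = sum((z(t, f) - z(c, f)) ** 2 for f in features)
            if d < best_d:
                best, best_d = c, d
        used.add(best["id"])
        matches[t["id"]] = best["id"]
    return matches

# Hypothetical examiner records: tenure (years) and annual output.
treated = [{"id": "T1", "tenure": 5, "output": 80},
           {"id": "T2", "tenure": 12, "output": 60}]
pool = [{"id": "C1", "tenure": 11, "output": 62},
        {"id": "C2", "tenure": 4, "output": 78},
        {"id": "C3", "tenure": 20, "output": 40}]

print(nearest_match(treated, pool, ["tenure", "output"]))
# → {'T1': 'C2', 'T2': 'C1'}
```

In practice a matching design for 8,300 examiners would also need to account for the technology areas Rater mentioned, for example by matching only within the same art unit.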

In response to a question, Rater addressed the issue of how the bargaining unit representing patent examiners tends to interact with researchers. Negotiations between the bargaining unit and researchers may change the conditions under which a hypothesis is tested or the implementation of a new idea, he said. One way to address this issue is to form smaller units of patent examiners with which to conduct an experiment and possibly to do phased implementation based on the data that become available. The USPTO is also very responsive to its customers, said Rater, and conducts many surveys about their experiences, and it solicits its employees for ideas about how the services it provides can be improved.

Asked what success—or failure—of experimentation at the USPTO would look like 10 years from now, Rater noted that his agency maintains a variety of specific measures of productivity and quality, and one indication of success would be improvements in those measures. But a more fundamental measure would be a change of culture, and “we’re already seeing it through the ideation we have,” he said. Sitting down and talking with people about the problems they have and how to solve those problems can shed new light on troublesome issues—for example, “understanding that it’s not a people problem, it’s a process problem.” Changing the culture is particularly crucial with the younger people in the agency, he said, who are going to be leading the agency in the future.

The Achilles’ heel of this process, Rater continued, is change management. The customers of the USPTO greatly value predictability, and change can be disruptive. In addition, the USPTO’s customers are international, he said, so “we have to ideate and find solutions with those international offices as well.” In some cases, this may require taking the lead from patent offices in other countries or listening to customer bases that have been overlooked in the past. “I’m a big fan of integrative learning. What can we observe from what other agencies are doing, maybe [agencies] that are not in the patent business?”

INNOVATION IN THE HEALTH CARE SYSTEM

In 2010, enactment of the Affordable Care Act (P.L. 111-148) created the Center for Medicare and Medicaid Innovation (CMMI) under the Centers for Medicare & Medicaid Services (CMS); it subsumed the preexisting Office of Research Data and Information. But CMMI had several new features that were unusual among federal agency research units, said Hoangmai Pham (Institute for Exceptional Care). CMMI was empowered to test models that had the potential either to generate savings for Medicare and Medicaid while maintaining quality or—the inverse—to improve quality while not increasing spending. Furthermore, success would be defined through a combination of a formal assessment by the CMS Office of the Actuary and the imprimatur of the secretary of health and human services that a model had met the criteria for scaling. If it did, the secretary had the authority to scale that model without an act of Congress. Congress also gave CMMI authority to waive many existing laws and regulations for Medicare, although less so for Medicaid, and to exercise exemptions from judicial review, the Paperwork Reduction Act (P.L. 104-13), and other requirements.

CMS represents the two largest U.S. health care payers—Medicaid, at over $1 trillion of annual expenditures, and Medicare, at almost $1 trillion—“collectively, more than what the Defense Department spends in a year,” Pham said. When CMS gets regulations wrong, it can wreak economic havoc, according to Pham, who was CMMI’s seventh employee. “There’s tremendous opportunity and tremendous risk.”

The foundational elements of CMMI have had a major effect on building and maintaining the organization’s portfolio and on scaling up experiments. “We didn’t have the luxury of not thinking about scaling,” said Pham. CMS is such a large payer and regulator that its experiments have moved the market whether it wanted them to or not. Furthermore, she stated, the stakes were large, in that the leadership of the organization wanted to change the culture of health care toward a mindset that was more value oriented than volume oriented. Considerable time, energy, and money were spent on deciding who would say what to whom using what podiums.

CMMI has spent billions of dollars on innovation grants and learning systems. In the process, it built a portfolio of more than 30 different payment models. This was important to generate momentum and to convince health care systems to change, Pham said. But at a certain point the organization had to switch to a more curated approach so that it did not send mixed messages to the marketplace. She stated that “people know generally the direction you want to go, but they don’t know which of the 35 things you’re offering they should choose from. That was one implication for portfolio management.”

The Affordable Care Act also created a permanent accountable care organization in Medicare called the Medicare Shared Savings Program, and CMMI, despite the resources at its command, was always in service to that much larger program, Pham explained. As a result, decisions about building the infrastructure needed for experimentation were difficult. CMMI officials also needed to think about how experiments would be perceived and what kind of reactions they would generate. “We were scientists,” she said. “We tried hard to be data driven. But there was PR [public relations] involved.”

These novel circumstances had consequences for the evaluations that were done. The leaders in both CMS and CMMI were former operational health care officials, and they were not necessarily experienced with randomized controlled trials or other forms of experimentation. Pham’s job was in part to act as a bridge between the researchers and the administrators: for example, she had a slide deck entitled “What We Talk About When We Talk About Evaluation.” The experiments also had such massive effects in the health care marketplace that constructing control groups was difficult. “When practically every major health care organization in the land has signed some kind of value-based payment contract, what exactly is nonintervention in that context?” she asked. “That continues to muddy the waters.”

Eventually, a decision was made that researchers would provide the administrators with one or more of the following:

  • a binary answer about whether a program had beneficial effects;
  • qualitative data on what was learned in the program;
  • performance tracking data; and
  • for large investments in mission-critical domains, Bayesian results that provided some sense of probability so that decision makers could act.

“That has not resulted in a floodgate of models that have met expansion criteria,” Pham said. But it has provided evaluations with more face validity for decision makers, as well as for program participants.
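The Bayesian option in the list above amounts to reporting a probability that a model works, rather than a binary verdict. A minimal sketch of that idea, assuming a normal-normal conjugate model and entirely hypothetical savings figures (not CMMI methodology or data):

```python
import math

def posterior_prob_savings(est, se, prior_mean=0.0, prior_sd=50.0):
    """Combine a skeptical prior on per-beneficiary savings (centered at
    zero) with an observed estimate via a normal-normal conjugate update,
    and return P(true savings > 0) under the posterior."""
    prior_prec = 1.0 / prior_sd ** 2   # precision = 1 / variance
    data_prec = 1.0 / se ** 2
    post_var = 1.0 / (prior_prec + data_prec)
    post_mean = post_var * (prior_prec * prior_mean + data_prec * est)
    post_sd = math.sqrt(post_var)
    # Normal CDF evaluated at zero: P(effect > 0)
    return 0.5 * (1.0 + math.erf(post_mean / (post_sd * math.sqrt(2))))

# Hypothetical model: estimated $30 savings per beneficiary, SE of $15.
p = posterior_prob_savings(est=30.0, se=15.0)
print(f"P(model saves money) = {p:.2f}")
# → P(model saves money) = 0.97
```

A decision maker can then act on a threshold (say, scale if the probability exceeds 0.9) instead of waiting for a conventional significance test to resolve.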

In response to a question about the features of CMMI that have made it effective and what she would have changed to make it more effective, Pham pointed to the special hiring authority the agency was given. For instance, she observed, some of their most productive employees were new master’s graduates who had some real-world experience—“kind of a best athlete, willing to do anything, my phrase was ‘look around, pick up a shovel.’” Pham also emphasized the importance of having staff members who understand both policy and operations. “In my experience, you couldn’t actually do the work well unless you understood both. And those folks were rare. . . . If you don’t understand how one attributes beneficiaries to a given service provider in Medicare, there are only so many ways that you can make hardware and software move on that,” she said. In particular, the checks provided by operational people on the experiments being considered were critical in that some proposed experiments would have been impossible or counterproductive to carry out given the overall structure of the health care system.

Pham also said that she would have spent the money available to the agency more slowly. “I would have been more deliberate about picking fewer, potentially high-impact places to invest rather than the scattershot approach.”

PANEL DISCUSSION

Moderator Watney asked the panelists about the skill sets among employees of their agencies that are most valuable for experimentation. “The ability to see outside the box,” said Pham. “Give me someone who is not constrained by priors and is willing to be creative. That will take me pretty far.”

Rater returned to the idea of integrative learning. Many of the 36 examiners involved in their experimental group had outside interests, such as hobbies that could inform their work. “We want you to bring that skill in here,” he said. “Instead of talking work–life balance, let’s start talking work–life integration. We’re going to embrace that, and we’re going to make both worlds better because of it.”

In response to a question about how exchanges with external stakeholders affect experimentation, Pham reiterated that CMMI has extensive interactions with external stakeholders—“and I would count in that bucket not just industry actors but also Congress, the trade press, and the broader media.” Her decision was to be extremely transparent about what her team was doing and tell people about their plans and challenges. “We were going to codesign this model with them, and many elements of that first payment model reflected things that we heard in direct conversation with health care providers as well as private payers.” This can result in “pretty frank feedback,” but it is better to hear from program participants directly than through the press. The one lacking element, she added, is a good way to involve the beneficiaries of Medicare and Medicaid in experimentation. “We like to say, ‘nothing about them without them,’ but that hasn’t translated into hardwired processes.”

Rater mentioned the public advisory committee to the USPTO—“we have a good interaction with them”—which oversees whether the office has evidence supporting its actions. The USPTO also collects customer data, although it has to be cautious not to make changes just on the basis of what it hears. “There are always competing priorities that we have to balance,” he said.

Finally, Pham pointed out that Medicare and Medicaid combined spend around $2 trillion a year, while CMMI spends on average about $1 billion annually. “Most pharmaceutical executives would get fired for spending that small a percentage on R&D,” she said. Yet CMS has the largest investment for policy experimentation across the federal government outside of the Department of Defense. Pham noted, “That says something about what could be possible.”
