During the final session of the workshop, each member of a panel of discussants sought to identify the top challenges and most promising areas to pursue in evaluating genomic information in the era of next-generation sequencing. The group addressed such issues as a framework for reimbursement of genetic testing; understanding the clinical context in which testing information is used, or “evidence fit for purpose,” as David Veenstra, the workshop chair, said; forming data resource collaborations, such as ClinGen; and population-based studies for evidence generation.
Box 6-1 lists the major themes that emerged during the workshop. Box 6-2 contains suggestions and proposals from individual workshop speakers for assessing genomic sequencing information.
BOX 6-1
Topics That Were Addressed During the Workshop
David Veenstra, the workshop chair, listed several points that had been mentioned repeatedly during the course of discussion about evaluating evidence:
BOX 6-2
Proposals Made By Individual Speakers
Robert McDonough of Aetna said that the biggest challenge is coming up with a logical, pragmatic framework for reimbursement. “We need to be able to try to come to some consensus and have some consistency around what type of genomic testing is useful,” he said.
With regard to reimbursement, Shashikant Kulkarni, director of cytogenomics and molecular pathology at the Washington University School of Medicine, observed that work is under way to identify genetic tests with established clinical utility so that reimbursement makes sense. But a lack of information about next-generation sequencing hinders reimbursement decisions. For example, amplicon-based tests cost much less than large-scale sequencing but are not equivalent to whole gene panels. “When it comes to reimbursement, the payers should take into consideration these differences in approaches.”
The most significant difficulty is the lack of evidence, said Robert Green of Brigham and Women’s Hospital and Harvard Medical School. “Lacking evidence is not something that is entirely new to doctors. Doctors have been practicing medicine without evidence for a long time and continue to do so in lots of domains. [But more evidence is] definitely something we need.” In particular, Green called for more coordinated sharing of genotype–phenotype correlations over the next 5 to 10 years. The ClinGen collaboration is a good first step, he said, but even that “is probably underfunded for what is going to happen.” Jonathan Berg of the University of North Carolina at Chapel Hill agreed that projects like ClinGen provide an opportunity to share data in a common format and language but that a clinically relevant resource is needed for mining variants from different sources.
“There is a significant body of data out there which we and others are mining: the Cancer Genome Atlas and the International Consortium of Cancer Genomics, which is beginning to produce an enormous amount of data,” Kulkarni said. “Still, it’s a huge amount of data which has to be mined.” Data analysis and interpretation are time consuming, so even with a significant amount of information, he said, the field of oncology suffers from a similar lack of evidence for the majority of genetic variants.
Sequencing Standards
Establishing quality standards for sequencing studies—and also for how to report on such studies—would be valuable, said Katrina Goddard of the Kaiser Permanente Northwest Center for Health Research. For example, sometimes a study is rated as being of poor quality because the information needed to assess the quality of the study is not included in the literature. Green added that this idea could be implemented if journals led the way. Standards have been established for both conducting and reporting on randomized clinical trials, he observed, and something similar could be done for gene association studies.
Kulkarni also called attention to the lack of sequencing standards for such parameters as sensitivity and specificity. For example, some groups are using 200 nanograms of DNA for detecting variants present in 10 percent of the tumor cells, he said, while others claim that only 5 or 10 nanograms provides sufficient sensitivity. These issues are even more pressing in cancer, where the frequency of alleles and composition of cells within a tumor can differ. “We need to address standards to understand what types of minimal requirements are essential,” he said.
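The sensitivity question Kulkarni raises can be made concrete with a simple binomial model (an illustrative sketch, not something presented at the workshop; the depths, allele fractions, and read thresholds below are assumed for illustration): at a given sequencing depth, the probability of observing a low-fraction tumor variant at all depends sharply on how many reads cover the site.

```python
from math import comb

def prob_detect(depth: int, vaf: float, min_reads: int) -> float:
    """Probability that a variant with allele fraction `vaf` is supported by
    at least `min_reads` of `depth` independent reads (binomial model,
    ignoring sequencing error)."""
    p_miss = sum(
        comb(depth, k) * vaf**k * (1 - vaf) ** (depth - k)
        for k in range(min_reads)
    )
    return 1 - p_miss

# A variant in 10 percent of tumor cells (heterozygous, so ~5% of reads),
# requiring 3 supporting reads to call:
print(prob_detect(100, 0.05, 3))  # ≈ 0.88 at 100x coverage
print(prob_detect(500, 0.05, 3))  # near-certain detection at 500x
```

The point of the sketch is that "minimal requirements" such as input DNA mass translate, through coverage depth, into a quantifiable chance of missing low-fraction variants, which is why cross-laboratory standards matter.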
Because of the concern that data extrapolated from high-risk populations may not be generalizable to the larger population, there is a need to collect data from large population-based studies, Goddard said. Information from clinical studies and from research studies must be combined to arrive at valid conclusions at the population level. “By combining across different efforts, you may be able to get a sufficient sample size,” she said. Green noted that collecting such information longitudinally would be very expensive, to which Goddard responded that simply starting with unselected populations would be a step in the right direction.
Goddard pointed to initiatives that are using EHRs as a source of research data. Green also pointed to the need to look beyond single patients to entire families. Although there will be HIPAA challenges, it would be useful to use EHRs that contain phenotype data and to link that information with genotypes and phenotypes from other family members, he said. Genetics does this at an individual level, but it has not made the transition to a macro level. Jessica Everett of the University of Michigan Comprehensive Cancer Center noted, however, that the sequencing of family members is typically not reimbursed, even when the information would be extremely useful in understanding a condition.
Large-scale genome sequencing efforts are now under way in the United States,[1] the United Kingdom, and Saudi Arabia,[2] Green said (Callaway, 2013). Berg observed, however, that the challenge is doing the phenotyping. “It’s trivial to sequence a million genomes compared to phenotyping a million people,” he said. Until enough people with rare variants are phenotyped, the penetrance of those variants will be largely unknown, he added. A million genomes may not even be close to the sample size that is needed for generating the evidence, Veenstra said.
_____________________
[1] Regeneron and Geisinger Health System Announce Major Human Genetics Research Collaboration, http://investor.regeneron.com/releasedetail.cfm?ReleaseID=818844 (accessed May 15, 2014).
[2] Saudi Human Genome Program, http://rc.kfshrc.edu.sa/sgp (accessed May 15, 2014).
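A back-of-the-envelope calculation illustrates why even a million genomes may fall short for estimating penetrance (an illustrative sketch; the carrier frequency and counts below are assumed, not figures from the workshop): a rare variant carried by 1 in 10,000 people yields only about 100 carriers in a million-genome cohort, so the penetrance estimate carries a wide confidence interval.

```python
from math import sqrt

def penetrance_ci(affected: int, carriers: int, z: float = 1.96):
    """Point estimate and normal-approximation 95% confidence interval
    for penetrance = P(phenotype | carrying the variant)."""
    p = affected / carriers
    half_width = z * sqrt(p * (1 - p) / carriers)
    return p, p - half_width, p + half_width

cohort = 1_000_000
carrier_frequency = 1e-4          # hypothetical rare variant
carriers = int(cohort * carrier_frequency)  # about 100 carriers

# If 50 of those 100 carriers are affected:
p, lo, hi = penetrance_ci(50, carriers)
print(p, lo, hi)  # ≈ 0.50, with a CI spanning roughly 0.40 to 0.60
```

Even with every carrier in the cohort fully phenotyped, the estimate is uncertain by about ten percentage points in either direction, which is consistent with Veenstra's caution that a million genomes may not be close to the sample size needed.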
“Next-generation sequencing is a disruptive technology,” Veenstra said. In fact, it is likely also disruptive to the process of evidence-based medicine, especially given the issues raised by the many possible causative variants and by secondary or incidental findings. The way these issues can be addressed, he said, is by continuing to increase our understanding of how policy and treatment decisions are made in an era of limited evidence and a large volume of information.