Use of Meta-Analyses in Nutrition Research and Policy: Proceedings of a Workshop Series (2024)

Chapter: 2 Planning of Systematic Reviews and Meta-Analyses

Suggested Citation: "2 Planning of Systematic Reviews and Meta-Analyses." National Academies of Sciences, Engineering, and Medicine. 2024. Use of Meta-Analyses in Nutrition Research and Policy: Proceedings of a Workshop Series. Washington, DC: The National Academies Press. doi: 10.17226/27481.

2

Planning of Systematic Reviews and Meta-Analyses

This chapter describes the presentations and discussions that took place during the first workshop, titled Use of Meta-Analyses in Nutrition Research and Policy: Planning of Meta-Analysis, which took place on September 19, 2023. The objectives of the workshop were to

  • Apply criteria to select studies for inclusion in systematic reviews (SRs) and meta-analyses (MAs), with a focus on PICO (Population, Intervention [including treatment, dose, duration], Comparators [with consideration of diet], and Outcomes [with consideration for adjustment for confounders/covariates]);
  • Account for subgroup and sensitivity analyses when planning an SR and MA; and
  • Use an appropriate data management system for extracting data.

Following the opening remarks, planning committee member Amanda J. MacFarlane of Texas A&M University welcomed attendees and introduced the day’s two main presenters, Celeste Naude of Stellenbosch University and Lee Hooper of the University of East Anglia. Naude and Hooper’s presentations addressed the foundational aspects of planning and delivering a high-quality nutrition SR and MA. Their presentation was divided into two parts, with Naude delivering the first section on planning for a successful SR and MA and Hooper speaking about methods for SRs and MAs in the second half. A panel discussion moderated by MacFarlane followed the presentations. The panel included Naude, Hooper, and two additional discussants, Sydne Newberry of the RAND Corporation and Christopher Schmid of the Brown University School of Public Health.

PLANNING FOR SYSTEMATIC REVIEWS AND META-ANALYSES

Naude and Hooper’s presentation was titled “Systematic Reviews & Meta-Analysis for Developing Nutrition Guidance: The Core Pillars of Planning and Methods to Deliver High Quality, Useful Synthesized Evidence.” The presentation addressed the following questions, which were posed in advance by the workshop sponsor:

  • What are best practices in identifying and avoiding extraction errors and errors in calculating mean differences and confidence intervals from the primary studies that are included in a meta-analysis, given that these errors are common in published literature?
  • How should an MA be evaluated for methodological quality when extraction or data errors are present? At what point do data errors (in kind and number) reach a level that invalidates the conclusions of the MA?
  • How should risk of bias be considered when evaluating diet and disease relationships?
  • What are the best practices for addressing publication bias?
  • What criteria should be used to determine whether individual nutrition studies have too many clinical or methodological differences (e.g., treatment, dose, population, mean body mass index, duration, comparators/diets, results) to be combined into the same MA?
  • What are best practices for planning appropriate subgroups and sensitivity analysis a priori?
  • How can MA be used to evaluate the strength of the evidence when different outcomes are reported in different studies? And how can MA be used to evaluate the strength of the totality of evidence?

Naude began by providing a brief overview of the presentation topics. She noted the importance of the planning phase of an SR or MA and how the planning process can set up a research team for success. Naude framed her involvement in the field of nutrition research, noting that she is the codirector of Cochrane Nutrition and is involved in other Cochrane groups. She is also a founding member of the South Africa Grading of Recommendations, Assessment, Development and Evaluation (GRADE) Network. GRADE is a research tool used to develop and present summaries of evidence and provide a systematic and transparent approach for assessing the certainty (or quality) of the evidence and making recommendations. Naude stated that she receives no industry funding. As she introduced the topics and content of her presentation, Naude contextualized that much of the content had been guided by the latest online version of the Cochrane Handbook for Systematic Reviews of Interventions, version 6.4 (updated August 2023).1

Naude explained the basic concepts of SRs and MAs. She stated that while many people confuse the two, there are important differences between them, and they can be effectively used in conjunction with each other. An SR is a well-defined and described research method, and an MA is a method of statistical analysis, which may be part of the SR process. SRs pull together the results of many primary studies that meet pre-specified criteria to answer clear and well-framed research questions and are used to minimize bias when reviewing and assessing evidence. The analysis and interpretations used in an SR consider the internal validity of the studies involved and additional factors that may impact the certainty of the evidence. Naude stated that SRs should be performed in rigorous, methodical ways to reduce reviewer selection bias, and should have pre-determined rules for identifying and including studies.

As Naude described, an MA is a statistical method that may be part of the SR process. She emphasized that all MAs should be informed by a rigorous SR. Naude explained that MAs combine the numerical results of studies2 (e.g., mean differences, odds ratios, risk ratios, or confidence intervals) and require transparency and careful planning. The results of an MA, as well as an SR, can be misleading if the underlying methods are not sound. Naude explained that setting up an SR or MA with clear questions and methods a priori is essential for minimizing bias and improving the reliability of the results. Naude acknowledged that conducting SRs and MAs can be time consuming and complex but reiterated that completing them correctly is essential because, when done right, they can be useful tools in the development of evidence-based policies and guidelines.
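To make the idea of combining numerical results concrete, the sketch below pools hypothetical mean differences from three invented trials using simple inverse-variance (fixed-effect) weighting. This is illustrative only; a real MA would use the pooling model pre-specified in the review protocol, often a random-effects model.

```python
import math

def pool_fixed_effect(estimates, std_errors):
    """Inverse-variance fixed-effect pooling of study effect estimates.
    Returns the pooled estimate and its 95% confidence interval."""
    weights = [1.0 / se**2 for se in std_errors]
    pooled = sum(w * est for w, est in zip(weights, estimates)) / sum(weights)
    pooled_se = math.sqrt(1.0 / sum(weights))
    return pooled, (pooled - 1.96 * pooled_se, pooled + 1.96 * pooled_se)

# Hypothetical mean differences and standard errors from three trials
estimates = [-0.30, -0.10, -0.20]
std_errors = [0.10, 0.15, 0.08]
pooled, ci = pool_fixed_effect(estimates, std_errors)
print(round(pooled, 3), tuple(round(x, 3) for x in ci))
```

Each study's weight is the inverse of its variance, so more precise studies (smaller standard errors) pull the pooled estimate toward their result.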

Reviewing a body of evidence is challenging due to the growing volume of research, variability in study designs, and haphazard and biased access to research. To this end, Naude noted that decision makers face challenges in using vast amounts of primary research to enable informed policy development, making SRs and MAs key to the planning and execution of effective policies.

Naude described the multitude of benefits that MAs and SRs provide for the policy development process. When completed correctly, MAs and SRs can increase transparency and objectivity, reduce bias, capture the totality of evidence on a subject, create more precise results, capture the uncertainty of existing findings, and point to the need for additional research on a topic. Again, Naude reiterated the need for strong, consistent methods to ensure proper protocols and sound decision making.

___________________

1 https://training.cochrane.org/handbook (accessed January 3, 2024).

2 Discussions on combining the numerical results of primary studies for an MA are in the second workshop in this series.

Naude also spoke about the planning of SRs, describing the four pillars of the planning process:

  • Gathering the necessary background information;
  • Assembling an author team;
  • Assembling adequate resources for the SR; and
  • Establishing the question that the SR seeks to answer.

Naude provided a thorough explanation of each pillar. When considering the first pillar of background information, Naude noted that it is important to develop an SR that is “fit to the purpose,” which she explained as considering the purported use of the SR, the end product to be informed by the guidance produced by the SR, and the target audience. She also explained that SRs are commonly used when there is a “guidance gap”—a lack of sufficient information to create reliable guidance on an issue or a lack of clarity on how existing guidance should be applied.

Speaking on the second pillar of developing the author team, Naude noted the importance of minimizing conflicts of interest in both funding and nonfinancial interests. She said that one should seek to assemble a “dream team” consisting of specialists with expertise in both methods and subject matter. The team should also include the perspectives of key stakeholders and have a project lead with strong project management and relational skills. The authors of the SR, importantly, should not have a vested interest in the findings of the SR. Objectivity is key, Naude reinforced.

The third pillar is having adequate resources to perform the SR and adhering to an established timeline. The third pillar also focuses on “reliability,” which includes good data management and quality assurance processes that allow for data replicability and enhanced credibility. Naude noted that transparent reporting and an audit trail for decision making further enhance credibility. She suggested that new software, and more recently the use of artificial intelligence (AI)-enabled programs, can be used to support this process.

The fourth pillar focuses on the development of the question to be answered by the SR. Naude explained that this question guides all future work during the SR process. She spoke of the importance of taking time to clearly frame and develop the question with the entire team, noting that the question might be a problem statement. She said that the question needs to be clear, but it could be either broad or narrow, depending on the purpose of the SR, which stems from the needs of the user and the context of how the SR will be used. To develop the question, Naude stated that the team should begin by considering the aim of the review and then use the PICO or Population, Exposure, Comparator, Outcome (PECO) construct to develop the question. Using this framework can help the team decide what factors are most critical in the development of their research question.

Naude highlighted that a logic model and conceptual framework are key parts of the planning process for SRs. Because nutrition studies are inherently complex and multi-factorial, logic models are useful for unpacking this complexity and refining the question being asked in the review. Logic models can also help guide the review process in general. The PICO construct might be used at three stages of the review process to help guide decision making. For example, the review PICO planned during the protocol stage helps to decide on the inclusion and exclusion criteria for study eligibility. The PICO for synthesis, which is also planned during the protocol stage, helps the researchers determine the planned comparisons, such as intervention and comparator groups, or groupings of outcomes and population subgroups. Finally, the PICO of the included studies, determined during the review stage, considers what was examined in the studies that have been included in the SR and helps the team understand the extent to which data from these studies, and the review overall, are generalizable.

Naude concluded her presentation by noting the necessity of choosing which outcomes are included in an SR and weighting these outcomes by level of importance. She said that this process fundamentally impacts the strength of an SR. Naude noted that nutrition studies often have many variables, and it is common to encounter different ways of measuring the same outcome. When rating the importance of variables, Naude suggested considering what outcomes are of greatest importance to policy makers that will use the review to inform policy and guideline development. Another way to choose outcomes of interest is by understanding their clinical relevance, which can be enhanced by including subject-matter experts on the review team. Naude also urged researchers to be mindful of the conflicts of interest that may arise with industry involvement in research. Finally, she reminded the audience of the inherent complexity of nutrition research and the common finding of a multiplicity of results. Different studies, even those measuring the same inputs or nutrients, may show varying results. For example, a positive effect could be seen from a nutrient in one study, but another study may show no effect, creating difficulty for teams to combine these results. In these cases, the use of experts remains particularly important to the evaluation and analysis processes.


METHODS FOR REDUCING THE RISK OF BIAS IN SYSTEMATIC REVIEWS AND META-ANALYSES

“Doing a meta-analysis is easy. Doing one well is hard.” Hooper opened her remarks at the first of three workshops on the use of MAs in nutrition research and policy with this quote from Ingram Olkin, a former professor of statistics and education at Stanford University.

Hooper began her presentation about the methods for reducing the risk of bias in SRs and MAs. She noted that the methods and resources discussed were largely drawn from Cochrane guidance3 and training4 materials. Hooper disclosed that while she had not received industry funding during the last decade, she had received funding from a variety of research organizations. She stated that she has been an editor for the Cochrane Heart Group and Oral Health Group and noted that she is a member of the World Health Organization (WHO) Nutrition Guidance Expert Advisory Group (NUGAG).

Hooper introduced the seven methodological pillars for reducing risk of bias in SRs. They include:

  • Writing the protocol;
  • Searching for studies;
  • Selecting studies and collecting data;
  • Assessing risk of bias of included studies;
  • Analyzing the data;
  • Interpreting the findings; and
  • Reporting the review.

Hooper began by describing the second pillar, searching for studies. She said that it is the foundation of a good SR, explaining that searches should span across years, databases, languages, and paradigms. Too narrow a search can add bias, as can a search that only includes research published in English or does not include searches of trials registers. She suggested the importance of working with an expert in developing sensitive and specific search strategies (including searching in “gray” literature, which may include unpublished work, theses, and government papers) to ensure important studies are not missed in this key stage. Hooper stated that the search process should be peer reviewed, detailed, and reproducible. The review question (the PICO or PECO formatted question) should inform both the search strategy and the pre-specified criteria for selection of studies.

___________________

3 For guidance resources, see https://training.cochrane.org/handbook; https://community.cochrane.org/mecir-manual; https://jbi-global-wiki.refined.site/space/MANUAL (all accessed January 3, 2024).

4 For training resources, see https://training.cochrane.org/interactivelearning; https://www.human.cornell.edu/dns/who-cochrane-cornell-summer-institute (both accessed January 3, 2024).

With respect to the third pillar, selecting studies and collecting data, Hooper suggested that the team consider which papers found in the search fulfill the team’s predetermined inclusion criteria and that the assessment of inclusion be done independently in duplicate. The research team should refer to the PICO or PECO question and develop rules to resolve disagreements around which studies to include or exclude. Hooper suggested that any disagreements be resolved based on the established inclusion criteria, with reference to how the review question is most reliably addressed; after such a discussion, criteria may be refined (and such refinements reported). The process of assessing inclusion needs to be systematic and recorded in detail.

After studies are selected, data collection occurs. Hooper explained that data collection should be carried out independently in duplicate, and any discrepancies in the data should be discussed within the team. Hooper said that it is important for SRs to gather data on included study methods, participants, interventions, comparators, flow (numbers recruited, dropping out, and analyzed), and outcomes. Nutrition reviews also need to collect data on baseline nutrition status and how fully the nutrition intervention was implemented (what exactly was the intervention and how well did the participants stick to it, with the equivalent for observational studies if these are included). She emphasized that outcome data are not monolithic and suggested reaching out to a study’s authors when any outcome data are unclear or presented in a way that cannot be compared with those of other studies, urging vigilance when data do not seem believable. She further stated that rejecting fraudulent data is very important, as such data can add extreme bias to an SR.
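The duplicate-extraction check described here amounts to a field-by-field comparison of two independent records; the extractor records and field names below are hypothetical:

```python
# Hypothetical records from two independent extractors (field -> value)
extractor_1 = {"n_randomized": 120, "mean_diff": -0.3, "duration_wk": 52}
extractor_2 = {"n_randomized": 120, "mean_diff": -0.2, "duration_wk": 52}

def discrepancies(a, b):
    # Fields where the two extractions disagree, flagged for team discussion
    return {k: (a[k], b[k]) for k in a if a.get(k) != b.get(k)}

print(discrepancies(extractor_1, extractor_2))  # {'mean_diff': (-0.3, -0.2)}
```

Any flagged field would be resolved by returning to the source paper, or by contacting the study authors, before the value enters the analysis.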

Hooper addressed several questions, provided by the workshop sponsor prior to the workshop, about best practices for identifying and avoiding extraction errors and miscalculations in mean differences and confidence intervals from the primary studies included in an MA. Hooper stated that when conducting an SR, the key safeguards are conducting data collection independently in duplicate with disagreements discussed as a team, pre-specifying the data to collect using a trialed data collection sheet, and contacting primary study authors to clarify outcome data (and methodological details) when required. In response to the sponsor questions about how to evaluate the quality of an MA when data extraction errors or data errors in the primary studies are found to be present, and whether there is a level of error that invalidates the conclusion of an MA, Hooper stated firmly that errors in data should be considered a “red flag” and any SR or MA with data errors should be avoided. Errors in the data and methodology call into question the methodological rigor, value, and validity of the entire SR, she explained.

Hooper detailed some tools that may be useful during the data collection and study selection phases of conducting SRs. Covidence is a tool that enables the independent duplication of assessment of inclusion criteria, data extraction, and risk of bias assessment and allows for remote collaboration with teams around the globe.5 Hooper explained that much of the review process can be conducted within Covidence, and she stated her opinion that it is an essential tool for a successful review.

Hooper also discussed Rayyan,6 an AI-enabled tool that supports collaborative reviews across a broad research platform and easily allows researchers to assess data independently in duplicate, though it does not support data extraction and risk of bias assessment to the extent that Covidence does. It is nonetheless useful and, importantly, free of charge. Another tool, EPPI Reviewer,7 supports a range of reviews, including meta-ethnography. Hooper also suggested the use of PlotDigitizer and Microsoft Paint, which can extract numbers from a visual plot. Hooper noted that these tools enable teams to assess the quality of the evidence to be reviewed as clearly as possible without adding additional bias.

On the topic of the fourth pillar, assessing risk of bias, Hooper noted that bias is an inherent part of all studies. What is critical, she said, is understanding what the bias is, the level of bias, and where the risks of bias may be greatest in each study; the goal cannot be to avoid bias entirely, but to identify, expose, and clarify the bias, helping the author and readers of the SR to understand to what degree the review “answer” is likely to deviate from the true answer to the SR question. Hooper referenced the saying “garbage in, garbage out,” explaining that the risk of bias present in a review is a key factor in assessing the quality and relevance of the review. It is important to realize that an MA of biased studies can lead to a precise estimate of the wrong answer. To this end, Hooper offered some methods for avoiding and addressing bias in the review process. First, she said that teams should aim to exclude studies during the screening stage that are believed to have a high risk for bias, such as non-randomized studies. Once studies have been included, she suggested that teams tabulate and report the risk of bias in all included studies (ideally presenting risk of bias for all studies across all domains of bias, and also as part of any forest plot). As part of this process, Hooper suggested that teams run sensitivity analyses within the MA, including all relevant studies in the main analysis, then removing studies at highest risk of bias (such sensitivity analyses should be pre-specified in the review protocol). If sensitivity analysis results differ markedly from the main analysis, Hooper stated, then bias should be suspected and reported. Throughout this process, Hooper suggested using tools such as GRADE to assess, analyze, and present findings on the reliability of the review findings (including bias) for each review outcome. She detailed some unique forms of bias that may be present in observational nutrition studies. For example, dietary exposures are often accompanied by numerous confounders such as socioeconomic status, physical activity levels, or smoking status. Hooper reviewed the risks of bias and study limitations of observational studies, based on information from the GRADE handbook (see Table 2-1).

___________________

5 https://www.covidence.org/ (accessed January 3, 2024).

6 https://www.rayyan.ai/ (accessed January 3, 2024).

7 https://eppi.ioe.ac.uk/cms/Default.aspx?tabid=2914 (accessed January 3, 2024).

Hooper also shared some benefits of using cohort studies, such as the potential for a longer follow-up period, the potential to assess a broader set of outcomes, and larger sample sizes than randomized controlled trials (RCTs), making it potentially easier to observe population health effects. Hooper said that both RCTs and cohort studies have their own unique strengths and weaknesses, and it is important to consider the differences between the study types when deciding whether and how to include them in a review. Ideally, if observational studies are systematically reviewed, RCTs should also be searched for and included.

TABLE 2-1 Limitations in Observational Studies

Limitation: Failure to develop and apply appropriate eligibility criteria (inclusion of control population)
  • Under- or over-matching in case-control studies
  • Selection of exposed and unexposed in cohort studies from different populations

Limitation: Flawed measurement of both exposure and outcome
  • Differences in measurement of exposure (e.g., recall bias in case-control studies)
  • Differential surveillance for outcome in exposed and unexposed in cohort studies

Limitation: Failure to adequately control confounding
  • Failure of accurate measurement of all known prognostic factors
  • Failure to match for prognostic factors and/or adjustment in statistical analysis

Limitation: Incomplete or inadequately short follow-up
  • Especially within prospective cohort studies, both groups should be followed for the same amount of time

SOURCES: Presented by Lee Hooper on September 19, 2023, at the workshop on Use of Meta-Analyses in Nutrition Research and Policy: Planning of Meta-Analysis (Schunemann et al., 2013).


Addressing an additional question of how to consider risk of bias when evaluating diet and disease relationships, Hooper suggested that teams assess the risk of bias, report the risk, and use a sensitivity analysis to see if their results change when removing the data considered to be at highest risk for bias.
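A minimal sketch of that kind of sensitivity analysis, using invented effect estimates, standard errors, and risk-of-bias ratings, and simple inverse-variance pooling rather than any particular MA package, might look like:

```python
def pooled_mean(estimates, std_errors):
    # Inverse-variance weighted pooled estimate (fixed-effect sketch)
    weights = [1 / se**2 for se in std_errors]
    return sum(w * e for w, e in zip(weights, estimates)) / sum(weights)

# Hypothetical studies: (effect estimate, standard error, risk-of-bias rating)
studies = [(-0.25, 0.10, "low"), (-0.20, 0.12, "low"), (0.40, 0.15, "high")]

main = pooled_mean([e for e, _, _ in studies], [se for _, se, _ in studies])
low_risk = pooled_mean([e for e, _, rob in studies if rob != "high"],
                       [se for _, se, rob in studies if rob != "high"])

# A marked shift between the two pooled estimates suggests the high
# risk-of-bias study is driving the result, so bias should be reported
print(round(main, 3), round(low_risk, 3))
```

In this toy example the high risk-of-bias study pulls the pooled estimate toward the null; removing it markedly changes the answer, which is exactly the signal such an analysis is designed to surface.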

Hooper spoke about the role of unpublished studies in creating bias in the literature. SRs of observational studies have much greater problems with selective reporting and publication bias than SRs of RCTs as observational studies are less likely to be published than trials if they do not show “statistically significant” effects—so that the published set of studies is unlikely to be representative of the true evidence base. It is easier to identify unpublished and “negative” or “neutral” RCTs than other types of unpublished studies, as RCTs are required to register in advance of being conducted and thus are discoverable through searching of trials registers. Trials registers also provide pre-specified study methods for both published and unpublished studies, allowing research teams to check whether published studies include all planned outcomes, analytical methods, subgroup analyses, etc. The ability to screen for these factors in RCTs enables better assessment of risk of bias for individual trials within the SR, and ultimately for the SR itself. However, Hooper explained that a similar process is not available for identifying unpublished cohort studies or the pre-specified methods of published observational studies. Often only positive associations are published, and negative or non-existent associations may remain unpublished (and so excluded from the SR). Additionally, significant associations may be manufactured by selective subgroup assessments or altered analytical methods. She emphasized that it is critical to analyze the study methods, registry entries, and protocols in every study being considered for inclusion in a review.

Hooper spoke to the question of best practices for addressing publication bias, stating the importance of identifying, acknowledging, and addressing the studies that are missing from the published literature. Hooper emphasized the importance of qualifying SR results by noting the degree to which information may be missing; this is part of the GRADE process.

Hooper addressed pillar five, analyzing the data, and discussed the importance of the protocol that guides the analysis process. She noted that teams should specify in the protocol what effect size is clinically relevant to the research topic and should prespecify the methods of data analysis, including the comparisons to be made, the effect measures to be used, how heterogeneity will be assessed, and what subgroup analyses and sensitivity analyses will be run.


Addressing the question of what criteria should be used to determine whether individual nutrition studies have too many clinical or methodological differences to be combined within the same MA, Hooper responded that teams should only include studies that truly answer the research question and should not combine data from studies that used different methodologies (e.g., differences in treatment, dose, population characteristics, duration, comparators, or diets). She noted that researchers can use sub-groupings to answer sub-questions.

Addressing another question about best practices for planning appropriate subgroup analyses and sensitivity analysis a priori, she provided an example using the nutrient selenium. Hooper said to set up the main question and sub-questions, and in this example, the main question might be: “What is the effect of increasing selenium on cognition?” Sub-questions might include, “Does this effect differ by baseline selenium status? Does it differ by baseline cognitive status? Does the effect differ depending on the source of selenium?” To address these sub-questions one would plan meta-analytic subgroups based on baseline selenium status, baseline cognitive status, and selenium source. These subgroups should be integrated into the whole review process, included from the beginning of the review, and relevant information to assess to which subgroup each study belongs should be collected during the main data collection process.
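Planned a priori, such subgroup attributes become fields collected for every study during the main data extraction; a toy sketch of the resulting partitioning (study IDs and tags are invented) follows:

```python
from collections import defaultdict

# Hypothetical selenium trials tagged with pre-specified subgroup attributes
studies = [
    {"id": "A", "baseline_selenium": "replete", "source": "supplement"},
    {"id": "B", "baseline_selenium": "deficient", "source": "supplement"},
    {"id": "C", "baseline_selenium": "deficient", "source": "food"},
]

def subgroups(studies, attribute):
    # Partition studies into the pre-specified meta-analytic subgroups
    groups = defaultdict(list)
    for s in studies:
        groups[s[attribute]].append(s["id"])
    return dict(groups)

print(subgroups(studies, "baseline_selenium"))
print(subgroups(studies, "source"))
```

Each partition would then feed a separate subgroup meta-analysis addressing one of the pre-specified sub-questions.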

Hooper addressed best practices for interpretation of findings, the sixth of the seven pillars. She stated that interpretation of the results is often the most difficult task, because information is almost always incomplete. She reiterated the importance of using the GRADE tool to assess the certainty of evidence for each outcome. She discussed how GRADE can be used to assess specific indicators of the certainty and reliability of evidence. These are risk of bias, inconsistency, imprecision, and indirectness. Hooper provided examples of how to assess each of these domains, basing assessment of risk of bias on the risk of bias assessment for each outcome (including sensitivity analysis results), inconsistency on measures of heterogeneity, imprecision on the 95% confidence interval around data from the relevant meta-analyses, and indirectness on the generalizability of the included participants to those for whom the answer is needed. GRADE works to evaluate certainty in reviews of both trial and observational evidence. When assessing RCTs, GRADE assumes a higher certainty of evidence but downgrades for problems, while when assessing cohort studies GRADE assumes a lower certainty of evidence but can upgrade for strong evidence (e.g., for a strong dose-response relationship).
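The start-then-adjust logic of GRADE can be caricatured in a few lines; this is a deliberate oversimplification, with invented function and level names, meant only to show the direction of movement from each starting point:

```python
LEVELS = ["very low", "low", "moderate", "high"]

def certainty(design, downgrades=0, upgrades=0):
    # RCT evidence starts "high"; observational evidence starts "low",
    # then moves down for problems or up for strong evidence
    start = 3 if design == "rct" else 1
    return LEVELS[max(0, min(3, start - downgrades + upgrades))]

print(certainty("rct", downgrades=1))   # moderate (e.g., downgraded for imprecision)
print(certainty("cohort", upgrades=1))  # moderate (e.g., upgraded for dose-response)
```

Real GRADE judgments are made per outcome and per domain, and are not purely mechanical, but the asymmetric starting points for trials and cohorts are as shown.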

For the final portion of her presentation, Hooper focused on the process of reporting a review, the last of the seven pillars. She suggested that researchers use the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA)8 statement to ensure that their SR is reported comprehensively. PRISMA is an evidence-based minimum set of items for reporting an SR or MA that includes a flow diagram of potential studies and a checklist of the key components to be reported.

Hooper addressed the questions of how MAs can be used to evaluate the strength of evidence when different outcomes are reported in different studies and how MAs can be used to evaluate the strength of the totality of evidence. In response, she provided an example from the development of WHO guidance on saturated fat (SF) consumption (WHO, 2023). WHO commissioned an SR that focused on RCTs of at least 2 years’ duration that assessed the impact of reducing SF intake on a variety of noncommunicable diseases, such as cardiovascular disease and cancer. In parallel, WHO commissioned an SR of prospective cohort studies that assessed associations between SF intake and the same health outcomes. It also commissioned an update of an SR and regression analyses of SF intake and lipid outcomes in highly controlled metabolic studies conducted over short periods of time. For each key outcome within each review, the evidence was assessed using GRADE. Consistency in results was found across the three reviews, which was seen as strengthening the totality of evidence. Hooper used this example to demonstrate how a team could approach a complex nutrition research question by making use of different types of evidence to produce a single best answer.

Finally, Hooper mentioned that commissioning new SRs and MAs for nutrition guidance may not always be necessary. An alternative, she said, may be to locate existing robust SRs that address the topic of interest. The Risk of Bias in Systematic Reviews (ROBIS) tool can be used to assess whether a particular SR is strong enough to be used for the development of dietary guidance (directly or through updating).9 The target audience for ROBIS is guideline developers and authors of SRs. Robust existing SRs may need to be updated, and it is good practice, she said, to work with the original review team to accomplish this, and potentially to pre-specify new outcomes, subgroups, and/or sensitivity analyses for the review update to provide the most useful data for guideline development.

___________________

8 For information on PRISMA, see http://prisma-statement.org/prismastatement/checklist.aspx?AspxAutoDetectCookieSupport=1 (accessed January 3, 2024).

9 For information on ROBIS, see https://www.bristol.ac.uk/population-health-sciences/projects/robis/robis-tool/ (accessed January 10, 2024).


PANEL DISCUSSION

A panel discussion was convened, moderated by Amanda J. MacFarlane. MacFarlane began by introducing the additional discussants, Sydne Newberry of the RAND Corporation and Christopher Schmid from Brown University School of Public Health, who joined the conversation with Naude and Hooper. MacFarlane facilitated the discussion, inviting comments from the panelists and questions from audience members.

Transparency

Hooper said that making good decisions, being fully transparent, and not adding bias during the SR process is challenging but critical. She suggested that SR authors aim to be clear in their process and intentional in their attempts to avoid bias. MacFarlane added that maintaining transparency and reporting any challenges can help preserve trust in a study. Hooper highlighted the importance of teamwork and transparency in the GRADE process, where teams work together to assess the relative quality of their assembled evidence. Newberry agreed, further emphasizing the benefits of transparency at every stage of a review, including the planning stage. She underscored the importance of this transparency to the quality of the evidence and the need for high-quality evidence in policy development. Newberry said that policy makers who seek to use the evidence to inform policy and guidelines should have a clear understanding of what the evidence suggests and supports. For this reason, she noted, researchers should clearly plan and report their SRs to make the evidence both understandable and useful.

Planning and Team Development

Newberry said that one of the biggest challenges of SRs and MAs is working with stakeholders to frame research questions in ways that are truly answerable. She referenced comments from Hooper and Naude about the importance of focusing on the clinical significance of outcomes. However, for many outcomes of interest, Newberry said, the clinically important differences are unknown. Given that many nutrition questions have an overwhelming body of literature and studies have finite sample sizes, Newberry suggested that limits on the quantity of primary studies to gather and assess should be established as part of the review protocol development process.

Newberry disagreed with a comment that Hooper had made during her presentation in which she suggested that study searches should include studies published in languages other than English. Newberry expressed concern that seeking studies in other languages could incur additional costs and time delays for translation. However, Hooper responded that online translation tools have become better and more affordable and are sufficient for an initial translation, allowing reviewers to decide whether a study is otherwise appropriate for inclusion. Additionally, Hooper suggested that students serving as research assistants may be a useful, low-cost option for further translation needs.

As part of the planning process for SRs, Newberry described the benefits of searching for existing SRs on the topic of interest prior to initiating a new study, noting that many nutrition research questions have already been addressed in SRs published in the past 5 years. This process of seeking existing reviews can be included in the planning process, and those intending to conduct a review to inform policy should decide whether these previous reviews can be used. If not, Newberry suggested mining existing reviews for sources or referencing them in the current work. This suggestion echoed Hooper’s previous comment about the use of existing SRs if they are of high quality and truly address the guideline’s research question.

Schmid contributed to the discussion of proper planning and reinforced that SRs are only as good as the studies they include and the sampling schemes on which they are based. Including more studies does not inherently mean including the right studies or producing a better review. Schmid further contributed to the discussion on research teams and protocols, saying that one could compare the SR process to that of an RCT. Protocols may be revised numerous times as issues arise, but changes should be documented and agreed upon by all team members. Having all experts involved agree to the protocol is crucial to the process. Schmid shared his experience of the struggles that can result from lack of agreement on the protocol, including having to revise work, which slows down the review. He recommended being as explicit as possible with protocols in advance. Hooper added that having teams do a “trial run” before the review begins can improve the overall efficiency of the process. Although this step may seem burdensome, and MacFarlane noted that the process may be iterative, Hooper advocated that it ultimately helps the process run more smoothly by ensuring that everyone on the team can identify and address disagreements up front. Schmid added that addressing questions about eligibility criteria before beginning the review will lead to increased trust in the study results.

Replying to Schmid’s comment about the process of assembling and training teams, Naude noted that for researchers with baseline training in nutritional epidemiology, experiential learning is key. People learn best through experience performing reviews, and repetition builds skills. She also noted that SRs do not always provide definitive answers to research questions, but they move researchers closer to understanding the level of confidence in the answer, which can help inform decision making and policy development. Schmid added that SRs are a wonderful teaching tool for those who want to understand how research is conducted and analyzed. He remarked that if more students outside of the research field had exposure to the SR process during their education, the public’s general understanding of science might be improved.

While the discussion centered on the topic of creating effective teams and organizing the review process, audience members asked about the process for building the “dream team” for conducting the review. Specifically, they inquired about who should be included and how roles and responsibilities should be divided among team members. Naude replied that different reviews require different approaches. Generally, she said, a team will include a “lead” or project manager and other members with different areas of expertise. For example, topic area expertise is very important for understanding the biological relevance of outcomes. In other phases of the review, teams may rely less on subject-matter experts and more on data expertise and information specialists. Some review teams may delegate the data extraction process for efficiency purposes. There is no one-size-fits-all approach, she said, and finding a system that works requires repetition and experience. However, Naude acknowledged that the bulk of a review will often be completed by two to four people, with additional experts contributing to the process as needed.

Hooper reinforced the benefits of including experts from the beginning of the process, including protocol development. She said that subject-matter experts, statisticians, search specialists, and SR methodological experts should all be involved in the planning process, to ensure that adequate standards are established. Hooper discussed how to address areas of disagreement during a review and suggested that teams work through points of disagreement together, refining their inclusion and exclusion criteria as questions are resolved. She noted the importance of having a method that the entire team understands and reiterated the significance of planning, noting that SRs require clear guidelines and protocols for an effective process.

Assessing and Addressing Bias

Schmid said that teams should be careful to report as much of the data as possible when those data are of high quality. For example, if a team discovers 100 high-quality studies but only reports on 15 of those studies, that report may not produce a high-quality answer. Hooper concurred, noting that trimming study selection down from 100 to 15, as is necessary in the creation of a forest plot, will add bias and uncertainty. Hooper suggested that while it may not be possible to eliminate this type of bias entirely, it is essential to the process that it be acknowledged and reported, and that the findings and implications of the missing studies be discussed.

Schmid stated his opinion that the meaning of the ratings in the GRADE process is not sufficiently discussed or considered. He added that there are new AI and machine learning tools that can be useful for both screening and extracting data, a point on which Hooper concurred. Schmid mentioned that his team at Brown University has developed an SR repository and a program abstractor that use AI and machine learning. He reinforced the need for strong review teams, with experts in every domain involved in the process.

MacFarlane noted that there are many ways to assess and address bias, including through use of risk of bias tools. Numerous tools exist, and it may be difficult for researchers to determine which tool or method is best for their study. Naude suggested using the most recent Cochrane tool, “Risk of Bias 2,” for assessing risk of bias in RCTs. Naude explained that when using the Risk of Bias 2 tool, reviewers have the option to change the weighting of different domains. For study designs that are not RCTs, Naude suggested the use of other tools to assess bias and urged having rules in place a priori to manage overall judgments. Consistent application of methods and tools is key, she reiterated.

Hooper agreed that Risk of Bias 2 is a very useful tool, although she and Naude agreed that it is slightly more difficult to use than Risk of Bias 1. Hooper added that it is critical to focus on the risk of bias that is most relevant to the specific area of study or the bias that would have the greatest impact on the results of the review. Newberry added that since most nutrition studies are cohort studies, not RCTs, the Risk of Bias in Non-randomized Studies–of Interventions (ROBINS-I) or Risk of Bias in Non-randomized Studies–of Exposures (ROBINS-E) tools may be useful for assessing risk of bias. Newberry also suggested the Newcastle-Ottawa Scale (NOS) for assessing the quality of non-randomized studies in MAs. However, she noted that these tools may not be sensitive enough to detect all types of bias. Schmid and Hooper suggested conducting a sensitivity analysis: first meta-analyzing all studies, then eliminating studies at high risk of bias and running the analysis again to see whether the results change. They concluded that when the results change during this process, there may be uncertainty about the usefulness of the results.
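The sensitivity analysis Schmid and Hooper describe can be illustrated with a short sketch. The following Python example is illustrative only (not from the workshop): the effect estimates, standard errors, and risk-of-bias labels are hypothetical, and simple fixed-effect inverse-variance pooling stands in for whatever model a real review would use.

```python
import math

# Hypothetical data. Each study: (effect estimate, standard error,
# overall risk-of-bias rating from a tool such as Risk of Bias 2).
studies = [
    (0.40, 0.15, "high"),
    (0.18, 0.10, "low"),
    (0.12, 0.08, "low"),
    (0.35, 0.20, "high"),
]

def pooled_estimate(group):
    """Fixed-effect inverse-variance pooled effect estimate."""
    weights = [1 / se ** 2 for _, se, _ in group]
    return sum(w * e for (e, _, _), w in zip(group, weights)) / sum(weights)

# Step 1: pool all studies. Step 2: re-pool with high risk-of-bias
# studies excluded. Step 3: compare the two estimates.
overall = pooled_estimate(studies)
restricted = pooled_estimate([s for s in studies if s[2] == "low"])

print(f"All studies:             {overall:.3f}")
print(f"Excluding high RoB:      {restricted:.3f}")
print(f"Shift between estimates: {overall - restricted:+.3f}")
```

A large shift between the two pooled estimates signals that the review’s conclusion depends on the less trustworthy studies, which is the uncertainty Schmid and Hooper cautioned about.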

MacFarlane posed a question to the panel about how to address the bias that may result from the lack of clear control groups in nutrition studies. For example, she explained, drug trials often have clear treatment and placebo groups, but nutrition interventions may rely on baseline nutrition status as a control group. She noted that this can cause challenges given that baseline nutrition status can be an imprecise and poorly reported measurement. Naude replied that research teams should look for nuance in the study design of the included studies to understand whether their question or intervention has been altered to accommodate the imprecision of baseline nutrition status as a control group. She also said that subgroup analyses can be difficult to conduct in this context. Naude added that this challenge emphasizes the need to improve reporting of primary research and ensure that every study included meets basic reporting guidelines. Poor, inadequate, and vague reporting in primary studies will negatively affect the review process and outcomes. Naude noted that this highlights the importance of having established methods and applying tools consistently to avoid imprecision and bias.

In closing, Naude said that the key to addressing problems in SRs and MAs is to continue to highlight and discuss them. She also noted that developments in technology promise advancements in the field generally and improvements to data collection, organization, and analysis.
