Concerns about inaccurate claims related to science and scientific findings have been prominent in social discourse in the United States since long before the present day, but recent concerns about an apparent rise in the prevalence of misinformation about science and its potential harms have prompted an explosion of research. This research is hampered to some extent by a lack of clarity about what does and does not qualify as misinformation about science. For example, it remains an open question whether misinformation should be identified at the level of the individual claim (e.g., the accuracy of any given post on a social media platform) or at the level of narratives (i.e., repeated individual pieces of misinformation about science over time) (Wardle, 2023). Additionally, in both public discourse and in peer-reviewed research, the term misinformation has been used as an umbrella term to refer to various types of false, inaccurate, incorrect, and misleading information. The broad nature of the term has made it difficult to develop a coherent understanding of the nature, scope, and impacts of misinformation generally, and by extension, of misinformation about science.
As noted in Chapter 1, the committee developed a definition of misinformation about science both to guide its own work and to serve as a possible guidepost for the broader research community. The committee’s definition states:
Misinformation about science is information that asserts or implies claims that are inconsistent with the weight of accepted scientific evidence at the time (reflecting both quality and quantity of evidence). Which claims are determined to be misinformation about science can
evolve over time as new evidence accumulates and scientific knowledge regarding those claims advances.
The current chapter describes how the committee arrived at this definition of misinformation about science, the inherent challenges in developing a single definition, and the limitations of the definition the committee developed. The chapter begins with a broad discussion of information and misinformation, briefly summarizing different approaches to defining misinformation generally. We then turn to a discussion of misinformation about science specifically, beginning with an explanation of the processes of science and how these processes produce empirical evidence that provides robust insights about the natural, physical, and social world. The committee’s definition takes into account characteristics of misinformation generally and also the unique characteristics of knowledge and evidence generated by science. Finally, the committee discusses the limitations of our definition, as well as how it can be used effectively to guide research.
In the view of this committee, misinformation is a subgenre of information. Information is broadly defined as “anything that is processed to provide meaning of the world” (Wanless & Berk, 2021), a definition that incorporates both content (the concepts or ideas the information is intended to convey) and context (the identity and social position of the actors who are sharing the information, the intent of these actors, the surrounding information, the platform, etc.) as elements that would affect interpretation. Misinformation is a type of information that emerges in relation to reliable information. Reliability (or lack thereof) is determined by the system that produces the information; for misinformation about science, the contrasting system is “science” itself (i.e., a system that produces reliable science information). Definitions of misinformation, whether in general or specifically of misinformation about science, typically vary in the extent to which they incorporate different elements of context or focus primarily on the specific content of the information being conveyed.
The concept of misinformation and its variations appear in centuries-old publications such as Thomas Hobbes’ Leviathan and Adam Smith’s The Wealth of Nations. Samuel Johnson’s 18th-century dictionary of the English language defines misinformation as “false intelligence; false accounts” (Johnson, 1773). However, this long history has not led to a precise and widely shared definition. A recent survey of experts suggests that while there is agreement on some elements of a definition of misinformation, there is disagreement on others (Altay et al., 2023). Similarly, a recent review of health-related research on misinformation
identified 75 different definitions for misinformation and related terms (terms such as disinformation, fake news, malinformation, and infodemic; El Mikati et al., 2023).
Almost universally, definitions of misinformation used by researchers encompass false, inaccurate, or incorrect information. Often, definitions include “misleading” information, that is, information that is not entirely false, but that could lead to inaccurate interpretations (e.g., van der Linden et al., 2023). Concepts of falsity, deception, and harm are also common components of definitions of misinformation, but how each of these concepts is understood and the role it plays varies from definition to definition. Sometimes, definitions in the literature require that the creation or sharing of false or misleading information be unintentional in order for it to be considered misinformation (e.g., Wardle & Derakhshan, 2017). Other definitions require that the information be both false and potentially harmful (e.g., Freelon & Wells, 2020); the focus on potential harm often drives content moderation policies of social media platforms (Green et al., 2023).
The committee also considered how misinformation might be distinguished from disinformation (untrue information shared by an actor who knows it is untrue). Due to its perceived greater potential for harm, some scholars have focused attention on disinformation in particular, defined by some as information that “includes all forms of false, inaccurate, or misleading information designed, presented, and promoted to intentionally cause public harm or for profit” (de Cock Buning, 2018; Freelon & Wells, 2020). This is different from propaganda, which is “designed to link together brands, people, and nations with the goal of influencing ideas and attitudes” (Tripodi, 2022). However, because intentionality is difficult to determine and is not an intrinsic aspect of the truth claim itself, some scholars have eschewed the term disinformation (Krause et al., 2022; Swire-Thompson & Lazer, 2020).
In its deliberations around the definition of misinformation, the committee grappled with the role of an actor’s awareness or intent—that is, whether an actor creating or sharing a piece of information knows it is inaccurate and shares it intentionally despite or because of its inaccuracy. As noted above, intent or awareness is not an attribute of information itself; rather, it is an attribute of an actor who creates or shares the information. It can be very difficult to determine intent, especially as information is shared across networks of individuals. As a result, the committee excluded intent (to mislead—or not) as an essential, definitional component of misinformation. The committee also excluded harm as a definitional requirement, in part because, presumptively, misinformation is at least minimally harmful (because it undermines individual agency due to its intrinsic potential to create a false understanding of the world); and in part because whether and
how much misinformation is harmful is still an open question and an area of active research (as we discuss in Chapter 6).1
Any definition of misinformation about science has to account for the unique characteristics of knowledge that is generated by the scientific community through scientific inquiry. Science emphasizes the importance of empirical evidence and testable hypotheses about the world. Processes of science are intended to allow for discussion and evaluation of bodies of evidence as they evolve over time. This emphasis on empiricism and on revision of knowledge as new evidence comes to light has implications for any definition of misinformation about science. That is, the definition needs to incorporate both the notion that scientific knowledge is rooted in empirical evidence and that it can be revised over time as new evidence emerges.
In the sections below we first describe, briefly, how science works: that is, how scientific evidence is gathered and tested, how conclusions are drawn, how scientists reach consensus on how to interpret existing evidence, and how and when scientific explanations might be revised when new evidence emerges. Based on this discussion, we then consider the implications for a definition of misinformation about science.
“Science is a mode of inquiry that aims to pose questions about the world, arriving at the answers and assessing their degree of certainty through a communal effort designed to ensure that they are well grounded [in evidence]” (National Academies of Sciences, Engineering, and Medicine [National Academies], 2019, p. 27). This mode of inquiry has four key goals: first, describe the world (e.g., classifying plant species); second, explain that world (e.g., how those plant species evolved over time); third, predict what will happen in the world (e.g., how climate will affect survival of a particular plant species); and fourth, intervene in certain processes or systems to achieve a goal (e.g., consider moving threatened plant species to a more climate conducive environment; National Academies, 2019).
To achieve these goals, scientists generally employ four core practices in pursuing their inquiries. These practices include
___________________
1 We do suggest that the field should (and generally does) focus on misinformation that has greater prospects for leading to harmful effects, including encouraging harmful behaviors.
As scientists introduce ideas, build theories, or test hypotheses, they generate data, observations, and other measurements collectively known as evidence. The scientific process requires evidence to provide accurate descriptions of the world and to avoid false descriptions (Goldman, 1999). Generating evidence in and of itself, while necessary, is not sufficient for scientific inquiry. Using logic and reasoning to weigh the quantity and quality of evidence, and the associated uncertainty, according to the standards of a given scientific field yields expert opinions on the strength of a hypothesis or theory. That evidence, once published, provides other scientists with the details needed to find additional connections and patterns for further experimentation in the iterative testing of various hypotheses, thereby refining (or refuting) the theory. Of course, no individual research study is perfect. Assumptions, hypotheses, results, and conclusions can and should be challenged as emerging research suggests alternative conclusions and new theories. If the initial results cannot be reproduced by independent laboratories, the initial reported (but irreproducible) results are generally discarded by the scientific community. Scientific inquiry thus advances through repeated and methodical exercise of—and steadfast adherence to—this core set of scientific practices that define the craft.
In this way, science is a cumulative activity. Repeatable observations and experiments generate explanations that describe the world more accurately and comprehensively. These explanations in turn suggest new observations and experiments that can be used to test and extend the explanation. In this way, scientific explanations improve over time, as subsequent generations of scientists, often using technological innovations, work to correct, refine, and extend the work done by their predecessors.
Science is thus a social process, whose core practices (above) are based in shared principles and assumptions that shape a scientist’s approach to inquiry. Five key principles of science, adapted from Reproducibility and Replicability in Science (National Academies, 2019a, pp. 30–33), are reproducibility, generalizability, collectivity, uncertainty, and refinement:
___________________
2 We note that in some fields, like the social sciences and biology, “equivalent context” is often difficult to specify; for example, the same experiment, a year later, with similar subjects may not constitute “equivalent” (e.g., see Munger, 2019).
The core practices and shared principles characterize the nature of science as a whole and the information derived from a huge variety of approaches to scientific inquiry. Of course, “science” does not speak with a single voice; likewise, there is not a single set of methods for producing knowledge that is used across all domains of science. Science is inherently heterogeneous. Furthermore, as mentioned above, science is intrinsically uncertain: findings today might be rendered obsolete by tomorrow’s research. The above core practices and shared principles do not counter this. Rather, they establish standards that allow for advancement through iteration, disagreement, and uncertainty.
In science it is not possible to prove with absolute certainty that a given explanation is complete and final. Some of the explanations advanced by scientists turn out to be incorrect when they are tested by further observations or experiments. Many scientific ideas that once were accepted are now known to be inaccurate or to apply only within a limited domain. However, many scientific explanations have been so thoroughly tested that they are
very unlikely to change in substantial ways as new observations are made or new experiments are analyzed. These explanations are accepted by scientists as being true and factual descriptions of the natural world. The atomic structure of matter, the genetic basis of heredity, the circulation of blood, and gravitation and planetary motion are just a few examples of a very large number of scientific explanations that have been overwhelmingly substantiated.
Misinformation about science, to which we now turn, represents a distortion in the representation of information derived from the practices, principles, and approaches described above that comprise scientific inquiry and evidence-building. The definition of misinformation about science the committee adopted focuses on the (mis)match between (a) claims regarding specific scientific findings and (b) the weight of scientific evidence at the time the claim is made. Often, misinformation about science is constructed to foreground and exploit the heterogeneity and uncertainties of science. Below we highlight characteristics of scientific knowledge and scientific consensus that pose particular challenges for defining, identifying, and combatting misinformation about science.
As noted above, scientific knowledge evolves as more evidence is generated, particularly when the science itself is unsettled. The idea of “settled” science requires some explanation. “Settled” does not mean that the knowledge on any given topic is final—all knowledge is partial—but rather that there is greater consensus than on topics where scientific knowledge is still emerging. Claims may still be upended in “settled” science, albeit more slowly. An example is the screening age for mammography. There has been considerable debate over whether mammography screening to detect breast cancer should start at age 50 or much earlier. Over several decades, based on evolving science, the consensus of the U.S. Preventive Services Task Force (USPSTF) settled on starting screening at age 40 in the United States (USPSTF, 2024). On the other hand, consensus on emerging topics such as COVID-19 was less settled given the novelty of the virus and the rapid evolution of knowledge.
Another point worth noting is the role that “at the time” plays in the committee’s definition of misinformation about science. The nature of scientific inquiry, at its best, is to continually explore hypotheses that are counter to and may overturn current orthodoxy. Claims made today that are consistent with the weight of the scientific evidence may not be accepted tomorrow if new, contradictory evidence emerges. When and for how long scientific claims are accepted, questioned, and even overthrown has implications for defining misinformation.
COVID-19 serves as an interesting illustration of some of these principles, though they are not unique to COVID-19. Explanations of the origins and transmission of COVID-19 evolved continuously over several months as new knowledge and evidence accumulated. What had been a consensus around a virus transmitted through surfaces changed, as evidence began to accumulate, to a consensus around airborne transmission. Similarly, there was uncertainty about the potential effectiveness of hydroxychloroquine as a treatment for COVID-19 at the beginning of the pandemic. Over the next few months, as more studies assessing the potential therapeutic benefit of hydroxychloroquine were published, the picture became clearer that it was not an effective treatment (Abella et al., 2021; Bull-Otterson et al., 2020; Hennekens et al., 2022; The RECOVERY Collaborative Group, 2020). What counted as misinformation about COVID-19 changed extremely rapidly, given the unsettled nature of the scientific knowledge regarding the virus, in contrast to more established or long-standing evidence on such topics as the harmful effects of lead in water. That is, how settled the science is matters, but this should not imply that settled science cannot change: it can, but more slowly.
Consensus is also, in some ways, a reflection of power. Authorities or those in power may hold greater resources and platforms to establish consensus than those holding dissenting views. For example, scientific consensus is generally established through peer-reviewed journal articles, scientific and medical bodies that develop consensus guidelines (e.g., USPSTF), and funding agencies, among others. Such bodies, and scientists working within these paradigms, are not immune to bias and could potentially discount views that challenge the prevailing consensus or even promote erroneous scientific claims based on highly biased assumptions (e.g., racial superiority through genetic inheritance; Gould, 1996). Given the stature of such bodies, the platforms they hold, and the cultural authority they command, they can influence what is considered reliable and legitimate science information and, by definition, what is considered misinformation about science; in some cases, such determinations may be based on shared values and homophily in addition to the weight of the scientific evidence at the time (e.g., safety and benefits of new innovations and technologies; Dietz et al., 1989). This may also preclude dissenting views from being aired prominently, and dissenters—whether other scientists or advocates—may or may not have comparable resources and platforms to challenge the dominant paradigm(s), thus opening themselves to criticism of spreading misinformation. Given how long the deliberative process of science takes to establish consensus and to update it based on new knowledge, initial views on what is scientifically accepted information and what is misinformation about science may evolve over time, with occasional
dissenting views entering the mainstream. Importantly, while offering a definition of misinformation about science in keeping with its charge, the committee also recognizes the ways in which definitional decisions can drive stigma and facilitate the accumulation of economic, social, political, and cultural capital, with potential to create or exacerbate inequalities (Metzl & Hansen, 2014).
The notion of the “precautionary principle,” which proposes that in the face of uncertainty the priority should be to avoid risks even when benefits may be clear, is also relevant to the frame of reference on which determinations of what counts as misinformation rest (Kriebel et al., 2001). That is, new and emerging science should undergo extensive peer review, testing, and review by appropriate experts in the interest of protecting human safety and welfare. For instance, with respect to medical interventions, the presumption is that a new drug or procedure is not efficacious and may be risky until evidence demonstrates its value and safety. Thus, in this context, the definition of legitimate science information and misinformation about science may change as new knowledge around safety accumulates and as scientific consensus evolves while claims are tested and adjudicated. Indeed, shared values and homophily among scientists can and often have served the advancement of scientific understanding well (e.g., upholding key principles for the approach to scientific inquiry), but they also underscore the complexity of applying definitions of misinformation in cases where value judgments must be deployed.
As with many social scientific categories, the operational boundaries of misinformation are not drawn with bright lines. Furthermore, it is not a normatively neutral term—the goal of studying misinformation (unlike information more generally) is to seek to understand and control a phenomenon that may be having adverse effects on individuals or society (e.g., misunderstanding the risks of vaccines leading to decisions with adverse health effects).
In this section we discuss some classes of information that are encompassed by the committee’s definition of misinformation about science. These include false science information, misleading science information, and disinformation. We also highlight two essential boundary concerns: (a) the confounding of differences in values with misinformation and (b) the line between misinformation and benign simplification. We discuss further issues in operationalizing misinformation in Chapter 8 of this report.
Some of the literature suggests that determining whether a claim related to scientific findings or explanations of the world constitutes misinformation should be based on a comparison to the scientific consensus on that issue (Chou et al., 2020; Swire-Thompson & Lazer, 2020) or to the “best available scientific evidence” (Southwell et al., 2022). The committee agrees that any judgment regarding the truth or falsity of a claim must be based on an assessment of the body of scientific evidence that informs the consensus position at the time. As discussed above, scientific consensus is rooted in an assessment of the relative quality and quantity of various findings—the weight of accepted scientific evidence—at that moment. While there is recognition that new evidence might call current consensus into question, for well-established theories, there are typically multiple lines of confirmatory evidence.
False science information distorts this context and exploits the notion that science is made up of sets of findings that are uncertain, evolving, and sometimes competing and conflicting. The committee defines false science information as a mischaracterization of the “weight” of evidence found in the literature at a particular moment in time, the evidence that underpins the consensus position. Thus, for example, in 2024, the claim that “vaccines cause autism” is false because, within a large literature carefully studying the question, the weight of evidence is decisively inconsistent with that claim (e.g., Gidengil et al., 2021). “Weight” thus reflects hierarchies of evidence; for example, a high-quality, well-designed randomized controlled trial about hormone replacement therapy can supersede a higher quantity of observational evidence (Prasad & Cifu, 2015). However, the methodology and hierarchy of evidence must be commensurate with the empirical question at hand (Greenhalgh, 2020). Evidence from randomized controlled trials cannot be assumed to always supersede all other evidence; which evidence carries the most “weight” depends on the context.
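Purely as an illustration of how the quality and quantity of evidence might be tallied against a claim, and not as a method endorsed by the committee, the sketch below uses invented design weights and quality scores; as noted above, any real assessment would need to match the hierarchy of evidence to the question at hand.

```python
# Toy illustration only: a hypothetical "weight of evidence" tally.
# The committee's definition does not prescribe any formula; the design
# weights and quality scores below are invented for illustration, and in
# practice the appropriate hierarchy of evidence is context-dependent.

from dataclasses import dataclass

@dataclass
class Study:
    design: str           # e.g., "rct", "cohort", "case_report"
    supports_claim: bool  # does the study's finding support the claim?
    quality: float        # hypothetical 0.0-1.0 appraisal score

# Hypothetical weights reflecting one possible evidence hierarchy.
DESIGN_WEIGHT = {"rct": 3.0, "cohort": 1.5, "case_report": 0.5}

def weight_of_evidence(studies: list[Study]) -> float:
    """Signed score: positive if the weighted evidence favors the claim,
    negative if it runs against the claim."""
    score = 0.0
    for s in studies:
        w = DESIGN_WEIGHT.get(s.design, 1.0) * s.quality
        score += w if s.supports_claim else -w
    return score

# Example: eight modest observational studies supporting a claim are
# outweighed by three high-quality RCTs that contradict it.
literature = [Study("cohort", True, 0.6)] * 8 + [Study("rct", False, 0.9)] * 3
print(weight_of_evidence(literature))  # approximately -0.9: against the claim
```

The point of the sketch is only that "weight" combines both how many findings exist and how much each should count, and that a few high-quality studies can outweigh a larger number of weaker ones.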
“Misleading information” has not been precisely defined in relevant literature. In this report, the committee interprets “misleading information” as information that is not intrinsically false, but that causes false beliefs or inaccurate understandings of science. A challenge with this construction is that “misleading” is not an attribute of the information but of the interplay of information and a given recipient (what is informative to one person may be misleading to another). “Misleading” is also potentially a challenge to operationalize, since it requires an assessment of the causal effects on cognition of a given piece of information. Put another way, for information to be misleading, we must know whether the information actually misleads
people (and which people it misleads), rather than rely on researchers’ judgments about whether a piece of information is misleading.
While this committee believes that misleading science information is an important phenomenon, the committee views it as definitionally adjacent to the concept of misinformation. The committee also holds that, in this case, empirical evidence that individuals are misled by the information is required. The committee therefore adopts the construction “implies” to capture the relevant attribute of misinformation. Thus, for example, The Washington Post published an article titled “Vaccinated people now make up a majority of COVID deaths” (Beard, 2022; note: the article title was subsequently changed). This headline was true and reflected the fact that the overwhelming majority of the most vulnerable individuals were vaccinated (and thus at higher risk of dying of COVID-19, even if vaccinated). The rapid recirculation of this headline within the anti-vaccine community signaled that this community understood it to imply that vaccines were not effective (Goel et al., 2023). The implication that COVID-19 vaccines are not effective would constitute misinformation.
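A stylized calculation, using hypothetical numbers rather than the actual figures behind the article, shows how a true headline can nonetheless imply a false conclusion. Suppose that 90 percent of a high-risk population is vaccinated and that vaccination reduces the risk of death to one-fifth of the unvaccinated risk. The vaccinated share of deaths is then

$$\frac{0.9 \times 0.2}{0.9 \times 0.2 + 0.1 \times 1.0} = \frac{0.18}{0.28} \approx 0.64,$$

so roughly two-thirds of deaths occur among the vaccinated even though the vaccine is highly effective, simply because nearly everyone at risk is vaccinated.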
Throughout this report, the committee specifically highlights disinformation alongside misinformation, both because the former is part of the committee’s charge and because it is a commonly used, related term. In the view of this committee, disinformation about science is defined as a sub-category of misinformation that is circulated by agents who are aware that the science information they are circulating is false. We deliberately distinguish between (a) spreading false information without knowing that it is false and (b) knowingly spreading false information, which may be done for a variety of motivations, including altruistic, political, financial, and other motives (see Chapters 4 and 5 for further discussion of the reasons and motivations that lead institutions and individuals to spread misinformation about science).
However, we note that, as a matter of scientific measurement, it is often difficult to ascertain the intent or beliefs of the actor(s) sharing misinformation, and this presents an operational challenge to researchers. The intent or beliefs of the sharer of misinformation are also immaterial to the prospective harm that misinformation about science might cause to recipients. That said, it is clear that some of the most harmful episodes of misinformation about science were knowingly driven by actors with strong material incentives to mislead people (see examples in Chapters 4 and 5). The definition we offer for misinformation is therefore agnostic with respect to the beliefs or intentions of the people or entities that share that information, and thus encompasses disinformation.
It is important to be circumspect when using the term misinformation so that it does not simply paper over differences in values or salient markers of identity, such as perspectives on risk tolerance in the face of uncertainty. For instance, scientific research offers insight on morbidity, mortality, and risk, but it does not tell societies how to weigh individual versus societal risks and benefits, or the threshold at which individuals or a society should act. Those thresholds can be reasonably and defensibly set at different levels depending on the frame of reference.
For policy decisions that need to be informed by scientific evidence, it is especially important to understand when disagreements over the implications of scientific findings reflect differences in values or priorities rather than misinformation about science. Science alone does not provide all of the information necessary to make sound policy decisions. Policymakers also need to consider factors such as the relative financial costs of different policy approaches, the values and goals of communities, and who benefits or does not benefit from a particular policy approach (NRC, 2012b). Careful weighing of scientific evidence in the context of this wide variety of additional factors can mean that policymakers choose courses of action that do not represent the “ideal” approach that would be chosen by scientists or other policymakers (NRC, 2012b). For example, the use of electronic cigarettes as a tool for smoking cessation is recommended by the National Health Service in the U.K., while the pros and cons are heavily debated within the U.S. scientific community (Herbst et al., 2022). Therefore, based on differences in values and goals, the U.K.’s harm reduction strategy could be viewed as inaccurate science information within other policy and government contexts that promote total cessation from the use of combustible tobacco.
Another example of this is the aforementioned ongoing debate about starting mammography screening for breast cancer at either 40 or 50 years of age. As documented by Friedman (2023), both sides of the debate have written in peer-reviewed literature about how the other side has either spread misinformation (Kopans, 2024) or recommended unethical actions (Woolf & Harris, 2012). Yet, data support either recommendation, with the former camp prioritizing case detection (accepting the risk for false positives) and the latter seeking to minimize overdiagnosis and conserve scarce resources (accepting the risk for missing cases; Friedman, 2023).
Within an academic frame of reference, a number of studies have focused on the increasing use of hype- and breakthrough-associated language in scientific publications. While emotionless styles of speaking have a long history of being favored as a way to imply objectivity (Rosenfeld, 2018), hyperbolic language has been increasing in scientific journals (Millar et al., 2019), in university press releases (Sumner et al., 2014), and in news
content about science (Adams et al., 2019; see Chapter 4). Although such language may have been previously considered inappropriate for academic debate, it has become increasingly adopted, and sometimes expected, in successful funding applications (Millar et al., 2022) and in the physical and biological sciences (Hyland & Jiang, 2021).
It is also critical to acknowledge that differences in the contexts for discussions around evolving scientific knowledge have implications for misinformation about science. Debates and dissenting views are built into the process of science, where claims are constantly tested and subjected to review by other scientists. Moreover, the process is designed so that claims can be challenged and the empirical evidence supporting them contested. While evolving consensus is both embraced and appreciated within the scientific arena, updates to consensus often do not penetrate the public arena. In the contemporary information ecosystem marked by flattened hierarchies (see Chapter 3), conversations and debates among scientists now take place in the public sphere for many reasons, including those discussed in Chapter 4—publicity by institutions that produce science or scientists seeking a more public role. The challenge, however, is that changing consensus due to new evidence may inadvertently create confusion among non-scientists, who may perceive it instead as expert disagreement and conflicting information (Nagler et al., 2023).
How inaccurate a claim can be before it must be labeled misinformation is also partly a matter of setting thresholds based on context. For example, explanations about science are often simplified in the context of education, popular culture, and journalism. In fact, research on how people process complex information makes clear that sometimes simplifications can yield more accurate beliefs than complicated and more precisely correct representations (Reyna, 2021). In some sense, information is always being simplified or presented outside of its original context, making misinformation an inevitable feature of a “sound bite” media system.
While some theories make clear recommendations on how high-quality simplifications can leave people with “gists” that are both meaningful and accurate (Reyna, 2021), the fact is that even carefully planned messages can propagate out of their original context, creating confusion or misunderstandings. As such, the boundary between simplification and misinformation must rely on both normative and objective criteria to distinguish which simplifications stray too far from the original. Simplifications of science such as metaphors are normatively accepted in education, journalism, or certain healthcare settings to facilitate understanding. In physics
and chemistry, for example, we learn the first law of thermodynamics and conservation of energy: that energy can neither be created nor destroyed. Only in advanced physics does one learn the exceptions revealing the oversimplification: the law can be violated, provided such violations last only for infinitesimally short periods of time, in line with Heisenberg’s uncertainty principle. As another example, physicians prescribing anticoagulants routinely use the factually incorrect metaphor of “blood thinners” to explain the medicine’s mechanism of action to patients.
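Returning to the thermodynamics example, the caveat usually invoked is the heuristic reading of the energy-time uncertainty relation,

$$\Delta E \, \Delta t \gtrsim \frac{\hbar}{2},$$

under which an apparent energy imbalance $\Delta E$ can persist only for a time on the order of $\hbar/(2\,\Delta E)$; introductory treatments omit this caveat, which is precisely the kind of simplification at issue here.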
The line where simplification stops serving goals like education and accurate understanding and becomes misinformation is gray and is interpreted through socio-cultural norms. The conservation of energy and anticoagulation examples demonstrate that a helpful simplification in one context could be considered false, or misinformation, in another, suggesting that any definition of misinformation focused solely on the veracity of content irrespective of context is likely insufficient. Context matters; otherwise, science educators and journalists could be credibly accused of spreading misinformation for conceptual simplifications that many would consider part of their professional responsibilities. But without a sufficient empirical basis on which to set those context-dependent thresholds (either due to lack of data or to questions that science cannot answer [Weinberg, 1972]), the basis for such decisions defaults to other values like culture, norms, or identity. This set of issues is an important area for future empirical research.
Conclusion 2-1: In both public discourse and in peer-reviewed research, the term misinformation has been used as an umbrella term to refer to various types of false, inaccurate, incorrect, and misleading information. The broad nature of the term has made it difficult to develop a coherent understanding of the nature, scope, and impacts of misinformation, and by extension, of misinformation about science. To address the lack of a consistent definition, the committee has developed the following definition: Misinformation about science is information that asserts or implies claims that are inconsistent with the weight of accepted scientific evidence at the time (reflecting both quality and quantity of evidence). Which claims are determined to be misinformation about science can evolve over time as new evidence accumulates and scientific knowledge regarding those claims advances.
In the view of this committee, this definition affords operationalization and measurement and offers a lens for assessing the potential impacts of misinformation about science as well as the potential efficacy and
plausibility of intervention efforts. Determining what constitutes legitimate science information, scientific uncertainty, and misinformation about science is nontrivial. Additionally, power lies with those who can make such definitional decisions as well as set the thresholds for when other variables besides veracity (i.e., norms, values, identity, and context) take precedence. Misinformation about science is a concern because it can yield misunderstandings of the world that, in turn, misalign individual or collective preferences and choices (see Chapter 6). Moreover, misinformation about science might undermine trust in important societal institutions (Ognyanova et al., 2020), an issue that we discuss more in Chapter 3.