Misinformation comes from a wide range of sources that employ a number of different strategies and tools to enhance spread, and that are driven by a variety of motivations. This chapter catalogues some of the main sources of misinformation about science, discussing how and why each source promulgates misinformation about science. Understanding the range of sources is critical to efforts to mitigate the flow and influence of misinformation. The chapter begins with a review of research on the prevalence of misinformation about science in order to develop an understanding of the scope of the problem. We then detail the institutions, communities, and individuals from which misinformation about science originates, and the reasons, both intentional and unintentional, that misinformation proliferates from these sources. The next chapter builds on this one and describes the factors that contribute to misinformation’s spread.
In determining the prevalence of misinformation about science, the committee thought it was important to distinguish between science (mis)information and science (mis)belief (Levy et al., 2021). Scientific information consists of pieces of information regarding science. Misinformation about science, then, consists of pieces of information that incorrectly characterize science or the state of scientific knowledge (see Chapter 2). In contrast, scientific beliefs are the beliefs that people hold regarding science, and beliefs that are misaligned with science are scientific misbeliefs. For example, a statement that adverse health effects are directly attributable to consuming genetically engineered (GE) foods is misinformation, whereas the belief that eating GE foods poses a higher risk to human health than eating non-GE foods is a misbelief. One of the potential negative impacts of misinformation about science is that it may lead to misbeliefs, which in turn may lead to decisions that have a negative effect on individuals. The effects of misinformation about science on misbeliefs are discussed more fully in Chapter 6.
In its review, the committee found that there is more literature on the generalizable prevalence of misbeliefs than of misinformation, arguably because the former is easier to measure. Misbeliefs are typically measured by asking a representative sample of individuals to evaluate the truth of precisely articulated false statements (“Do you think early childhood vaccines cause autism?”). A powerful summative statistic regarding vaccine misbelief might be: 11% of the U.S. adult population believes that childhood vaccines “are more dangerous than the diseases they are designed to prevent” (Reinhart, 2020). Measuring the prevalence of misinformation content and exposure poses a distinct set of challenges. It requires, first, an evaluation of the truth value of a statement and, second, a “population of bits” of information to generalize to. Both requirements are difficult to satisfy. First, some scientific claims (like those concerning the laws of the physical universe) are clearly true or false, but many claims involve a degree of scientific uncertainty. “Vaccines are safe” is less misleading than “vaccines are dangerous”; however, neither is precisely correct, in that widely used vaccines are generally safe, but some vaccines can pose some dangers to some people. This characteristic of a general but not universal truth applies to many policy-related issues to which science is relevant. Thus, definitive truth assessments may be difficult to make.
Generalizing to a “population of bits” is also hard, because some forms of information that people engage with are difficult to measure. For example, one can imagine wanting to measure every bit of information someone is exposed to regarding childhood vaccines, including from doctors, friends, books, popular culture, various online sources, and so on. This, however, is a nearly impossible task. Alternatively, one might imagine an effort to measure how many statements regarding vaccines on X (formerly Twitter) are correct. This is a far more doable task (subject to the challenge of truth assessments), but it is much less comprehensive, not only because X is just one information medium, but also because the existence of content does not equal exposure. To say that a certain percentage of tweets regarding vaccines are false tells us nothing about how many people have seen those tweets, or about the longer-term effects on cognition for individuals who have seen them.
In determining the prevalence of misinformation about science, it is important to distinguish between two related but sometimes conflated concepts: (a) the prevalence of misinformation, scientific or otherwise, within a given media channel or channels and (b) the degree of aggregate exposure to misinformation about science in a given population. While we might expect the two to be somewhat correlated, an amount of misinformation about science that appears small from a quantitative standpoint might still reach large audiences, and a large amount might be viewed by few people or none at all. Here, we emphasize the former while mentioning the latter as relevant. Measures of the amount of misinformation about science will also inevitably elide vastly different types of misinformation, with potentially different degrees of impact. For example, one individual could be repeatedly exposed to various bits of misinformation over a given period of time, but only one or a few might measurably affect their attitudes or behavior (see Chapter 6 for more discussion of the factors that shape individual-level exposure to and engagement with information, including misinformation about science).
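To make the distinction concrete, the following illustrative sketch, using entirely hypothetical post and view counts, shows how content prevalence (the share of posts that are misinformation) and exposure (the share of views accounted for by misinformation) can diverge sharply within a single channel:

```python
# Illustrative sketch with hypothetical data: content prevalence vs. exposure.
# A small share of posts can account for a large share of views, or vice versa.

posts = [
    # (is_misinformation, number_of_views) -- hypothetical values
    (True, 500_000),   # one false post that happens to go viral
    (False, 2_000),
    (False, 1_500),
    (False, 800),
    (False, 300),
]

total_posts = len(posts)
misinfo_posts = sum(1 for flagged, _ in posts if flagged)
content_prevalence = misinfo_posts / total_posts

total_views = sum(views for _, views in posts)
misinfo_views = sum(views for flagged, views in posts if flagged)
exposure_share = misinfo_views / total_views

print(f"Content prevalence: {content_prevalence:.0%} of posts")  # 20% of posts
print(f"Exposure share:     {exposure_share:.0%} of views")      # ~99% of views
```

In this hypothetical case, misinformation makes up one-fifth of the content but nearly all of the exposure; the reverse pattern is equally possible, which is why the two measures must be reported separately.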
A recent systematic review by Suarez-Lledo & Alvarez-Galvez (2021) of 69 papers on the prevalence of health misinformation on social media before 2019 reflects some of the key challenges to measuring prevalence. Notably, many of the included papers are not on health misinformation as defined in this consensus report (see Chapter 2 for the committee’s definition); rather, they are misinformation-adjacent. For example, in re-examining the set of papers ostensibly on vaccine misinformation (the most common category of misinformation examined) on Twitter/X (the most common platform studied), the committee determined that only a single paper of the 69 (Love et al., 2013) actually evaluated the scientific accuracy of vaccine claims on Twitter (coded as “unsubstantiated”). This sample of tweets, in turn, was very small (only 369 tweets were evaluated as substantiated or unsubstantiated by scientific evidence) and of limited generalizability (the sample came from one week of data collection via NodeXL in 2012 using three vaccine keywords). This is not to criticize this paper, which was an early exploratory effort, but to highlight the limited evidentiary basis for claims about the prevalence of misinformation about science. Separately, Johnson et al. (2020), which was not included in Suarez-Lledo and Alvarez-Galvez’s review, examined how vaccination sentiments map across Facebook community spaces like public pages, finding that while the anti-vaccination population makes up a smaller minority of total users, twice as many anti-vaccine pages exist for user engagement compared to the pro-vaccination population. It is important to note that this study did not actually measure the content on these pages or from other parts of
Facebook such as individual profiles or private pages, which limits what inferences can be made about prevalence.
Another recent systematic review looked at 57 studies addressing the prevalence of misinformation about COVID-19 vaccines, both in content on online platforms and in survey respondents’ knowledge (Zhao et al., 2023). But differences in the measures used across the studies make it difficult to draw widely applicable conclusions about the prevalence of COVID-19 vaccine misinformation. Some of the studies included in the review calculated prevalence based on samples of general social media posts about COVID-19, while others sampled only anti-vaccine posts. Some assessed respondents’ awareness of various misinformation bits (exposure) through surveys, while others analyzed belief in misinformation. These limitations point to a need for researchers to reach agreement on definitions of prevalence and approaches to measurement within different communication domains to make studies more directly comparable.
Other studies have examined COVID-19-related misinformation on particular online platforms. An examination of the most viewed YouTube videos about COVID-19 vaccines found that a significant share (11%) contained information that contradicted the World Health Organization or the Centers for Disease Control and Prevention (CDC; Li et al., 2022). Similarly, a study of early information being disseminated regarding COVID-19 found that more than a quarter of the top videos contained misleading information (Li et al., 2020), and Shin & Valente (2020) found that vaccine-related searches on Amazon.com mostly pointed consumers to vaccine-hesitant books. Finally, a burgeoning body of research on the popular video-sharing platform TikTok has found moderate (<50%) levels of COVID-19 misinformation in the samples studied (Baghdadi et al., 2023; Basch et al., 2021; van Kampen et al., 2022).
There is conflicting evidence concerning the proportion of news Americans encounter online as compared to legacy media such as television, radio, and print. While consumption-based metrics such as Nielsen and Comscore indicate that TV is the dominant American news source (Allen et al., 2020), more recent survey data suggest that an increasing number of Americans engage with news via digital devices (Pew Research Center, 2024c). Nevertheless, current understanding of the prevalence of misinformation within offline channels like television, radio, film, and books is hampered by limited data about such media. Given what is known about, for example, the role of partisan media outlets (defined based on the ideological slant of their news content and/or audiences; Budak et al., 2016; Robertson et al., 2018) with large audiences in distributing misinformation about science (see later discussion on “partisan media outlets”), this represents a substantial knowledge gap.
Assessing the contribution of generally credible sources of information to the amount of circulating misinformation is also not a strong focus in the
current scholarship on misinformation about science. A notable exception is Broniatowski et al. (2022), which finds that about 25% of articles from low-credibility sources contain false claims, as compared to 5% of articles from higher-credibility sources. Sites rated “not credible” were 3.67 times more likely to contain false claims than those rated as “more credible.” Beyond that study, there is relatively little evaluation of how much news content from mainstream news media may be characterized as misinformation about science. Additionally, some studies of science-related content in mainstream news media do not use the term “misinformation” but are essentially examining misinformation about science as defined in the context of this report. For example, a cross-national study of the quality of news coverage regarding COVID-19 (Mach et al., 2021) found that “overall scientific quality of news reporting and analysis was lowest on the populist right of the political spectrum.” The committee notes that while much of the low-quality content likely meets the definition of misinformation about science presented in this report, the researchers did not label it as such. Similarly, Sumner et al. (2014) looked at the role of misleading academic press releases in subsequent news coverage; what this report would define as misinformation (e.g., reporting that a study on rodents holds relevance for humans when the study did not make such a claim) is there described as “exaggeration.”
Because much scholarship identifies misinformation at the level of internet domains (i.e., whole websites), factually incorrect statements posted directly on social media platforms are generally not counted as “misinformation.” For example, while Yang et al. (2023) found that 23% of sampled politically related images on Facebook were misinformation, these posts would have been entirely missed by judgments about misinformation made only at the domain level. There is also some emergent literature on misinformation on less visible platforms like WhatsApp, Facebook Messenger, and Telegram (e.g., Almodt, 2024; Curley et al., 2022; Ng & Loke, 2021; Zhong et al., 2024), but it would be difficult to produce enough data to generalize about the prevalence of misinformation about science on such platforms: an analysis of the unedited contents of these networks would be an unacceptable breach of privacy, and the end-to-end encryption (E2EE) of some platforms makes the task extremely challenging, if not impossible. Research on health misinformation does repeatedly find substantial quantities of health misinformation in particular domains (see review by Suarez-Lledo & Alvarez-Galvez, 2021), but it often does not indicate what fraction of all content is misinformation (i.e., it does not specify the denominator for the quantity of information).
There are also multiple practical issues in making a proper estimate of the distribution of exposure to misinformation about science. The first is the challenge of measuring exposure, given the multitudinous channels of information and the lack of measures for many of those channels. A second issue is the level of effort and expertise (and therefore cost) it takes to determine whether particular science content is misinformation. This cost is often acute in evaluating misinformation about health or science, which requires technical expertise. Many studies of science or health information have relied on people with MDs and/or PhDs to evaluate, on an item-by-item basis, whether or not content was misinformation, an extremely costly approach. Third, there is a need to distinguish between, and analyze the relationships between, misinformation prevalence and exposure in a more systematic fashion. These are not, in principle, insurmountable challenges; however, they do point to the need for substantial investments in multiplatform measurement in this space.
There are myriad sources of misinformation about science, and understanding these sources is instructive for studying its impacts and identifying leverage points of intervention. Misinformation about science can originate from ordinary citizens, as well as institutional sources including industry, media, and governments. It also comes from within science itself, via scientists and medical professionals. In some cases, production of misinformation about science may involve an organized effort, meaning that many actors and organizations work together to seed and amplify false information about a particular science topic to achieve economic and/or political goals. These organized, intentional efforts are a form of disinformation, which is a sub-category of misinformation (see Chapter 2 for the committee’s discussion of these terms). In other cases, misinformation about science is an unintended byproduct of the incentives and constraints in our media, political, and scientific institutions.
Misinformation often, but not always, flows through mediated channels. As described in Chapter 3, the media system of the 21st century is best characterized as a hybrid of interconnected technologies and media logics (Chadwick, 2017), as opposed to the more clearly delineated media system of the 20th century. It no longer makes sense to distinguish sharply between television, radio, film, newspapers, magazines, books, and other familiar legacy media—not to mention their digital analogues, because content that appears in one medium often also appears in others. Additionally, content providers can take advantage of the internet’s cheap distribution costs to
produce articles, videos, podcasts, and other types of media of varying degrees of veracity. Given that misinformation about science tends to be produced by a small number of repeat purveyors (Nogara et al., 2022; Pierri et al., 2023), we spend much of the next few sections examining such sources and the audiences they reach. While the following discussion is not meant to be an exhaustive accounting of the sources of misinformation, it reviews those that have emerged prominently in the literature and that underscore important dynamics in the spread of misinformation. We also discuss sources that the literature suggests are systematic producers of misinformation about science; in many cases, these sources also reach large audiences.
When scientific evidence has shown a link between an industry or an industry product and environmental or public health harms, scholars report that some industries have mobilized to produce and spread misinformation that contradicts or distorts the relevant science (Björnberg et al., 2017; Dunlap & McCright, 2011; Farrell, 2015; Holder et al., 2023; Kearns et al., 2016; McCulloch & Tweedale, 2008; Oreskes & Conway, 2010b; Supran & Oreskes, 2017, 2021; Williams et al., 2022). More specifically, some industries have been reported to suppress evidence and propagate misinformation through public relations campaigns (see Chapter 5 for a discussion on industry public relations strategies) and the advertising and marketing strategies used to sell products and services (Aronczyk & Espinoza, 2021; Michaels, 2008, 2020; Oreskes & Conway, 2010b).
One of the most well-documented examples of such coordinated, systematic efforts in the literature involves the efforts of some fossil fuel companies, utility companies, public relations firms, think tanks, foundations, trade groups, politicians, partisan media, and scientists who were reported to work in concert to deny climate science and exert undue influence on policymaking around environmental issues (Björnberg et al., 2017; Dunlap & McCright, 2011; Farrell, 2015). It has also been reported that since the 1970s, some fossil fuel companies have played a role in undermining climate science and promoting misinformation about the reality, causes, and significance of climate change, in an effort to avoid financially punitive regulations on their business (Dunlap & McCright, 2011; Holder et al., 2023; Supran & Oreskes, 2017, 2021). Other work has shown that some electric utility companies have also used similar strategies to promote climate change denial (Williams et al., 2022).
Kearns et al. (2016) have also shown that in the 1960s and 1970s, the sugar industry funded influential research to challenge sugar’s contributions to coronary heart disease and shift the blame to fat, reportedly in an effort to protect its market share. Additionally, some
companies within the food and beverage industry have also been reported to use misleading marketing to sell products to consumers, often targeting children, women, and low-income communities and communities of color (Bailin et al., 2014). Researchers have also described similar campaigns to conceal the health risks of particular products or activities that have been undertaken by other industries and companies ranging from the asbestos industry (McCulloch & Tweedale, 2008) to tobacco companies (Barnes et al., 1995; Oreskes & Conway, 2010b) to the NFL (Fainaru-Wada & Fainaru, 2014; Michaels, 2020).
Many scholars also report that the supplement industry often uses health claims that are unsupported by scientific evidence to promote its products and boost sales (Ayoob et al., 2002; Rachul et al., 2020; Wagner et al., 2020; also see later section on Alternative Health and Science Media), as does the rapidly growing cannabidiol (CBD) industry (Wagoner et al., 2021; Zenone et al., 2021). Likewise, direct-to-consumer advertising for pharmaceuticals (Hollon, 1999; Wolfe, 2002), clinical genetic testing (Gollust et al., 2002), stem cell therapies (Murdoch et al., 2018), and cancer services (Hlubocky et al., 2020) can also be a source of misinformation about science by overstating benefits and downplaying risks. Additionally, “greenwashing” is another form of deceptive advertising, whereby some companies try to appeal to environmentally conscious stakeholders by exaggerating the positive environmental impact of their products or practices (Lyon & Montgomery, 2015). Greenwashing can involve various methods, such as using vague or meaningless claims (e.g., “all natural,” “eco-friendly”), false labeling, selective disclosure, and misleading visual imagery (Aronczyk et al., 2024; Baum, 2012; Holder et al., 2023; Lyon & Montgomery, 2015).
One of the most consequential industry-led disinformation campaigns that has been documented in the literature involved the efforts by the pharmaceutical company Purdue Pharma to promote its opioid painkiller, OxyContin. Michaels (2020) reported that Purdue Pharma engaged in marketing and public relations campaigns to mislead regulators, doctors, and patients with false claims that OxyContin was not addictive and that it was a safe and effective treatment for not only acute pain but also chronic pain. Specifically, researchers report that the company relied on a selected set of small-scale studies from the medical literature to support its claims, funded new studies, paid doctors to promote the drug, and engaged in aggressive sales practices targeting primary care physicians (Armstrong, 2019; Michaels, 2020). Michaels (2020) also reported that Purdue advanced a new diagnosis, “pseudo-addiction,” based on a single study with a single patient, that claimed that addictive cravings for opioids were driven by inadequate treatment for pain, thereby requiring more opioids. Additional research found that Purdue’s efforts led to an exponential boom in
opioid prescribing, which increased the risk for dependence and addiction (Rummans et al., 2018). It is estimated that in 2016 prescription opioids were involved in 40% of all opioid overdoses (Seth et al., 2018), and pharmaceutical industry marketing of opioids to doctors has been linked to deaths from opioid overdoses (Hadland et al., 2019).
It is important to recognize that industries can also be sources of accurate science information. For example, the pharmaceutical industry engages in valuable science communication, marketing, and education to doctors, policymakers, and the public that enables patient access to health-promoting and life-saving medicines and treatments. Likewise, corporate social responsibility has become a guiding framework for strategic corporate practice, including around the environment and climate-related issues (Latapí Agudelo et al., 2019). Fair trade, for instance, represents responsible corporate practice in the environmental space. Relatedly, scholars have noted the growth of private environmental governance, whereby the corporate sector, motivated at least in part by market considerations, self-regulates to address environmental problems like climate change (Vandenbergh & Gilligan, 2017; Vandenbergh et al., 2024). The committee was not able to identify examples in the literature of industries or individual companies that have proactively and responsibly adapted to new scientific evidence that was in direct conflict with their economic interests. That said, such examples would serve as important subjects of future research and as potential models for how businesses may be able to effectively and transparently communicate and act on scientific evidence even in the face of economic risks.
Governments and politicians can also be sources of misinformation about science. For example, research indicates that, as part of state-sponsored propaganda efforts, online Russian troll accounts linked to the Internet Research Agency spread vaccine misinformation on Twitter during the 2016 U.S. elections in an effort to promote political discord (Broniatowski et al., 2018). Studies also suggest that Russian state news sites were prominent sources of misinformation about genetically modified organisms (GMOs; Dorius & Lawrence-Dill, 2018). Other research shows that the Brazilian government engaged in a widespread misinformation campaign during the COVID-19 pandemic to downplay the risks of COVID-19, discredit scientifically backed mitigation measures, and decrease trust in health decision makers (Ricard & Medeiros, 2020).
Yet, the propagation of misinformation about science by governments and politicians is not unique to social media. Both online and off, evidence suggests that some governments and politicians have been prominent
sources of misinformation on science issues, ranging from COVID-19 (Blevins et al., 2021; Evanega et al., 2020) to climate change (e.g., De Pryck & Gemenne, 2017) to vaccines (Bing & Schectman, 2024; Jamison et al., 2019a). Importantly, political leaders are often highly trusted by their constituents and are afforded large media platforms, which can make misinformation from these sources especially pernicious.
Think tanks, which are another potential source of misinformation about science, are typically non-profit, public policy research or advocacy organizations. Through books, reports, editorials, and experts, think tanks can influence news coverage and public discourse about science-related issues (Dunlap & Jacques, 2013; Jacques et al., 2008). Some think tanks may also be aligned with a political position, which can lead to bias in the advice that they provide (Farrell, 2016, 2019). For example, there is research showing that think tanks that are tightly connected to political actors, including philanthropic foundations and corporate funders, politicians, and like-minded media outlets, tend to amplify misinformation about science through such networks (Farrell, 2016, 2019). Ideology-based think tanks can also engage across different science domains, whereby the same think tank might disseminate misinformation about two different topics—for example, COVID-19 and climate change (Lewandowsky, 2021b). It is important to note that the extant literature on think tanks as sources of misinformation has largely focused on conservative think tanks and less so on liberal and centrist think tanks (Dunlap & Jacques, 2013; Jacques et al., 2008; McCright & Dunlap, 2000, 2003). However, given that activities of all think tanks, regardless of ideology, have the potential to be influenced by their funders, and that there is often a lack of transparency around funding and conflicts of interest, greater research attention is needed to better understand the role that think tanks as a whole play in the production of misinformation about science.
News organizations and journalists are key mediators of science information and, as such, of misinformation as well. As noted in Chapter 3, U.S. adults tend to rely on general news outlets to acquire science information. This is especially true during times of crisis, when public uncertainty and interest are high and journalists become critical frontline communicators, as seen during the COVID-19 pandemic (Altay et al., 2022; Van Aelst et al., 2021). However, news from major media organizations, regardless of platform or format, can also be a source of misinformation about science
due to several factors. For one, disingenuous actors often target news organizations to seed stories that contain misleading information (e.g., Armstrong, 2019; Oreskes & Conway, 2010b). In other cases, news coverage can inadvertently disseminate misinformation through unintended misrepresentations of scientific research, by featuring sources or guests who make problematic claims, or simply by reporting on, or even trying to debunk, misinformation about science.
Science is endemic to most journalistic “beats,” from politics to business to lifestyle. Yet many newsrooms, especially smaller outlets, lack staff with science training (Voss, 2002), which may open the door to inaccuracies in their reporting. For example, health and medical news (which the committee considers to be a type of science journalism) is often reported on by lifestyle or generalist reporters who may not have specialized training in science or research methods (O’Keeffe et al., 2021; Tanner, 2004; Voss, 2002). Due to time constraints and market pressures, even well-resourced outlets may over-rely on press releases and other information subsidies that can inflate or misconstrue research claims (Bratton et al., 2019; Sumner et al., 2014).
Mainstream news media organizations follow a set of professional norms and values that govern journalistic practice and shape the nature of news coverage. These norms and values, while important for legitimizing the professional practice of journalism, can also give rise to misinformation about science. One chief example is the norm of objectivity, which journalists typically operationalize by giving voice to both sides of controversial issues. In the context of science, this can result in “false balance,” whereby journalists cover both sides of a scientific debate even when scientific evidence overwhelmingly points in one direction, a journalistic practice that has been exploited by actors seeking to undermine the scientific consensus (see Chapter 5). For example, the majority of news coverage of climate change in leading U.S. newspapers between 1998 and 2002, and in television news programs between 1995 and 2004, presented a balanced account regarding the existence of anthropogenic climate change and the need for climate action (Boykoff, 2008; Boykoff & Boykoff, 2004). By giving equal weight to advocates and skeptics of climate science, news media distorted the scientific consensus on global warming. Indeed, experimental evidence has shown that false balance can reduce the public’s perceptions of agreement among experts on high-consensus issues (Koehler, 2016). While this practice receded in the early 2000s, it likely inflicted lasting damage on public perceptions of climate change (Boykoff, 2007; McAllister et al., 2021). There is also some evidence of false balance in media coverage of the autism-vaccine controversy. Between 1998 and 2006, about a third of U.S. and British newspaper coverage provided balanced coverage of the link between vaccines and autism, despite scientific consensus that vaccines
do not cause autism (Clarke, 2008). In the British press specifically, another third of coverage presented only claims that supported a link between vaccines and autism (Clarke, 2008). False balance was also found to be more likely in science news stories from the 1980s to the 2010s about GMOs and nuclear power, whereas stories about climate change and vaccines during that time tended to hew toward the expert consensus (Merkley, 2020); in this case, however, the finding could well reflect legitimate scientific debate on GMOs and nuclear power (e.g., although GMOs are not linked to human health concerns, more research is needed on their environmental impacts [National Academies, 2016a]).
Journalists also follow an “indexing” norm, whereby their reporting closely tracks the range of views expressed by government officials (Bennett, 1990) and relies on official sources (including elected officials and other prominent elites). In the case of COVID-19, research suggests that U.S. TV news coverage engaged in indexing practices by covering misinformed elite viewpoints, thus affording them greater prominence (Muddiman et al., 2022). Along similar lines, mainstream media also cover high-profile examples of misinformation as newsworthy events, which can contribute to their dissemination (Tsfati et al., 2020). Policies on press access at federal science agencies can further hamstring journalists’ coverage of science by restricting reporters from talking directly to government scientists and instead requiring them to speak to spokespeople and communications officers who lack subject matter expertise (Cohen, 2009; National Association of Science Writers, n.d.). At the same time, some academic scientists, who can play an important role in clarifying and contextualizing scientific research in news stories, may be unwilling to speak to reporters, due to institutional pressure to focus on publishing rather than on media and other public outreach (Woolston, 2018).
Another professional norm of journalists is valuing information that is novel, dramatic, and conflictual, as this is more likely to attract news audiences. This emphasis can give rise to sensationalistic, misleading, or incomplete coverage of science. There is wide-ranging evidence of news media misrepresenting and misreporting scientific studies, medical developments, and health issues (Brechman et al., 2011; Cooper et al., 2012; Greiner et al., 2010; Houn et al., 1995; Lai & Lane, 2009; Nagler et al., 2015; Rachul & Caulfield, 2015; Selvaraj et al., 2014; Shi et al., 2022; Stefanik-Sidener, 2013; Walsh-Childers et al., 2018; Woloshin & Schwartz, 2006; Woloshin et al., 2009b; Yavchitz et al., 2012; Zuckerman, 2003). This is likely due to a confluence of factors, including journalistic norms and informational biases, over-reliance on public relations and other information subsidies, exaggerations and omissions in the original scientific articles, and lack of resources and scientific expertise on the part of journalists and news organizations (Woloshin et al., 2009b). In the digital media environment, where
audience engagement is easily quantified, some newsrooms use A/B testing to determine which headlines receive the most online traffic, a practice that can sometimes result in more sensational or clickbait-style headlines (Fürst, 2020; Hagar & Diakopoulos, 2019).
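A minimal sketch of how such a headline test works (with hypothetical headlines and click counts, not drawn from any newsroom's actual system) illustrates why optimizing purely for clicks can favor the more sensational framing:

```python
# Illustrative sketch (hypothetical data): a newsroom-style A/B headline test.
# Each variant is shown to a random slice of visitors; the variant with the
# higher click-through rate (CTR) is typically kept, regardless of its accuracy.

variants = {
    "Study finds modest link between diet and memory":
        {"impressions": 10_000, "clicks": 210},
    "Scientists stunned: this everyday food could wreck your memory":
        {"impressions": 10_000, "clicks": 540},
}

for headline, stats in variants.items():
    ctr = stats["clicks"] / stats["impressions"]
    print(f"{ctr:.1%}  {headline}")

# Selecting on CTR alone rewards the more sensational framing in this example,
# independent of which headline better reflects the underlying study.
winner = max(variants, key=lambda h: variants[h]["clicks"] / variants[h]["impressions"])
print("Selected:", winner)
```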
In contrast to mainstream news outlets, partisan media outlets present information from a specific ideological view. Unlike the partisan press of America’s early democracy, where newspapers were sponsored by political parties, contemporary partisan media in the United States are typically identified based on the ideological slant of their content (Budak et al., 2016; Levendusky, 2013; Stroud, 2010) and/or the ideological composition of their audiences (Bakshy et al., 2015; Gentzkow & Shapiro, 2011; Robertson et al., 2018). Although precise measures of exposure to partisan media are difficult to obtain, current data suggest that the aggregate audience is quite large. For example, 2023 cable news ratings from Nielsen indicate that approximately two million viewers, on average, watch the most popular prime-time programs on Fox News and MSNBC each night (Johnson, 2023); yet these numbers do not reflect the total audience who tune into such news sources. A recent study that pooled data from multiple sources that unobtrusively monitor TV consumption estimates that one in seven Americans watches over eight hours per month of partisan TV news, and that many of these viewers only consume information from outlets that match their partisan orientation (Broockman & Kalla, 2024). Several other studies have reached similar conclusions regarding the size and segregation of the partisan news audience (Muise et al., 2022; Prior, 2013; Stroud, 2011).
Currently, research on misinformation about science in partisan media has mainly focused on politicized topics such as climate change and COVID-19. Studies looking at climate change, for example, have found that conservative media have been a source of climate denial and of skepticism toward the broader scientific enterprise, including the integrity of scientists and peer-reviewed research (Dunlap & McCright, 2011; McKnight, 2010). Further, a systematic content analysis of cable news transcripts from 2007 and 2008 revealed that coverage of climate change on the Fox News cable TV network often included claims that challenged the scientific agreement on climate change, for example, by disputing the reality of climate change or its human causes (Feldman et al., 2012). The study showed that the network also tended to feature more climate change skeptics as interview guests, relative to CNN and MSNBC, whose coverage was more consistent with the weight of scientific evidence on climate change (Feldman et al., 2012). In print media, an analysis of opinion editorials
written by 80 different columnists between 2007 and 2010 revealed that nationally syndicated conservative op-ed columnists who wrote about climate change included skeptical arguments that questioned the reality, causes, and seriousness of climate change, as well as the feasibility of solutions (Elsasser & Dunlap, 2013).
In the context of COVID-19, conservative media outlets were reported to be more likely than mainstream news outlets to reference misinformation about COVID-19 at the beginning of the pandemic (Motta et al., 2020). On social media, two studies found that right-wing media accounts were disproportionately responsible for spreading and amplifying COVID-19 misinformation (Yang et al., 2021; Zhang et al., 2023). Moreover, several studies have supported a causal relationship between exposure to information from Fox News and lower rates of protective behaviors in response to COVID-19, including social distancing and vaccination (Ash et al., 2024; Pinna et al., 2022; Simonov et al., 2022). However, while these studies suggest a relationship, it is important to note that a direct link between misinformation about science and adverse behaviors in response to COVID-19 has not been definitively established (see Chapter 6).
Studies on the role of partisan media outlets in the production and spread of misinformation about science, mainly climate change and COVID-19, have also largely focused on conservative media outlets and less so on more liberal-leaning media outlets. However, one study by Merkley (2020) that examined over 280,000 news stories found that liberal-leaning media outlets are more likely to engage in false balance and feature claims from polarizing viewpoints when covering issues about which Democrats are skeptical of the expert scientific community’s position, including the safety of nuclear power and GMOs (Merkley, 2020, Figure H1).
Studies that have focused on the circulation of misinformation more broadly (i.e., not exclusively misinformation about science) by analyzing exposure to and engagement with online news from untrustworthy sources, as well as news from domains rated as false by third-party fact-checkers, have found that misinformation is disproportionately more likely to come from right-leaning media sources than from mainstream media or left-leaning media sources (González-Bailón et al., 2023; Grinberg et al., 2019). These patterns are important because they suggest that some individuals who are primarily engaging with more right-leaning media sources may be more likely to be exposed to misinformation, including about science, than individuals who engage with a diversity of media sources.
Although most of this report focuses on misinformation about science in the English language, misinformation among non-English speaking
and multilingual communities is an important yet understudied dimension of misinformation spread. Ethnic, diasporic, and community media (i.e., media produced for and often by specific communities defined by ethnicity, language, etc.) function as trusted means for non-English speaking and multilingual communities to access in-language news and information, which can fall outside of mainstream English-language media outlets (Nguyễn & Kuo, 2023). The networks that distribute ethnic, diasporic, and community media can span local, regional, national, and transnational circulation (Nguyễn & Kuo, 2023). These media may include a range of formats, including print, radio, cable television channels, social media sites, and video streaming, created by and for immigrants, groups from marginalized ethnicities and languages, and Indigenous populations (e.g., Gerson et al., 2020; Lopez, 2021; Rajagopalan, 2021). They may also include social media accounts and sources on YouTube, blogs, podcasts, or mobile messaging apps created by individual influencers and commentators (Nguyễn & Kuo, 2023). Media from “home countries” may also be consumed as part of this media diet, may be in English and/or in the home language (referred to as “in-language”), and may span thousands of platforms and outlets (Nguyễn & Kuo, 2023). Diasporic media consider “audience preferences, influence of viewpoints in host countries, accessibility to sources of information, and community-serving attitude” for migrants outside of the home countries (Pham, 2021, p. 512). They can act as a transnational bridging tool, transmitting information among members of communities dispersed across space, cementing identification with one’s homeland, and maintaining community boundaries within the host society (Brubaker, 2005).
While many ethnic media sources can be credible, they can also carry political biases, whether supported by local governments or financially backed by political parties and factions. For example, The Epoch Times, an international, multilingual newspaper, is funded through Falun Gong, an anti-Chinese Communist Party (CCP) spiritual movement (Owen, 2021; Roose, 2020). Some popular in-language outlets have been shown to carry different “homeland/hostland” political biases as a means of assimilation, acculturation, or survival (Shams, 2020). Popular in-language outlets that are financially backed by political parties and factions channel non-English readers, who often hold deep trust in these in-language news sources, toward opinionated and biased news (Owen, 2021). Importantly, the distinct and specialized identities that ethnic media producers must maintain in relation to mainstream media, mainstream societal institutions, and the communities they serve can strain their ability to provide wholly accurate reporting (Matsaganis et al., 2011).
Alongside ethnic media, private messaging platforms are prevalent in non-English and multilingual communities as a means to connect across home and host countries, to develop and maintain intimate, trusted relationships, and to access in-language information (Nguyễn & Kuo, 2023). These platforms include, but are not limited to, WeChat, KakaoTalk, WhatsApp, Viber, Line, Facebook Messenger, Telegram, and Zalo. In a 2020 voter survey, nearly one in six Asian Americans reported using private messaging platforms such as WeChat, WhatsApp, and KakaoTalk to discuss politics (AAPIData, APIAVote, and Asian Americans Advancing Justice, 2020). Latinx communities tend to favor WhatsApp, Line, and Facebook Messenger; South Asian communities commonly use WhatsApp and Telegram. Additionally, other non-English-specific platforms such as Zalo may be popular within their respective communities (e.g., Vietnamese communities), while ephemeral media (e.g., reels) and direct messages on mainstream platforms, such as Instagram or TikTok, are widely used by diverse cultural groups, though not exclusive to any particular community (Nguyễn & Kuo, 2023).
It is worth noting that these preferences may vary based on language, cultural ties, and regional availability. Closed platforms are often intimate and trusted spaces for non-English speaking and multilingual communities, particularly as individuals share information across platforms and nation states, yet they are also spaces where misinformation can spread across these internal networks (Nguyễn & Kuo, 2023). Zhang’s (2018) pioneering study of WeChat and how misinformation flows through Chinese diasporic communities on the platform reveals a combination of factors that amplify misinformation. These include a low barrier to entry in branded content publishing; a lack of local news coverage on issues of particular interest to Chinese immigrants; and intimate communication spaces where users are connected by common, mostly identity-based, affiliations. More recent research on misinformation about science on the WeChat platform has explored the strategies users employ to evaluate information credibility in the context of COVID-19 (Zhu et al., 2022), general medical matters (Wu et al., 2024), and general science (Wang et al., 2023). Among other findings, these studies point to older relatives as common distributors of misinformation about science on WeChat (Wu et al., 2024; Zhu et al., 2022), and to fearmongering (Wang et al., 2023) as a common characteristic of the information shared. But aside from these findings, little is known about the nature of misinformation sharing within other closed platforms, given the limitations that researchers face in accessing data. Moreover, hosts and developers of closed platforms face difficulties identifying and moderating problematic information posted and shared across these platforms, and in assembling training data on such content, given how information is shared as well as tensions between commitments to privacy (e.g., end-to-end encryption) and moderation (Nguyễn & Kuo, 2023).
Alternative health and science media provide health and medical advice that is outside of mainstream science and can contain misinformation related to various science and health topics and domains. Such sources include popular health-related TV talk shows, health blogs and websites, and social media accounts, all of which can advocate for treatments, including diets, detoxes, and supplements, that are not backed by scientific evidence (Stecula et al., 2022). Research also shows that alternative health and science websites can promote health- and science-related conspiracy theories and stoke fear and distrust of mainstream medicine and the regulatory processes used by the U.S. Food and Drug Administration (FDA), for example, to authorize treatments and vaccines (Marcon et al., 2017; Stecula et al., 2022).
Researchers examining the health recommendations made on the health-related TV talk shows The Dr. Oz Show and The Doctors found that for a significant percentage of these recommendations, either no supporting scientific evidence could be found (39% and 24%, respectively) or the recommendations contradicted the best available evidence (15% and 14%, respectively; Korownyk et al., 2014). Some studies found that popular websites on complementary and alternative medicine for cancer promoted “cancer cures” that were not supported by scientific evidence and, in some cases, advised patients against conventional therapies such as chemotherapy (Delgado-López & Corrales-García, 2018; Peterson et al., 2020; Schmidt & Ernst, 2004). Likewise, many articles about stem cell therapies published on alternative science sites exaggerated claims about stem cells and stem cell science (Marcon et al., 2017). Moreover, many of these sites featured the same content and included hyperlinks to each other, creating a coordinated network of misinformation (Marcon et al., 2017). Alternative health and science sites can also promote misinformation about genetically modified foods. For example, an analysis of nearly 95,000 online articles shared on social media between 2009 and 2019 found that the most visible and persistent content related to genetically modified foods originated from alternative health sites (Ryan et al., 2020). Although this study did not determine whether these articles contained misinformation, the finding indicates that the most widely shared online content about genetically modified foods comes from unvetted websites.
Alternative health sites are also prominent sources of vaccine misinformation (Kata, 2012). Research shows that some anti-vaccination advocates have created densely connected and highly visible online communities that include discussion forums, parenting blogs, websites, social media pages and profiles, and other social media accounts on which vaccine misinformation proliferates (Hoffman et al., 2019; Puri et al., 2020; Smith & Graham, 2019; Yuan et al., 2019). Moreover, many anti-vaccination sites
can resemble official scientific sites, making it more difficult for individuals to discern their veracity (Davies et al., 2002).
Wellness culture, a version of alternative health, has flourished in the internet era, becoming a billion-dollar industry (Baker & Rojek, 2020), and may incentivize the promulgation of misinformation in order to sell products, including supplements, cleanses, and essential oils. Indeed, some popular wellness and lifestyle companies have even been implicated in making health claims about products that are not backed by scientific evidence (Garcia, 2018). Notably, scholars and journalists have also identified a connection between conspiratorial content and the treatments and supplements that are promoted on wellness sites, whereby wellness advice is sometimes presented alongside conspiratorial content in the marketing for health products (Baker, 2022).
Finally, alternative health-related media sources are significant not only for the kind of information they provide but also because they reach large audiences. For example, at their peak in the early 2010s, The Dr. Oz Show attracted more than three million daily viewers and The Doctors nearly two million daily viewers (Block, 2013). Additionally, in 2020, the alternative health site Naturalnews.com reportedly attracted 3.5 million unique visitors (Collins & Zadrozny, 2020). A national U.S. survey conducted in 2018 and 2019 found that approximately 26% of Americans sometimes or regularly watch health- and medical-related TV talk shows, 22% sometimes or regularly follow social media accounts dedicated to alternative health, and 15% sometimes or regularly read alternative health blogs (Stecula et al., 2022). The survey also showed that consumers of alternative health media tend to be women, lower income, less educated, and non-White, and to have lower levels of trust in medical experts (Stecula et al., 2022). Thus, given this significant reach, misinformation about science from this sector has greater potential to harm these groups.
As a source of misinformation about science, entertainment media have not been studied as widely as news and social media, but it is an area worthy of more research given that many popular scripted entertainment programs feature science topics and scientists, from medical dramas like Grey’s Anatomy and House to comedies like The Big Bang Theory. Research has found that fictional entertainment can shape people’s beliefs and attitudes, including in the domains of health and science, due to its ability to absorb audiences into a narrative world (Dahlstrom, 2014; Frank et al., 2015; Green et al., 2003; Hoffman et al., 2023). Moreover, fictional and factual narratives, even when clearly labeled as such, have been shown to be equally persuasive (Appel & Malečkar, 2012; Green & Brock, 2000;
Strange & Leung, 1999). Research also shows that people often develop misbeliefs based on factual errors contained in fictional media, even when their prior knowledge contradicts this misinformation and even when they are reminded that fiction might contain factual inaccuracies (Butler et al., 2009; Fazio et al., 2013).
Although it is expected that fictional entertainment is less concerned with empirical fact than with narrative and the verisimilitude of experience (Dahlstrom, 2021), it can still be a source of misinformation about science. Science in fictional entertainment may be over-simplified, exaggerated, or otherwise misrepresented for the sake of a compelling story or for cinematic effect. For example, producers and writers of TV crime dramas that depict forensic science like CSI and Bones judge scientific realism based on narrative plausibility rather than scientific accuracy (Kirby, 2013). In such shows, forensic evidence, including DNA tests, is widely used to solve crimes, and the unrealistic ease and certainty with which this evidence is deployed on TV has led to anecdotal concerns about a so-called “CSI effect,” whereby real-life jurors demand DNA evidence from the prosecution before finding a defendant guilty—although empirical evidence indicates that this effect may be overstated (Podlas, 2005; Shelton et al., 2006).
Another source of misinformation about science in entertainment relates to the representation of scientists. Entertainment TV programs and films sometimes depict scientists in stereotypic ways (e.g., the “mad scientist”) and tend to underrepresent women and people of color in scientist roles. This can skew perceptions of the scientific profession and affect trust (Kirby, 2017).
One of the few systematic content analyses concerned with the depiction of misinformation about science in entertainment examined 51 fictional entertainment TV episodes from 2018 to 2020 that featured vaccine-related plotlines (McClaran & Rhodes, 2021). The study found that 86% of episodes presented at least one anti-vaccination argument, and 60% of those episodes contained arguments based on misinformation, such as that vaccines pose a serious health risk or are part of a conspiracy. Nine episodes (40%) included the argument that vaccines cause autism, a claim that has been clearly debunked by the scientific community.
Movies about science, such as the 2004 blockbuster climate disaster film The Day After Tomorrow—which featured an unrealistically large, storm surge-driven tidal wave—may also inaccurately depict scientific phenomena and events, even as they help raise public awareness about issues like climate change (Sakellari, 2015). Although relatively rare, some fictional films feature plotlines that are entirely based on misinformation, such as the 2015 thriller Consumed, in which a mother discovers genetically modified food might be behind her son’s mysterious illness, defying wide scientific agreement that GM food does not carry human health risks.
Outside of fictional entertainment, science and environmental documentary films—which are now finding wider audiences online and via streaming platforms—also use narrative storytelling, but in a way that is intended to be received by the audience as fact (Smith & Rock, 2014). Many science and environmental documentaries are advocacy-oriented, meaning that they are designed to be explicitly persuasive by presenting evidence to support a clear ideological viewpoint (Cooper & Nisbet, 2017). Despite the realism of the documentary genre, some documentary filmmakers have admitted to bending facts in the interest of the film’s overall narrative (see Aufderheide et al., 2009). Science documentaries can sometimes make overly simplistic and misleading claims in order to advance their argument (Yeo & Silberg, 2021). Audiences’ expectations that documentaries accurately reflect scientific fact make them especially concerning as a source of misinformation.
The veracity of some documentaries has also been called into question by scientists (Mellor, 2009). For example, some climate scientists have argued that the documentary The Great Global Warming Swindle (2007) displays “erroneous or artificially manipulated graphs, and presents incorrect, misleading, or incomplete opinions and facts on the science of global warming and the related economics” (Rive et al., 2007, p. 1). Other documentaries have been reported to perpetuate unsubstantiated claims about the measles, mumps, and rubella (MMR) vaccine (Bradshaw et al., 2022), and one study found that the release of the documentary Vaxxed: From Cover-Up to Catastrophe (2016) was immediately followed by a significant uptick in anti-vaccine Facebook posts that discussed vaccine refusal as a civil right (Broniatowski et al., 2020). During the first few months of the COVID-19 pandemic, the 26-minute documentary Plandemic was released online and shared virally on social media, achieving eight million views in the first week. Notably, Frenkel et al. (2020) reported that this documentary was part of a broader disinformation campaign that promoted multiple conspiracy theories and unsubstantiated claims, including that the pandemic was planned by global elites as a form of population control, that vaccines are harmful, and that wearing masks increases the risk of contracting COVID-19.
Celebrities, too, serve as an influential source of misinformation about science, due to their large followings and the strong emotional connections they foster with fans. For example, celebrities contributed to the amplification of misinformation about vaccines being the cause of autism (Mnookin, 2012) and are a pervasive source of health misinformation (Caulfield, 2015). Celebrities also contributed to the amplification of COVID-19 misinformation on social media (Brennen et al., 2020).
The scientific community can also be a source of misinformation about science, and this manifests in several forms. First, misinformation from within science can arise as a byproduct of hype (i.e., when media, university public relations (PR) offices, and scientists themselves exaggerate research findings in an attempt to get publicity for science; Caulfield, 2018; Caulfield & Condit, 2012; Weingart, 2017; West & Bergstrom, 2021). Scientists may artificially inflate the novelty of their research and/or the significance of their findings to increase the likelihood of getting their research published and/or funded, particularly given institutional incentives to publish and the competitive nature of the academic environment (Millar et al., 2019, 2020). Tellingly, the use of hyperbolic language in grant applications to the National Institutes of Health increased between 1985 and 2020 (Millar et al., 2022). In the worst cases, such pressures to publish may encourage scientific fraud (West & Bergstrom, 2021).
University press offices, seeking media coverage of researchers’ work, also play a role in creating hype by exaggerating or over-simplifying research claims, omitting important details, or overstating causal influence in their press releases (Bratton et al., 2019; Sumner et al., 2014; Woloshin et al., 2009a). Science reporters, who often rely on these press releases, may broadcast these hyperbolic claims to wide audiences or embellish them further to gain views or readers. Studies have found a strong association between exaggeration in press releases and exaggeration in health news stories, suggesting that improving the accuracy of press releases could be a way to reduce inaccuracies in science and health news that reaches broad audiences (Bratton et al., 2019; Sumner et al., 2014).
Second, misinformation from within science also results from biases or distortions in the analysis, interpretation, presentation, and/or publishing of scientific data (West & Bergstrom, 2021). Industry and political actors often inflate Type II error, or false negatives, by inappropriately emphasizing scientific uncertainty when the weight of evidence overwhelmingly supports a specific interpretation. Individual scientists or teams of scientists, by contrast, have historically engaged in “p-hacking”: building theory around statistically significant findings that were discovered only after examining the data, rather than accurately reporting them as post hoc observations. P-hacking also includes testing different covariates to see which yield significant findings, or reporting only those effects on dependent variables that turned out to be significant (Head et al., 2015). Although p-hacking was once common practice among scientists, concerns about replicability shined a light on its role in driving false-positive results, and it is now recognized as a questionable research practice that increases the likelihood of Type I error, or false positives. Steps can be taken to avoid it, for example by pre-registering an analysis plan (see the illustrative sketch following this paragraph). Publication bias is another example of Type I error inflation; it occurs when researchers are unwilling to publish papers that show null results, or when journals refuse to publish such papers (West & Bergstrom, 2021). This, too, can lead to misleading conclusions about the state of scientific evidence. Shortcomings in scientists’ presentation of numeric data, such as misleading or sloppy data visualizations or unfair comparisons, can also be a source of misinformation (West & Bergstrom, 2021).
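To make this mechanism concrete, the short simulation below offers a minimal sketch of how one common form of p-hacking, testing many outcome variables and reporting whichever reaches significance, inflates the rate of false positives well above the nominal 5 percent, whereas a single pre-registered test does not. The example is purely illustrative and is not drawn from the studies cited above; the sample sizes, number of outcomes, and variable names are all hypothetical choices.

```python
# Illustrative simulation (hypothetical parameters): comparing the false-positive rate
# of a single pre-registered test with a "p-hacked" analysis that tries many outcomes
# and reports any that reaches p < .05. No true effect exists in the simulated data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(seed=42)

N_SIMULATIONS = 5_000   # number of simulated studies
N_PER_GROUP = 30        # participants per group
N_OUTCOMES = 10         # outcome variables available to the hypothetical researcher
ALPHA = 0.05            # nominal significance threshold

false_pos_preregistered = 0   # analysis plan fixed in advance: outcome #1 only
false_pos_hacked = 0          # report a "finding" if ANY outcome is significant

for _ in range(N_SIMULATIONS):
    # The null hypothesis is true: both groups come from the same distribution
    # for every outcome, so any "significant" result is a false positive.
    group_a = rng.normal(size=(N_PER_GROUP, N_OUTCOMES))
    group_b = rng.normal(size=(N_PER_GROUP, N_OUTCOMES))
    p_values = stats.ttest_ind(group_a, group_b, axis=0).pvalue

    if p_values[0] < ALPHA:          # pre-registered single test
        false_pos_preregistered += 1
    if (p_values < ALPHA).any():     # p-hacked: pick whichever outcome "worked"
        false_pos_hacked += 1

print(f"Pre-registered single outcome: {false_pos_preregistered / N_SIMULATIONS:.1%}")
print(f"Any-significant-outcome (p-hacked): {false_pos_hacked / N_SIMULATIONS:.1%}")
# Expected result: roughly 5% false positives for the pre-registered test versus
# roughly 40% (about 1 - 0.95**10) when ten outcomes are tried and the best is reported.
```

Pre-registration holds the error rate near its nominal level because the analysis cannot expand after the data are seen; the inflated rate in the second condition is exactly the Type I error inflation described above.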
A third way that misinformation about science can emerge from within the scientific community is through preprints. This is especially true in moments of rapidly emerging, high-uncertainty socio-scientific issues or challenges. For example, preprints played an outsized role during the initial phase of the COVID-19 pandemic, when the rapid dissemination of scientific data was crucial for responding to a quickly unfolding disease outbreak (Fraser et al., 2021). News media coverage of preprints likewise surged during the pandemic; yet journalists inconsistently provided context that described preprint research as preliminary and unvetted (Fleerackers et al., 2022; van Schalkwyk & Dudek, 2022). Although preprints can provide timely access to important research findings, they also offer a breeding ground for misinformation by allowing early and unvetted scientific results to quickly spread through online platforms. For example, one of the most widely shared preprints on social media during the early pandemic was a study that erroneously claimed strong similarities between the new coronavirus and HIV (see Fraser et al., 2021), which fueled conspiracy theories that the virus had been intentionally designed (Gerts et al., 2021). The study was quickly withdrawn by the authors following criticisms from the scientific community that the results were based on a false positive (Brierley, 2021). Although some in the scientific community applauded the swift withdrawal of this paper as a win for science, particularly compared to the lengthy retraction process at many peer-reviewed journals, others lamented the fact that it circulated in the first place (Oransky & Marcus, 2020). Research articles posted on preprint archives before they have been vetted through a peer-review process, as well as the “pay-to-play” model used by predatory publishers—which allows researchers to pay to have their research published without undergoing rigorous peer-review—can both result in low quality and, at worst, deceptive and misleading scientific information circulating in the public domain (West & Bergstrom, 2021).
At times, some scientists may choose to engage in public debates about science on social media in order to correct misinformation about science that has appeared in media or political discourse. Still, in some cases, scientists’ engagement on social media “can change the way they use and represent evidence,” such that they prioritize persuasive goals over the careful communication of reliable, systematic research findings (Brossard
& Scheufele, 2022, p. 614). Even when intended to correct misinformation, the desire to convince skeptical audiences on social media can lead scientists and other actors to rely on anecdotes and single-study results to support their claims, or to speak outside their area of expertise, which can perpetuate misleading information. Therefore, it is important for scientists to carefully consider best practices for science communication in public fora, including social media.
Previous work by Peters et al. (2008) demonstrated that media training is likely to make scientists “more confident” in communicating with the media, such as with science journalists. Additionally, a more recent study showed that scientists who received science communication training were more likely to enjoy engaging the public on science than those who did not (Parrella et al., 2022). Several organizations around the world, such as the American Association for the Advancement of Science (AAAS)1 and the European Science Communication Network (Miller & Fahy, 2009), have developed extensive training modules to encourage scientists to engage the public and increase their confidence in their ability to communicate science to non-experts.
As previously noted in Chapter 3 of this report, scientists are generally well trusted by the public, which makes them important sources of information and makes effective engagement with the public critical. However, not all types of scientists are trusted equally. For example, Americans are more likely to trust university scientists than industry scientists (Krause et al., 2019). Trust in government science agencies, such as the Environmental Protection Agency (EPA), the National Oceanic and Atmospheric Administration (NOAA), the National Aeronautics and Space Administration (NASA), and the CDC, varies across organizations (e.g., NASA is more highly trusted than EPA; Cerda, 2024; Krause et al., 2019) and can be context dependent (e.g., trust in the CDC declined during the COVID-19 pandemic; Hamilton & Safford, 2021; Latkin et al., 2020). Given a large body of research that points to the influence of source credibility—a function of beliefs about a source’s trustworthiness and expertise—on acceptance of science information (Hocevar et al., 2017), misinformation about science from more trusted sources is apt to be more consequential.
Finally, medical professionals can be a source of misinformation due to inadequate knowledge or false beliefs. Doctors are also highly trusted by the public (Pew Research Center, 2019a; also see Chapter 3), and yet healthcare professionals can be misinformed about science and subsequently communicate that misinformation to their patients or incorporate it into their care. For example, while some published medical research is not reliable (as discussed above), most healthcare professionals are not aware of this and may lack the skills to effectively parse the medical literature (Ioannidis et al., 2017). Doctors may also hold false beliefs due to racial and cultural biases. For example, some medical professionals falsely believe that Black and White people are biologically different and thus have different pain thresholds, which influences their treatment recommendations (Hoffman et al., 2016). Some medical textbooks have even propagated misinformation related to the racialization of pain and disease (Deyrup & Graves, 2022; Li et al., 2022a; Sheets et al., 2011), suggesting how deeply ingrained such misinformation is in the medical community. As noted earlier, doctors have also been the target and conduit for disinformation from some industries. In the 1950s, the tobacco industry disseminated literature to doctors assuring them that cigarettes were not a cause for concern (Oreskes & Conway, 2010b). In marketing opioids in the 1990s, the pharmaceutical industry convinced primary-care physicians that chronic pain was under-treated in society and that opioids were a safe, non-addictive way to treat that pain; as mentioned above, an aggressive pharmaceutical sales force recruited physicians not only to prescribe opioids themselves but also to influence their medical peers to do the same (Meier, 2018; Michaels, 2020).
___________________
1 AAAS Communicating Science Workshops provide scientists and engineers with training and support to more effectively engage with the public through modules based on the latest science communication research and public engagement best practices. See https://www.aaas.org/programs/communicating-science-workshops
Ordinary citizens can also be a source of misinformation about science. While there is some research indicating that only a small minority of internet users share content from untrustworthy sources on social media (e.g., Grinberg et al., 2019; Guess et al., 2019; Nelson & Taneja, 2018), this evidence is limited by the difficulty of measuring misinformation and credibility, as discussed earlier in this chapter. Studies conducted to better understand information sharing among social media users suggest that individuals who share misinformation are more likely to be older, politically conservative, male, and less educated (Guess et al., 2019, 2021; Lazer et al., 2020; Morosoli et al., 2022a; Osmundsen et al., 2021). One study found that during the first three months of the COVID-19 pandemic, the majority of COVID-19-related misinformation on social media came from ordinary people; however, misinformation from elites—including politicians, celebrities, and other prominent figures—accounted for more social engagement (Brennen et al., 2020). Thus, while ordinary citizens share misinformation about science on social media, misinformation from elites is more likely to gain traction, possibly due to elites’ relatively large followings. Other research likewise shows that low-credibility social media content about COVID-19 is attributable to a
few influential actors, or “superspreaders,” who tend to be high-status, verified accounts (Yang et al., 2021). Some reasons that individuals might share misinformation about science, either intentionally or unintentionally, are discussed in detail in Chapter 5.
Misinformation about science is produced from a wide range of sources, including corporations, governments, the news media, partisan news outlets, a variety of “alternative” health websites and social media accounts, popular culture, the scientific and medical community, and highly motivated individuals. However, current limitations related to measurement and data access impede a comprehensive assessment of the prevalence of misinformation about science across different levels and sectors of society. In particular, more research is still needed on the prevalence of misinformation about science in traditional media contexts (e.g., TV, radio, print), on closed, private messaging platforms (e.g., WhatsApp, KakaoTalk), and across all major online platforms.
Conclusion 4-1: Misinformation about science is widely understood to originate from several different sources. These include, but are not limited to, corporations, governments, the news media, partisan news outlets, “alternative” health websites and social media accounts, popular culture, the scientific and medical community, and highly motivated individuals.
Conclusion 4-2: Not all misinformation about science is equal in its influence. Rather, misinformation about science has greater potential for influence when it originates from authoritative sources; is amplified by powerful actors; reaches large audiences; is targeted to specific populations; or is produced in a deliberate, customized, and organized fashion (e.g., tobacco industry campaigns to cast doubt on the health risks of smoking).
Conclusion 4-3: Journalists, editors, writers, and media/news organizations covering science, medical, and health issues (regardless of their assigned beat or specialty areas) serve as critical mediators between producers of scientific knowledge and consumers of science information. Yet, financial challenges in the past two decades have reduced newsroom capacity to report on science, particularly at local levels.
Conclusion 4-4: Science reporting for the general public may be particularly prone to the unintentional spread of misinformation about science. Several factors can influence this, including journalistic norms (e.g., giving equal weight to both sides of a scientific debate even when the scientific evidence overwhelmingly points in one direction), informational and ideological biases, over-reliance on public relations and other information subsidies (e.g., university press releases), exaggerations and omissions of important details from the original science articles, and insufficient scientific expertise among journalists, particularly at under-resourced news organizations.