DAVID L. EATON, BERNARD D. GOLDSTEIN, AND MARY SUE HENIFIN
David L. Eaton, Ph.D., is Professor Emeritus of Environmental and Occupational Health Sciences, and Former Dean and Vice Provost of the Graduate School at the University of Washington.
Bernard D. Goldstein, M.D., is Professor Emeritus of Environmental and Occupational Health, and Former Dean, School of Public Health at the University of Pittsburgh.
Mary Sue Henifin, J.D., M.P.H., is Of Counsel at Buchanan Ingersoll & Rooney, P.C.
CONTENTS
Fundamental Principles of Toxicology
The Use of Toxicological Information in Risk Assessment
The Role of Toxicology in Establishing Causation
Toxicology and Exposure Science
Use of Biomarkers in Toxicology Exposure Assessment
In Vivo Research: Use of Live Animals in Toxicity Testing and Safety Assessment
Determining Dose–Response Relationships
Acute Toxicity Testing and the Lethal Dose 50% (LD50)
No Observed Adverse Effect Level (NOAEL)
Linearized Non-Threshold (LNT) Model and Determination of Cancer Risk
Maximum Tolerated Dose (MTD) and Chronic Toxicity Tests
New Approach Methodologies (NAMs)
Extrapolation from Animal (In Vivo) and Cell (In Vitro) Research to Humans
Toxicological Processes and Target Organ Toxicity
Demonstrating an Association Between Exposure and Risk of Disease
What Is Known About the Chemical Structure of the Compound and Its Relationship to Toxicity?
Is the Association Between Exposure and Disease Biologically Plausible?
Specific Causal Association Between an Individual’s Exposure and Onset of Disease
Were Other Factors Present That Can Affect the Distribution of the Compound Within the Body?
What Is Known About How Metabolism in the Human Body Alters the Toxic Effects of the Compound?
What Excretory Route Does the Compound Take, and How Does This Affect Its Toxicity?
What Is the Likelihood That the Disease Would Have Occurred Without the Specific Exposure?
Are the Complaints Specific or Nonspecific?
Do Laboratory Tests Indicate Exposure to the Compound?
What Other Causes Could Lead to the Given Complaint?
Is There Evidence of Interaction with Other Chemicals?
Has the Expert Considered Data That Contradict Their Opinion?
Are They Board Certified in Toxicology or in a Field Such as Occupational Medicine?
What Other Criteria Does the Proposed Expert Meet?
General References on Toxicology and Chemical Risk Assessment
FIGURES
2. Idealized comparison of a linearized non-threshold and threshold dose–response relationship
TABLE
1. Sample of Selected Toxicological Endpoints and Examples of Agents of Concern in Humans
The discipline of toxicology is concerned primarily with identifying and understanding the adverse effects of external chemical and physical agents on biological systems, including the prevention and amelioration of such adverse effects.1 The field of risk assessment, when applied to chemicals, uses the basic principles of toxicology to help make informed judgments about the proportion (e.g., 1 in 100,000) of individuals in an exposed population likely to develop the toxic endpoint of concern. Such judgments are an essential component of regulatory toxicology in establishing health-protective guidance and regulation for exposures to chemicals in the workplace and to chemical contaminants in air, drinking water, food, cosmetics, and pharmaceuticals.2 In the context of toxic tort litigation, toxicology expert opinion focuses on the likelihood that an alleged exposure of an individual, or in some instances a group of individuals, was more likely than not a significant cause of, or contributor to, the specific disease in question. Although it is an age-old science, toxicology has become a discipline distinct from pharmacology, biochemistry, cell biology, and related fields only in the past sixty years.
There are three central tenets of toxicology. First, “the dose makes the poison”: This implies that all chemical agents are intrinsically hazardous; whether they cause harm is only a question of dose.3 Even water, if consumed in large quantities, can be toxic. Second, each chemical or physical agent tends to produce a specific pattern of biological effects that can be used to establish disease causation.4 Third, the toxic responses in laboratory animals are useful predictors of toxic responses in humans, with appropriate qualifications based on an understanding of how the chemical is metabolized in humans and experimental animals, and how the chemical causes its toxicity (so-called mechanism or mode of action). Each of these tenets, and their exceptions, is discussed in greater detail in this reference guide.
1. Casarett & Doull’s Toxicology: The Basic Science of Poisons (Curtis D. Klaassen ed., 9th ed. 2019) [hereinafter Casarett & Doull’s].
2. Elaine M. Faustman, Risk Assessment, in Casarett & Doull’s, supra note 1, at ch. 4.
3. For a discussion of more modern formulations of this principle, which was articulated by Paracelsus in the sixteenth century, see David L. Eaton, Scientific Judgment and Toxic Torts—A Primer in Toxicology for Judges and Lawyers, 12 J.L. & Pol’y 5, 15 (2003); Lauren M. Aleksunes & David L. Eaton, Principles of Toxicology, in Casarett & Doull’s, supra note 1, at ch. 2. A short review of the field of toxicology can be found in Curtis D. Klaassen, Principles of Toxicology and Treatment of Poisoning, in Goodman and Gilman’s The Pharmacological Basis of Therapeutics 1739 (11th ed. 2008).
4. Some substances, such as central nervous system toxicants, can produce complex and nonspecific symptoms, such as headaches, nausea, and fatigue.
The science of toxicology attempts to determine at what doses foreign agents produce their effects. The foreign agents classically of interest to toxicologists include all chemicals (including foods and pharmaceuticals) and physical agents in the form of radiation, but not living organisms that cause infectious diseases.5 However, the poisons produced by living organisms (properly called toxins), such as snake venom or botulinum toxin, are generally considered under the framework of toxicology. The discipline of toxicology provides scientific information relevant to the following questions:
This reference guide focuses on toxicology concepts used in regulatory, administrative, and toxic tort cases, with an emphasis on human health effects, rather than ecological effects. A set of model questions for evaluating the admissibility and strength of an expert’s opinion is also included. Following each question is an explanation of the type of toxicological data or information that may be offered in response to the question, as well as a discussion of its significance. The relationship between toxicology, epidemiology, and exposure assessment is also addressed, with cross references to information included in the Reference Guide on Epidemiology and the Reference Guide on Exposure Science and Exposure Assessment.
Toxicological expert opinion also relies on formal safety and risk assessments. Safety assessment is the area of toxicology relating to the testing of chemicals and drugs for toxicity. It is a relatively formal approach in which the potential for toxicity of a chemical is tested in vivo or in vitro using standardized techniques. The protocols for such studies usually are developed through scientific consensus and are subject to oversight by governmental regulators or other watchdog groups.
After a number of bad experiences, including outright fraud, government agencies have imposed codes on laboratories involved in safety assessment,
5. Forensic toxicology, a subset of toxicology generally concerned with criminal matters, is not addressed in this reference guide, because it is a highly specialized field with its own literature and methodologies that do not relate directly to toxic tort or regulatory issues.
6. In standard risk assessment terminology, hazard—the ability of a chemical to cause (a) specific type(s) of toxic effect(s)—is an intrinsic property of a chemical or physical agent, while risk—the likelihood/probability that the effect will occur in an individual or population under a given set of circumstances—is dependent both upon hazard and on the extent of exposure (both the concentration or amount and the duration/frequency of exposure).
including industrial, contract, and in-house laboratories.7 Known as good laboratory practices (GLPs), these codes govern many aspects of laboratory standards, including such details as the number of animals per cage, dose and chemical verification, and the handling of tissue specimens. GLPs are remarkably similar across agencies, but the tests called for differ depending on the mission. For example, there are major differences between the Food and Drug Administration’s (FDA) and the Environmental Protection Agency’s (EPA) required procedures for testing drugs and environmental chemicals.8 The FDA requires and specifies both efficacy and safety testing of drugs in humans and animals. Carefully controlled clinical trials using doses within the expected therapeutic range are required for premarket testing of drugs, because exposures to prescription drugs are carefully controlled and should not exceed specified ranges or uses.
Although new chemicals manufactured for use as food additives, pharmaceuticals, pesticides, and cosmetics have required substantial toxicity testing prior to regulatory approval since at least the 1950s, the vast majority of industrial chemicals had little, if any, premarket toxicity testing until the mid-1970s. In 1976, Congress enacted the Toxic Substances Control Act (TSCA), which required manufacturers of new chemical entities (chemicals not on an established list of the then approximately 60,000 chemicals in commerce) to submit premarket notifications for EPA evaluation and approval prior to the manufacture of significant quantities of the chemical. Although well intended, the legislation “lacked teeth” and over the years was widely viewed as ineffectual. In 2016, the TSCA was extensively revised by the Frank R. Lautenberg Chemical Safety for the 21st Century Act.9 The revised legislation (the Lautenberg Act) substantially increases the expectations for toxicity evaluation of existing chemicals, with a focus on those that the EPA believes pose the highest risk to human health and the environment, and gives the EPA more oversight prior to approval of the manufacture of new chemicals. The EPA now provides extensive guidance to industry on the types of information and data
7. A dramatic case of fraud involving a pharmaceutical company’s reporting of toxicology research to support the safety of a consumer product is described in Schueneman v. Arena Pharmaceuticals, Inc., 840 F.3d 698 (9th Cir. 2016). The defendants made positive public statements to investors about the safety of the company’s weight-loss drug, specifically indicating that the drug was not carcinogenic and referring to supporting “animal studies.” Id. at 700. However, it was discovered that the defendants were conducting a study in which the drug was given to lab rats, and the drug was in fact causing cancer in the rats. Id. at 701. The Ninth Circuit found that the plaintiff, one of the defendant’s investors, successfully showed that the defendants intentionally and deliberately did not disclose the negative findings of the rat study to investors. Id. at 707–08.
8. Comparison of FDA, EPA, OECD GLP (Apr. 7, 2015), https://www.fda.gov/inspections-compliance-enforcement-and-criminal-investigations/fda-bioresearch-monitoring-information/articles.
9. See Frank R. Lautenberg Chemical Safety for the 21st Century Act, Pub. L. No. 114-182, 130 Stat. 448 (2016), https://perma.cc/B4T2-9PV7.
required under the TSCA Premanufacture Notification guidelines.10 While the law and regulatory guidance often do not explicitly specify which of the numerous available toxicity testing procedures must be performed, they usually do provide a framework. Such frameworks, which may vary between different regulations, enable both industry and the EPA to assess the potential harm a new chemical may pose to people or the environment.11 Much of the evaluation relies upon chemical structure and comparison with other similar chemicals for which substantial toxicological and environmental data may be available.
In a similar manner, the European Union Regulation on Registration, Evaluation, Authorization and Restriction of Chemicals (REACH) requires extensive testing of both new chemicals and chemicals already in commerce.12 Moreover, because exposures are less predictable, doses usually are given in a wider range in animal tests for nonpharmaceutical agents.13
10. See EPA, Points to Consider When Preparing TSCA New Chemical Notifications (June 2018), https://perma.cc/H9G5-7SAT.
11. The 2016 revisions of the TSCA set forth five possible EPA determinations on new chemical notices:
The lack of toxicity data for new chemicals (or new uses of existing chemicals) introduced into commerce has led the EPA to propose methods of evaluation using (1) in vitro toxicity pathway testing and (2) computational toxicology approaches using structural analogs for which toxicity information is available, followed by (3) whole-animal testing where warranted. See EPA’s Review Process for New Chemicals, https://perma.cc/U65D-WANT.
12. REACH is a regulation of the European Union, adopted in 2007 to “improve the protection of human health and the environment from the risks that can be posed by chemicals, while enhancing the competitiveness of the EU chemicals industry. It also promotes alternative methods for the hazard assessment of substances in order to reduce the number of tests on animals.” See European Chemicals Agency, Understanding REACH, https://perma.cc/AC3M-UTYF.
The legislation was updated in 2022. See European Chemicals Agency, Upcoming Changes to REACH Information Requirements, https://perma.cc/A6WV-L4GZ.
13. The development of a new drug inherently requires searching for an agent that at useful doses has a biological effect (e.g., decreasing blood pressure), whereas those developing a new chemical for consumer use (e.g., a house paint) hope that at usual doses no biological effects will occur. There are other compounds, such as pesticides and antibacterial agents, for which a biological effect is desired, but it is intended that at usual doses humans will not be affected. These different expectations
A major impetus for both REACH and the Lautenberg Act was the recognition that less than 1% of the 60,000 to 75,000 chemicals in commerce had been subjected to a full safety assessment, and there were significant toxicological data on only 10% to 20% of them. Under the current U.S. and international approaches to testing chemicals with high production volume, the extent of toxicological information is expanding rapidly.15
Risk assessment is an approach increasingly used by regulatory agencies to estimate and compare the risks of hazardous chemicals and to assign priority for avoiding their adverse effects. The National Academy of Sciences defines four components of risk assessment: hazard identification, dose–response estimation, exposure assessment, and risk characterization.16
Risk assessment, as practiced by government agencies involved in regulating exposure to environmental chemicals, is highly dependent upon the science of toxicology and on the information derived from toxicological studies. The EPA, FDA, Occupational Safety and Health Administration (OSHA), Consumer Product Safety Commission, and other international (e.g., the World Trade Organization), national, and state agencies use risk assessment as a means to protect workers or the
are part of the rationale for the differences in testing information available for assessing toxicological effects. Under FDA rules, approval of a new drug usually will require extensive animal and human testing, including a randomized double-blind clinical trial for efficacy and toxicity. In contrast, under the TSCA, the only requirement before a new chemical can be marketed is that a premanufacturing notice be filed with the EPA, including any toxicity data in the company’s possession.
14. See EPA, Regulatory Determinations made under Section 5 of the Toxic Substances Control Act (TSCA), https://perma.cc/6FNB-R74U.
15. For a comparison of European REACH legislation and the U.S. EPA’s 2016 TSCA provisions, see Ágnes Botos, John D. Graham & Zoltán Illés, Industrial Chemical Regulation in the European Union and the United States: a Comparison of REACH and the Amended TSCA, 22 J. Risk Rsch. 1187, https://doi.org/10.1080/13669877.2018.1454495. For an overview of current trends in toxicity testing, see Ida Fischer, Catherine Milton & Heather Wallace, Toxicity Testing Is Evolving!, 9 Toxicology Rsch. 67 (2020), https://doi.org/10.1093/toxres/tfaa011; see also Daniel Krewski et al., Toxicity Testing in the 21st Century: Progress in the Past Decade and Future Perspectives, 94 Archives Toxicology 1 (2020), https://doi.org/10.1007/s00204-019-02613-4.
16. Nat’l Rsch. Council, Science and Decisions: Advancing Risk Assessment (2009). See also In re Johnson & Johnson Talcum Powder Prods. Mktg., Sales Pracs. & Prods. Litig., 509 F. Supp. 3d 116 (D.N.J. 2020) (expert opinion that talcum powder may cause ovarian cancer based on risk assessment and analysis of the Bradford Hill considerations is admissible). Recently, a National Academy of Sciences panel has discussed potential approaches using systematic review to improve the EPA’s risk paradigm. See Nat’l Acads. of Scis., Eng’g & Med., The Use of Systematic Review in EPA’s Toxic Substances Control Act Risk Evaluations (2021).
public from adverse effects.17 Acceptable risk levels—for example, 1 in 1,000 to 1 in 1,000,000—are usually well below what can be measured through epidemiological study. Inevitably, this means that risk assessment is usually based solely on toxicological data—or, if epidemiological findings of an adverse effect are observed, then toxicological reasoning must be used to extrapolate to the appropriate lower exposure or dose standard aimed at protecting the public.
The four-part paradigm of risk assessment is heavily based on toxicological precepts. The first step, hazard identification, which is roughly equivalent to the legal term general causation, reflects the toxicological rule of specificity of effects. The hazard identification process often uses weight-of-evidence approaches in which the toxicological, mechanistic, and epidemiological data are rigorously assessed to form a judgment regarding the likelihood that the agent produces a specific effect. The second step, dose–response assessment, is based upon “the dose makes the poison” and is fundamental to specific causation; establishing the appropriate dose–response curve, threshold, or “one-hit” model is an exercise in toxicological reasoning. Even for those chemicals known to be carcinogens, a threshold model is appropriate if the toxicological mechanism of action can be demonstrated to depend upon a threshold. The third step, exposure assessment, requires knowledge of specific toxicological dynamics; for example, the impact on the lung of an air pollutant varies by factors such as: inhalation rate per unit body mass, which is affected by exercise and by age; the size of a particle or the solubility of a gas, either of which will affect the depth of penetration into the more sensitive parts of the airways; the competence of the usual airway defense mechanisms, such as mucus flow and macrophage function; and the ability of the lung to metabolize the agent.18 The fourth and final step is the characterization of risk.19
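In screening-level practice, the dose–response and risk characterization steps often reduce to simple arithmetic. The sketch below, offered for illustration only, computes a linearized non-threshold cancer risk estimate and a threshold-style hazard quotient; the function names and every numeric value are hypothetical assumptions, not drawn from any actual regulatory assessment.

```python
# Illustrative sketch of screening-level risk characterization arithmetic.
# All numeric values below are hypothetical, for illustration only.

def lifetime_cancer_risk(chronic_daily_intake, slope_factor):
    """Linearized non-threshold estimate:
    risk = intake (mg/kg-day) x cancer slope factor ((mg/kg-day)^-1)."""
    return chronic_daily_intake * slope_factor

def hazard_quotient(chronic_daily_intake, reference_dose):
    """Threshold (noncancer) screening: HQ = intake / reference dose (RfD).
    An HQ above 1 flags potential concern."""
    return chronic_daily_intake / reference_dose

# Hypothetical exposure: 0.001 mg/kg-day averaged over a lifetime.
cdi = 0.001
risk = lifetime_cancer_risk(cdi, slope_factor=0.05)   # about 5 in 100,000
hq = hazard_quotient(cdi, reference_dose=0.01)        # 0.1, below level of concern
```

A computed lifetime risk of roughly 5 in 100,000 would then be compared against an agency's acceptable-risk range, such as the 1-in-1,000 to 1-in-1,000,000 range noted above.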
Risk assessment is not an exact science. It should be viewed as a useful framework to organize and synthesize information and to provide estimates on which policy making can be based. In recent years, codification of the methodology used to assess risk has increased confidence that the process can be reasonably free of bias; however, significant controversy remains, particularly when actual data are limited and generally conservative default assumptions are used.
17. Pharmaceuticals intended for human use are an exception in that a tradeoff between desired and adverse effects may be acceptable, and human data are available prior to, and as a result of, the marketing of the agent.
18. Some toxic agents pass through the lung without producing any direct effects on this organ. For example, inhaled carbon monoxide produces its toxicity in essence by being treated by the body as if it were oxygen. Carbon monoxide readily combines with the oxygen-binding site of hemoglobin, the molecule in red blood cells that is responsible for transporting oxygen from the lung to the tissues. In this way, the effective transport and tissue utilization of oxygen are blocked.
19. See, e.g., Nat. Res. Defense Council v. U.S. EPA, 38 F.4th 34 (9th Cir. 2022) (according to the EPA’s Office of Research and Development, the EPA did not follow its 2005 Guidelines for Carcinogen Risk Assessment, which lay out four steps for conducting risk assessments of chemicals’ carcinogenic potential).
Although risk assessment information about a chemical, and the research data on which it is based, may be useful in setting reasonable boundaries regarding the likelihood of causation, the impetus for the development of risk assessment has been the regulatory process, which has different goals.20 Because of their use of appropriately prudent assumptions in areas of uncertainty and their use of default assumptions when there are limited data, as well as legal requirements related to public health considerations, risk assessments often intentionally encompass the upper range of possible risks.21 An additional issue, particularly related to cancer risk, is that standards based on risk assessment often are set to avoid the risk caused by lifetime exposure at this upper-range estimate of risk. Exposure to levels exceeding this standard for a small fraction of a lifetime does not mean that the overall lifetime risk of regulatory concern has been exceeded.22 Given these issues, courts have taken different approaches to how information derived from risk assessments may be relied upon by experts.23
20. See Nat’l Rsch. Council & Comm. on Improving Risk Analysis Approaches Used by the U.S. EPA, Toward a Unified Approach to Dose–Response Assessment, in Science and Decisions: Advancing Risk Assessment 127 (2009), https://perma.cc/47RG-NKMG. See also Yates v. Ford Motor Co., No. 5:12-CV-752-FL, 2015 WL 2189774, at *17, *27 (E.D.N.C. May 11, 2015) (holding that an EPA pamphlet that includes regulatory risk assessment can be relied on by the plaintiff’s experts only to “support the reliability of studies the experts themselves rely upon”).
21. It is also claimed that standard risk assessment will underestimate true risks, particularly for sensitive populations exposed to multiple stressors, an issue of particular pertinence to discussions of environmental justice. Comm. on Env’t Just., Institute of Med., Toward Environmental Justice: Research, Education, and Health Policy Needs (1999), https://doi.org/10.17226/6034. In 2003, the EPA developed formal guidance for cumulative risk assessment, wherein it defined aggregate exposure as the “combined exposure of an individual (or defined population) to a specific agent or stressor via relevant routes, pathways, and sources.” EPA, Framework for Cumulative Risk Assessment (2003), https://perma.cc/D7CT-Z2RT. In 2022, the EPA released a draft update to this document, with an initial emphasis on identifying research needs. See EPA, Cumulative Impacts: Recommendations for ORD Research (2022), https://perma.cc/B6SL-EHUT.
22. A public health standard to protect against the lifetime risk of inhaling a known carcinogen will usually be based on lifetime exposure calculations of twenty-four hours a day, every day for seventy years. This is more than 25,000 days and 600,000 hours. Exceeding this standard for a few hours would presumably have little impact on cancer risk. In contrast, for a short-term standard set to avoid a threshold-based risk, exceeding the standard for this short time may make a major difference—for example, an asthma attack caused by being outdoors on a day that the ozone standard is exceeded. Regulatory statutes sometimes limit consideration of multiple pathways of exposure to the same chemical, potentially underestimating overall exposure and risk. See, e.g., Swati D. G. Rayasam et al., Toxic Substances Control Act (TSCA) Implementation: How the Amended Law Has Failed to Protect Vulnerable Populations from Toxic Chemicals in the United States, 56 Env’t Sci. & Tech. 11969 (2022), https://doi.org/10.1021/acs.est.2c02079. However, the interpretation of how such evaluations under the revised TSCA law are done by the EPA has been criticized. See Anthony C. Tweedale, Correspondence on “Toxic Substances Control Act (TSCA) Implementation: How the Amended Law has Failed to Protect Vulnerable Populations from Toxic Chemicals in the United States,” 56 Env’t Sci. & Tech. 16533 (2022), https://doi.org/10.1021/acs.est.2c06032.
23. See, e.g., Hardeman v. Monsanto Co., 997 F.3d 941 (9th Cir. 2021) (discussion of appropriate use in an expert opinion of the IARC’s classification of glyphosate as a “probable carcinogen”).
Toxicological studies, by themselves, rarely offer direct evidence that a disease in any one individual was caused by a chemical exposure.24 However, toxicology can provide scientific information regarding the increased risk of contracting a disease at any given exposure or dose and can help rule out other risk factors for the disease (see section titled “Toxicology and Epidemiology” below for discussion of disease causation). Toxicological evidence also contributes to the weight of evidence supporting causal inferences by explaining how a chemical causes a specific disease, that is, by describing the metabolic, cellular, and other physiological effects of exposure. For certain chemicals that create sufficiently long-lasting changes to molecular structures suitable for testing, toxicology evidence may be offered to establish chemical-specific “fingerprints” that can strengthen the causal inference between a specific chemical exposure and the presence of a specific disease.
The interface of toxicological science with legal issues can be complex, in part reflecting the inherent challenges of bringing science into a courtroom, but also because of issues pertinent to toxicology.
Complexity in toxicology is derived primarily from three factors. The first is that chemicals often change within the body as they go through various routes to eventual elimination.25 Thus absorption, distribution, metabolism, and excretion are central to understanding the toxicology of an agent.
The second is that human sensitivity to chemical and physical agents can vary greatly among individuals, often as a result of differences in absorption, distribution, metabolism, or excretion. Although toxic responses from a given chemical sometimes occur in multiple different organs in the body (e.g., liver, kidney, brain), toxic effects often occur selectively in specific tissues, which are then referred to as target organs. Both the disposition of the chemical (e.g., metabolism and excretion) and target organ sensitivity can be affected by a combination of genetic and environmental factors, including factors as mundane as drinking certain fruit juices.26
24. There are exceptions—for example, when measurements of levels in the blood or other body constituents of the potentially offending agent are at a high enough level to be consistent with reasonably specific health impacts, such as in carbon monoxide poisoning and lead poisoning.
25. Direct-acting toxic agents are those whose toxicity is due to the parent chemical entering the body. A change in chemical structure through metabolism usually results in detoxification. Indirect-acting chemicals are those that must first be metabolized to a harmful intermediate for toxicity to occur. For an overview of metabolism in toxicology, see David L. Eaton & Julia Cui, Biotransformation, in Patty’s Toxicology (James Klaunig ed., 7th ed. 2023), https://doi.org/10.1002/0471125474.tox122.
26. Grapefruit juices and certain other fruit juices at sufficient dose have the potential to inhibit the metabolism of a wide variety of drugs and chemicals. See Michael J. Hanley et al., The Effect of Grapefruit Juice on Drug Disposition, 7 Expert Op. Drug Metabolism & Toxicology 267 (2011), https://doi.org/10.1517/17425255.2011.553189; Meng Chen et al., Food-Drug Interactions
The third major source of complexity is the need for extrapolation—either across species, because many toxicological data are obtained from studies in laboratory animals, or across doses, because human toxicological and epidemiological data often are limited to specific exposure or dose ranges that may differ from the ranges of interest. All three of these factors are responsible for much of the complexity in utilizing toxicology for tort or regulatory judicial decisions and are described in more detail below.
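Cross-species extrapolation is commonly handled with allometric (body-size) scaling defaults. The sketch below illustrates the general form of such a conversion; the function name, the example body weights, and the default exponent are assumptions for illustration. As context, EPA cancer guidance has used body-weight-to-the-3/4-power scaling, while FDA guidance for converting animal doses to human-equivalent starting doses uses body-surface-area (body-weight-to-the-2/3-power) scaling.

```python
# Hedged sketch of cross-species (allometric) dose scaling.
# Exponent and body weights are illustrative assumptions, not prescriptions.

def human_equivalent_dose(animal_dose_mg_per_kg, animal_bw_kg,
                          human_bw_kg=60.0, allometric_exponent=0.75):
    """Scale an mg/kg animal dose to an mg/kg human-equivalent dose.
    With allometric_exponent b, the mg/kg dose scales by (BW_a/BW_h)**(1-b)."""
    return animal_dose_mg_per_kg * (animal_bw_kg / human_bw_kg) ** (1 - allometric_exponent)

# A 10 mg/kg dose in a 0.25 kg rat corresponds to roughly 2.5 mg/kg
# in a 60 kg human under body-weight-to-the-3/4-power scaling.
hed = human_equivalent_dose(10.0, animal_bw_kg=0.25)
```

Note that the choice of exponent matters: the same rat dose scales to a smaller human-equivalent dose under the 2/3-power (surface-area) default, which is one reason regulatory context must be stated when such conversions appear in expert reports.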
The science of toxicology is useful in establishing general causation in toxic tort litigation through demonstrating that specific chemicals are capable of causing specific effects (e.g., specific types of cancers, birth defects, poisoning) and identifying when such effects may be manifested, either acutely or over time. Identifying such qualitative characteristics of a chemical is referred to as hazard evaluation and is often based on results of experimental animal studies. To determine the likelihood that a specific chemical caused a specific toxic effect in an individual person (specific causation in tort litigation) or a population (regulatory/population health guidance) requires careful consideration of the exposure characteristics (how much, when, and how often) that are central to determining the dose–response for the specific toxic endpoint in the individual or population. Such decisions are complex and may also rely on human epidemiological data where they exist.
From the beginning of human history, “Why me?” has probably been the most frequently asked question about disease. Epidemiology is the scientific discipline most closely involved in providing evidence to answer this question. Epidemiology is the study of the incidence and distribution of disease in human populations. Clearly, both epidemiology and toxicology have much to offer in elucidating the causal relationship between chemical exposure and disease.27 These sciences often go hand in hand with assessments of the risks of chemical exposure, without
Precipitated by Fruit Juices Other than Grapefruit Juice: An Update Review, 26 J. Food & Drug Analysis S61 (2018), https://doi.org/10.1016/j.jfda.2018.01.009.
27. See Steve C. Gold et al., Reference Guide on Epidemiology, section titled “General Causation,” in this manual. See also Nat’l Acads. of Scis., Eng’g & Med., Assessing Causality from a Multidisciplinary Evidence Base for National Ambient Air Quality Standards (2022), https://perma.cc/6L9G-NBCN (discussing the strengths and limitations of epidemiological studies in establishing “causation” between exposure to a hazardous chemical (in this case, various air pollutants) and the development of specific diseases). For example, in In re Nexium Esomeprazole, 662 F. App’x 528 (9th Cir. 2016), testimony on general causation was excluded as unreliable where the expert did not adequately explain how he inferred a causal relationship from epidemiological studies that did not reach the same conclusions. However, epidemiological studies are not always necessary. McMunn v. Babcock & Wilcox Power Generation Grp., Inc., No. 2:10CV143, 2014 WL 814878, at *15 (W.D. Pa. Feb. 27, 2014) (citing Glastetter v. Novartis Pharms. Corp., 252 F.3d 986, 992 (8th Cir. 2001); Rider v. Sandoz Pharms. Corp., 295 F.3d 1194, 1198 (11th Cir. 2002)).
artificial distinctions being drawn between them. However, although courts generally rule epidemiological expert opinion admissible, the admissibility of toxicological expert opinion has been more controversial because of uncertainties regarding extrapolation from animal and in vitro data to humans. This has been particularly true in cases in which relevant epidemiological research data exist. Yet the methodological weaknesses of some epidemiological studies, including their inability to accurately measure exposure, the often small numbers of subjects, and issues such as recall bias when individuals with specific diseases are asked to recall their past exposure to chemicals in comparison with a control population, often render these studies difficult to interpret. As detailed in the Reference Guide on Epidemiology, an epidemiological study by itself can only show an association, which may or may not be causal.
The best-known guidelines for evaluating whether a specific cause is responsible for a specific effect are the Bradford Hill considerations.28 Austin Bradford Hill was a British epidemiologist who contributed greatly to studies linking cigarette smoking to lung cancer. As discussed in detail in the Reference Guide on Epidemiology, Hill described nine points of consideration to facilitate the determination of whether an association seen in an epidemiological study is causal. A modified list of the Bradford Hill considerations that inform epidemiologists in making judgments about causation includes:
28. Austin Bradford Hill, The Environment and Disease: Association or Causation?, 108 J. Royal Soc’y Med. 32 (2015) (reprint of 1965 article), https://doi.org/10.1177/0141076814562718. For a discussion of the Bradford Hill considerations, see Steve C. Gold et al., Reference Guide on Epidemiology, section titled “General Causation,” in this manual. Although the points raised by Bradford Hill in his 1965 article are often referred to as “criteria,” the author made it clear that none were absolutes. As discussed in the Reference Guide on Epidemiology in this manual, the term “consideration,” rather than “criteria” or even “guidelines,” is perhaps more appropriate to represent these ideas.
Toxicological evidence can provide insights into almost all of these considerations:
Thus, toxicology may contribute to several of the Bradford Hill considerations, including consistency and, particularly, biological plausibility. Making sense of whether an association is causal is affected by whether there is a likely biological pathway between the exposure and the effect, for which the evidence is often based on animal toxicology studies. However, Hill argued that biological plausibility cannot be demanded, because it depends upon the biological knowledge of the day. The importance of biological plausibility in setting environmental regulation is evident in that it is now routinely included as a chapter or sub-chapter in the EPA analyses on which air pollution and other standards are based.
In recent decades, medical science has been challenged by the need to derive actionable information from the medical literature—a breadth of literature that has increased both in size and in the diversity of medical specialties and subspecialties that are being applied to solve problems related to the diagnosis and treatment of disease. This has led to the development of formal systematic approaches to gathering and evaluating the pertinent epidemiological literature. The National Institutes of Health (NIH) held consensus development conferences from 1977 to 2013 aimed at controversial topics in clinical decision-making, which included
toxicological data where relevant. These were discontinued as no longer necessary because many other organizations had developed similar approaches.29
A more recent example of a systematic approach to evaluating the medical science literature is the Cochrane review.30 Cochrane reviews systematically analyze the existing literature and focus almost exclusively on epidemiological findings, primarily related to the diagnosis and treatment of disease. In part building on the Cochrane review model, systematic reviews have been suggested for toxicology31 and for public health.32
In contrast to epidemiology, because animal and cell studies permit researchers to isolate the effects of exposure to a single chemical or to mixtures, toxicological findings offer unique information concerning dose–response relationships, mechanisms of action, specificity of response, and other information relevant to the assessment of causation—although these can present issues related to extrapolation to humans.33
The gold standard in clinical epidemiology and in the testing of pharmaceutical agents is the randomized double-blind placebo-controlled study, in which the control and intervention groups are usually well matched. Although appropriate and very informative for the testing of pharmaceutical agents, such studies are generally unethical for chemicals used for other purposes. The randomized control design is, in essence, what is used in a classic toxicological study in laboratory animals,
29. For a review of consensus approaches in general and the importance of unbiased selection of members of review committees that reflect the broad disciplines involved, see Bernard D. Goldstein, What the Trump Administration Taught Us About the Vulnerabilities of EPA’s Science-Based Regulatory Processes: Changing the Consensus Processes of Science into the Confrontational Processes of Law, 31 Health Matrix 299 (2021), https://perma.cc/YJ3H-TPPR. For a discussion of general issues regarding medical practice and testimony, see John B. Wong et al., Reference Guide on Medical Testimony, in this manual.
30. Cochrane Library, About Cochrane Reviews, https://perma.cc/UXR9-4HPL.
31. Lena Smirnova et al., 3S—Systematic, Systemic, and Systems Biology and Toxicology, 35 ALTEX 139 (2018), https://doi.org/10.14573/altex.1804051. See also Thomas Hartung et al., Systems Toxicology II: A Special Issue, 30 Chem. Rsch. Toxicology 869 (2017), https://doi.org/10.1021/acs.chemrestox.7b00038.
32. See also Steve C. Gold et al., Reference Guide on Epidemiology, section titled “Methods for Synthesizing or Combining the Results of Multiple Studies,” in this manual.
33. Courts have found animal studies to be relevant evidence of causation where there is a sound basis for extrapolating conclusions from those studies to humans in real-world conditions. See Hardeman v. Monsanto Co., 997 F.3d 941 (9th Cir. 2021); see also Milana v. Eisai, Inc., No. 8:21-cv-831-CEH-AEP, 2022 WL 846933 (M.D. Fla. Mar. 22, 2022) (increase in tumors in mammary tissue of experimental animals compared to higher rates of cancer in patients taking weight loss drug); In re Levaquin Prods. Liab. Litig., No. 08–1943 (JRT), 2010 WL 8400514, at *2–4, *8 (D. Minn. Nov. 8, 2010); Brandeis Univ. v. Keebler Co., No. 1:12–cv–01508, 2013 WL 5911233 (N.D. Ill. Jan. 18, 2013); In re Abilify (Aripiprazole) Prods. Liab. Litig., 299 F. Supp. 3d 1291, 1310 (N.D. Fla. 2018).
although matching is more readily achieved because the animals are bred to be genetically similar and have identical environmental histories.
Dose issues are at the intersection between toxicology and epidemiology. Many epidemiological studies of the potential risks of chemicals do not have direct information about dose, although qualitative differences among subgroups or in comparison with other studies can be inferred. The epidemiology database includes many studies probing for a potential association between a cause and an effect. Thus, a study asking all those suffering from a specific disease a multiplicity of questions related to potential exposures is bound to find some statistical association between the disease and one or more exposure conditions due to chance alone. Such studies generate hypotheses that can then be evaluated by subsequent studies that more narrowly focus on the potential cause-and-effect relationship. One way to evaluate the strength of the association is to assess whether those epidemiological studies evaluating cohorts with relatively high exposure support the association.34
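The chance-finding problem described above can be illustrated with simple arithmetic. The sketch below, using purely illustrative numbers, shows that if a questionnaire probes fifty independent exposures, each tested at the conventional 0.05 significance level, the probability of at least one spurious "association" exceeds 90%:

```python
# Illustrative only: assumes independent tests, each with a 5% chance of a
# false-positive finding when no true association exists.

def prob_at_least_one_false_positive(n_tests: int, alpha: float = 0.05) -> float:
    """Probability that at least one of n_tests yields a chance 'association'."""
    return 1 - (1 - alpha) ** n_tests

# A hypothetical questionnaire probing 50 separate exposures:
print(round(prob_at_least_one_false_positive(50), 2))  # 0.92
```

This is why such hypothesis-generating studies must be confirmed by subsequent, narrowly focused studies.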
The requirement in certain jurisdictions for epidemiological evidence of a relative risk greater than two (RR > 2) for general causation has also limited the utilization of toxicological evidence.35 A firm legal requirement for such evidence means that if the epidemiological database showed only statistically significant evidence that cohorts exposed to ten parts per million of an agent for twenty years experienced an 80% increase in risk, the court could not hear the case of a plaintiff alleging that twenty years of exposure to fifty parts per million of the same agent caused the adverse outcome. Yet to a toxicologist, there would be little question that exposure to the fivefold-higher dose of an agent producing an 80% increase in risk would lead to more than a 100% increase in risk, i.e., more than a doubling of the risk.
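The toxicologist's dose reasoning can be expressed as simple arithmetic. The sketch below assumes, as a deliberate simplification, that excess risk scales at least proportionally with dose; real dose–response curves may be sublinear, supralinear, or thresholded, so the numbers are illustrative only:

```python
# Simplifying assumption: excess relative risk (RR - 1) scales linearly
# with the ratio of doses. This is an illustration, not a toxicological rule.

def scaled_relative_risk(observed_rr: float, dose_ratio: float) -> float:
    """Scale the excess risk seen at a study dose up to a higher dose."""
    excess = observed_rr - 1.0          # RR of 1.8 means an 80% excess risk
    return 1.0 + excess * dose_ratio

# Studies at 10 ppm show RR = 1.8; the plaintiff was exposed at 50 ppm:
print(scaled_relative_risk(1.8, 50 / 10))  # 5.0 -> well above the RR > 2 threshold
```

Under this (admittedly simplified) proportionality assumption, the fivefold-higher dose corresponds to far more than a doubling of risk.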
34. As an example of considering dose and exposure issues across epidemiological studies, see Luoping Zhang et al., Formaldehyde Exposure and Leukemia: A New Meta-Analysis and Potential Mechanisms, 681 Mutation Rsch. 150 (2008), https://doi.org/10.1016/j.mrrev.2008.07.002. The subject of the strength of an epidemiological association and its relation to causality is considered in Steve C. Gold et al., Reference Guide on Epidemiology, in this manual; see also note 79, infra.
35. The basis for the use of RR > 2 is the translation of the preponderance of evidence rule, or “more likely than not” legal standard, into the epidemiological concept of at least a doubling of risk. An example is the Havner rule in Texas, which for general causation requires that there be reliable epidemiological studies with a statistically significant RR > 2 associating a putative cause with an effect. See Bostic v. Ga. Pac. Corp., 439 S.W.3d 332 (Tex. 2014). But see Hardeman v. Monsanto Co., 997 F.3d 941, 966 (9th Cir. 2021) (“[W]e have never suggested that a hardline increase in a risk statistic, or even an adjusted odds ratio above 2.0, is necessary for finding a strong association. To the contrary, flexibility is warranted considering the contextual nature of the Daubert inquiry.”) (citations omitted). For a discussion of the use by courts in various jurisdictions of relative risk > 2 for general and specific causation, see Russellyn S. Carruth & Bernard D. Goldstein, Relative Risk Greater Than Two in Proof of Causation in Toxic Tort Litigation, 41 Jurimetrics 195 (2001); for the toxicological issues, see Bernard D. Goldstein, Toxic Torts: The Devil Is in the Dose, 16 J.L. & Pol’y 551–85 (2008).
Even though there is little toxicological data on many of the approximately 75,000 compounds in general commerce, there is usually far more information from toxicological studies than from epidemiological studies.36 It is much easier, and usually more economical, to expose an animal to a chemical or to perform in vitro studies than it is to perform epidemiological studies. This difference in data availability is evident even for cancer causation, for which toxicological study is particularly expensive and time consuming. Of the approximately fifty specific chemicals that reputable international authorities agree are known human carcinogens based on positive epidemiological studies, almost all have also been shown to cause cancer in laboratory animal studies. Yet there are hundreds of known animal carcinogens for which there is either no valid epidemiological database or for which the epidemiological database has been equivocal.37
To clarify any findings, regulators can require a repeat of an equivocal two-year animal toxicological study or the performance of additional laboratory
36. See, for example, Nat’l Toxicology Prog., 15th Rep. on Carcinogens (2021), https://perma.cc/V4DP-3YNZ, for a discussion of how epidemiology and animal toxicity studies are used to assess the potential cancer risk of different chemicals.
37. See, e.g., Int’l Agency for Rsch. on Cancer, IARC Monographs on the Identification of Carcinogenic Hazards to Humans, https://perma.cc/R56M-EQMM; Nat’l Toxicology Prog., supra note 36; Waite v. AII Acquisition Corp., 194 F. Supp. 3d 1298 (S.D. Fla. 2016) (court remains “unconvinced” that lack of statistically significant epidemiological studies properly overcomes all other types of scientific evidence including animal, cellular, and molecular studies). The absence of epidemiological data is due, in part, to the difficulties in conducting cancer epidemiology studies, including the lack of suitably large groups of individuals exposed for a sufficient period of time, long latency periods between exposure and manifestation of disease, the high variability in the background incidence of many cancers in the general population, and the inability to measure actual exposure levels. See also Parker v. Mobil Oil Corp., 857 N.E.2d 1114 (N.Y. 2006), in which New York’s highest court reviewed an appeal from a summary judgment of a lower court that had dismissed a plaintiff’s case because of the lack of a specific exposure estimate by the plaintiff’s experts. The plaintiff’s experts claimed that unusual work practices during his seventeen years employed at a gasoline station led to higher exposure to benzene, a known cause of acute myelogenous leukemia (AML) from which the patient suffered, than would have occurred in the work practices of a usual gasoline station worker. The higher court found for the plaintiff on the lack of a need for a specific exposure assessment, citing other cases finding that an expert does not need to establish the amount of exposure the plaintiff experienced. Id. at 1121 (citing McClain v. Metabolife Int’l, Inc., 401 F.3d 1233, 1241 n.6 (11th Cir. 2005)). However, the same court ruled for the defense based on the absence of epidemiological evidence that gasoline caused AML, despite the fact that benzene is a component of the gasoline mixture. 
Court preference for epidemiology is also evident in the Havner rule (see supra note 35). The requirement for epidemiological evidence of a doubling of risk precludes toxicological reasoning related to dose. For example, suppose there were two epidemiological studies showing a statistically significant 80% increase of an adverse endpoint in large workforces in well-regulated industries that were exposed to ten parts per million (ppm) of a chemical for thirty years. Suppose also that there was a plaintiff with the same endpoint whose experts presented evidence that the plaintiff was exposed to fifty ppm of the same chemical for thirty years. Based on dose considerations, a toxicologist could argue that the plaintiff’s level of exposure would increase risk by more than 100%, i.e., more than double the plaintiff’s risk. But under a strict interpretation of the Havner rule, the plaintiff’s case would be dismissed.
studies in which animals are deliberately exposed to the chemical. Such deliberate exposure is not possible in humans. As a general rule, unequivocally positive epidemiological studies reflect prior workplace practices that led to relatively high levels of chemical exposure for a limited number of individuals and that, fortunately, in most cases no longer occur. Thus, an additional prospective epidemiological study often is not possible, and even the ability to do retrospective studies is constrained by the passage of time.
In essence, epidemiological findings consistently demonstrating an adverse effect in humans of a manufactured chemical represent a failure of toxicology as a preventive science. A corollary of the tenet that, depending upon dose, all chemical and physical agents are harmful is that society depends upon toxicological science to discover and anticipate these harmful effects, and upon regulators and other responsible parties to prevent human exposure at a harmful level or to ensure that the agent is not manufactured and distributed. In this regard, epidemiology is a valuable backup approach that functions to detect failures of primary prevention. The two disciplines complement each other, particularly when the approaches are iterative.
Toxicologists may use epidemiological findings in their expert opinions concerning general causation and specific causation. This is particularly true for well-studied compounds that have been thoroughly evaluated by respected organizations, such as the International Agency for Research on Cancer (IARC, affiliated with the World Health Organization (WHO)) or the National Toxicology Program (NTP, affiliated with the U.S. Department of Health and Human Services), which include toxicologists and epidemiologists in their review of the evidence. For example, in mortality studies of large cohorts of workers exposed to high levels of a carcinogenic agent in which all types of cancer are reported, but only an increase in brain cancer is noted, the toxicologist may consider this information in determining if a pancreatic cancer was caused by exposure to a lesser dose of the agent. In terms of assessing specific causation for exposure to this known carcinogen, there may be the equivalent of a No Observed Adverse Effect Level (NOAEL; see section titled “Toxicological Study Design” below) in the reviewed and accepted epidemiological data in which an increase in brain cancer risk is seen only in the more highly exposed populations. Whether a toxicologist is qualified to offer an opinion based on both toxicology and epidemiology data depends on the particular background of the proffered expert and the legal standard applied in the jurisdiction.
In recent decades, exposure assessment has developed into a scientific field (exposure science) with journals, learned societies, and research funding processes.
Exposure science methodologies include mathematical models used to predict exposure resulting from an emission source, which might be a long distance upwind; chemical or physical measurements of media such as air, food, and water; and biological monitoring within humans, including measurements of blood and urine specimens if available, for the chemical of concern and/or its metabolites. Extrapolation back to the level of external exposure depends upon toxicokinetic analyses (see section titled “Safety and Risk Assessment” above). In this continuum of exposure metrics, the closer to the human body, the greater the overlap with toxicology.38 An exposure assessment should also look for competing exposures from other sources of the chemical of concern.
Exposure assessment is central to epidemiology as well and is often the limiting factor in using an epidemiological study to infer causation. Many of the causal associations between chemicals and human disease have been developed from epidemiological studies relating a workplace chemical to an increased risk of the specific disease in cohorts of workers, often with only a qualitative assessment of exposure. More sophisticated modern approaches to chemical exposures at complex workplaces include development of a job exposure matrix based on specific exposure levels throughout the working lifetime. An improved quantitative understanding of such exposures enhances the likelihood of observing causal relations.39 It also can support the opinion of the expert toxicologist on the likelihood that a specific exposure was responsible for an adverse outcome.
38. Toxicologists also have indirect means of approaching exposure through symptoms—e.g., for many agents, there is a known threshold for smell and a reasonable range of levels that might cause symptoms. For example, the use of toxicological expertise is appropriate in a situation in which chronic exposure to a volatile hydrocarbon is alleged to have occurred at levels at which acute exposure would be expected to render the individual unconscious. Toxicologists may also contribute knowledge of the extent of individual exposure based upon appropriate assumptions concerning inhalation rate or water use; for example, children inhale more per body mass than do adults, and outdoor workers in hot climates will drink more fluids than those in cool climates. For a discussion of general issues regarding assessment of exposure of toxic substances, see M. Elizabeth Marder & Joseph V. Rodricks, Reference Guide on Exposure Science and Exposure Assessment, in this manual.
39. In terms of general causation, accurate exposure assessment is important because a true effect can be missed owing to confounding caused by cohorts that often include workers with little exposure to the putative offending agent, thereby diluting the actual effect. See Peter F. Infante, Benzene Exposure and Multiple Myeloma: A Detailed Meta-analysis of Benzene Cohort Studies, 1076 Annals N.Y. Acad. Scis. 90 (2006), https://doi.org/10.1196/annals.1371.081, for a discussion of this issue in relation to a meta-analysis of the potential causative role of benzene in multiple myeloma. On the other hand, an association between exposure and effect occurring solely by chance is more likely if the effect does not meet the expected standard of being more pronounced in those receiving the highest dose. See Goldstein, supra note 35. Setting regulatory standards based on the observed effect in a cohort often requires a risk assessment, which in turn is dependent on understanding the extent of the exposure. This has led to extensive retrospective reconstruction of exposure in key cohorts.
The term biomarker is used to identify the measurement of a chemical or biological molecule present in tissue when such tests are available for the substance at issue, and where the tests have been administered within the time frame when the substance may be present in the body to indicate past and/or current exposure.40 There are three different types of biomarkers of relevance in toxicology: a) biomarkers of exposure, b) biomarkers of effect, and c) biomarkers of susceptibility. Biomarkers may be persistent in tissue or may rapidly disappear, depending on the chemicals at issue.
A widely used biomarker of exposure is the blood alcohol level estimated by the Breathalyzer test, frequently used by law enforcement to establish whether an automobile driver is inebriated. The science behind the Breathalyzer test has been the subject of many court decisions. In a similar manner, measurements of the concentration of drug(s) in the blood may be used to establish cause of death. However, alcohol and most drugs have relatively short half-lives (the time it takes for 50% of the chemical to disappear from the blood), on the order of a few hours or less, and thus such measurements are useful only if the blood (or breath) sample is collected soon after the exposure occurred.
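Because elimination of alcohol and most drugs is approximately first-order, the half-life concept translates into a simple exponential calculation. The sketch below is illustrative only; the 2-hour half-life is a hypothetical value, not that of any particular drug:

```python
# Minimal sketch of first-order elimination, assuming simple exponential
# decay with a constant half-life (a hypothetical, illustrative value).

def remaining_fraction(hours_elapsed: float, half_life_hours: float) -> float:
    """Fraction of the peak blood concentration remaining after a delay."""
    return 0.5 ** (hours_elapsed / half_life_hours)

# Hypothetical drug with a 2-hour half-life, sampled 8 hours after exposure:
print(remaining_fraction(8, 2))  # 0.0625 -> only about 6% of the peak remains
```

Four half-lives have elapsed, so roughly 94% of the chemical is gone, illustrating why delayed sampling can miss a short-lived exposure entirely.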
The concentration of lead in the blood and/or bones of an individual is another example of a biomarker of exposure that has been extensively studied. In the case of lead exposure, the half-life in blood is about one month, so repeated exposures in the workplace or environment can result in blood lead concentrations above what is measured in the environment. The half-life of lead in bone is on the order of twenty-five to thirty years, so assessment of lead in bone, performed noninvasively through an X-ray fluorescence technique, provides a window into past exposures to lead over a lifetime. Some other environmentally relevant toxic substances, referred to collectively as persistent organic pollutants or POPs (e.g., polychlorinated biphenyls (PCBs), dioxins, certain organochlorine pesticides like DDT and chlordane, and more recently a group of fluorinated chemicals collectively called per- and polyfluoroalkyl substances, or PFAS41),
40. See Nat’l Rsch. Council, Human Biomonitoring for Environmental Chemicals (2006), https://perma.cc/9AQ2-QXUQ; see also Lesa L. Aylward, Integration of Biomonitoring Data into Risk Assessment, 9 Current Op. Toxicology 14 (2018), https://doi.org/10.1016/j.cotox.2018.05.001.
41. Sometimes also referred to in the popular press as “forever chemicals.” For an EPA summary of research on PFAS, see https://perma.cc/JLV4-54XX.
have half-lives measured in years, and thus slow accumulation from environmental exposures can occur over many years. For these types of chemicals, blood concentrations are indicative of lifetime exposures—not just recent exposure—and can be useful to establish whether an unusual exposure to a specific agent has occurred in the past.
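The slow accumulation of such persistent chemicals can be sketched with a one-compartment model: constant daily intake combined with first-order elimination. All numbers below are hypothetical and chosen only to illustrate the qualitative point that body burden keeps rising for many years before leveling off:

```python
import math

# Toy accumulation model for a persistent chemical: steady daily intake
# with first-order elimination (one compartment). Illustrative values only.

def body_burden(days: float, daily_intake: float, half_life_days: float) -> float:
    """Accumulated body burden after `days` of constant intake."""
    k = math.log(2) / half_life_days            # elimination rate constant
    return (daily_intake / k) * (1 - math.exp(-k * days))

# Hypothetical persistent pollutant: 5-year half-life, 1 unit/day intake.
after_1_year = body_burden(365, 1.0, 5 * 365)
after_20_years = body_burden(20 * 365, 1.0, 5 * 365)
print(round(after_1_year), round(after_20_years))
```

After twenty years the burden is several times that after one year, which is why blood concentrations of such chemicals reflect lifetime rather than recent exposure.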
The Centers for Disease Control and Prevention’s (CDC) National Health and Nutrition Examination Survey (NHANES) monitors a broad range of biological markers of health—e.g., blood cholesterol levels—in a U.S. population sample. This survey now includes measurements of a wide variety of environmental and dietary substances. For environmental agents, the CDC considers blood levels above the ninety-fifth percentile of the NHANES samples as an indication of an “unusual exposure,” although it does not state whether exposures above, or below, the ninety-fifth percentile are necessarily harmful or safe, respectively.42 However, other regulatory agencies might use such levels to indicate that an exposure at that level is associated with an unacceptable risk.
Research on certain toxic chemicals has established the presence of biological changes indicative of the effects of that specific chemical and, thus, can also provide evidence of exposure, at least within the time period that the biological changes remain within the body. For example, when carbon monoxide enters the bloodstream, it binds to hemoglobin, the molecule responsible for delivering oxygen from the lungs to the rest of the body. Once bound, it prevents oxygen from binding, which results in oxygen deprivation that can be rapidly fatal. For sublethal doses of carbon monoxide, the time to full equilibrium with blood hemoglobin is eight to twelve hours, depending largely on breathing rate. The measurement of the amount of this carboxyhemoglobin in blood is therefore both a biomarker of exposure—reflecting carbon monoxide exposure during the past eight to twelve hours—as well as a biomarker of effect, as it is on the direct pathway to toxicity. Carbon monoxide can also be measured in expired air, in which case it primarily reflects exposure. Another example of a biomarker of exposure and effect is provided by a broad class of insecticides known as organophosphorus chemicals (OPs). As a chemical class, OPs bind to and inhibit an important
42. For more details on the NHANES program, see https://perma.cc/UY4L-S8TY. One objective of the NHANES program was to “establish reference ranges that physicians and scientists can use to determine whether a person or group has an unusually high exposure. . . . The 95th percentile is helpful when determining whether levels observed in separate public health investigations or other studies are unusual.” CDC, Fourth National Report on Human Exposure to Environmental Chemicals (Volume Three: Analysis of Pooled Serum Samples for Select Chemicals, NHANES 2005–2016), https://perma.cc/E28W-X5HL.
enzyme in nerve function called cholinesterase and can be rapidly lethal. Cholinesterase activity unrelated to its role in the nervous system is also present in red blood cells and in plasma. Measurements of cholinesterase in the blood can thus be an effects-related biomarker of exposure to many OP pesticides. This test is used frequently in the occupational environment and clinically to diagnose OP poisoning. Where available, and when testing is done within the required time frames, such effects-related biomarkers provide information that is directly on the pathway between external exposure and internal effects. Although specific biomarkers are currently available for relatively few chemicals, it is predicted that the increasing understanding of the molecular effects of toxic substances will lead to more effects-related biomarkers of exposure.
Just as biological measurement of environmental exposures is an active area of research, so too are DNA analyses to identify individuals who may be more susceptible to exposures, such as those with genetically determined variability in the enzymes responsible for detoxifying chemicals of concern. Given the complexities of environmental and genetic interactions, much discussion in this area remains theoretical.
Toxicological studies usually involve exposing laboratory animals (in vivo research) or cells or tissues (in vitro research) to chemical or physical agents, monitoring the outcomes (such as cellular abnormalities, tissue damage, organ toxicity, or tumor formation), and comparing the outcomes with those for unexposed control groups. As explained below,43 the extent to which animal and cell experiments accurately predict human responses to chemical exposures is subject to debate. However, because it is generally unethical to experiment on humans by exposing them to known doses of chemical agents, animal toxicological evidence often provides the best scientific information about the risk of disease from a chemical exposure.44
43. See section titled “Extrapolation from Animal (In Vivo) and Cell (In Vitro) Research to Humans” below.
44. Clinical drug trials for therapeutic drugs are an example of experimental human exposure, where as a policy matter the potential benefit is considered greater than the risk of adverse events. Notably, information from animal research is used to determine risk profiles prior to human trials. The human trials are highly regulated and subject to ethical requirements, including informed
In contrast to their exposure to drugs, only rarely are humans exposed to environmental chemicals in a manner that permits a quantitative determination of adverse outcomes.45 This area of toxicological study may consist of individual or multiple case reports, or even experimental studies in which individuals or groups of individuals have been exposed to a chemical under circumstances that permit analysis of dose–response relationships, mechanisms of action, or other aspects of toxicology. For example, individuals occupationally or environmentally exposed to PCBs have been studied to determine the routes of absorption, distribution, metabolism, and excretion for this group of industrial chemicals. Human exposure occurs most frequently in occupational settings where workers are often exposed each working day to industrial chemicals. In well-regulated workplaces, there is often information about the exposure levels of specific chemicals of concern to the average worker. However, even under these circumstances, determining the amount (concentration, frequency, and duration) of exposure of an individual worker is more problematic, particularly as it must be done retrospectively in most toxic tort cases. Moreover, human populations are exposed to many other chemicals and risk factors, including those associated with normal aging, making it more difficult to isolate the role of any one chemical in the increased risk of a disease.46
Toxicologists use a wide range of experimental techniques, depending in part on their area of specialization. Toxicological research may focus on classes of chemical compounds, such as solvents, metals, and pesticides; body system effects, such as effects on the nervous system (neurotoxicology), the liver (hepatotoxicology), the reproductive system (reproductive toxicology), and the immune system (immunotoxicology); chemical effects on complex disease processes such as cancer (chemical carcinogenesis) and birth defects (developmental toxicology); and effects on physiological processes, including inhalation toxicology, cardiovascular toxicology, and dermal (skin) toxicology. Each of these subdisciplines utilizes both in vivo and in vitro research.47 Of importance to all of these types of toxicological studies is the rapidly growing field of molecular toxicology, which examines how certain chemicals interact with biomolecules within the cell, such as proteins, lipids, and DNA. Molecular toxicology studies provide critical insights into cross-species extrapolation, dose–response, and mode of action that demonstrate the biological plausibility used for causal inference following human exposure.
consent. See, e.g., FDA, Clinical Trials Guidance Documents, https://www.fda.gov/science-research/clinical-trials-and-human-subject-protection/clinical-trials-guidance-documents.
45. However, it is from drug studies in which multiple animal species are compared directly with humans that many of the principles of toxicology have been developed.
46. See, e.g., OECD, Considerations for Assessing the Risks of Combined Exposure to Multiple Chemicals (2018), https://perma.cc/L48K-W79J.
47. See, e.g., Casarett & Doull’s, supra note 1.
The insights provided by molecular toxicology are of particular interest to regulatory toxicologists who are exploring whether these insights can form the basis for new, more sensitive screening techniques to determine the risk of new or existing chemicals.48
Animal research in toxicology generally falls under two headings: safety assessment and classic laboratory research, with a continuum between them. As explained in the section titled “Safety and Risk Assessment” above, safety assessment is a relatively formal approach in which a chemical’s potential for toxicity is tested in vivo or in vitro using standardized techniques often prescribed by regulatory agencies, such as the EPA and the FDA.49
The roots of toxicology in the science of pharmacology are reflected in an emphasis on understanding the absorption, distribution, metabolism, and excretion of chemicals. Basic toxicological laboratory research also focuses on the mechanisms or modes of action of external chemical and physical agents.50 Such research is based on the standard elements of scientific studies, including appropriate experimental design using control groups and statistical evaluation. In general, toxicological research attempts to hold all variables constant except for that of the chemical exposure. Any change in the experimental group not found in the control group is assumed to be caused by the chemical.
An important component of toxicological research is the characterization of dose–response relationships. Thus, most toxicological studies test a range of doses of the chemical. Animal experiments are conducted to determine the dose–response relationships of a compound by measuring how response varies with dose,
48. Robert J. Kavlock et al., Accelerating the Pace of Chemical Risk Assessment, 31 Chem. Rsch. Toxicology 287 (2018), https://doi.org/10.1021/acs.chemrestox.7b00339.
49. See, e.g., Michael Dorato et al., The Toxicologic Assessment of Pharmaceutical and Biotechnology Products, in Hayes’ Principles and Methods of Toxicology 325 (A. Wallace Hayes & Claire L. Kruger eds., 6th ed. 2014), https://doi.org/10.1201/b17359.
50. The terms mechanism of action and mode of action are conceptually similar and are used to describe the specific molecular, biochemical, and cellular changes that are responsible for the observed toxic effects following the administration of a chemical. Although not identical, the two terms have similar meaning. Generally, mode of action is somewhat broader and refers to the generalized sequence of molecular events that lead to toxicity, whereas mechanism of action refers to a specific cellular, biochemical, or molecular target of the substance (e.g., a specific receptor).
including diligently searching for a dose that has no measurable adverse effect. This information is useful in understanding the mechanisms of toxicity and in extrapolating data from animals to humans.51
To determine the dose–response relationship for a compound, a short-term lethal dose 50% (LD50) may be derived experimentally. The LD50 is the dose at which a compound kills 50% of laboratory animals and usually is determined from a single exposure, although the observation period may extend for days to weeks. The use of this easily measured endpoint for acute toxicity largely has been replaced, in part because recent advances in toxicology have provided other pertinent endpoints, and in part because of animal welfare concerns that have focused on reduction, refinement, or replacement of the use of animals in laboratory research.52 Although determining the LD50 was frequently the primary goal of acute toxicity testing, such studies yield much additional useful information, such as identification of the target organ(s) of effect, overall characterization of the symptoms, and the progression of toxicity following single doses. Acute toxicity studies also establish the dose range for subsequent repeated-dose (subacute, usually fourteen days; and subchronic, usually ninety days) toxicity studies.
A dose–response study also permits the determination of another important characteristic of the biological action of a chemical—the no observed adverse effect level (NOAEL; Figure 1).53 The NOAEL sometimes is called a threshold, because it is the level above which observed adverse or toxic effects in test
51. See sections titled “Benchmark Dose (BMD),” “Linearized Non-Threshold (LNT) Model and Determination of Cancer Risk,” and “Maximum Tolerated Dose (MTD) and Chronic Toxicity Tests” below.
52. Dale M. Cooper et al., The Humane Use and Care of Laboratory Animals in Toxicology Research, in Hayes’ Principles and Methods of Toxicology 1023, supra note 49.
53. For example, undiluted acid on the skin can cause a horrible burn. As the acid is diluted to lower and lower concentrations, less and less of an effect occurs until there is a concentration sufficiently low (e.g., one drop in a bathtub of water, or a sample with less than the acidity of vinegar) that no effect occurs. This “no observed effect” concentration differs from person to person. For example, a baby’s skin is more sensitive than that of an adult, and skin that is irritated or broken responds to the effects of an acid at a lower concentration. However, the key point is that there is some concentration that is completely harmless to the skin.
animals are believed to occur and below which no toxicity is observed.54 Of course, because the NOAEL is dependent on the ability to observe an effect, the level is sometimes lowered after more sophisticated methods for detecting effects are developed. Typically, NOAEL studies involve administration of the test chemical at several different doses, given daily over a period of ninety days. At least three doses (plus the “zero dose” control) are used, generally spanning a thirty- to one hundred-fold dose range, with the highest dose causing clear evidence of toxicity and the lowest dose showing no evidence of toxicity. Typically, rats are used, with at least ten males and ten females for each dose group. The ultimate goal is to establish a dose–response curve, with at least one dose that does not show any evidence of toxicity. In circumstances where the lowest dose tested resulted in an adverse effect, that dose is termed the lowest observed adverse effect level (LOAEL), which may then be used in place of a NOAEL, with additional caveats (e.g., uncertainty factors) if used in risk assessment.
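Identifying the NOAEL and LOAEL from such a study is, at bottom, a simple selection over the dose groups. A minimal sketch, using entirely hypothetical study results:

```python
# Hypothetical 90-day study: dose (mg/kg/day) -> adverse effect observed?
results = {0.0: False, 5.0: False, 20.0: True, 100.0: True}

# NOAEL: highest dose at which no adverse effect was observed.
noael = max(dose for dose, adverse in results.items() if not adverse)

# LOAEL: lowest dose at which an adverse effect was observed.
loael = min(dose for dose, adverse in results.items() if adverse)
```

With these made-up results the NOAEL is 5.0 mg/kg/day and the LOAEL is 20.0 mg/kg/day; the true threshold, if one exists, lies somewhere between them.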
For regulatory toxicology, the NOAEL has been replaced by a more statistically robust approach known as the benchmark dose (BMD; Figure 1). The BMD is determined by dose–response modeling and is defined as the exposure associated with a specified low incidence of risk, generally in the range of 1% to 10%, of a health effect, or the dose associated with a specified measure or change of a biological effect (thus, if the benchmark response is set at a 10% response, the resulting dose is referred to as the BMD10). To model the BMD, sufficient data must exist, including at a minimum a statistically or biologically significant dose–related trend in the
54. The significance of the NOAEL was relied on in an action alleging various health issues caused by hazardous levels of wind-borne thallium dust that escaped a military base landfill and entered plaintiff’s living environment. Myers v. United States, No. 02cv1349–BEN, 2014 WL 6611398 (S.D. Cal. Nov. 20, 2014), aff’d, 673 F. App’x 749 (9th Cir. 2016). The court ruled in favor of defendants, in part from finding persuasive evidence that the low amount of thallium found in soil at the landfill was significantly below the NOAEL. 2014 WL 6611398, at *31. See also Env’t L. Found. v. Beech-Nut Nutrition Corp., 235 Cal. App. 4th 307, 311, 317–18 (2015), as modified on denial of reh’g (Apr. 16, 2015) (affirming judgment in favor of defendants because it found the trial court did not err in relying on the defendant’s toxicology expert testimony that the average and single-exposure levels of lead in the manufacturer’s baby food does not exceed the NOAEL (i.e., the regulatory safe harbor exposure level)). But see Chiaracane v. Port Auth. Trans-Hudson Corp., No. 18-CV-2995, 2020 WL 906268, at *4, *8 (S.D.N.Y. Feb. 25, 2020) (holding that plaintiffs’ expert testimony by a neurotoxicologist on the issue of causation of respiratory symptoms by a cleaning chemical cannot be admitted because, although the expert spoke to NOAEL, the expert did not quantify the plaintiffs’ dose or exposure to the chemical in his analysis).
[Figure 1. Data reflect a hypothetical experiment with twenty animals at each of five dose groups and a control group.]

selected endpoint.55 Although the NOAEL and BMD are conceptually similar, the BMD is generally considered to be a more scientifically robust value because it uses all of the dose–response information from a multidose study, rather than just the single dose point that was associated with no observable response. In the regulatory arena, the upper and lower 95% confidence limits on the BMD are often statistically determined. The lower confidence limit, called the BMDL, is often used as the point of departure (POD) for extrapolating to lower doses or determining reference doses (RfDs) or acceptable daily intakes (ADIs). If the BMD10 is used, then the lower bound is called the BMDL10 (Figure 1). RfDs and ADIs are then determined by dividing the point of departure by uncertainty or safety
55. Courts also recognize the benchmark dose. See, e.g., Wyman v. U.S. Surgical Corp., No. 1:18-cv-00095-JAW, 2020 WL 1932338, at *21 (D. Me. Apr. 21, 2020) (explaining that the EPA bases its estimations of daily exposure levels of methylmercury a human can have without “appreciable risk of deleterious effects during a lifetime” on benchmark doses, which are amounts of methylmercury that are “associated with the threshold for a health effect on sensitive populations according to epidemiological studies”); Kolakowski v. Sec’y of Health & Hum. Servs., No. 99-0625V, 2010 WL 5672753, at *11 (Fed. Cl. Nov. 23, 2010) (explaining that the National Academy of Sciences establishes its risk assessment benchmark dose for mercury “from a percentage of the population with a physical response to the dose”).
factors. Many chemicals have multiple different toxic effects, and the dose–response relationship for each adverse outcome (toxic effect) may be different. Typically, regulatory agencies use the most sensitive toxic effect (the effect associated with the lowest dose or threshold) to establish regulatory values, such as Maximum Contaminant Levels (MCLs) under the Safe Drinking Water Act, or National Ambient Air Quality Standards (NAAQS) under the Clean Air Act. Thus, in the absence of robust epidemiological data, the NOAEL or BMDL for the most sensitive toxic effect is frequently used to establish “safe” or “acceptable” levels of exposure, which are obtained by simply dividing the POD (identified as the BMDL10 or NOAEL) by safety or uncertainty factors (SF or UF). In the early days of regulation of food additives and contaminants, NOAELs were established as discussed above and typically divided by a factor of one hundred to account for: (1) potential species differences, with a default assumption that humans might be tenfold more sensitive than the rat or mouse used to derive the NOAEL, and (2) interindividual variability within the human population, with a default assumption of another factor of ten. Today, regulatory agencies such as the EPA and the FDA often still use these basic assumptions or uncertainty factors but may modify the overall uncertainty by adding additional factors based on the quality of the study and the target population of concern. For example, the Food Quality Protection Act of 199656 generally requires an additional uncertainty factor (again, usually a factor of ten) if potentially susceptible populations, such as children and pregnant women, are likely to be exposed.
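The arithmetic just described can be sketched in a few lines of code. All values below are hypothetical, and the linear interpolation is a crude stand-in for the formal curve fitting performed by dedicated BMD software (such as EPA's BMDS); in regulatory practice the statistically derived lower confidence limit (the BMDL), not the central BMD estimate, serves as the point of departure.

```python
# Hypothetical dose-response data: fraction of animals responding per dose group.
doses     = [0.0, 1.0, 3.0, 10.0, 30.0, 100.0]   # mg/kg/day
responses = [0.05, 0.05, 0.10, 0.25, 0.55, 0.90]

# "Extra risk" convention: a 10% increase over background, scaled to the
# non-responding fraction of the population.
background = responses[0]
target = background + 0.10 * (1.0 - background)

# Linear interpolation between the two dose groups that bracket the target
# response (a crude substitute for formal dose-response modeling).
bmd10 = None
for (d0, r0), (d1, r1) in zip(zip(doses, responses),
                              zip(doses[1:], responses[1:])):
    if r0 <= target <= r1:
        bmd10 = d0 + (target - r0) / (r1 - r0) * (d1 - d0)
        break

# Divide the point of departure by the default uncertainty factors:
# 10x for interspecies differences and 10x for human variability.
rfd = bmd10 / (10 * 10)   # mg/kg/day
```

With these made-up numbers the BMD10 comes out to about 5.1 mg/kg/day and the resulting reference value to about 0.05 mg/kg/day.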
In toxic tort litigation related to a specific effect (e.g., a particular type of birth defect, liver or kidney disease, or neurological damage), experts may argue that the dose of the agent was below the threshold and thus insufficient to have caused or substantially contributed to the disease. An individual plaintiff’s levels of exposure to a toxic substance can sometimes be estimated using exposure science techniques (see Reference Guide on Exposure Science and Exposure Assessment) and are usually expressed in units of milligrams (mg) of chemical per kilogram (kg) of body weight, per day. These are the same units used in determining NOAEL and BMDL levels from experimental animal studies, and sometimes from human epidemiology studies where airborne, dietary, or drinking water concentrations have been measured. Comparison of the NOAEL or BMDL to the estimated dose of an individual can provide a useful approximation of whether the estimated exposure is likely to have exceeded a level known to be toxic. The ratio of an estimated potential “threshold” dose (e.g., the BMDL or NOAEL with appropriate safety factor adjustments) to the estimated human exposure dose is called the Margin of Exposure (MOE). The larger the MOE, the less likely it is that an individual’s actual exposure could have caused or contributed to the disease. Thus, an exposure with an MOE greater than 1,000 is unlikely to be sufficient to have caused or
56. See EPA, Summary of the Food Quality Protection Act, https://perma.cc/5G7J-KYLS.
substantially contributed to a toxic effect, whereas an exposure with an MOE of less than 1.0 might reasonably be anticipated to have caused or contributed to the toxic effect in question. However, many assumptions go into such estimates and comparisons, and each assumption should be justified with data. It is important to note that this approach to estimating a potential adverse outcome (risk) is generally not used for estimating the probability of cancer risk from chemicals that are thought to cause cancer by damaging DNA—so-called genotoxic carcinogens. The reason is that regulatory policies for genotoxic carcinogens have generally assumed that risk is proportional to dose and thus that there is some risk at any dose, as discussed below.
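Returning to the MOE comparison described above, the calculation itself is a single ratio. The sketch below uses entirely hypothetical numbers, not values drawn from any actual case:

```python
# Hypothetical values for illustration only.
noael = 5.0             # NOAEL from a 90-day animal study (mg/kg/day)
estimated_dose = 0.002  # individual's estimated exposure (mg/kg/day)

# Margin of Exposure: point of departure divided by estimated exposure.
moe = noael / estimated_dose
# moe = 2500: well above 1,000, so under this framework the exposure is
# unlikely to have caused or contributed to the effect in question.
```

Had the estimated dose instead been 10 mg/kg/day, the MOE would fall below 1.0, the opposite end of the inferential spectrum.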
Certain genetic mutations, such as those leading to cancer and some inherited disorders, have been widely hypothesized to occur without any threshold. In theory, the cancer-causing mutation to the genetic material of the cell can be produced by any one molecule of certain chemicals. The LNT model led to the development of the one-hit hypothesis of cancer risk, in which each molecule of a cancer-causing chemical has some finite possibility of producing the mutation that leads to cancer. (See Figure 2 for an idealized comparison of an LNT and threshold dose–response.) This risk is very small, because it is unlikely that any one molecule of a potentially cancer-causing agent will reach that one particular spot in a specific cell’s DNA and result in the specific change that then eludes the body’s defenses and leads to a clinical case of cancer. However, the LNT hypothesis holds that the risk is not zero. The same model also can be used to predict the risk of inheritable mutational events. There is also considerable scientific controversy over the origins of the LNT model and whether it is scientifically appropriate to use for risk assessment of chemical carcinogens.57
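The one-hit model can be written P(d) = 1 − e^(−qd), where d is dose and q is a potency parameter; at low doses the risk is nearly proportional to dose, and it is zero only at zero dose. A sketch with a hypothetical potency value (q below is not a measured quantity):

```python
from math import exp

q = 0.5  # hypothetical potency per (mg/kg/day)

def one_hit_risk(dose):
    """Lifetime excess cancer risk under the one-hit (LNT) model."""
    return 1.0 - exp(-q * dose)

# At low doses the modeled risk is essentially linear in dose: a dose of
# one-millionth of a mg/kg/day yields a risk of about five in ten million,
# and halving the dose halves the risk, with no threshold below which
# the risk reaches zero.
```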
57. For further discussion of the LNT model of carcinogenesis, see James E. Klaunig & Zemin Wang, Chemical Carcinogenesis, in Casarett & Doull’s, supra note 1, at 329. See also Edward J. Calabrese et al., Dose Response and Risk Assessment: Evolutionary Foundations, 309 Env’t Pollution 119787 (2022), https://doi.org/10.1016/j.envpol.2022.119787; Rebecca A. Clewell et al., Dose–Dependence of Chemical Carcinogenicity: Biological Mechanisms for Thresholds and Implications for Risk Assessment, 301 Chemico-Biological Interactions 112 (2019), https://doi.org/10.1016/j.cbi.2019.01.025. Although the one-hit model may explain the response to some carcinogens, there is accumulating evidence that most cancers in fact arise through a multistage process and that some cancer-causing agents, so-called epigenetic or nongenotoxic agents, act through non-mutational processes. For example, the multistage cancer process may explain the carcinogenicity of benzo[a]pyrene (produced by the combustion of hydrocarbons such as oil) and aflatoxin (a mold toxin that contaminates corn and peanuts), whereas asbestos, dioxin, and estradiol appear to cause their carcinogenic effects through non-mutational processes. The appropriate mathematical model to use to depict the dose–response relationship for such carcinogens is still a matter of debate. See John E. Doe et al., The Codification of Hazard and Its Impact on the Hazard Versus Risk Controversy, 95 Archives Toxicology 3611 (2021), https://doi.org/10.1007/s00204-021-03145-6.
Proposals have been made to merge cancer and noncancer risk assessment models. Nat’l Rsch. Council & Comm. on Improving Risk Analysis Approaches Used by the U.S. EPA, supra note 20. The controversy over the use of non-threshold carcinogenesis models in regulatory risk assessment is described by J.V. Rodricks, When Risk Assessment Came to Washington: A Look Back, 17 Dose Response 1 (2019), https://doi.org/10.1177/1559325818824934. Recent efforts to improve risk assessment for non-genotoxic carcinogens, which are generally not thought to follow low dose, linear response, are underway. See Christian Strupp et al., Increased Cell Proliferation as a Key Event in Chemical Carcinogenesis: Application in an Integrated Approach for the Testing and Assessment of Non-Genotoxic Carcinogenesis, 24 Int’l J. Molecular Scis. 13246 (2023), https://doi.org/10.3390/ijms241713246.
Courts continue to grapple with the non-threshold model. The District of Columbia Court of Appeals noted “‘mounting judicial skepticism’ of no-threshold models. . . .” N. Am.’s Bldg. Trades Unions v. Occupational Safety & Health Admin., 878 F.3d 271, 284 (D.C. Cir. 2017). Nonetheless, the court upheld OSHA’s use of a non-threshold model to determine risk for silicosis and lung cancer from silica exposure as “supported by substantial evidence” and “in line with our precedent.” Id. at 283–84. In another case, the District of Massachusetts excluded an expert’s opinion based on a non-threshold model, noting that such an “opinion is inadmissibly unreliable” because “there is no scientific evidence that the linear no-safe threshold analysis is an acceptable scientific technique” to determine specific causation. Milward v. Acuity Specialty Prods. Grp., Inc., 969 F. Supp. 2d 101, 110 (D. Mass. 2013), aff’d sub nom. Milward v. Rust-Oleum Corp., 820 F.3d 469, 474–75 (1st Cir. 2016). But see McManaway v. KBR, Inc., No. H-10-1044, 2012 WL 13059744, at *11–12 (S.D. Tex. Aug. 22, 2012) (permitting expert to use non-threshold model despite an argument that the “theory . . . is unsupported by relevant case law and in the relevant scientific community” because the Fifth Circuit has “no bright line” rule excluding the model).
Another type of study uses different doses of a chemical agent over a ninety-day period to establish what is known as the maximum tolerated dose (MTD), the highest dose that does not cause significant overt toxicity. The MTD is important because it enables researchers to calculate the dose of a chemical to which an animal can be exposed without reducing its lifespan, thus permitting the evaluation of the chronic effects of exposure.58 These chronic studies are designed to last the lifetime of the species.
Chronic toxicity tests evaluate carcinogenicity or other types of toxic effects. Federal regulatory agencies frequently require carcinogenicity studies on both sexes of two species, usually rats and mice, administered daily over the lifetime (generally two years) of the animals. A pathological evaluation is done on the tissues of animals that died during the study and those that are sacrificed at the conclusion of the study.
The rationale for using the MTD in chronic toxicity tests, such as carcinogenicity bioassays, often is misunderstood. It is preferable to use realistic doses of carcinogens in all animal studies. However, this leads to a loss of statistical power, thereby limiting the ability of the test to detect carcinogens or other toxic compounds. Consider the situation in which a realistic dose of a chemical causes a tumor in one in one hundred laboratory animals. If the lifetime background incidence of tumors in animals without exposure to the chemical is six in one hundred, a toxicological test involving one hundred control animals and one hundred exposed animals that were fed the realistic dose would be expected to reveal six control animals and seven exposed animals with the cancer. This difference is too small to be recognized as statistically significant. However, if the study started with ten times the realistic dose, the researcher would expect to get ten additional cases, for a total of sixteen cases in the exposed group and six cases in the control group, a statistically significant difference that is unlikely to be overlooked.
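The statistical point in this example can be checked directly. The sketch below applies a one-sided Fisher exact test, built from the hypergeometric distribution using only the standard library, to the hypothetical counts given above:

```python
from math import comb

def fisher_one_sided(exposed_cases, control_cases, n=100):
    """One-sided Fisher exact test: probability of at least this many
    cases in the exposed group, given the total number of cases fixed."""
    k = exposed_cases + control_cases      # total cases across both groups
    denom = comb(2 * n, k)                 # ways to split k cases over 2n animals
    tail = sum(comb(n, x) * comb(n, k - x)
               for x in range(exposed_cases, min(k, n) + 1))
    return tail / denom

# Realistic dose: 7 exposed vs. 6 control cases -- indistinguishable from chance.
p_realistic = fisher_one_sided(7, 6)
# Ten-fold dose: 16 exposed vs. 6 control cases -- statistically significant.
p_high = fisher_one_sided(16, 6)
```

Because the two equal-sized groups split the thirteen total cases symmetrically, the first comparison yields p = 0.5, nowhere near significance; the second falls well below the conventional 0.05 threshold.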
Unfortunately, even this example does not demonstrate the difficulties of determining risk. Regulators are responding to public concern about cancer by regulating risks often as low as one in 1,000,000—not one in one hundred, as in the example given above. To test risks of one in 1,000,000, a researcher would have to either increase the lifetime dose from ten times to 100,000 times the realistic dose or expand the numbers of animals under study into the millions. However, increases of this magnitude are beyond the world’s animal testing capabilities and are also prohibitively expensive. Inevitably, then, animal studies
58. Even the determination of the MTD can be fraught with controversy. See Environmental Science Deskbook § 4:5 (2022) (illustrating problems with making inferences about human toxicity based on MTD established by animal studies).
must trade statistical power for extrapolation from higher doses to lower doses. However, recent research indicates that the MTD used in studies of some chemicals—and perhaps even the lower doses—may overwhelm biologically important protective pathways, thereby causing cellular damage integral to the disease process that would not occur at lower, more realistic doses to which humans may be exposed. Because of these concerns, there has been increasing research interest among toxicologists to redefine the MTD to include assessment of repair response pathways, where data are available, such as DNA repair processes, to ensure that the highest dose used does not overwhelm protective pathways that are operable at lower doses.59
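The animal-numbers problem described above (testing risks as low as one in 1,000,000 at realistic doses) can be made concrete with a standard rule of thumb: for a study to have roughly a 95% chance of observing at least one excess case when the true excess risk is p, on the order of 3/p subjects are needed, from the Poisson approximation 1 − e^(−np) = 0.95. The sketch below simply restates the text's arithmetic:

```python
from math import log

def animals_needed(excess_risk, power=0.95):
    """Subjects needed for the given chance of seeing at least one
    excess case, solving 1 - exp(-n * p) = power for n (roughly 3/p)."""
    return log(1.0 - power) / -excess_risk

n_for_1_in_100 = animals_needed(0.01)   # about 300 animals
n_for_1_in_1e6 = animals_needed(1e-6)   # about 3,000,000 animals
```

About three hundred animals suffice to detect a one-in-one-hundred excess risk, but roughly three million would be needed for one in 1,000,000, which is why high-dose extrapolation is unavoidable.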
Accordingly, proffered toxicological expert opinion on potentially disease-causing chemicals almost always is based on evaluation of the research studies that extrapolate from animal experiments involving doses significantly higher than that to which humans are exposed.60 In both regulatory and toxic tort scenarios, such extrapolation is backed up by other types of data, where available, such as epidemiological studies.
In vitro research concerns the effects of a chemical on human or animal cells, bacteria, yeast, isolated tissues, or embryos. Thousands of in vitro toxicological tests have been described in the scientific literature. Many tests are for chemicals that cause changes in DNA (mutations) in bacterial or mammalian systems. There are short-term in vitro tests for just about every physiological response and every organ system, such as perfusion tests and DNA studies. Many of these short-term tests have been thoroughly validated, and standardized protocols for conducting such studies are widely available.61 However, in vitro studies often do not
59. Yu-Mei Tan et al., Opportunities and Challenges Related to Saturation of Toxicokinetic Processes: Implications for Risk Assessment, 127 Regul. Toxicology & Pharmacology 105070 (2021), https://doi.org/10.1016/j.yrtph.2021.105070.
60. See, e.g., Int’l Agency for Research on Cancer, World Health Organization, Preamble, Amended 2019, https://perma.cc/A3L5-NLN7; Joseph V. Rodricks, Evaluating Disease Causation in Humans Exposed to Toxic Substances, 14 J.L. & Pol’y 39 (2006).
61. See Joanna Klapacz & B. Bhaskar Gollapudi, Genetic Toxicology, in Casarett & Doull’s, supra note 1, at 497–54. Use of in vitro data for evaluating human mutagenicity and teratogenicity is described in John M. Rogers, Developmental Toxicology, in Casarett & Doull’s, supra note 1, at 547–92. For a critique of expert testimony using in vitro data, see In re Zoloft (Sertraline Hydrochloride) Prods. Liab. Litig., 26 F. Supp. 3d 466, 477–82 (E.D. Pa. 2014); In re Mirena IUS Levonorgestrel-Related Prods. Liab. Litig. (No. II), 341 F. Supp. 3d 213, 294–95 (S.D.N.Y. 2018), aff’d, 982 F.3d 113 (2d Cir. 2020); Davis v. McKesson Corp., No. CV-18-1157-PHX-DGC, 2019 WL 3532179, at *17 (D. Ariz. Aug. 2, 2019); In re Incretin-Based Therapies Prods. Liab. Litig., 524 F. Supp. 3d 1007 (S.D. Cal. 2021), aff’d, No. 21-55342, 2022 WL 898595 (9th Cir. Mar. 28, 2022).
accurately reflect the complexity of in vivo biological responses to chemicals, and thus can be misleading. A particular challenge with interpreting in vitro studies is how to compare the concentration of the test chemical used in the test tube or petri dish with the actual concentration of the same chemical in target tissues of the individual exposed to the chemical (the in vivo situation). As such, in vitro studies are often “hypothesis generating,” rather than providing definitive scientific proof that a chemical can cause a particular type of toxic response in vivo.
The criteria of reliability for an in vitro test include the following: (1) whether the test has been validated through a published protocol in which many laboratories applied the same in vitro method to a series of unknown compounds prepared by a reputable organization (such as the NIH or the International Council for Harmonisation of Technical Requirements for Pharmaceuticals for Human Use [ICH]) to determine whether the test consistently and accurately measures toxicity; (2) whether the test has been adopted by a U.S. or international regulatory body; and (3) whether the test is predictive of in vivo outcomes related to the same cell or target organ system.
Understanding the molecular pathways that are responsible for the ability of a chemical to cause a particular toxic effect, such as a specific type of cancer, is of value to a toxicologist charged with assessing the likelihood that a specific chemical caused a specific cancer in an individual. Identification of biological pathways is an important component of specific causation in most toxic tort cases. Thus, in vitro toxicology studies that investigate exactly how it is that a chemical causes its adverse biological effects are relevant to toxic tort litigation and can contribute significant support to a toxicologist’s expert opinions on specific causation.62
Advances in analyzing the effects of external chemical agents depend primarily on advances in biology and medicine that yield a greater understanding of normal body functions and of the changes observed in disease states. This growing understanding of the cellular, subcellular, and molecular basis of normal function is reflected in the continued incorporation of new multidisciplinary science into toxicology.
62. See, e.g., In re Johnson & Johnson Talcum Powder Prods. Mktg., Sales Pracs. & Prods. Liab. Litig., 509 F. Supp. 3d 116 (D.N.J. 2020) (expert may opine that his in vitro studies show that talc causes cellular inflammation and oxidative stress but may not offer an opinion on a causal link between talc and ovarian cancer); see also In re Denture Cream Prods. Liab. Litig., No. 09-2051-MD, 2015 WL 392021 (S.D. Fla. Jan. 28, 2015) (expert’s failure to explain the relevancy of in vitro studies to humans or to account for factors needed to make a proper extrapolation renders expert’s opinion unreliable); Hardeman v. Monsanto Co., 997 F.3d 941, 963 (9th Cir. 2021) (“[C]ell studies can support more substantial evidence of causation.”).
The driving forces for incorporating these newer advances into the field of toxicology have included the recognition that a large percentage of chemicals in commerce had been inadequately tested for toxicity. The resultant changes in laws in the European Union (REACH), the United States (the Lautenberg Act), and elsewhere have led to the development and adoption of new approach methodologies (NAMs) that could replace standard animal toxicity testing.63 Pressure toward the use of subcellular and computer-based in silico methodologies has also come from animal rights proponents. Pharmaceutical companies also have invested in developing in silico technology for rapid survey of the potential biological impact of large numbers of chemicals to support their search for therapeutic agents without unanticipated side effects. NAMs have also considered how to more effectively incorporate individual and population exposure data into chemical evaluation.
The NAMs collectively have some distance to go before they can be fully accepted as pertinent to regulatory decisions or to toxic torts. Important issues include understanding whether a gene or a biological pathway that is turned on by a chemical in a test tube or in silico will produce a similar response in vivo; uncertainties about the implications of the response for human or ecosystem function; the importance of reproducing the wide range of human sensitivity and environmental variability; and the potential difficulty of distinguishing between a pathway leading to an adverse effect and a desirable response that counteracts the potential for an adverse effect. Dose–response analysis is also problematic, as it is usually highly uncertain how the concentration of a chemical in a test tube or cell culture flask relates to a blood concentration following an in vivo exposure. Comparison of the relative efficacy of these newer methodologies with established approaches will determine whether they are incorporated into expert opinions in tort or regulatory law cases.
Two types of extrapolation must be considered: from animal data to humans and from higher doses to lower doses.64 In qualitative extrapolation, one can usually rely on the fact that a compound causing an effect in one mammalian species will cause it in another species. This is a basic principle of toxicology and pharmacology, when properly qualified by known differences in absorption, metabolism, excretion, and other known biological differences (e.g., differences in
63. Nat’l Rsch. Council, Toxicity Testing in the 21st Century: A Vision and a Strategy (2007), https://doi.org/10.17226/11970. See also Kavlock et al., supra note 48; Sebastian Schmeisser et al., New Approach Methodologies in Human Regulatory Toxicology—Not If, but How and When!, 178 Env’t Int’l 108082 (2023), https://doi.org/10.1016/j.envint.2023.108082.
64. See EPA, Guidance for Applying Quantitative Data to Develop Data-Derived Extrapolation Factors for Interspecies and Intraspecies Extrapolation (2014), https://perma.cc/JJ2K-SD8C.
receptor binding affinity) between humans and experimental animals. If a heavy metal, such as mercury, causes kidney toxicity in laboratory animals, it is highly likely to do so at some dose in humans. However, the dose at which mercury causes this effect in laboratory animals is modified by many internal factors, and the dose–response curve may be different from that for humans. Through the study of factors that modify the toxic effects of chemicals, including absorption, distribution, metabolism, and excretion, researchers can improve the ability to extrapolate from laboratory animals to humans and from higher to lower doses.65 The mathematical depiction of the process by which an external exposure is absorbed into the bloodstream and then moves through various compartments in the body until it reaches the target organ is often called physiologically based pharmacokinetic or toxicokinetic (PBPK/PBPT) modeling.66
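Full PBPK models track many physiologically parameterized tissue compartments; the sketch below is the far simpler classic one-compartment model with first-order absorption and elimination (the Bateman equation), with hypothetical parameter values, to illustrate the kind of calculation involved:

```python
from math import exp, log

# Hypothetical one-compartment toxicokinetics after a single oral dose.
dose_mg = 10.0   # administered dose (mg)
F       = 0.8    # oral bioavailability (fraction absorbed)
vd_l    = 40.0   # apparent volume of distribution (liters)
ka      = 1.0    # first-order absorption rate constant (1/h)
ke      = 0.1    # first-order elimination rate constant (1/h)

def blood_conc(t_h):
    """Blood concentration (mg/L) at t_h hours post-dose (Bateman equation)."""
    return (F * dose_mg * ka) / (vd_l * (ka - ke)) * (exp(-ke * t_h) - exp(-ka * t_h))

# Peak concentration occurs at t_max = ln(ka/ke) / (ka - ke), about 2.6 h here.
t_max = log(ka / ke) / (ka - ke)
```

Real PBPK/PBPT models replace this single compartment with blood-flow-linked tissue compartments (liver, fat, kidney, and so on), which is what allows the concentration at the target organ, rather than in blood alone, to be estimated.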
Extrapolation from studies in nonmammalian species to humans is much more difficult but can be done if there is sufficient information on similarities in absorption, distribution, metabolism, and excretion. Advances in computational toxicology have increased the ability of toxicologists to make such extrapolations.67 Quantitative determinations of human toxicity based on in vitro studies usually are not considered appropriate. In vitro or in vivo animal data for elucidating the mechanisms of toxicity are more persuasive when positive human epidemiological data or human tissue–based toxicological information also exists.
65. For example, benzene undergoes a complex metabolic sequence that results in toxicity to the bone marrow in all species, including humans. Martyn T. Smith, Advances in Understanding Benzene Health Effects and Susceptibility, 31 Annual Rev. Public Health 133 (2010), https://doi.org/10.1146/annurev.publhealth.012809.103646. The exact metabolites responsible for this bone marrow toxicity are the subject of much interest but remain unknown. Mice are more susceptible to benzene than are rats. If researchers could determine the differences between mice and rats in their metabolism of benzene, they would have a useful clue about which portion of the metabolic scheme is responsible for benzene toxicity to the bone marrow. See, e.g., Angela L. Slitt, Absorption, Distribution, and Excretion of Toxicants, in Casarett & Doull’s, supra note 1, at 159–92; Andrew Parkinson et al., Biotransformation of Xenobiotics, in Casarett & Doull’s, supra note 1, at 193–400.
66. For a discussion of methods used to extrapolate from animal toxicity data to human health effects, see text and references cited in the sections titled “Benchmark Dose (BMD)” and “Linearized Non-Threshold (LNT) Model and Determination of Cancer Risk” above.
67. See, e.g., Nicole C. Kleinstreuer et al., Introduction to Special Issue: Computational Toxicology, 34 Chem. Rsch. Toxicology 171 (2021), https://doi.org/10.1021/acs.chemrestox.1c00032. For discussion of the use of artificial intelligence and machine learning in toxicology, see Zhoumeng Lin & Wei-Chun Chou, Machine Learning and Artificial Intelligence in Toxicological Sciences, 189 Toxicological Scis. 7 (2022), https://doi.org/10.1093/toxsci/kfac075. For an overview of new approaches in structure-activity relationships, see Mark T.D. Cronin et al., A Scheme to Evaluate Structural Alerts to Predict Toxicity—Assessing Confidence by Characterising Uncertainties, 135 Regul. Toxicology & Pharmacology 105249 (2022), https://doi.org/10.1016/j.yrtph.2022.105249.
The biological, chemical, and physical phenomena that are the basis of life are astounding in their complexity. As a result, human subcellular, cellular, and organ function are both delicately balanced and highly robust. Small changes caused by external chemical and physical agents can have major effects; yet, through the millennia, evolutionary pressures have led to the emergence of safety mechanisms that defend against adverse environmental stresses.
The specialization that is a hallmark of organ development in vertebrates inherently leads to diversity in the underlying processes that are the basis of organ function. Certain chemicals poison virtually all cells by affecting a basic biological process essential to life. For example, cyanide interferes with the conversion of oxygen to energy in a subcellular component known as mitochondria.68 Other chemical agents interfere selectively with an organ-specific process. For example, organophosphorus pesticides and chemically related biological warfare agents (i.e., some nerve gases) specifically interfere with the way nerve cells transmit signals—a process that is pertinent primarily to the nervous system. Other chemicals, such as the widely used over-the-counter analgesic acetaminophen, can cause specific damage to the liver if the dose is sufficient to overwhelm the normal protective pathways in the liver.69 Table 1 provides arbitrarily selected examples of toxicological endpoints and agents of concern, which are not meant to be inclusive or exhaustive.
Despite this specialization, there are pathological processes common to diseases affecting many different organs. For example, chronic inflammation of the skin leads to fiber formation that is recognized as scarring. Similarly, cirrhosis of the liver can result from fibrogenic processes caused by repetitive inflammation of the liver, such as from the overuse of ethanol, and fibrosis of the lung is a pathological process resulting from exposure to asbestos, silica, and
68. Note that the diffuse toxicity of cyanide also reflects its ability to spread widely in the body. Certain mitochondrial poisons primarily affect the brain and active muscles, including the heart, which are particularly oxygen dependent. Others, unable to penetrate the blood–brain barrier, will primarily affect peripheral muscles including the heart.
69. Acetaminophen (Tylenol) is among the most widely used analgesic and fever-reducing (antipyretic) drugs and is used safely every day by millions of people at normal therapeutic doses (less than five grams per day). However, doses in excess of ten to fifteen grams in a single day deplete an intracellular antioxidant called glutathione (GSH), and the excess dose can cause serious damage to the liver. Acetaminophen overdose is among the leading causes of acute liver failure in the United States. To produce toxicity, acetaminophen must first be metabolized to a reactive form by a specific enzyme that is present in significant amounts only in the liver, which is why the drug is toxic only to the liver.
Table 1. Sample of Selected Toxicological Endpoints and Examples of Agents of Concern in Humans
| Organ System | Examples of Endpoints | Examples of Agents of Concern |
|---|---|---|
| Skin | allergic contact dermatitis | nickel, poison ivy, cutting oils |
| | chloracne | dioxins and dioxin-like compounds |
| | cancer | polycyclic aromatic hydrocarbons |
| Respiratory Tract | nonspecific irritation (reactive airway disease) | formaldehyde, acrolein, ozone |
| | asthma | toluene diisocyanate |
| | chronic obstructive pulmonary disease | cigarette smoke |
| | fibrosis, pneumoconiosis | silica, mineral dusts, cotton dust, beryllium |
| | cancer | cigarette smoke, arsenic, asbestos, nickel |
| Blood and the Immune System | anemia | arsine, lead, methyldopa |
| | secondary polycythemia | cobalt |
| | methemoglobinemia | nitrites, aniline dyes, diaminodiphenyl sulfone |
| | pancytopenia | benzene, radiation, chemotherapeutic agents |
| | secondary lupus erythematosus | hydralazine |
| | leukemia | benzene, radiation, chemotherapeutic agents |
| Liver and Gastrointestinal Tract | hepatic damage (hepatitis) | acetaminophen, ethanol, carbon tetrachloride, vitamin A |
| | cancer | aflatoxin, vinyl chloride |
| Urinary Tract | kidney toxicity | ethylene and diethylene glycols, lead, melamine, aminoglycoside antibiotics |
| | bladder cancer | aromatic amines, arsenic |
| Nervous System | nervous system toxicity | cholinesterase inhibitors, mercury, lead, n-hexane, bacterial toxins (botulinum, tetanus), many organochlorine insecticides |
| | Parkinson’s disease/Parkinson’s-like symptoms | manganese, MPTP |
| Reproductive and Developmental Toxicity | fetal malformations | thalidomide, ethanol |
| | decreased male fertility | dibromochloropropane (DBCP) |
| Endocrine System | thyroid toxicity | radioactive iodine, perchlorate |
| Cardiovascular System | heart toxicity | anthracyclines, cobalt |
| | high blood pressure | lead |
| | arrhythmias | plant glycosides (e.g., digitalis) |
Note: This table presents only examples of toxicological endpoints and examples of agents of concern in humans and is provided to help illustrate the variety of toxic agents and endpoints. It is not an exhaustive or inclusive list of organs, endpoints, or agents. Absence from this list does not indicate that any specific chemical is not a cause of the target organ toxicity.
other agents.70 The potential for endocrine disruption by chemicals, particularly those that persist within the body, has become an increasing concern. Many of these persistent agents belong to families of chemically similar compounds, such as dioxins, PCBs, or per- and polyfluoroalkyl substances (PFAS), whose members may differ in their effects. Judges may encounter cases in which a wide variety of toxic effects are alleged to have been caused by endocrine-disrupting chemicals. Possible adverse outcomes from endocrine disruption include loss of fertility, birth defects, metabolic diseases that may result from thyroid disruption, and potentially some forms of cancer. While the concept of dose–response is still applicable, there is some evidence that certain toxic effects of endocrine-disrupting chemicals may appear at lower doses but not at higher doses, usually because the higher doses produce biological changes that mask the effects seen at lower doses (a so-called “non-monotonic” dose–response relationship). This is an area of considerable controversy in toxicology. Vitamins also provide an example of differing adverse effects at levels above or below the normal range: for example, too much vitamin A can cause liver disease, and too little will cause blindness.
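The contrast between an ordinary and a non-monotonic dose–response curve can be illustrated numerically. The two toy functions below are invented for illustration and do not model any particular chemical; they simply contrast a response that rises with dose and plateaus against one that peaks at an intermediate dose and declines at higher doses.

```python
def monotonic_response(dose, max_effect=1.0, d50=10.0):
    """Hill-type curve: response increases with dose and plateaus."""
    return max_effect * dose / (d50 + dose)

def non_monotonic_response(dose, k_effect=0.2, k_mask=0.02):
    """Toy inverted-U curve: a low-dose effect that higher doses
    progressively mask (peak at dose = 1/sqrt(k_mask), here ~7.1)."""
    return (k_effect * dose) / (1.0 + k_mask * dose ** 2)

doses = [1.0, 10.0, 100.0]
mono = [monotonic_response(d) for d in doses]          # strictly increasing
non_mono = [non_monotonic_response(d) for d in doses]  # largest at the mid dose
```

The practical consequence is that a study testing only high doses could miss an effect that appears on the rising limb of an inverted-U curve.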
Particularly challenging to standard toxicological approaches are agents that react with different receptors present on the surface or internal components of the cell. These receptors often belong to complex families of related cellular components that are continually interacting with the broad range of hormones produced by our bodies.71 The intricate dynamic processes of normal endocrine
70. Lung fibrosis is a key pathological finding in a group of diseases known as pneumoconiosis that includes coal miners’ black lung disease, silicosis, asbestosis, and other conditions usually caused by occupational exposures to small airborne particles.
71. As a simplification, agent–receptor interactions often are described as a key in a lock, with the key needing to be able to both fit into the lock and turn the mechanism. An example from the
activity include daily cycles controlled by internal “biological clocks,” and feedback loops that allow cyclic variation, such as in the menstrual cycle or in the variations of hormone and receptor levels that are linked to normal functions such as sleeping and sexual activity. These complex normal up-and-down variations produce conceptual difficulties when attempting to extrapolate the results from model systems to the functioning human.72
The understanding of the relationship between mutation and cancer discussed previously led to some of the first toxicological tests to determine whether an external agent could cause cancer. Such tests have grown in sophistication because of the advances in molecular biology and computational toxicology that have occurred concomitantly with an increased understanding of the variety of potential pathways that lead to mutagenesis.73
Toxicological testing for chemical carcinogens ranges from relatively simple studies to determine whether the substance is capable of producing bacterial mutations to observation of cancer incidence as a result of long-term administration of the substance to laboratory animals. Between these two extremes are a multiplicity of tests that build upon the understanding of the mechanism of cancer causation. In vitro or in vivo tests may focus on the evidence of effects in DNA, such as the presence of adducts of the chemical or its metabolites bound to the DNA molecule or the cross-linking of the DNA molecule to protein. Researchers may look for changes in the nucleus of the cell suggestive of DNA damage that could result in mutagenesis and carcinogenesis, for example, the micronucleus test or the comet assay. Certain mutagens cause an increase in the normal exchange of nuclear material among DNA components during normal cell division, which gives rise to a test known as the sister chromatid exchange.74 The direct observation of chromosomes
nervous system is the use in treating a heroin overdose of another opiate that has a much higher affinity for the receptor site but produces little effect once bound. When given to a normal person, this second opiate would have a mild depressant effect, but it can reverse a near fatal overdose of heroin by displacing the heroin from the receptor site. Thus, the directionality of opiate effect depends on the interaction of the components of the mixture. This interaction is even more complex when dealing with estrogenic agents that are naturally occurring as well as made within the body at different levels in response to different external and internal stimuli and at different time intervals.
72. See, e.g., In re Denture Cream Prods. Liab. Litig., No. 09-2051-MD, 2015 WL 392021 (S.D. Fla. Jan. 28, 2015) (expert’s failure to correct for circadian rhythm fluctuations in zinc plasma levels rendered his opinion inadmissible). In another example, the complexity of the interaction of a mixture of dioxins with receptors governing the endocrine system can be contrasted with that of the reaction of carbon monoxide with the hemoglobin oxygen receptor discussed supra, note 18. The latter is unidirectional in that any additional carbon monoxide will interfere with oxygen delivery, of which there cannot be too much under normal physiological conditions.
73. See, e.g., James E. Klaunig, Carcinogenesis, in An Introduction to Interdisciplinary Toxicology: From Molecules to Man 87 (2020), https://doi.org/10.1016/B978-0-12-813602-7.00008-9.
74. All of these tests require validation regarding their relevance to predicting human carcinogenesis, as well as to their technical reproducibility. See Klapacz & Gollapudi, supra note 61, for a discussion of the wide variety of short-term mutagenesis assays for predicting mutagenic and potentially carcinogenic potential of chemicals.
to look for specific abnormalities, known as cytogenetic analysis, is providing more information about the pathways of carcinogenesis. For cancers such as acute myelogenous leukemia, it has long been recognized that those individuals who present with recognizable chromosomal abnormalities are more likely to have been exposed to a known human chemical leukemogen such as benzene.75 These and other tests provide information that can be used in evaluating whether a chemical is a potential human carcinogen.
The recent almost explosive growth in the ability to measure minuscule changes in the complex genome, a process known as next-generation sequencing, is rapidly being applied to a broad range of human diseases. The emphasis thus far has been on the promise of developing new therapies appropriate to personalized medicine, but the same techniques may in the future provide a fingerprint of the interaction of a chemical with DNA. However, there are very few chromosomal abnormalities or specific mutations that are unequivocally linked to a specific chemical or physical carcinogen.76
Assessing whether a chemical or physical agent produces human cancer requires careful evaluation. The World Health Organization’s IARC and the U.S. National Toxicology Program (NTP) have formal processes to evaluate the weight of evidence that a chemical causes cancer.77 Each classifies chemicals on the basis of epidemiological evidence, toxicological findings in laboratory animals, and mechanistic considerations, and then assigns a specific category of carcinogenic potential to the individual chemical or exposure situation (e.g., employment as a painter).78 Only a small percentage of the total chemicals in commerce
75. Patrick J. Kerzic & Richard D. Irons, Distribution of Chromosome Breakpoints in Benzene-Exposed and Unexposed AML Patients, 55 Env’t Toxicology & Pharmacology 212 (2017), https://doi.org/10.1016/j.etap.2017.08.033.
76. See Luoping Zhang et al., The Nature of Chromosomal Aberrations Detected in Humans Exposed to Benzene, 32 Critical Revs. Toxicology 1 (2002), https://doi.org/10.1080/20024091064165. However, studies on liver cancer patients exposed to the potent dietary carcinogen aflatoxin B1 (AFB) have demonstrated that a specific mutation in codon 249 in the important cancer gene called TP53 is uniquely associated with exposure to AFB. Only liver tumors from patients with a history of AFB exposure showed this specific mutation. S. P. Hussain et al., TP53 Mutations and Hepatocellular Carcinoma: Insights into the Etiology and Pathogenesis of Liver Cancer, 26 Oncogene 2166 (2007), https://doi.org/10.1038/sj.onc.1210279.
77. The U.S. National Toxicology Program issues a congressionally mandated report on carcinogens. Nat’l Toxicology Prog., 15th Rep. on Carcinogens (2021), https://perma.cc/V4DP-3YNZ. IARC produces its reports through a monograph series that provides detailed description of the agents or processes under consideration as well as the findings of the IARC expert working group. See the IARC website for a list of these monographs, https://perma.cc/UEP2-5L9A.
78. IARC uses the following classifications:
Group 1, The agent (mixture) is carcinogenic to humans;
Group 2A, The agent (mixture) is probably carcinogenic to humans;
Group 2B, The agent (mixture) is possibly carcinogenic to humans;
Group 3, The agent (mixture) is not classifiable as to its carcinogenicity to humans.
are considered to be known human carcinogens, although this must be considered in the context of how many of these chemicals have been rigorously tested (see discussion of REACH and the Lautenberg Act above). In the past, when chemicals were evaluated for carcinogenicity, assignment to the highest category was dependent almost totally on whether epidemiological research had been conducted, although animal data and mechanistic information were also considered. In recent years, with improved understanding of the mechanism of action of chemical carcinogens, there has been increased use of mechanistic data.79 For example, higher credence is given to the likelihood that a chemical is a human carcinogen if the metabolite found to be responsible for carcinogenesis in a laboratory animal is also found in the blood or urine of humans exposed to this chemical, or if there is evidence of the same type of DNA damage in humans as there is in laboratory animals in which the agent does cause cancer.80
Inherent in placing chemicals into discrete categories, when the strength of the evidence is in fact a continuum, is that some chemicals will fall very close to the dividing line between categories. Inevitably, small differences in the interpretation of the evidence for such chemicals will lead to disagreement about categorization.
79. See Daniele S. Wikoff et al., A Framework for Systematic Evaluation and Quantitative Integration of Mechanistic Data in Assessments of Potential Human Carcinogens, 167 Toxicological Scis. 322 (2019), https://doi.org/10.1093/toxsci/kfy279, for a discussion and for specific examples of the use of mechanistic data in evaluating carcinogens. The evolution in the approach to determining cancer causality is evident from reviewing the guidelines used to assemble the weight of evidence for causality by IARC and NTP, two of the organizations that have the lengthiest track record of responsibility for the hazard identification of carcinogens. Both have increased the weight given to mechanistic evidence in characterizing the overall strength of the total evidence used to classify the potential for a chemical or an exposure to be causal. IARC now permits classification in Group 1 when there is less than sufficient evidence in humans but sufficient evidence in animals and “strong evidence in exposed humans that the agent exhibits key characteristics of carcinogens and sufficient evidence of carcinogenicity in experimental animals.” IARC Preamble, supra note 60, at 23. The criterion used by NTP for listing a chemical as a known human carcinogen in its biannual Report on Carcinogens is: “There is sufficient evidence of carcinogenicity from studies in humans,* which indicates a causal relationship between exposure to the agent, substance, or mixture, and human cancer.” The asterisk is particularly notable in that it specifies that the evidence need not be solely epidemiological: “*This evidence can include traditional cancer epidemiology studies, data from clinical studies, and/or data derived from the study of tissues or cells from humans exposed to the substance in question, which can be useful for evaluating whether a relevant cancer mechanism is operating in humans.” See Nat’l Toxicology Prog., supra note 77, at 6. 
For a recent discussion on using mechanism/mode of action data in assessment of genotoxic carcinogens, see also Andrea Hartwig et al., Mode of Action-Based Risk Assessment of Genotoxic Carcinogens, 94 Archives Toxicology 1787 (2020), https://doi.org/10.1007/s00204-020-02733-2. The EPA also considers mechanism of action in its regulatory approaches and distinguishes further between mechanism of action and mode of action. See Kathryn Z. Guyton et al., Improving Prediction of Chemical Carcinogenicity by Considering Multiple Mechanisms and Applying Toxicogenomic Approaches, 681 Mutation Rsch. 230, 240 (2009), https://doi.org/10.1016/j.mrrev.2008.10.001.
80. An example is the IARC evaluation of formaldehyde that upgraded the categorization from 2A to 1 based on epidemiological data that were strongly supported by the finding of nasal cancer in laboratory animals and by the presence of DNA-protein cross-links in the nasal tissue of the
In toxic tort cases, once an expert in toxicology has been qualified, they are expected to offer an opinion on whether the plaintiff’s disease was caused by exposure to a chemical. To do so, the expert relies on the principles of toxicology to provide a scientifically valid methodology for establishing causation and then applies the methodology to the facts of the case.
An opinion on causation should be premised on three preliminary assessments. First, the expert should analyze whether the disease can be related to chemical exposure by a biologically plausible hypothesis. Second, the expert should examine whether the plaintiff was exposed to the chemical in a manner that can lead to absorption into the body. Third, the expert should offer an opinion about whether the dose to which the plaintiff was exposed is sufficient to cause the disease. In complex cases, multiple experts may offer opinions as to specific aspects of disease causation.
The following questions help evaluate the strengths and weaknesses of toxicological evidence.
All living organisms share a common biology that leads to marked similarities in the responsiveness of subcellular structures to toxic agents. Among mammals, organ structure and function are sufficiently similar to permit extrapolation from one species to another in most instances. Comparative information concerning factors that modify the toxic effects of chemicals, including
laboratory animals and of humans inhaling formaldehyde. However, epidemiological evidence associating formaldehyde with human acute myelogenous leukemia was questioned on the basis of the lack of mechanistic evidence, including questions about how such a highly reactive agent could reach the bone marrow following inhalation. See Formaldehyde, 2-Butoxyethanol and 1-tert-Butoxypropan-2-ol, in 88 IARC Monographs on the Evaluation of Carcinogenic Risks to Humans (2006).
absorption, distribution, metabolism, and excretion, enhances the expert’s ability to extrapolate from laboratory animals to humans.81
The expert should review similarities and differences between the animal species in which the compound has been tested and humans. This analysis should form the basis of the expert’s opinion regarding whether extrapolation from animals to humans is warranted.
In general, an overwhelming similarity is apparent in the biology of all living things, and there is a particularly strong similarity among mammals. Of course, laboratory animals differ from humans in many ways. For example, rats do not have gallbladders. Thus, rat data would not be pertinent to the possibility that a compound produces human gallbladder toxicity.82 The life stage at which exposure occurs can also be an important determinant of toxicity. It is generally recognized that the developing embryo and fetus are unusually susceptible to many toxic substances, and thus exposure of a pregnant animal requires special consideration for toxic effects that might occur to the developing offspring. In a similar manner, it is also generally recognized that early life exposure (from infancy through adolescence) is also a period of enhanced susceptibility, and effects not seen in adults may occur. This is especially true for chemicals that affect the nervous system, since the development of the nervous system is not complete until early adulthood. Thus, for example, the effects of lead poisoning in children are manifest mostly through impacts on the central nervous system that lead to impairment of learning and development, whereas adults exposed to the same level of lead have little or no detectable effects on the central nervous system, although at high doses lead can cause effects on the peripheral nervous system in adults (so-called “lead palsy”).
Note that, in animal studies, many subjective symptoms are poorly modeled regardless of the age of exposure. Thus, complaints that a chemical has caused nonspecific symptoms, such as nausea, headache, and weakness, for which there may be no objective manifestations in humans, are difficult to test in laboratory animals.
81. See generally supra note 3 and references therein.
82. See, e.g., Hardeman v. Monsanto Co., 997 F.3d 941, 963 (9th Cir. 2021) (“Animal studies are relevant evidence of causation where there is a sound basis for extrapolating conclusions from those studies to humans in real-world conditions.”). See generally David Spurgeon et al., Species Sensitivity to Toxic Substances: Evolution, Ecology and Applications, 8 Frontiers Env’t Sci. 1 (2020), https://doi.org/10.3389/fenvs.2020.588380. Species differences that produce a qualitative difference in response to xenobiotics are well known. Sometimes understanding the mechanism underlying the species difference can allow one to predict whether the effect will occur in humans. Thus, carbaryl, an insecticide commonly used for gypsy moth control, produces fetal abnormalities in dogs but not in hamsters, mice, rats, and monkeys. Dogs lack the specific enzyme involved in metabolizing carbaryl; the other species tested all have this enzyme, as do humans. Therefore, it has been assumed that humans are not at risk for fetal malformations produced by carbaryl.
Another important variable in extrapolating laboratory animal data to humans is that animals used in experimental research are maintained on nutritionally adequate diets and in clean, healthy environments designed to maintain optimal health. The diets are free of other chemical contaminants, and the rest of their environment is equally pristine. This is in contrast to humans, who may have unhealthy diets and may be exposed to multiple toxic substances from their diets, workplaces, and general environment. This is one of the reasons that regulatory agencies typically apply additional safety or uncertainty factors when extrapolating laboratory animal data to humans.
Some toxic agents affect only specific organs and not others. This organ specificity may be due to particular patterns of absorption, distribution, metabolism, and excretion; the presence of specific receptors; or organ function. For example, organ specificity may reflect the presence in the organ of relatively high levels of an enzyme capable of metabolizing or changing a compound to a toxic form of the compound,83 or it may reflect the relatively low level of an enzyme capable of detoxifying a compound. An example of the former is liver toxicity caused by inhaled carbon tetrachloride, which affects the liver but not the lungs because of extensive metabolism to a toxic metabolite within the liver but relatively little such metabolism in the lung.84
Some chemicals, however, may cause nonspecific effects or even multiple effects. Lead is an example of a toxic agent that affects many organ systems, including the blood, the central and peripheral nervous systems, the reproductive system, and the kidneys.
83. Certain chemicals act directly to produce toxicity, whereas others require the formation of a toxic metabolite. For example, the potent rat liver carcinogen, N-nitroso-dimethylamine (NDMA, present at trace levels in a typical diet and in some pharmaceutical preparations), requires metabolic activation in order to bind to DNA and cause mutations that lead to cancer in experimental animals. In humans, there appears to be only one enzyme capable of activating NDMA to its reactive intermediate, and this enzyme (called CYP2E1) is only expressed in the liver, suggesting that liver cancer would be the only kind of cancer expected in humans exposed to NDMA. However, other carcinogenic nitrosamines can be activated by enzymes present in other human tissues, as well as the liver. See Yupeng Li & Stephen S. Hecht, Metabolic Activation and DNA Interactions of Carcinogenic N-Nitrosamines to Which Humans Are Commonly Exposed, 23 Int’l J. Molecular Scis. 4559 (2022), https://doi.org/10.3390/ijms23094559.
84. Brian J. Day et al., Potentiation of Carbon Tetrachloride-Induced Hepatotoxicity and Pneumotoxicity by Pyridine, 8 J. Biochem. Toxicology 11 (1993), https://doi.org/10.1002/jbt.2570080104.
Specificity often reflects the function of individual organs. For example, the thyroid is particularly susceptible to radioactive iodine in atomic fallout because thyroid hormone is unique within the body in that it requires iodine. Through evolution, a very efficient and specific mechanism has developed that concentrates any absorbed iodine preferentially within the thyroid, rendering the thyroid particularly at risk from radioactive iodine. In a test tube, the radiation from radioactive iodine can affect the genetic material obtained from any cell in the body, but in the intact laboratory animal or human, only the thyroid is at risk.
The unfolding of the human genome already is beginning to provide information pertinent to understanding the wide variation in human risk from environmental chemicals. The impact of this understanding on toxic tort causation issues continues to be explored.85 Several examples of the importance of genetic variability in response to toxic substances are provided later in the section titled “What Is the Likelihood That the Disease Would Have Occurred Without the Specific Exposure?”
Understanding the structural aspects of chemical toxicology has led to the use of structure–activity relationships (SAR) as a formal method of predicting the potential toxicity of new chemicals. This technique compares the chemical structure of compounds with known toxicity to that of compounds with unknown toxicity; toxicity is then estimated based on the molecular similarities between the two compounds. Although SAR is used
85. Nat’l Rsch. Council, Applications of Toxicogenomic Technologies to Predictive Toxicology and Risk Assessment (2007), https://doi.org/10.17226/12037. Genomics can also be misinterpreted. An example is the use of white blood cell gene expression to determine whether benzene was a cause of acute myelogenous leukemia (AML) in individual workers. Martyn T. Smith, Misuse of Genomics in Assigning Causation in Relation to Benzene Exposure, 14 Int’l J. Occupational & Env’t Health 144 (2008), https://doi.org/10.1179/oeh.2008.14.2.144, describes why the failure to match a pattern of DNA expression in workers with AML who were previously exposed to benzene is not scientifically defensible as a means to establish the lack of causation, as said to have been done in workers’ compensation cases in California. The wide range in the rate of metabolism of chemicals is at least partly under genetic control. A study of Chinese workers exposed to benzene found approximately a doubling of risk in people with high levels of either an enzyme that increased the rate of formation of a toxic metabolite or an enzyme that decreased the rate of detoxification of this metabolite. There was a sevenfold increase in risk for those who had both genetically determined variants. Nathan Rothman et al., Benzene Poisoning, A Risk Factor for Hematological Malignancy, Is Associated with the NQO1 609C→T Mutation and Rapid Fractional Excretion of Chlorzoxazone, 57 Cancer Rsch. 2839 (1997), https://perma.cc/4BDM-JXNZ. See also Lucio G. Costa & David L. Eaton, Gene Environment Interactions: Fundamentals of Ecogenetics (2006).
extensively by the EPA in evaluating many new chemicals required to be tested under the registration requirements of the TSCA, its reliability is subject to a number of limitations.86
Cellular and tissue culture research can be particularly helpful in identifying mechanisms of toxic action and potential target organ toxicity. The major barrier to the use of in vitro results is the frequent inability to relate doses that cause cellular toxicity to doses that cause whole-animal toxicity. In many critical areas, knowledge that permits such quantitative extrapolation is lacking.87 Nevertheless, the ability to quickly test new products through in vitro tests, using
86. For example, benzene and the alkyl benzenes (which include toluene, xylene, and ethyl benzene) share a similar chemical structure. SAR works exceptionally well in predicting the acute central nervous system anesthetic-like effects of both benzene and the alkyl benzenes. Although there are slight differences in dose–response relationships, they are readily explained by the interrelated factors of chemical structure, vapor pressure, and lipid solubility (the brain is highly lipid). Nat’l Rsch. Council, The Alkyl Benzenes (1981). However, only benzene produces damage to the bone marrow and leukemia; the alkyl benzenes do not have this effect. This difference is the result of specific toxic metabolic products of benzene in comparison with the alkyl benzenes. Thus, SAR is predictive of neurotoxic effects but not bone marrow effects. See David A. Eastmond et al., Lymphohematopoietic Cancers Induced by Chemicals and Other Agents and Their Implications for Risk Evaluation: An Overview, 761 Mutation Rsch. 40 (2014), https://doi.org/10.1016/j.mrrev.2014.04.001. Advances in computational approaches also show promise in improving SAR. See Nat’l Rsch. Council, supra note 63. For an example of how gains in computational sciences and “artificial intelligence” (e.g., “Deep Learning”) are being applied to improve SAR analysis for predicting chemical toxicity, see Gabriel Idakwo et al., Deep Learning-Based Structure-Activity Relationship Modeling for Multi-Category Toxicity Classification: A Case Study of 10K Tox21 Chemicals With High-Throughput Cell-Based Androgen Receptor Bioassay Data, 10 Frontiers Physiology 2044 (2019), https://doi.org/10.3389/fphys.2019.01044. In Daubert v. Merrell Dow Pharmaceuticals, Inc., 509 U.S. 579 (1993), the Court rejected a per se exclusion of SAR, animal data, and reanalysis of previously published epidemiological data where there were negative epidemiological data. 
Most cases involving SAR expert opinion involve patent litigation or analogous drugs under the Controlled Substance Analogue Enforcement Act. See, e.g., Takeda Pharm. Co. v. Torrent Pharms. Ltd., No. 17-3186 (SRC)(CLW), 2020 WL 549594, at *26 (D.N.J. Feb. 4, 2020), aff’d, 844 F. App’x 339 (Fed. Cir. 2021); United States v. Cooper, No. 3:14-cr-014-J-20MCR, 2015 WL 13850123, at *6, *9 (M.D. Fla. Jan. 14, 2015).
87. See, e.g., In re Denture Cream Prods. Liab. Litig., No. 09-2051-MD, 2015 WL 392021 (S.D. Fla. Jan. 28, 2015) (expert failed to explain relevancy of in vitro studies to humans or account for factors needed to make a proper extrapolation).
human cells, provides invaluable “early warning systems” for toxicity.88 For example, screening a new chemical with a simple bacterial mutagenicity assay such as the Ames test will identify whether the chemical is likely to be mutagenic, and thus potentially carcinogenic. The pharmaceutical industry routinely evaluates new chemicals for mutagenic/carcinogenic potential and generally will not spend large amounts of time and money on further development if such simple tests identify a candidate drug as likely to be mutagenic.
No matter how strong the temporal relationship between exposure and the development of disease, or the supporting epidemiological evidence, it is difficult to accept an association between a compound and a health effect when no plausible mechanism can be identified by which the chemical or physical exposure leads to the putative effect.89
An expert who opines that exposure to a compound caused a person’s disease engages in deductive clinical reasoning.90 In most instances, cancers and other diseases do not wear labels documenting their causation. The opinion is based on an assessment of the individual’s exposure, including the amount, the temporal relationship between the exposure and disease, and other disease-causing factors. This information is then compared with scientific data on the relationship
88. Despite its limitations, in vitro research can strengthen inferences drawn from whole-animal bioassays and can support opinions regarding whether the association between exposure and disease is biologically plausible. See Klapacz & Gollapudi, supra note 61; Rogers, supra note 61.
89. See, e.g., Sarkees v. E.I. DuPont de Nemours & Co., No. 17-CV-651, 2020 WL 906331 (W.D.N.Y. Feb. 25, 2020) (expert reviewed animal studies, in vitro studies, and human studies and the inferential steps utilized by researchers to propose a likely mechanism for toxic effects in humans).
90. For an example of deductive clinical reasoning based on known facts about the toxic effects of a chemical and the individual’s pattern of exposure, see Bernard D. Goldstein, Is Exposure to Benzene a Cause of Human Multiple Myeloma?, 609 Annals N.Y. Acad. Scis. 225 (1990), https://doi.org/10.1111/j.1749-6632.1990.tb32070.x.
between exposure and disease. The certainty of the expert’s opinion depends on the strength of the research data demonstrating a relationship between exposure and the disease at the dose in question and the presence or absence of other disease-causing factors (also known as confounding factors).91
Particularly problematic are generalizations made in personal injury litigation from regulatory positions. Regulatory standards are set for purposes far different from determining the preponderance of the evidence in a toxic tort case. Further, if regulatory standards are discussed in toxic tort cases to provide a reference point for assessing exposure levels, it must be recognized that there is a great deal of variability in the extent of evidence required to support different regulations.92 The extent of evidence required to support regulations depends on
91. Causation issues are discussed in Steve C. Gold et al., Reference Guide on Epidemiology, section titled “General Causation and Specific Causation,” and John B. Wong et al., Reference Guide on Medical Testimony, section titled “Medical Decision-Making,” both in this manual. For a detailed analysis of the challenges of assessing epidemiological and toxicological data to establish causal relationships between exposure and disease, see Nat’l Acads. Scis., Eng’g, & Med., Advancing the Framework for Assessing Causality of Health and Welfare Effects to Inform National Ambient Air Quality Standard Reviews (2022).
92. See, e.g., Kirk v. Schaeffler Grp. USA, Inc., 887 F.3d 376, 392 (8th Cir. 2018) (noting that regulatory standards are “designed to protect public health,” not to be a measure of causation, and that regulatory agencies may set standards “without having precise data on the question of how much harm, or what kind of harm, some specific amount of . . . substance might reasonably be expected to cause” (quoting Wright v. Willamette Indus., Inc., 91 F.3d 1105, 1107 (8th Cir. 1996))); Williams v. Mosaic Fertilizer, LLC, 889 F.3d 1239, 1247 (11th Cir. 2018) (stating that regulatory standards are designed to be protective whereas “dose–response calculations aim to identify the exposure levels that actually cause harm”).
93. Cmty. Voice v. U.S. Env’t Prot. Agency, 997 F.3d 983, 990–91 (9th Cir. 2021) (citing Whitman v. Am. Trucking Ass’ns, 531 U.S. 457, 467–68 (2001)) (explaining TSCA’s consideration of “non-health factors such as achievability and cost” and the opposite for the Clean Air Act, which the Supreme Court has held “does not” permit “the EPA to consider costs in setting clean air standards”). See also Nat’l Res. Defense Council v. U.S. Env’t Prot. Agency, 38 F.4th 34 (9th Cir. 2022) (challenge to the EPA’s conclusions on human health risks and cost-benefit analysis for glyphosate under FIFRA, based on the EPA’s failure to follow its Cancer Guidelines, resulting in court vacating human health portion of the EPA’s Interim Decision and remanding).
These three concerns, as well as others, including costs, politics, and the virtual certainty of litigation challenging the regulation, have an impact on the level of scientific proof required by the regulatory decision-maker.94
In addition, regulatory standards traditionally include protective factors to reasonably ensure that susceptible individuals are not put at significant risk. Furthermore, standards often are based on the risk that results from lifetime exposure. Accordingly, the mere fact that an individual has been exposed to a level above a standard does not necessarily mean that an adverse effect has occurred or will occur.
Evidence of exposure is essential in determining the effects of harmful substances.95 Potential human exposure is measured in one of three ways. First, when direct measurements cannot be made, exposure can be estimated by
94. These concerns are discussed in Stephen Breyer, Breaking the Vicious Circle: Toward Effective Risk Regulation (1993).
95. “‘[S]cientific knowledge of the harmful level of exposure to a chemical’ is a ‘minimal fact’ necessary for establishing causation.” Cooper v. Meritor, Inc., No. 4:16-CV-52-DMB-JMV, 2019 WL 545271, at *5 (N.D. Miss. Feb. 11, 2019) (quoting Allen v. Pa. Eng’g Corp., 102 F.3d 194, 199 (5th Cir. 1996)). In toxic tort cases, the plaintiff bears the burden of demonstrating the level of exposure. Zellars v. NexTech Ne. LLC, 895 F. Supp. 2d 734, 741 (E.D. Va. 2012) (citing Westberry v. Gislaved Gummi AB, 178 F.3d 257, 260 (4th Cir. 1999)). The court in Zellars discussed cases from the U.S. Courts of Appeals for the Fourth, Fifth, and Seventh Circuits where expert opinions were or were not found to be reliable, noting that where “substantial exposure” is established by the expert, “more detailed quantitative data about [the plaintiff’s] level of exposure” may not be necessary. 895 F. Supp. 2d at 739 (citing Westberry, 178 F.3d at 264). On the other hand, where there is “no direct evidence of the plaintiff’s level of exposure to the chemical at issue,” the expert’s opinion may be excluded as unreliable. Zellars, 895 F. Supp. 2d at 740 (citing Allen, 102 F.3d at 199). Likewise, where the expert did not review medical records, perform any tests or calculations, or seek to learn any specifics about the environment in which the alleged exposure took place, the expert’s opinion as to causation based on exposure level was excluded as being “not based on sufficient information about the level of the plaintiff’s exposure to the alleged toxin.” 895 F. Supp. 2d at 740 (citing Wintz v. Northrop Corp., 110 F.3d 508, 510–13 (7th Cir. 1997)). In Zellars, the expert opinions were ultimately excluded because the experts “did not consider the extent of Plaintiffs’ exposure,” rendering their opinions “speculative at best.” 895 F. Supp. 2d at 741, aff’d sub nom. Zellers v. NexTech Ne., LLC, 533 F. App’x 192, 198 (4th Cir. 2013). For a discussion of general issues
mathematical modeling, in which one uses a variety of physical factors to estimate the transport of the pollutant from the source to the individual. For example, mathematical models take into account such factors as wind variations to allow calculation of the transport of radioactive iodine from a federal atomic research facility to nearby residential areas. Second, exposure can be directly measured in the medium in question—air, water, food, or soil. When the medium of exposure is water, soil, or air, hydrologists or meteorologists may be called upon to contribute their expertise to measuring exposure. The third approach directly measures human individuals through some form of biological monitoring, such as blood tests to determine blood lead levels or urinalyses to check for a urinary metabolite indicative of pollutant exposure. Ideally, both environmental testing and biological monitoring are performed; however, this is not always possible, particularly in instances of past exposure (unless the chemical(s) had very long half-lives (e.g., greater than three or four years), in which case blood levels may be useful to demonstrate significant exposures that occurred many years earlier).96
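The point about long half-lives can be illustrated with simple first-order elimination arithmetic. The sketch below is hypothetical: the helper function, the four-year half-life, and the comparison half-life of a few days are illustrative assumptions, and real chemicals only approximate first-order kinetics.

```python
# Sketch: fraction of an absorbed chemical remaining in the body after a
# given time, assuming simple first-order (exponential) elimination.
# The half-life figures below are hypothetical illustrations.

def fraction_remaining(years_elapsed, half_life_years):
    """Body burden remaining after first-order elimination."""
    return 0.5 ** (years_elapsed / half_life_years)

# With a four-year half-life, roughly 18% of the original body burden is
# still present a decade later, so blood levels may still evidence a
# substantial past exposure.
print(round(fraction_remaining(10, 4), 2))  # 0.18

# With a half-life of a few days, essentially nothing measurable remains
# after ten years, and biological monitoring cannot document the exposure.
print(fraction_remaining(10, 5 / 365))
```

This is why the text notes that blood levels are useful for reconstructing past exposure only when the chemical's half-life is on the order of years.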
The toxicologist must go beyond understanding exposure to determine whether the individual was exposed to the compound in a manner that can result in absorption into the body. The absorption of the compound is a function of its physicochemical properties, its concentration, and the presence of other agents or conditions that assist or interfere with its uptake. For example, inhaled lead is absorbed almost totally, whereas ingested lead is taken up only partially into the body. Of toxicological relevance is the finding that children absorb ingested lead substantially more efficiently than adults, which may contribute to the enhanced susceptibility of children to the neurotoxic effects of lead.97 Iron deficiency and low nutritional calcium intake, both common conditions of children living in impoverished urban environments, increase the amount of ingested lead that is absorbed in the gastrointestinal tract and passes into the bloodstream.98
regarding assessment of exposure of toxic substances see M. Elizabeth Marder & Joseph V. Rodricks, Reference Guide on Exposure Science and Exposure Assessment, in this manual.
96. See sections titled “Toxicology and the Law” and “Use of Biomarkers in Toxicology Exposure Assessment” above.
97. Alan R. Abelsohn & Margaret Sanborn, Lead and Children: Clinical Management for Family Physicians, 56 Canadian Fam. Physician 531 (2010), https://perma.cc/M6S4-P2L7.
98. The term bioavailability is used to describe the extent to which a compound, such as lead, is taken up into the body. In essence, bioavailability is at the interface between exposure and absorption into the organism. See, e.g., In re Denture Cream Prods. Liab. Litig., No. 09-2051-MD, 2015 WL 392021 (S.D. Fla. Jan. 28, 2015) (expert’s analysis of in vitro studies failed to demonstrate bioavailability of zinc in denture cream); Ridings v. Maurice, No. 15-00020-CV-W-JTM, 2019 WL 13159921, at *4 (W.D. Mo. Aug. 12, 2019) (permitting expert to testify regarding bioavailability of drug alleged to have caused, at least in part, the plaintiff’s injuries). Under certain circumstances bioavailability is not a necessary condition to establish general causation, however. Adkisson v. Jacobs Eng’g Grp., Inc., 342 F. Supp. 3d 791, 805–07 (E.D. Tenn. 2018) (“Biological plausibility and bioavailability are important scientific concepts. But it does not appear that either is strictly
Once a compound is absorbed into the body through the skin, lungs, or gastrointestinal tract, it is distributed throughout the body via the bloodstream. Thus, the rate of distribution depends on the rate of blood flow to various organs and tissues. Distribution and resulting toxicity also are influenced by other factors, including the dose, the route of entry, tissue solubility, lymphatic supplies to the organ, metabolism, and the presence of specific receptors or uptake mechanisms within body tissues.
Metabolism is the alteration of a chemical by bodily processes. It does not necessarily result in less toxic compounds being formed. In fact, many of the organic chemicals that are known human cancer-causing agents require metabolic transformation before they can cause cancer. A distinction often is made between direct-acting agents, which cause toxicity without any metabolic conversion, and indirect-acting agents, which require metabolic activation before they can produce adverse effects. Metabolism is complex, because a variety of pathways compete for the same agent; some produce harmless metabolites, and others produce toxic agents.99
Excretory routes are urine, feces, sweat, saliva, expired air, and lactation. Many inhaled volatile agents are eliminated primarily by exhalation. Small water-soluble compounds are usually excreted through urine. Higher-molecular-weight
necessary for an association between a particular toxic agent and a particular disease to be considered causal. Accordingly, neither is required to establish proof of general causation.”).
99. Courts have explored the relationship between metabolic transformation and carcinogenesis. See, e.g., In re E.I. Du Pont De Nemours & Co. C-8 Personal Inj. Litig., No. 2:13-md-2433, 2016 WL 3064124, at *4 (S.D. Ohio May 30, 2016) (“For example, one pound of a toxic, cancer-causing chemical that biopersists in humans for decades and bioaccumulates in the environment for millennia may cause more harm than 100 pounds of a toxic, cancer-causing chemical that metabolizes and/or degrades quickly.”).
compounds are often excreted through the biliary tract into the feces. Certain fat-soluble, poorly metabolized compounds, such as PCBs, may persist in the body for decades, although they can be excreted in the milk fat of lactating women, with potential effects in their infants.
In acute toxicity, there is usually a short time period between cause and effect. However, in some situations, the length of basic biological processes necessitates a longer period of time between initial exposure and the onset of observable disease. For example, in AML (acute myelogenous leukemia), the adult form of acute leukemia, at least one to two years must elapse from initial exposure to radiation, benzene, or cancer chemotherapy before the manifestation of a clinically recognizable case of leukemia, and the period of significantly higher risk from the last exposure usually persists for no more than about fifteen years. A toxic tort claim alleging a shorter or longer time period between cause and effect is scientifically highly debatable. Much longer latency periods are necessary for the manifestation of solid tumors caused by agents such as asbestos and arsenic.100
100. The temporal relationship between exposure and causation is discussed in many cases. See, e.g., Johnson v. Arkema, Inc., 685 F.3d 452, 466–67 (5th Cir. 2012) (quoting Curtis v. M&S Petroleum, Inc., 174 F.3d 661, 670 (5th Cir. 1999)) (“[T]emporal connection standing alone is entitled to little weight in determining causation[,]” but may be “entitled to greater weight when there is an established scientific connection between exposure and illness or other circumstantial evidence supporting the causal link.”) (internal citations omitted); In re Paulsboro Derailment Cases, 746 F. App’x 94, 99 (3d Cir. 2018) (citing Kannankeril v. Terminix Int’l, Inc., 128 F.3d 802, 808–09 (3d Cir. 1997)) (noting that an expert may only rely on a temporal relationship analysis if the expert uses that relationship within the greater methodology of a differential diagnosis); C.W. ex rel. Wood v. Textron, Inc., 807 F.3d 827, 838 (7th Cir. 2015) (quoting Ervin v. Johnson & Johnson, 492 F.3d 901, 904–05 (7th Cir. 2007)) (rejecting expert methodology relying on the relationship between the time of exposure and the onset of injury, explaining that “[t]he mere existence of a temporal relationship between taking a medication and the onset of symptoms does not show a sufficient causal relationship”); Arias v. DynCorp, 928 F. Supp. 2d 1, 8 (D.D.C. 2013), aff’d, 752 F.3d 1011 (D.C. Cir. 2014) (discussing the relationship between temporal evidence and specific causation); Leake v. United States, 843 F. Supp. 2d 554, 561–62 (E.D. Pa. 2011) (noting that “even a strong temporal relationship between exposure and injury, in and of itself,” may not be “sufficient to establish a reliable general causation opinion”).
As discussed in the section titled “In Vivo Research: Use of Live Animals in Toxicity Testing and Safety Assessment” above, for agents that produce effects other than through mutations, it is assumed that there is some level below which no toxic effects will occur—the so-called threshold. Many chemicals have multiple toxic effects, and the dose–response relationship for each adverse outcome (toxic effect) may be different. Typically, regulatory agencies use the most sensitive toxic effect (the effect associated with the lowest dose, or threshold). In toxic tort litigation related to a specific effect (e.g., a particular type of birth defect, liver or kidney disease, etc.), experts may argue that the dose of the agent was below the threshold and thus insufficient to have caused or substantially contributed to the disease. If the level of exposure was below this threshold, a relationship between the exposure and disease cannot be established.101 When only laboratory animal data are available, the expert extrapolates the BMD, NOAEL, or other metric of a threshold level based on available experimental data. Occasionally, the lowest dose used has an observable effect, and thus no NOAEL can be established. In those circumstances, the lowest dose used is called the Lowest Observed Adverse Effect Level (LOAEL). As discussed in the section noted above, in regulatory toxicology, if using a NOAEL value to estimate a threshold dose, the expert decreases this level by one or more safety factors (usually a factor of ten for each) to ensure no human effect.102 As discussed in the section titled “Benchmark Dose (BMD),” if using a BMD, the lower statistical confidence limit of the BMD (called
101. See, e.g., Wyman v. U.S. Surgical Corp., No. 1:18-cv-00095-JAW, 2020 WL 1932338, at *21 (D. Me. Apr. 21, 2020) (In the context of mercury, benchmark dose is “the amount . . . associated with the threshold for a health effect on sensitive populations according to epidemiological studies.”); Kolakowski v. Sec’y of Health & Hum. Servs., No. 99-0625V, 2010 WL 5672753, at *11 (Fed. Cl. Nov. 23, 2010) (Benchmark dose “is a dose at which some statistical interval, measured from a percentage of the population with a physical response to the dose.”); In re Valsartan, Losartan, and Irbesartan Prods. Liab. Litig., 2022 WL 807343, at *2 (D.N.J. Mar. 17, 2022) (permitting an expert to testify using benchmark-dose methodology, acknowledging that while “this is not the methodology used by many scientists,” it “is nonetheless used by some in the relevant scientific community”).
102. See, e.g., supra note 54 & accompanying text; Robert G. Tardiff & Joseph V. Rodricks, Toxic Substances and Human Risk: Principles of Data Interpretation 391 (1987); Joseph V. Rodricks, Calculated Risks 230–39 (2d ed. 2006). For regulatory toxicology, NOAEL is being replaced by a more statistically robust approach known as the benchmark dose. See supra note 55 & accompanying text. For example, the EPA’s use of the benchmark dose takes into account comprehensive dose–response information, unlike NOAEL.
the BMDL; termed the BMDL10 when based on a 10% benchmark response) can also be calculated from animal data or from human toxicity data if they exist. The estimation of a threshold dose using either NOAEL or BMD data, however, is generally not applied to substances that exert toxicity by causing DNA damage and mutations that may lead to cancer. Theoretically, any exposure at all to mutagens may increase the risk of cancer, although the risk may be very small and not achieve statistical significance.103
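The safety-factor arithmetic described above can be sketched numerically. The NOAEL value, the helper function name, and the number of factors below are hypothetical, chosen only to illustrate the tenfold divisions:

```python
# Sketch of the regulatory safety-factor arithmetic described in the text:
# a threshold estimate (e.g., an animal NOAEL) is divided by a factor of
# ten for each safety (uncertainty) factor applied. Numbers are hypothetical.

def apply_safety_factors(noael_mg_per_kg_day, n_tenfold_factors):
    """Reduce a NOAEL by one or more tenfold safety factors."""
    return noael_mg_per_kg_day / (10 ** n_tenfold_factors)

# Hypothetical animal NOAEL of 50 mg/kg/day, reduced by two tenfold factors
# (e.g., one for animal-to-human extrapolation and one for variability
# among humans), yields an estimated safe human dose of 0.5 mg/kg/day.
print(apply_safety_factors(50.0, 2))  # 0.5
```

The result of such a calculation is a regulatory protective level, which, as discussed above, is not itself a demonstration that exposures above it caused disease.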
Although the concept of the non-threshold response is generally applied only to genotoxic carcinogens or other outcomes associated with gene mutations (e.g., development of some types of birth defects), it is sometimes claimed that other toxic responses, such as the effects of lead on children’s IQ, also follow a linear response at low doses. It is biologically likely that thresholds exist for such responses; however, if the threshold lies below background exposure levels (as may be the case with lead), the dose–response will appear linear, because a threshold dose cannot be established and background exposures exceed the putative threshold.
In coming to an expert opinion as to whether the exposure to a known cause of a disease is likely to be responsible for the case observed in the plaintiff, the expert must consider other potential causes of the same disease. Some may be
103. See sources cited supra notes 57 and 65. See also Myers v. United States, No. 02cv1349–BEN, 2014 WL 6611398, at *39 (S.D. Cal. Nov. 20, 2014) (explaining that although thallium may be harmful at certain levels, “[s]ince we are exposed to thallium on a daily basis, there must be a level that does not cause adverse health effects”). U.S. regulatory approaches aimed at protecting the general population tend to avoid setting a standard for a known human carcinogen, because any allowable level below the standard is at least theoretically capable of causing cancer. However, exposure to many chemical carcinogens, including benzene and arsenic, cannot be eliminated. Thus, agencies and Congress have developed a number of ingenious means to regulate carcinogens while not seeming to acquiesce in exposure of the general population to a carcinogen. These include the FDA’s approach to de minimis risk and the EPA’s setting of a zero maximum contaminant level goal for carcinogens in drinking water while setting a maximum contaminant level above zero that is set as closely as possible to the MCLG, taking technology and cost data into account. In contrast, occupational standards, which also take into account feasibility, permit exposure to known human carcinogens. A generally outmoded approach for environmental or indoor air guidelines has been to divide the permissible OSHA standard by a factor accounting for the presumed lifetime exposure to the environmental chemical compared with forty-five years at a forty-hour workweek. Plaintiff’s experts in a toxic court case must rely on more than exposure levels established by regulatory agencies, as those levels “often build in considerable cushion to account for the most sensitive members of the population.” Williams v. Mosaic Fertilizer, LLC, 889 F.3d 1239, 1246–47 (11th Cir. 2018) (affirming exclusion of expert opinion for, inter alia, relying on regulatory standards of exposure levels).
due to other known causal factors, such as lung cancer in a uranium miner who is also a cigarette smoker. In addition, with rare exceptions, almost all diseases have some anticipated background causes that appear related to human biology, particularly aging. An additional challenge comes in unraveling the contribution of a specific substance when exposure may have been to multiple agents. There are numerous examples where the combined effect of exposure to two substances, both known to cause the same disease, is greater than the sum of their individual effects (synergism). The example above of lung cancer in uranium miners who also smoke is a good illustration. Another example of such synergism is the incidence of liver cancer in people who are exposed to the dietary contaminant aflatoxin B1 and also are infected with hepatitis B virus. Both are known causes of liver cancer (the relative risk for hepatitis B virus antigen positivity is approximately tenfold; the relative risk for aflatoxin B1 alone is approximately threefold), but the presence of both risk factors increases the risk of developing liver cancer by approximately sixty times.104
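The relative-risk arithmetic in the aflatoxin/hepatitis B example can be made explicit. The sketch below uses the approximate figures quoted in the text, compared against the standard additive and multiplicative reference points used in epidemiology:

```python
# Sketch of the synergism arithmetic for the aflatoxin B1 / hepatitis B
# liver cancer example, using the approximate relative risks (RR) from
# the text.

rr_hbv = 10.0        # hepatitis B virus antigen positivity alone, ~tenfold
rr_aflatoxin = 3.0   # aflatoxin B1 exposure alone, ~threefold
rr_observed = 60.0   # both risk factors present, ~sixtyfold

# If the excess risks merely added, the combined RR would be about:
rr_additive = rr_hbv + rr_aflatoxin - 1.0   # 12.0
# If the risks multiplied, the combined RR would be about:
rr_multiplicative = rr_hbv * rr_aflatoxin   # 30.0

# The observed ~60-fold risk exceeds even the multiplicative expectation,
# consistent with synergism between the two exposures.
print(rr_additive, rr_multiplicative, rr_observed)
```

Because the observed joint risk exceeds what either simple model predicts, the combination of exposures is described as synergistic rather than merely additive.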
Not surprisingly, cancer has received the most attention in research aimed at disentangling external and internal causes. For some cancers, such as lung cancer, smoking is an obvious predominant external cause, and the extent to which other known causes contribute to overall lung cancer incidence also can be estimated. But these known external causes do not currently account for all cases of lung cancer. For other cancers, such as pancreatic cancer, it has been very difficult to find any external cause. There are, however, a few instances in which exposure to a specific chemical is nearly uniquely associated with a particular (usually rare) type of cancer. For example, the industrial chemical vinyl chloride, widely used in the synthesis of certain plastics, is associated with a rare form of cancer of the blood vessels in the liver (angiosarcoma).105 Likewise, mesothelioma (a cancer of the lining of the chest cavity or abdomen) is associated almost exclusively with prior exposure to asbestos; asbestos exposure and smoking, by contrast, interact synergistically in causing lung cancer.106
It is now generally recognized that mutations accumulate with age in the DNA of stem cells of virtually every tissue. Thousands of mutations
104. John D. Groopman et al., Aflatoxin and Hepatitis B Virus Biomarkers: a Paradigm for Complex Environmental Exposures and Cancer Risk, 1 Cancer Biomarkers 5 (2005), https://doi.org/10.3233/cbm-2005-1103. See also Joshua Jin et al., Synergism in Actions of HBV with Aflatoxin in Cancer Development, 499 Toxicology 153652 (2023), https://doi.org/10.1016/j.tox.2023.153652.
105. The relative risk (RR) of angiosarcoma of the liver was ninety-seven in one of the first occupational cohort studies to demonstrate the association between occupational exposure to vinyl chloride and this very rare form of liver cancer. There is also suggestive evidence that vinyl chloride can contribute to the more common form of liver cancer (hepatocellular carcinoma), although the relative risks are much lower. See Ugo Fedeli et al., Occupational Exposure to Vinyl Chloride and Liver Diseases, 25 World J. Gastroenterology 4885 (2019), https://doi.org/10.3748/wjg.v25.i33.4885.
106. Sonja Klebe et al., Asbestos, Smoking and Lung Cancer: An Update, 17 Int’l J. Env’t Rsch. & Pub. Health 258 (2019), https://doi.org/10.3390/ijerph17010258.
per cell are not uncommon in the tissues of the elderly. Certain of these mutations, often called driver mutations, are more commonly associated with cancer.
A major cause of the accumulation of mutations with age is the frequency of mistakes made in copying DNA. These mistakes reflect the fallibility of normal cellular DNA replication, as well as the increasing failure with age of DNA repair enzymes to carry out appropriate copy editing of the replicated DNA.
There is increasing evidence that many cancers are associated with heritable genetic differences among individuals. For example, women who inherit one mutated copy of the so-called “breast cancer genes,” BRCA1 and BRCA2, are at substantially greater risk of developing breast cancer when compared to women who have the common (“wild type”) genetic form.107 The study of gene–environment interactions is of growing importance in the field of toxicology as well as medicine, as there is now substantial evidence that many diseases result from a combination of certain genetic variants and exposure to specific chemicals.108 For many cancers, genetic variants that decrease the efficiency and accuracy of DNA repair are widely recognized as “susceptibility genes.” The evidence suggests that individuals with genetic deficiency in a DNA repair gene are likely to be more susceptible to the mutagenic effects of chemicals that cause the specific type of mutation repaired by the deficient DNA repair gene.
Estimates of what percent of individual cancer types are due to external versus internal factors are beginning to appear and can be expected to improve with advances in stem cell biology. (Unfortunately, cancers due to intrinsic factors, such as the failure of DNA repair, have been described as “bad luck” to distinguish them from cancers due to external factors.) This information will likely become increasingly important in considering the differential causation of individual cancer types.109 It is likely that the introduction of potential genetic
107. Women who have mutated copies of BRCA genes have a lifetime risk of developing breast cancer of 45–75%, compared with a “background” lifetime risk of ~10–15%. About 5–10% of all breast cancers can be attributed to genetic risk. See Zora Baretta et al., Effect of BRCA Germline Mutations on Breast Cancer Prognosis: A Systematic Review and Meta-Analysis, 95 Medicine e4975 (2016), https://doi.org/10.1097/MD.0000000000004975.
108. See Nat’l Inst. Env’t Health Scis., Gene and Environment Interaction, https://perma.cc/X5HB-39TV, for a general discussion of how genes and the environment can interact to produce certain diseases.
109. A readily understandable account of the chemistry and biological implications of DNA repair is in the Popular Science Background piece accompanying the award of the 2015 Nobel Prize for Chemistry. DNA Repair—Providing Chemical Stability for Life, https://perma.cc/S9VL-9KRH. See also McManaway v. KBR, Inc., No. H-10-1044, 2012 WL 13059744 (S.D. Tex. Aug. 22, 2012) (expert failed to take into account whether alleged genetic transformation injuries caused by sodium dichromate were repaired, rendering his opinion on persistence of these injuries unreliable). A brief overview of the literature on the extent of estimated external and internal causes in different human cancer types is in Bernard D. Goldstein & Varun Patel, Controversy About the “Bad Luck” Cancer Hypothesis Could Lead to a Useful Tool for Planning Primary Prevention Cancer
susceptibility factors to environmental/occupational exposures to certain chemicals will become more common in toxic tort litigation, potentially used by plaintiff and defense lawyers alike to address the likelihood of a causal association between an exposure and a disease.
A relatively new and important area of genetic research, called epigenetics, has shown that there can be heritable changes (passed on from one generation to the next) in the functions of DNA that are not associated with a change in the DNA sequence (i.e., not the result of a mutation). Epigenetics (literally translated as “above genetics”) describes how the expression of DNA is regulated by several different biological processes.110 Importantly, there are a growing number of examples where chemicals found in the diet, workplace, and general environment can cause epigenetic changes that may contribute to a wide variety of adverse effects, including many types of cancers, diabetes and metabolic syndrome, neurological and cognitive dysfunction, as well as alterations in the functions of important organs such as the lung, the heart, the immune system, and reproductive organs. Indeed, a search of the scientific literature today shows over 127,000 papers published in the general field of epigenetics. Although much of this literature is focused on the normal (endogenous) functioning of epigenetic phenomena in chronic disease processes, there is growing recognition that many environmental and dietary factors can alter epigenetic processes, including some heavy metals, certain pesticides, some chemicals used in plastics, diesel exhaust, tobacco
Research, 32 Chem. Rsch. Toxicology 949 (2019), https://doi.org/10.1021/acs.chemrestox.8b00390. For a discussion of methodologies for ruling out idiopathic (unknown) causes of a disease, see Hardeman v. Monsanto Co., 997 F.3d 941, 953 (9th Cir. 2021) (quoting Wendell v. GlaxoSmithKline LLC, 858 F.3d 1227, 1233–34 (9th Cir. 2017)) (ruling out idiopathy for disease with 70% idiopathy rate where expert relied on clinical experience, literature, and medical records).
110. For example, many of the nucleotide bases that are strung together to make up DNA can have methyl (-CH3) groups attached to them in specific positions by enzymes known as DNA methyl transferases. DNA methylation functions somewhat like a switch, providing critical information on when a gene is “turned on” or “turned off.” In a similar manner, certain enzymes can modify chromatin proteins that are intimately associated with DNA and help to regulate when specific genes are turned on or off (gene expression). One group of these chromatin proteins, called histones, can be modified by attaching, or detaching, acetyl groups on the histone proteins (histone acetylases and histone deacetylases), which then modulate whether the gene is expressed or not. This epigenetic process is known as histone modification (histone acetylation or deacetylation). Several other biological processes also modify the expression of genes via epigenetic mechanisms, including protein phosphorylation, ubiquitination, and sumoylation of chromatin proteins. Similar to histone acetylation, adding these different chemical entities to proteins that are involved in regulation of DNA function can modify how genes are expressed, without changing the actual sequence of the DNA, and thus are referred to as epigenetic effects. Other forms of RNA, called short non-coding RNAs and microRNAs, have been found to be important epigenetic regulators of expression of DNA. See Lian Zhang et al., Epigenetics in Health and Disease, in Epigenetics in Allergy and Autoimmunity (Christopher Chang & Qianjun Lu eds., 2020), https://doi.org/10.1007/978-981-15-3449-2_1.
smoke, polycyclic aromatic hydrocarbons, hormones, radioactivity, viruses and bacteria, to name a few.111
As the role of epigenetic factors in cancer and in other diseases unfolds, these might also be grist for the mill of both plaintiff and defense lawyers.
One of the basic and most useful tools in diagnosis and treatment of disease is the patient’s medical history.112 A thorough, standardized patient information questionnaire would be particularly useful for identifying the etiology, or causation, of illnesses related to toxic exposures; however, there is currently no validated or widely used questionnaire that gathers all pertinent information.113 Nevertheless, it is broadly recognized that a thorough medical history involves the questioning and examination of the patient as well as appropriate medical testing. The patient’s written medical records also should be examined.
The following information is relevant to a patient’s medical history: past and present occupational and environmental history and exposure to toxic agents; lifestyle characteristics (e.g., use of nicotine and alcohol); family medical history (i.e., medical conditions and diseases of relatives); and personal medical history (i.e., present symptoms and results of medical tests as well as past injuries, medical conditions, diseases, surgical procedures, and medical test results).
111. See Bob Weinhold, Epigenetics: The Science of Change, 114 Env’t Health Perspectives A160 (2006), https://doi.org/10.1289/ehp.114-a160; Giacomo Cavalli & Edith Heard, Advances in Epigenetics Link Genetics to the Environment and Disease, 571 Nature 489 (2019), https://doi.org/10.1038/s41586-019-1411-0.
112. For a thorough discussion of the methods of clinical diagnosis, see John B. Wong et al., Reference Guide on Medical Testimony, in this manual. A number of cases have considered the admissibility of the treating physician’s opinion based, in part, on medical history, symptomatology, and laboratory and pathology studies. See, e.g., Morrow v. Brenntag Mid-South, Inc., 505 F. Supp. 3d 1287, 1290 (M.D. Fla. 2020) (declining to find expert testimony reliable where the expert, a physician, had not reviewed any “prior medical history or treatment records”); cf. Galvez v. KLLM Transp. Servs., LLC, 575 F. Supp. 3d 748, 760–61 (N.D. Tex. 2021) (finding expert testimony reliable where causation opinion was supported by a review of the patient’s medical history, part of “the well-established ‘scientific method’ of diagnosis in the medical community”).
113. Office of Tech. Assessment, U.S. Congress, Reproductive Health Hazards in the Workplace 8 (1985), at 365–89.
In some instances, the reporting of symptoms in itself can be diagnostic of exposure to a specific substance, particularly in evaluating acute effects.114 For example, individuals acutely exposed to organophosphorus pesticides report headaches, nausea, and dizziness accompanied by anxiety and restlessness. Other reported symptoms are muscle twitching, weakness, and hypersecretion with sweating, salivation, and tearing.115
Acute exposure to many toxic agents produces a constellation of nonspecific symptoms, such as headaches, nausea, lightheadedness, and fatigue. These types of symptoms are part of human experience and can be triggered by a host of medical and psychological conditions. They are almost impossible to quantify or document beyond the patient’s report. Thus, these symptoms can be attributed mistakenly to an exposure to a toxic agent or discounted as unimportant, when in fact they reflect a significant exposure.116
114. See McManaway v. KBR, Inc., No. H-10-1044, 2012 WL 13059744, at *6 (S.D. Tex. Aug. 22, 2012) (expert review of medical records and in-person interviews, including review of acute symptoms, supports a finding of reliability); Coene v. 3M Co., No. 10-CV-6546-FPG, 2017 WL 1046749, at *3, *9 (W.D.N.Y. Mar. 20, 2017) (finding toxicologist’s opinion regarding causation of silicosis to be reliable when the opinion was based on a review of the plaintiff’s medical records as well as conversations with the plaintiff, the MSDS sheets for the chemicals the plaintiff worked with that allegedly caused the injury, and technical literature); Ledbetter v. Blair Corp., No. 3:09–CV–843–WKW, 2012 WL 2464000 (M.D. Ala. June 27, 2012) (finding medical toxicologist’s opinion as to impairment caused by certain prescription drugs to be reliable when it was based, in part, on a review of medical and prescription records). Contra Gilbert v. Lands’ End Inc., No. 19-cv-823-jdp, 2022 WL 2643514, at *12–13 (W.D. Wis. July 8, 2022) (in a case alleging injury from required work uniforms, finding expert’s opinion unreliable when it was based only on a review of partial medical records and when the expert did not talk to the plaintiffs, “attempt to link any particular plaintiffs’ symptom to any particular garment,” or “rule out other potential causes”).
115. EPA, Recognition and Management of Pesticide Poisonings 45 (6th ed. 2013), https://perma.cc/Z4UL-9NCQ.
116. Complex issues of exposure to oil and dispersants, chemicals, fumes, and odors and reported adverse health effects are addressed in a medical settlement establishing a Periodic Medical Consultation Program and a Specified Physical Conditions Matrix as discussed in In re Oil Spill by Oil Rig Deepwater Horizon, 295 F.R.D. 112 (E.D. La. 2013) (medical class certified for settlement purposes only). An example of a disease that may be caused by exposure to toxins but may present only with nonspecific symptoms is Legionnaires’ Disease, explored at length in Mueller v. Chugach Federal Solutions, Inc., No. 12-S-00624-NE, 2014 WL 2891030 (N.D. Ala. June 25, 2014). There, the decedent’s wife alleged that her husband had been exposed to Legionella pneumophila at his workplace. Id. at *1. The court noted that the “disease may present early in the illness with nonspecific symptoms, so it can be difficult to diagnose.” Id. at *20. Indeed, the decedent in Mueller had been hospitalized for pneumonia and treated for the same prior to finally being diagnosed with Legionnaires’ Disease after a urine antigen test. Id. at *4. In opining that the decedent’s illness was
In taking a careful medical history, the expert focuses on the time pattern of symptoms and disease manifestations in relation to any exposure and on the constellation of symptoms to determine causation. It is easier to establish causation when symptoms or a specific diagnosis are unusual and rarely caused by anything other than the suspect chemical (e.g., blue lips and shortness of breath caused by dyes that produce methemoglobinemia) or rare cancers such as hemangiosarcoma, associated with vinyl chloride exposure, and mesothelioma, associated with asbestos exposure. However, many cancers and other conditions are associated with several extrinsic or intrinsic causative factors, complicating proof of causation.117
Two types of laboratory tests can be considered: tests that are routinely used in medicine to detect changes in normal body status, and specialized tests, which are used to directly or indirectly detect the presence of the chemical or physical agent.118 Tests used to demonstrate the presence of a toxic agent are frequently unavailable from clinical laboratories. Even when available from a hospital or a clinical laboratory, a test such as that for carbon monoxide bound to hemoglobin (carboxyhemoglobin) is done so rarely that there may be concerns regarding its accuracy. Other tests, such as the test for blood lead levels, are required for routine surveillance of potentially exposed workers.119
caused by exposure in the workplace, the expert was permitted to base his opinions on various materials produced during litigation, his own experience, and the fact that other employees also exhibited symptoms consistent with Legionnaires’ Disease. Id. at *6.
117. Failure to rule out other potential causes of symptoms may lead to a ruling that the expert’s report is inadmissible. See, e.g., Hardeman v. Monsanto Co., 997 F.3d 941, 953, 965 (9th Cir. 2021) (expert’s specific causation opinion was admissible where hepatitis C was ruled out as a cause of plaintiff’s non-Hodgkin’s lymphoma based on research studies and a temporal analysis of exposures and active disease); Pluck v. BP Oil Pipeline Co., 640 F.3d 671, 678–81 (6th Cir. 2011) (affirming district court’s conclusion that expert’s opinion was unreliable when the expert failed to rule out alternative causes of the plaintiff’s non-Hodgkin’s lymphoma); Scott v. Dyno Nobel, Inc., No. 4:16-CV-1440 HEA, 2021 WL 1750238, at *10–11 (E.D. Mo. May 4, 2021) (finding expert opinion reliable when it ruled out other possible causes of symptoms and noting that another expert’s opinion was found inadmissible for failure to do the same).
118. See, e.g., Myers v. United States, No. 02cv1349–BEN, 2014 WL 6611398, at *35–38 (S.D. Cal. Nov. 20, 2014) (explaining at length the necessity of reliable laboratory testing and reasons why tests may be found unreliable).
119. However, the fact that a laboratory is certified for the testing of blood lead levels in workers, for which the OSHA action level is 40 micrograms per deciliter (µg/dl), does not necessarily mean that it will give reliable data on blood lead levels at the much lower CDC Reference Level of 3.5
With few exceptions, acute and chronic diseases, including cancer, can be caused by either a single toxic agent or a combination of agents or conditions. In taking a careful medical history, the expert examines the possibility of competing causes, or confounding factors, for any disease, which leads to a differential diagnosis. In addition, ascribing causality to a specific source of a chemical requires that a history be taken concerning other sources of the same chemical. The failure of a physician to elicit such a history or of a toxicologist to pay attention to such a history raises questions about competence and leaves open the possibility of competing causes of the disease.120
An individual’s simultaneous exposure to more than one chemical may result in a response that differs from that which would be expected from exposure to only one of the chemicals.121 When the effect of multiple agents is that which would be predicted by the sum of the effects of individual agents, it is called an additive effect; when it is greater than this sum, it is known as a synergistic effect; when one agent causes a decrease in the effect produced by another, the result is termed antagonism; and when an agent that by itself produces no effect leads to an
µg/dl (defined as the blood level used “to identify children with blood lead levels that are higher than most children’s levels,” CDC, Blood Lead Levels in Children, https://perma.cc/XY53-3A7K).
120. See, e.g., McManaway v. KBR, Inc., No. H-10-1044, 2012 WL 13059744 (S.D. Tex. Aug. 12, 2012) (expert conducted a differential diagnosis for each plaintiff, in which he ruled out other causes for their injuries, and also conducted blood testing to assist with differential diagnoses regarding sodium dichromate exposure); In re E.I. du Pont de Nemours & Co. C-8 Personal Inj. Litig., 348 F. Supp. 3d 680, 696 (S.D. Ohio 2016) (differential diagnosis is an “appropriate method for making a determination of causation for an individual instance of disease”) (citing Hardyman v. Norfolk & W. Ry. Co., 243 F.3d 255, 260 (6th Cir. 2001)); see also Milward v. Rust-Oleum Corp., 820 F.3d 469, 476 (1st Cir. 2016) (affirming exclusion of expert testimony as unreliable for failure to rule out an idiopathic diagnosis); Kovach v. Wheeling & Lake Erie Ry. Co., 556 F. Supp. 3d 762, 771 (N.D. Ohio 2021) (quoting Best v. Lowe’s Home Ctrs., Inc., 563 F.3d 171, 181 (6th Cir. 2009)) (noting that “doctors need not rule out every conceivable cause for their differential-diagnosis-based opinions to be admissible[,]” but they must at least consider and rule out alternative causes).
121. See generally Angelo Moretto et al., A Framework for Cumulative Risk Assessment in the 21st Century, 47 Critical Revs. Toxicology 85 (2017), https://doi.org/10.1080/10408444.2016.1211618; Devon C. Payne-Sturges et al., Methods for Evaluating the Combined Effects of Chemical and Nonchemical Exposures for Cumulative Environmental Health Risk Assessment, 15 Int’l J. Env’t Rsch. & Pub. Health. 2797 (2018), https://doi.org/10.3390/ijerph15122797.
enhancement of the effect of another agent, the response is termed potentiation.122 These interactions can often be explained if the metabolic pathways of the agent or the toxicological mechanism by which the individual agents cause their effects are known.
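The four interaction types just defined can be made concrete with a short numerical sketch. The helper function and the response scores below are hypothetical illustrations chosen only to operationalize the definitions, not figures drawn from the text:

```python
# Hypothetical sketch of the four interaction types defined above. The
# "effects" are arbitrary unitless response scores, used for illustration.

def classify_interaction(effect_a, effect_b, combined):
    """Compare an observed combined effect with the sum of the individual effects."""
    expected = effect_a + effect_b
    if combined == expected:
        return "additive"          # combined effect equals the predicted sum
    if combined > expected and effect_a > 0 and effect_b > 0:
        return "synergistic"       # combined effect exceeds the sum
    if combined > expected:
        return "potentiation"      # one agent alone produces no effect but enhances the other
    return "antagonism"            # one agent decreases the effect of the other

print(classify_interaction(2, 3, 5))   # additive
print(classify_interaction(2, 3, 12))  # synergistic
print(classify_interaction(0, 3, 9))   # potentiation
print(classify_interaction(2, 3, 1))   # antagonism
```

In practice, as the text notes, distinguishing these outcomes requires knowing the metabolic pathways or mechanisms of the individual agents, not merely comparing response magnitudes.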
Three types of toxicological approaches are pertinent to understanding the effects of mixtures of agents. One is based on the standard toxicological evaluation of common commercial mixtures, such as gasoline. The second approach is from studies in which the known toxicological effect of one agent is used to explore the mechanism of action of another agent, such as using a known specific inhibitor of a metabolic pathway to determine whether the toxicity of a second agent depends on this pathway. The third approach is based on an understanding of the basic mechanism of action of the individual components of the mixture, thereby allowing prediction of the combined effect, which can then be tested in an animal model.123
The toxicologist may also need to consider whether the mixture contains agents whose effects would allow estimation of exposure to a putative causative agent that was also part of the mixture. For example, the upper boundary for exposure to benzene present at a level of 0.1% in commercial-grade toluene can be based on the level of toluene that causes central nervous system effects, including coma. A worker’s daily exposure to 1 part per million (ppm) of benzene from this mixture would have imposed a concomitant exposure of approximately 1,000 ppm toluene, a level incompatible with the National Institute for Occupational Safety and Health (NIOSH) Immediately Dangerous to Life or Health (IDLH) level for toluene of 500 ppm for fifteen minutes. Conversely, in a situation in which the worker did become woozy or comatose following exposure to a toluene-containing mixture, the toxicologist could use information about the level of toluene that caused such symptoms to estimate the extent of exposure to benzene or another contaminant of concern.
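The ratio reasoning in this example can be checked with simple arithmetic. The sketch below uses only the figures quoted in the text; the variable names are ours:

```python
# Benzene present at 0.1% (a fraction of 0.001) of a commercial-grade
# toluene mixture, per the example in the text.
benzene_fraction = 0.001

# Hypothesized daily benzene exposure of 1 ppm from that mixture.
benzene_exposure_ppm = 1.0

# Because benzene is 1/1,000th of the mixture, the accompanying toluene
# exposure is 1,000 times the benzene exposure.
toluene_exposure_ppm = benzene_exposure_ppm / benzene_fraction
print(round(toluene_exposure_ppm))  # 1000

# NIOSH Immediately Dangerous to Life or Health level for toluene cited
# in the text.
niosh_idlh_toluene_ppm = 500.0

# The implied toluene exposure exceeds the IDLH level, making the
# hypothesized benzene exposure implausible.
print(toluene_exposure_ppm > niosh_idlh_toluene_ppm)  # True
```

The same back-calculation works in reverse: an observed toluene effect level, multiplied by the benzene fraction, bounds the plausible benzene exposure.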
122. Courts have been called on to consider the issue of synergy. See Russell v. U.S. Dep’t of Labor, No. 3:15-CV-320-DCP, 2018 WL 2054566, at *8–9 (E.D. Tenn. May 2, 2018) (finding that the Department of Labor properly considered synergistic effects of four substances in arriving at its conclusion that plaintiff’s chronic obstructive pulmonary disease was not caused by exposure to the substances “individually or in combination”); but see In re E.I. Du Pont De Nemours & Co. C-8 Personal Inj. Litig., 337 F. Supp. 3d 728, 747 (S.D. Ohio 2015) (noting that the expert’s opinion on synergistic action between C-8 and obesity was “mere speculation” and, in this context, “synergism as a theory is unreliable and inadmissible under Daubert and Rule 702”).
123. See generally Moretto et al., supra note 121; Payne-Sturges et al., supra note 121. The EPA has been addressing the issue of multiple exposures to different agents within a community under the heading of cumulative risk assessment. This approach is particularly of importance in dealing with environmental justice concerns. See Michael A. Callahan & Ken Sexton, If Cumulative Risk Assessment Is the Answer, What Is the Question?, 115 Env’t Health Persps. 799 (2007), https://doi.org/10.1289/ehp.9330.
Individuals who exercise inhale more air than sedentary individuals and therefore are exposed to higher doses of airborne environmental toxicants. Similarly, differences in metabolism, which are inherited or caused by external factors, such as the levels of carbohydrates in a person’s diet, may result in differences in the delivery of a toxic product to the target organ.124
Moreover, for any given level of a toxic agent that reaches a target organ, damage may be greater because of a greater response of that organ. In addition, for any given level of target-organ damage, there may be a greater impact on particular individuals. For example, an elderly individual or someone with preexisting lung disease is less likely to tolerate a small decline in lung function caused by an air pollutant than is a healthy individual with normal lung function.
Advances in human genetics research and interactions between genetics and environmental factors are providing information about susceptibility that may be relevant to determining the likelihood that a given exposure has a specific effect on an individual.125
Nongenetic factors may also alter the metabolism of a carcinogen. For example, an increased risk of certain cancers has been associated with obesity.126 Obesity also can alter the metabolism of chemicals, for example through the increased levels of cytochrome P450 2E1 found in humans and laboratory animals with fatty livers.127 A person’s level of physical activity, age, sex, and genetic makeup, as well as exposure to therapeutic agents (such as prescription or over-the-counter drugs), may affect the metabolism of the compound and hence its toxicity, making the individual more susceptible to the toxic effects of a chemical exposure.
124. See generally Moretto et al., supra note 121; Payne-Sturges et al., supra note 121; see also Marissa B. Kosnik & David M. Reif, Determination of Chemical-Disease Risk Values to Prioritize Connections Between Environmental Factors, Genetic Variants, and Human Diseases, 379 Toxicology & Applied Pharmacology 114674 (2019), https://doi.org/10.1016/j.taap.2019.114674.
125. See, e.g., Supratim Choudhuri et al., From Classical Toxicology to Tox21: Some Critical Conceptual and Technological Advances in the Molecular Understanding of the Toxic Response Beginning from the Last Quarter of the 20th Century, 161 Toxicological Scis. 5 (2018), https://doi.org/10.1093/toxsci/kfx186.
126. See, e.g., Sarkees v. E.I. DuPont de Nemours & Co., 15 F.4th 584, 592–93 (2d Cir. 2021) (expert identified numerous factors that can elevate risk of bladder cancer and then sufficiently ruled them out, thereby establishing differential etiology); see also Cooper v. Smith & Nephew, Inc., 259 F.3d 194, 202 (4th Cir. 2001).
127. See, e.g., Mohamed A. Abdelmegeed et al., Role of CYP2E1 in Mitochondrial Dysfunction and Hepatic Injury by Alcohol and Non-Alcoholic Substances, 10 Current Molecular Pharmacology 207 (2017), https://doi.org/10.2174/1874467208666150817111114.
Multiple avenues of deductive reasoning based on scientific data lead to acceptance of causation in any field, particularly in toxicology. However, the basis for this deductive reasoning is also one of the most difficult aspects of causation to describe quantitatively. If animal studies, pharmacological research on mechanisms of toxicity, in vitro tissue studies, and epidemiological research all document toxic effects of exposure to a compound, an expert’s opinion about causation in a particular case is much more likely to be true.128
The more difficult problem is how to evaluate conflicting research results. When different research studies reach different conclusions regarding toxicity, the expert must be asked to explain how those results have been taken into account in the formulation of the expert’s opinion.
The basis of the toxicologist’s expert opinion in a specific case is a thorough review of the research literature and treatises concerning effects of exposure to the chemical at issue. To arrive at an opinion, the expert assesses the strengths and weaknesses of the research studies. The expert also bases an opinion on fundamental concepts of toxicology relevant to understanding the actions of chemicals in biological systems.
As the following series of questions indicates, no single academic degree, research specialty, or career path qualifies an individual as an expert in
128. See, e.g., Waite v. AII Acquisition Corp., 194 F. Supp. 3d 1298, 1313, 1315 (S.D. Fla., 2016) (holding that the weight-of-evidence approach used by the expert was “thoroughly reasoned and based on sound methodology” and explaining that the approach “requires consideration of all available scientific evidence, including epidemiology, toxicology (animal studies), cellular studies (in vitro), and molecular biology”); In re Flint Water Cases, No. 17-10164, 2021 WL 5631706, at *7–8 (E.D. Mich. Dec. 1, 2021) (finding expert’s opinion on general causation reliable where the opinion was based on “reasonable scientific inferences” from “the great weight of the evidence” and where the expert “cite[d] to peer-reviewed, epidemiological studies which account for confounding variables”); In re Abilify (Aripiprazole) Prods. Liab. Litig., 299 F. Supp. 3d 1291, 1311 (N.D. Fla. 2018) (explaining that the “‘weight of the evidence’ approach to analyzing causation can be considered reliable, provided the expert considers all available evidence carefully and explains how the relative weight of the various pieces of evidence led to his conclusion . . . , it is crucial that the expert describe each step in the process by which he gathered and assessed the scientific evidence”). But see Daniels-Feasel v. Forest Pharms., Inc., No. 17 CV 4188-LTS-JLC, 2021 WL 4037820, at *5 (S.D.N.Y. Sept. 3, 2021), aff’d, No. 22–146, 2023 WL 4837521 (2d Cir. July 28, 2023) (quoting In re Abilify, 299 F. Supp. 3d at 1311) (expert’s opinion that relies on cherry-picked data is not reliable). See generally Liesa L. Richter & Daniel J. Capra, The Admissibility of Expert Testimony, in this manual.
toxicology. Toxicology is a heterogeneous field. A number of indicia of expertise can be explored, however, that are relevant to both the admissibility and weight of the proffered expert opinion.
A graduate degree in toxicology demonstrates that the proposed expert has a substantial background in the basic issues and tenets of toxicology. Many universities have established graduate programs in toxicology. These programs are administered by the faculties of medicine, pharmacology, pharmacy, or public health.
In addition to toxicologists with a specific Ph.D. in toxicology, many highly qualified toxicologists are physicians or hold doctoral degrees in related disciplines (e.g., veterinary medicine, pharmacology, biochemistry, environmental health, or industrial hygiene). For a person with this type of background, a single course in toxicology is unlikely to provide a sufficient foundation for developing expertise in the field. Additional specific training and board certification in toxicology are highly relevant to evaluating the credentials of a toxicologist.
A proposed expert should be able to demonstrate an understanding of the discipline of toxicology, including statistics, toxicological research methods, and disease processes. A physician without particular training or experience in toxicology is unlikely to have sufficient background to evaluate the strengths and weaknesses of toxicological research. Most practicing physicians have little knowledge of environmental and occupational medicine.129 Subspecialty physicians may have particular knowledge of a cause-and-effect relationship (e.g., pulmonary physicians have knowledge of the relationship between asbestos exposure and asbestosis); anesthesiology is largely focused on the effects of certain chemicals; and certified addiction medicine physicians focus on chemical agents that are not pharmaceuticals or are illegally used pharmaceuticals, such as
129. For recent documentation of how rarely an occupational history is obtained, see Barry J. Politi et al., Occupational Medical History Taking: How Are Today’s Physicians Doing? A Cross-Sectional Investigation of the Frequency of Occupational History Taking by Physicians in a Major U.S. Teaching Center, 46 J. Occupational Env’t Med. 550 (2004), https://doi.org/10.1097/01.jom.0000128153.79025.e4.
fentanyl.130 Physicians who receive additional training in classic toxicology include those who are certified by the American Board of Preventive Medicine in medical toxicology, addiction medicine, or occupational and environmental medicine.131 Knowledge of toxicology is particularly strong among occupational health specialists who work in the chemical, petrochemical, and pharmaceutical industries, in which the surveillance of workers exposed to chemicals is a major responsibility.132
130. Courts look at specific circumstances in determining whether a treating physician may testify as a fact witness or an expert witness. See, e.g., Mueller v. Chugach Fed. Solutions, Inc., No. 12-S-00624-NE, 2014 WL 2891030, at *3–4 (N.D. Ala. June 25, 2014) (permitting treating physician to testify as fact witness and offer his “opinion regarding the cause of death . . . based upon his personal experience of treating” the decedent); but see N.K. by Bruestle-Kumra v. Abbott Lab’ys, 731 F. App’x 24, 26 (2d Cir. 2018) (noting that a treating physician may not testify as a fact witness and must “be admitted as a Rule 702 expert witness to provide expert testimony on specific causation”). Some courts have allowed treating physicians to testify regarding causation. See, e.g., Lassalle v. McNeilus Truck & Mfg., Inc., No. 16-cv-00766-WHO, 2017 WL 3115141, at *2 (N.D. Cal. July 21, 2017) (permitting the treating physician to offer his opinion regarding causation because it could have been formed within the scope of treatment); Burton v. Am. Cyanamid, 362 F. Supp. 3d 588, 599 (E.D. Wis. 2019) (allowing treating physician to testify regarding lead exposure and causation because “his professional training and his experience treating . . . patients with elevated levels are sufficient to qualify him to give testimony”). Contra Zellers v. NexTech Ne., LLC, 533 F. App’x 192, 198 (4th Cir. 2013) (excluding testimony of treating physician as to causation because, inter alia, the physician was a neurologist who did not have “specialized training in the field of toxicology”); Higgins v. Koch Dev. Corp., 794 F.3d 697, 704–05 (7th Cir. 2015) (excluding treating physician opinion regarding causation because the treating physician had no “training in toxicology”).
Treating physicians also become involved in considering cause-and-effect relationships when they are asked whether a patient can return to a situation in which an exposure has occurred. The answer is obvious if the cause-and-effect relationship is clearly known. However, this relationship is often uncertain, and the physician must consider the appropriate advice. In such situations, the physician will tend to give advice as though causality were established, both because caution is appropriate and because of fears concerning medicolegal issues.
131. Before 1990, the American Board of Medical Toxicology certified physicians. But beginning in 1990, medical toxicology became a subspecialty board under the American Board of Emergency Medicine, the American Board of Pediatrics, and the American Board of Preventive Medicine, as recognized by the American Board of Medical Specialties.
132. Clinical ecologists, another group of physicians, have offered opinions regarding multiple chemical hypersensitivity and immune system responses to chemical exposures. These physicians generally have a background in the field of allergy, not toxicology, and their theoretical approach is derived in part from classic concepts of allergic responses and immunology. This theoretical approach has often led clinical ecologists to find cause-and-effect relationships or low-dose effects that are not generally accepted by toxicologists. Clinical ecologists often belong to the American Academy of Environmental Medicine.
In 1991, the Council on Scientific Affairs of the American Medical Association concluded that until “accurate, reproducible, and well-controlled studies are available . . . multiple chemical sensitivity should not be considered a recognized clinical syndrome.” Council on Sci. Affairs, Am. Med. Ass’n, Council Report on Clinical Ecology 6 (1991). Multiple chemical sensitivity has been held by courts to be “a controversial diagnosis that has been excluded under Daubert as unsupported by sound scientific reasoning or methodology.” Madej v. Maiden, 951 F.3d 364, 374–75 (6th Cir. 2020) (quoting Summers v. Mo. Pac. R.R. Sys., 132 F.3d 599, 603 (10th Cir. 1997)) (collecting cases). The Madej court recognized that the “mountain of precedent” regarding multiple chemical sensitivity consists mostly of cases “over a decade old[,]” but reiterated that it has not been “shown that more recent scientific advancements have led the scientific community to come to accept a multiple-chemical-sensitivity diagnosis.” 951 F.3d at 375. The “diagnosis remains unrecognized by the American Medical Association and unlisted in the World Health Organization’s International Classification of Diseases.” Id.
As of August 2022, more than 2,500 individuals had received board certification from the American Board of Toxicology, Inc., and thus can use the title of “Diplomate of the American Board of Toxicology” (DABT). To sit for the examination, the candidate must be involved full time in the practice of toxicology, including designing and managing toxicological experiments or interpreting results and translating them to identify and solve human and animal health problems. Diplomates must be recertified every five years. The Academy of Toxicological Sciences (ATS) was formed to provide credentials in toxicology through peer review only. It does not administer examinations for certification. As of 2022, approximately 290 individuals had been elected as Fellows of ATS.
The Society of Toxicology (SOT), the major professional organization for the field of toxicology, was founded in 1961 and has grown dramatically in recent years. In 2022 the SOT had approximately 800 members.133 Criteria for membership are based either on peer-reviewed publications or on the active practice of toxicology. Physician toxicologists can join the American College of Medical Toxicology and the American Academy of Clinical Toxicology as well as the SOT. There are also societies of forensic toxicology, such as the International Association of Forensic Toxicologists. Other organizations in the field are the American College of Toxicology, for which experience in the active practice of toxicology is the major membership criterion; the International Society of Regulatory Toxicology and Pharmacology; and the Society for Occupational and Environmental Health. For membership, the last two organizations require only the payment of dues.
133. As of 2024, there are twenty-nine specialty sections of the SOT that represent the different specialty areas involved in understanding the wide range of toxic effects associated with exposure to chemical and physical agents. These sections include Mechanisms, Molecular and Systems Biology, Inhalation and Respiratory, Metals, Neurotoxicology, Carcinogenesis, Risk Assessment, Biological Modeling, Immunotoxicology, and Sustainable Chemicals through Contemporary Toxicology, to name a few.
Toxicology, like many other disciplines that span both basic and applied science, has undergone professionalization in recent decades. This includes the development of organizations with different levels of expertise and recognition in the field. Understanding these organizations, including their criteria for membership, is pertinent to evaluating expertise in the field.
The success of academic scientists in toxicology, as in other biomedical sciences, usually is measured by the following types of criteria: the quality and number of peer-reviewed publications, the ability to compete for research grants, service on scientific advisory panels, and university appointments.
Publication of articles in peer-reviewed journals indicates expertise in toxicology. The number of articles, their topics, and whether the individual is the principal or senior author are important factors in determining the expertise of a toxicologist.134
Most research grants from government agencies and private foundations are highly competitive. Successful competition for funding and publication of the research findings indicate competence in an area.
Selection for local, national, and international regulatory advisory panels usually implies recognition in the field. Examples of such panels are the NIH Study Sections considering toxicology-related grant proposals, and panels convened by the EPA, FDA, WHO, and IARC. Recognized industrial organizations, including the American Petroleum Institute and the Electric Power Research Institute; public interest groups, such as the Environmental Defense Fund and the Natural Resources Defense Council; and public–private nonprofit partnerships, such as the Health and Environmental Sciences Institute (HESI) and the Health Effects Institute (HEI), employ toxicologists directly and as consultants, and enlist academic toxicologists to serve on advisory panels. Because of a growing interest in environmental issues, the demand for scientific advice has outgrown the supply of available toxicologists. It is thus common for reputable toxicologists to serve on advisory panels.
Finally, a tenured or tenure-track university appointment in toxicology, risk assessment, or a related field signifies substantial expertise, particularly if the
134. Examples of reputable, peer-reviewed journals are the Journal of Toxicology and Environmental Health; Toxicological Sciences; Toxicology and Applied Pharmacology; Science; British Journal of Industrial Medicine; Clinical Toxicology; Archives of Environmental Health; Journal of Occupational and Environmental Medicine; Annual Review of Pharmacology and Toxicology; Teratogenesis, Carcinogenesis and Mutagenesis; Fundamental and Applied Toxicology; Inhalation Toxicology; Biochemical Pharmacology; Toxicology Letters; Environmental Research; Environmental Health Perspectives; International Journal of Toxicology, Human and Experimental Toxicology; Regulatory Toxicology and Pharmacology; and American Journal of Industrial Medicine.
university has a graduate education program in that area. However, some universities offer adjunct or affiliate faculty positions to local professionals (consultants, government employees, etc.), often to assist in teaching entry-level toxicology courses. Many well-qualified toxicologists working in the private sector or in government agencies hold adjunct academic appointments, but such an appointment does not necessarily confer the same level of expertise or toxicology credentials as a regular faculty appointment.
The authors gratefully appreciate the excellent research assistance provided by Stephanie N. Patton, associate, Buchanan Ingersoll & Rooney, P.C.; Lyndsey Arneson Field, Wake Forest Law School class of 2024; and Marvin Astrada, senior research associate, Federal Judicial Center, in the preparation of this fourth edition of the reference guide.
absorption. The biological process by which a chemical is taken into the body orally, through inhalation, or through skin exposure.
acute toxicity. An immediate toxic response following a single or short-term exposure to a toxic substance.
additive effect. When exposure to more than one toxic agent results in the same effect as would be predicted by the sum of the effects of exposure to the individual agents.
antagonism. When exposure to one toxic agent causes a decrease in the effect produced by another toxic agent.
benchmark dose (BMD). A dose determined on the basis of dose–response modeling, defined as the exposure associated with a specified low incidence of risk of a health effect, generally in the range of 1% to 10%, or the dose associated with a specified measure or change of a biological effect. For regulatory standard setting, the statistical lower confidence limit (typically the 95% lower bound) on the BMD associated with a 10% response is frequently used; it is referred to as the BMDL10 (benchmark dose lower confidence limit at a 10% response).
bioassay. A test for measuring the toxicity of an agent by exposing laboratory animals to the agent and observing the effects.
biological monitoring. Measurement of toxic agents or the results of their metabolism in biological materials, such as blood, urine, expired air, or biopsied tissue, to test for exposure to the toxic agents, or the detection of physiological changes that are due to exposure to toxic agents.
biologically plausible theory. A biological explanation for the relationship between exposure to an agent and adverse health outcomes. This is usually associated with an understanding of how the chemical causes its toxic effect (see note 49 above and accompanying text).
carcinogen. A chemical substance or other agent that causes cancer.
carcinogenicity bioassay. Limited or long-term tests using laboratory animals to evaluate the potential carcinogenicity of an agent.
chronic toxicity. A toxic response to long-term exposure or dosing (typically more than six months and up to two years for laboratory animal studies) with an agent.
clinical ecologists. Physicians who believe that exposure to certain chemical agents can result in damage to the immune system, causing multiple-chemical hypersensitivity and a variety of other disorders. Clinical ecologists often have a background in the field of allergy, not toxicology, and their theoretical approach is derived in part from classic concepts of allergic
responses and immunology. There has been much resistance in the medical community to accepting their claims.
clinical toxicology. The study and treatment of humans exposed to chemicals and the quantification of resulting adverse health effects. Clinical toxicology includes the application of pharmacological principles to the treatment of chemically exposed individuals and research on measures to enhance elimination of toxic agents.
compound. In chemistry, the combination of two or more different elements in definite proportions, which when chemically combined acquire properties different from those of the original elements.
confounding factors. Variables that are related to both exposure to a toxic agent and the outcome of the exposure. A confounding factor can obscure the relationship between the toxic agent and the adverse health outcome associated with that agent.
developmental toxicology. The study of the processes by which chemical agents cause toxic effects in the developing embryo and fetus, and during early life stages after birth when the brain and endocrine system continue to develop.
differential diagnosis. A physician’s consideration of alternative diagnoses that may explain a patient’s condition.
direct-acting agents. Agents that cause toxic effects without metabolic activation or conversion.
distribution. Movement of a toxic agent throughout the organ systems of the body (e.g., the liver, kidney, bone, fat, and central nervous system). The rate of distribution is usually determined by the blood flow through the organ and the ability of the chemical to pass through the cell membranes of the various tissues.
dose, dosage. A product of both the concentration of a chemical or physical agent and the duration and frequency of exposure.
dose–response curve. A graphic representation of the relationship between the dose of a chemical administered and the effect(s) produced. Each specific endpoint of toxicity may have its own dose–response curve.
dose–response relationships. The extent to which a living organism responds to specific doses of a toxic substance. The more time spent in contact with a toxic substance, or the higher the dose, the greater the organism’s response. For example, a small dose of carbon monoxide will cause drowsiness; a large dose can be fatal.
epidemiology. The study of the occurrence and distribution of disease among people. Epidemiologists study groups of people to discover the cause of a disease, or where, when, and why disease occurs.
epigenetics. The study of how cells control gene activity without changing the DNA sequence. Epigenetics research has identified heritable changes in the functions of DNA that are not associated with a change in the DNA sequence. Chemicals found in the diet, workplace, and environment have been found to cause epigenetic changes that may contribute to adverse effects, including cancers.
etiology. A branch of medical science concerned with the causation of diseases.
excretion. The process by which toxicants are eliminated from the body, including through the kidneys and urinary tract, the liver and biliary system, fecal excretion, exhalation of the agent via the lungs, and secretion into sweat, saliva, and breast milk (lactation, which can result in an important route of exposure for nursing infants).
exposure. The intake into the body of a hazardous material. The main routes of exposure to substances are through the skin, mouth, and lungs. Exposure via injection (intravenous, intramuscular, subcutaneous) can occur for some drugs, especially drugs of abuse.
exposure assessment. The second step in the process of chemical risk assessment, in which quantitative estimates are made of the potential dose and duration of exposure to a specific chemical or chemical mixture under a specified set of circumstances. It includes consideration of the concentration of a chemical in specific media (air, food, water, skin contact), the amount of contact (food consumption, water consumption, inhalation rate), and the frequency and duration of exposure for each route.
extrapolation. The process of estimating unknown values from known values.
genome, genomics. The genetic content of a cell, composed of DNA and related proteins that make up the twenty-three pairs of chromosomes in every living human cell. The human genome is composed of approximately 24,000 genes, each consisting of thousands of individual nucleotides (the building blocks of DNA). The sequence of the DNA in a gene dictates its function. The DNA of a gene is used by a cell to make proteins, which are the functional units of a cell. Changes in DNA sequence (a mutation) can result in irreversible changes in the function of a cell, leading to a wide variety of pathology, including cancers and birth defects.
good laboratory practice (GLP). Codes developed by the federal government in consultation with the laboratory testing industry that govern many aspects of laboratory standards.
hazard. The inherent ability of a chemical to cause a specific type (or types) of toxic effect in biological organisms. Hazard differs from risk in that there is no quantitative consideration of dose.
hazard assessment. The first step in the process of risk assessment, which evaluates the potential of a specific chemical, or mixture of chemicals, to cause harm.
hazard identification. In risk assessment, the qualitative analysis of all available experimental animal and human data to determine whether and at what dose an agent is likely to cause toxic effects.
hydrologists, hydrogeologists. Scientists who specialize in the movement of ground and surface waters and the distribution and movement of contaminants in those waters.
immunotoxicology. A branch of toxicology concerned with the effects of toxic agents on the immune system.
indirect-acting agents. Agents that require metabolic activation or conversion before they produce toxic effects in living organisms.
inhalation toxicology. The study of the effect of toxic agents that are absorbed into the body through inhalation, including their effects on the respiratory system.
in silico methodology. Computational models that investigate toxicological and pharmacological hypotheses using tools such as data mining, homology models, machine learning, pharmacophores, quantitative structure-activity relationships, and network analysis tools.
in vitro. A research or testing methodology that uses living cells in an artificial or test tube system, or that is otherwise performed outside of a living organism.
in vivo. A research or testing methodology that uses living organisms.
lethal dose 50 (LD50). The dose at which 50% of laboratory animals die within days to weeks, generally after the administration of a single dose.
lifetime bioassay. A bioassay in which doses of an agent are given to experimental animals throughout their lifetime. (See bioassay.) For carcinogenicity and most chronic toxicity studies, the standardized period for a lifetime bioassay in rats and mice is two years.
linearized non-threshold (LNT) model. A mathematical modeling approach that is used to extrapolate dose–response data to doses below the observed range, with assumptions that (1) the cancer risk is directly proportional to the dose at all doses, and thus (2) there is no threshold (a dose below which the response is zero).
lowest observed adverse effect level (LOAEL). The lowest dose or concentration of a chemical that causes observable adverse effects in experimental toxicology studies that use multiple doses.
maximum tolerated dose (MTD). The highest dose of an agent to which an organism can be exposed without it causing death or significant overt toxicity.
mechanism (or mode) of action. The specific molecular, biochemical, and cellular processes that are altered by the toxic substance, giving rise to the observed adverse effect(s). (See also biologically plausible theory.)
metabolism. The sum of the biochemical transformations that a chemical undergoes in an organism.
metabolome. The complete set of small molecules, both endogenous and exogenous, that can be measured in the blood. Metabolomics is useful for identifying the presence of multiple potentially toxic substances in the blood, including their metabolites.
molecular toxicology. The study of how toxic agents interact with cellular molecules, including DNA, proteins, and lipids (fats).
multiple-chemical hypersensitivity. A physical condition whereby individuals react to many different chemicals at extremely low exposure levels.
multistage model. A biochemical and statistical model for understanding certain diseases, including some cancers, based on the postulate that more than one event is necessary for the onset of disease.
mutagen. A substance that causes physical changes in chromosomes or biochemical changes in genes.
mutagenesis. The process by which agents cause changes in chromosomes and genes.
neurotoxicology. A branch of toxicology concerned with the effects of exposure to toxic agents on the central nervous system.
no observed adverse effect level (NOAEL). The highest level of exposure to an agent at which no adverse effect is observed. It is the experimental equivalent of a threshold. See threshold.
one-hit hypothesis. A hypothesis of cancer risk in which each molecule of a chemical mutagen has a possibility, no matter how tiny, of mutating a gene in a manner that may lead to tumor formation or cancer. See also linearized non-threshold model.
PFAS. A widely used abbreviation to represent a class of chemicals called per- and polyfluoroalkyl substances. Such compounds provide a nonstick and stain-resistant coating to a wide variety of consumer products, such as nonstick cookware, carpeting, apparel, upholstery, personal care products, and cosmetics, and were also used in firefighting foams.
pharmacokinetics. A mathematical approach that expresses the movement of a toxic agent through the organ systems of the body, including the distribution to the target organ and to its ultimate fate via excretion pathways. It measures rate constants for each step in the process.
point of departure (POD). A term used to identify a point on the lower portion of a dose–response curve from which extrapolations are used to identify a hypothetical response level at lower doses. PODs are usually either a NOAEL, LOAEL, or BMD. These values, which are experimentally derived within the range of observations of a specified effect (e.g., birth defect, liver injury, tumor occurrence), are then divided by specified safety or uncertainty factors to arrive at a reference dose (RfD) or acceptable daily intake (ADI).
potentiation. The process by which the addition of one agent, which by itself has no toxic effect, increases the toxicity of another agent when exposure to both agents occurs simultaneously.
reference dose (RfD). A term used by some regulatory agencies, such as the EPA, to identify a daily dose that is expected to be without harm. It is conceptually identical to the phrase acceptable daily intake (ADI), which is the amount of a chemical to which a person can be exposed on a daily basis over an extended period of time (usually a lifetime) without suffering a deleterious effect.
reproductive toxicology. The study of the effect of toxic agents on male and female reproductive systems, including sperm, ova, and offspring.
risk assessment. The use of scientific evidence to estimate the likelihood of adverse effects on the health of individuals or populations from exposure to hazardous materials and conditions. Chemical risk assessment includes: (1) hazard assessment, (2) exposure assessment, (3) dose–response modeling, and (4) risk characterization.
risk characterization. The final step in the risk assessment process, which summarizes information about an agent, including the strengths and limitations of the available toxicity data, and evaluates it in order to estimate the risks it poses.
safety assessment. Toxicological research that tests the toxic potential of a chemical in vivo or in vitro using standardized techniques required by governmental regulatory agencies or other organizations. The term is similar to risk assessment of chemical hazards, but is most often used in association with pharmaceuticals, food additives, and other chemicals for which public benefits are expected. It considers both the risks and the benefits of the chemical in deriving an acceptable level of risk (i.e., a “safe” dose).
safety factor (SF). See uncertainty factor.
structure–activity relationships (SAR). An in silico method used by toxicologists to predict the toxicity of new chemicals by comparing their chemical structures with those of compounds with known toxic effects.
synergistic effect. An effect by which two toxic agents acting together have an effect greater than that predicted by adding together their individual effects.
target organ. The organ system that is affected by a particular toxic agent.
target-organ dose. The dose to the organ that is affected by a particular toxic agent.
teratogen. An agent that changes eggs, sperm, or embryos, thereby increasing the risk of birth defects.
teratogenic. The ability to produce birth defects. Teratogenic effects do not pass to future generations. See teratogen.
threshold. The dose level above which effects will occur and below which no effects are expected to occur. See no observed adverse effect level.
toxic. Of, relating to, or caused by a poison—or a poison itself.
toxic agent or toxicant. An agent or substance that causes disease or injury.
toxicokinetics. Toxicokinetics is the description of the rate at which a chemical will enter an organism, and what occurs in terms of absorption, distribution, metabolism, and excretion (ADME). The term is used principally in describing ADME characteristics for chemicals that are not used as pharmaceuticals. The term pharmacokinetics (see above) describes this same process for chemicals used for therapeutic benefit (drugs).
toxicology. The science of the nature and effects of poisons, their detection, and the treatment of their adverse effects.
toxins. Toxic substances that are produced by biological organisms (plants, animals, fungi, bacteria, etc.).
uncertainty factor (UF). Values, usually factors of ten, used to provide ample public health protection in determining “safe doses” (e.g., ADIs, RfDs) when extrapolating dose information obtained in experimental animals to human populations.
xenobiotic. A chemical that is foreign to the body. Sometimes used synonymously with “toxicant” or “toxic chemical.”
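The relationship among the point of departure, uncertainty factors, and the reference dose described in the glossary entries above is simple arithmetic: the RfD is the POD (e.g., a NOAEL) divided by the product of the applicable uncertainty factors. The following sketch illustrates that calculation with hypothetical values; actual NOAELs and uncertainty factors are chemical-specific and are set by regulatory agencies.

```python
# Illustrative only: the NOAEL and uncertainty factors below are
# hypothetical, chosen to show the arithmetic described in the glossary.

def reference_dose(noael_mg_per_kg_day, uncertainty_factors):
    """Divide a point of departure (here, a NOAEL) by the product
    of the applicable uncertainty factors to obtain an RfD."""
    product = 1
    for uf in uncertainty_factors:
        product *= uf
    return noael_mg_per_kg_day / product

# Hypothetical NOAEL of 50 mg/kg/day, with the common tenfold default
# factors for animal-to-human extrapolation and for variability among
# humans (10 x 10 = 100), yields an RfD of 0.5 mg/kg/day.
rfd = reference_dose(50.0, [10, 10])
print(rfd)  # 0.5
```

A larger set of uncertainty factors (e.g., an additional tenfold factor when only a LOAEL is available) simply enters the product and lowers the resulting RfD proportionally.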
Casarett & Doull’s Toxicology: The Basic Science of Poisons (Curtis D. Klaassen ed., 9th ed. 2019).
Environmental Health: From Global to Local (Howard Frumkin ed., 3d ed. 2016).
Encyclopedia of Toxicology (Phillip Wexler ed., 3d ed. 2014).
Hayes’ Principles and Methods of Toxicology (A. Wallace Hayes & Claire L. Kruger eds., 6th ed. 2014).
National Research Council, Applications of Toxicogenomic Technologies to Predictive Toxicology and Risk Assessment (2007).
National Research Council, New Approach Methods (NAMs) for Human Health Risk Assessment: Proceedings of a Workshop–in Brief (2022).
National Research Council, Science and Decisions: Advancing Risk Assessment (2009).
National Research Council, Toxicity Testing in the 21st Century: A Vision and a Strategy (2007).
National Research Council, Using 21st Century Science to Improve Risk-Related Evaluations (2017).
National Research Council, Understanding Pathways to a Paradigm Shift in Toxicity Testing and Decision-Making: Proceedings of a Workshop (2018).
Principles of Toxicology, in Comprehensive Toxicology (vol. 1, D. L. Eaton vol. ed., Charlene McQueen series ed., 3d ed. 2017).