HENRY T. GREELY AND NITA A. FARAHANY
Henry T. Greely, J.D., is the Deane F. and Kate Edelman Johnson Professor of Law; Professor, by courtesy, of Genetics; and the Director of the Center for Law and the Biosciences, at Stanford University.
Nita A. Farahany, J.D., Ph.D., is the Robinson O. Everett Professor of Law and Philosophy and Founding Director of the Initiative for Science and Society at Duke University.
CONTENTS
Some Aspects of How the Brain Works
Some Common Neuroscience Techniques
Implanted Microelectrode Arrays
Issues in Interpreting Study Results
Problems in Experimental Design
The Number and Diversity of Participants
Applying Group Averages to Individuals
Technical Accuracy and Robustness of Imaging Results
Questions About the Admissibility and the Creation of Neuroscience Evidence
Rule 702 and the Admissibility of Scientific Evidence
Other Potentially Relevant Evidentiary Issues
Constitutional and Other Substantive Rules
Possible Rights Against Neuroscience Evidence
The Fifth Amendment privilege against self-incrimination
Other possible legal protections against compulsory neuroscience procedures
Other substantive rights against neuroscience evidence
Neuroscience evidence and the Sixth and Seventh Amendment rights to trial by jury
Possible rights to the creation or use of neuroscience evidence
The Eighth Amendment right to present evidence of mitigating circumstances in capital cases
The Sixth Amendment right to present a defense
Examples of the Possible Uses of Neuroscience in the Courts
Issues Involved in the Use of EEG-Based Brain “Recognition”
FIGURES
1. Schematic of the typical structure of a neuron
2. Synapse. Communication between neurons occurs at the synapse
3. Lateral (left) and mid-sagittal (right) views of the human brain
4. The four lobes of the cerebrum
5. Brodmann areas of the human cerebral cortex
6. CAT scan depicting axial sections of the human brain
Science’s understanding of the human brain is increasing exponentially. We know almost infinitely more than we did thirty years ago; however, we know almost nothing compared with what we are likely to know thirty years from now. The development of machine learning algorithms and artificial intelligence (AI) has allowed us to pull much more information out of existing methods of observing and recording brain activity, including greatly boosting the value of the old technology of electroencephalography (EEG).1 The electrodes and electrode arrays that serve as tools for recording and stimulating small numbers of neurons inside living brains have improved greatly.2 The development of human neural organoids and the increasing use of human/nonhuman brain chimeras have given us new ways to watch how brain tissues develop and function.3 And, less excitingly but equally importantly, increased use has given us better understanding of the limitations of some of our existing tools.4
We remain very far from a deep and broad understanding of how human brains work, but advances in understanding the human brain and its attendant mental states have already brought neuroscience evidence into courtrooms. If, as neuroscience indicates, our mental states correspond to, and result from, physical states of our brain, our increased ability to discern those physical states will have huge implications for the law. Since 2008, lawyers in steadily increasing numbers have been trying to introduce neuropsychological and neuroimaging evidence as relevant to questions of individual criminal responsibility, such as claims of insanity or diminished responsibility, on issues of either criminal liability or sentencing. In May 2010, parties in two cases sought to introduce neuroimaging in court as evidence of honesty; we have begun to see efforts to use it to prove that a person is in pain. These and other uses of neuroscience are almost certain to increase with our growing knowledge of the human brain as well as continued technological advances in accurately and precisely measuring the brain. This reference guide strives to give judges some background knowledge about neuroscience and the strengths and weaknesses of its possible applications in litigation in order to help them become better prepared for these cases.5
1. See, e.g., Meysam Golmohammadi et al., Deep Learning Approaches for Automated Seizure Detection from Scalp Electroencephalograms, in Signal Processing in Medicine and Biology 235 (Iyad Obeid, Ivan Selesnick & Joseph Picone eds., 2020), https://doi.org/10.1007/978-3-030-36844-9.
2. See Nita A. Farahany, The Battle for Your Brain: Defending the Right to Think Freely in the Age of Neurotechnology ch. 9 (2023).
3. See Nat’l Acads. of Scis., Eng’g, & Med., The Emerging Field of Human Neural Organoids, Transplants, and Chimeras (2021), https://doi.org/10.17226/26078.
4. See, e.g., Russell A. Poldrack et al., Scanning the Horizon: Towards Transparent and Reproducible Neuroimaging Research, 18 Nature Revs. Neuroscience 115 (2017), https://doi.org/10.1038/nrn.2016.167; Scott Marek et al., Reproducible Brain-Wide Association Studies Require Thousands of Individuals, 603 Nature 654 (2022), https://doi.org/10.1038/s41586-022-04492-9.
Evidence about the brain has been used in the courtroom for decades, by psychologists, psychiatrists, neurologists, and other professionals. That evidence has been used to help courts decide issues such as the existence or extent of injuries, criminal defenses of insanity or reduced capacity, questions of competency in widely varying contexts, and more. Most, if not all, of the tools and techniques discussed in this reference guide could be used in court to provide evidence relevant to these efforts to answer more traditional legal questions about human brains and minds. We have chosen to focus this guide on some of the new ways in which—and different professionals through which—neuroscience evidence may enter the court, either for new questions, such as lie detection, or through new methods, like neuroimaging to provide evidence of mens rea, because judges and lawyers will be less familiar with them. This reference guide’s descriptions of the techniques, and of their strengths and weaknesses, should apply even when more customary issues are presented.
This reference guide begins with a brief overview of the structure and function of the human brain. It then describes some of the tools neuroscientists use to understand the brain—tools likely to produce findings that parties will seek to introduce in court. Next, it discusses a number of fundamental issues that must be considered when interpreting neuroscientific findings. Finally, after discussing, in general, the evidentiary issues raised by neuroscience-based evidence, this guide concludes by analyzing a few illustrative situations in which neuroscientific evidence is likely to appear in court—now and, to an increasing extent, in the future.
This abbreviated and simplified discussion of the human brain describes the cellular basis of the nervous system, the structure of the brain, and finally our current understandings of how the brain works. More detailed, but still accessible, information about the human brain can be found in academic textbooks and in popular books for general audiences.6
5. The Reference Guide on Neuroscience in the third edition of this manual, upon which this chapter is based, was published in 2011 with that aim. A Primer on Criminal Law and Neuroscience (Stephen J. Morse & Adina L. Roskies eds., 2013) is another useful source, commissioned by the Law and Neuroscience Project, funded by the John D. and Catherine T. MacArthur Foundation. That project had already published a pamphlet written by neuroscientists for judges, with brief discussions of issues relevant to law and neuroscience: A Judge’s Guide to Neuroscience: A Concise Introduction (Michael S. Gazzaniga & Jed. S. Rakoff eds., 2010). More recently, a textbook on law and neuroscience has been published in a second edition: Law and Neuroscience (Owen D. Jones, Jeffrey D. Schall & Francis X. Shen eds., 2d ed. 2020). One early book on a broad range of issues in law and neuroscience also deserves mention: Neuroscience and the Law: Brain, Mind, and the Scales of Justice (Brent Garland ed., 2004).
Like most of the human body, the nervous system is made up of cells. Adult humans contain somewhere between 30 trillion and 40 trillion human cells. Each of those cells is both individually alive and part of a larger living organism.
Each human cell in the body (except for a few unusual cell types, like red blood cells) contains that person’s entire complement of human genes—their genome. The genes, found on very long molecules of deoxyribonucleic acid (DNA) that make up a human’s forty-six chromosomes, work by leading the cells to make other molecules, notably proteins and ribonucleic acid (RNA). Cells differ from each other not because they contain different genes, but because they turn on and off different sets of genes. All human cells seem to use the same group of several thousand “housekeeping” genes that run the cell’s basic machinery, but skin cells, kidney cells, and brain cells differ in which other genes they use. Estimates of the number of distinct human cell types range from a few hundred to a few thousand, depending largely on how narrowly or broadly one defines a cell type.
The most important cells in the nervous system are called neurons. Neurons pass messages from one to another in a complex way that appears to be responsible for brain function, conscious or otherwise.
Neurons come in many sizes, shapes, and subtypes (with their own names), but they generally have three features: a cell body, short extensions called dendrites, and a longer extension called an axon (see Figure 1). The cell body contains the nucleus of the cell, which in turn contains the forty-six chromosomes
6. The Society for Neuroscience, the very large scholarly society that covers a wide range of brain science, has published a brief and useful primer about the human brain called Brain Facts. The most recent edition, the eighth, published in 2018, is available free at https://perma.cc/TEL6-6YNG. Widely used academic textbooks include Eric R. Kandel et al., Principles of Neural Science (6th ed. 2021); Michael S. Gazzaniga, Richard B. Ivry & George R. Mangun, Cognitive Neuroscience: The Biology of the Mind (6th ed. 2025); and Larry Squire et al., Fundamental Neuroscience (4th ed. 2012). Some particularly interesting books about various aspects of the brain written for a popular audience include Oliver Sacks, The Man Who Mistook His Wife for a Hat and Other Clinical Tales (1985); Antonio R. Damasio, Descartes’ Error: Emotion, Reason, and the Human Brain (1994); Daniel L. Schacter, Searching for Memory: The Brain, the Mind, and the Past (1996); Joseph E. LeDoux, The Emotional Brain: The Mysterious Underpinnings of Emotional Life (1996); Christopher D. Frith, Making up the Mind: How the Brain Creates Our Mental World (2007); Sandra Aamodt & Sam Wang, Welcome to Your Brain: Why You Lose Your Car Keys but Never Forget How to Drive and Other Puzzles of Everyday Life (2008). Matthew Cobb, The Idea of the Brain: The Past and Future of Neuroscience (2020), provides a good history of neuroscience.
Source: Quasar Jarosz, https://en.m.wikipedia.org/wiki/File:Neuron_Hand-tuned.svg.
with the cell’s DNA. The dendrites and axons both reach out to make connections with other neurons. The dendrites generally receive information from other neurons; the axons send information.
Communication between neurons occurs at areas called synapses (Figure 2), where two neurons almost meet. At a synapse, the two neurons will come within less than a micrometer (a millionth of a meter) of each other, with the presynaptic side, on the axon, separated from the postsynaptic side, on the dendrite, by a gap called the synaptic cleft. At synapses, when the axon (on the presynaptic side) “fires” (becomes active), it releases molecules, known as neurotransmitters, into the synaptic cleft. Some of those molecules are picked up by special receptors on the dendrite that is on the postsynaptic side of the cleft. More than 100 different neurotransmitters have been identified; among the best known are dopamine, serotonin, glutamate, and acetylcholine.
At the postsynaptic side of the cleft, neurotransmitters binding to the receptors can have a wide range of effects. Sometimes they cause the receiving (postsynaptic) neuron to fire, sometimes they suppress (inhibit) the postsynaptic neuron from firing, and sometimes they seem to do neither. The response of the receiving neuron is a complicated summation of the various messages it receives from multiple neurons that converge, through synapses, on its dendrites.
Source: From Carlson, Neil R. Foundations of Physiological Psychology (with Neuroscience Animations and Student Study Guide CD-ROM, 2005). Reprinted by permission of Pearson Education, Inc.
A neuron that does fire does so by generating an electrical current that flows down the length of its axon (away from the cell body). We normally think of electrical current as flowing in things like copper wiring. In that case, free electrons move down the wire. The electrical currents of neurons are more complicated. Molecules with a positive or negative electrical charge (ions) move through the neuron’s membrane and create differences in the electrical charge between the inside and outside of the neuron, with the current traveling along the axon, rather like a fire brigade passing buckets of water in only one direction down the line. Firing occurs in milliseconds. This process of moving ions in and out of the cell membrane requires that the cell use large amounts of energy. When the current reaches the end of the axon, it may or may not lead to the axon releasing neurotransmitters into the synaptic cleft. This complicated part-electrical, part-chemical system is how information passes from one neuron to another.
The axons of human neurons are all microscopically narrow, but they vary enormously in length. Some are micrometers long; others, such as the axons of neurons running from the base of the spinal cord to the toes, are several feet long.
Longer axons tend to be coated with a fatty substance called myelin. Myelin helps insulate the axon and thus increases the strength and efficiency of the electrical signal, much like the insulation wrapped around a copper wire. (The destruction of this myelin sheathing causes the symptoms of multiple sclerosis.) Axons coated with myelin appear white; thus, areas of the nervous system that have many myelin-coated axons are referred to as “white matter.” Cell bodies, by contrast, look gray; areas with many cell bodies and relatively few axons make up our “gray matter.” White matter can roughly be thought of as the wiring that connects gray matter to the rest of the body or to other areas of gray matter.
What we call nerves are really bundles of neurons. For example, we all have nerves that run down our arms to our fingers. Some of those nerves consist of neurons that pass messages from the fingers, up the arm, to other neurons in the spinal cord that then pass the messages on to the brain, where they are analyzed and experienced. This is how we feel things with our fingers. Other nerves are bundles of neurons that pass messages from the brain through the spinal cord to nerves that run down the arms to the fingers, telling them when and how to move.
Neurons can connect with other neurons or with other kinds of cells. Neurons that control body movements ultimately connect to muscle cells—these are called motor neurons. Neurons that feed information into the brain start with specialized sensory cells (i.e., cells specialized for detecting different types of stimuli—light, touch, heat, pain, and more) that fire in response to the appropriate stimulus. Their firings ultimately lead, directly or through other neurons, into the brain. These are sensory neurons. These neurons send information only in one direction—motor neurons ultimately from the brain, sensory neurons to the brain. The paralysis caused by, for example, severe damage to the spinal cord both prevents the legs from receiving messages to move that would come from the brain through the motor neurons and keeps the brain from receiving messages from sensory neurons in the legs about what the legs are experiencing. The break in the spinal cord prevents the messages from getting through, just as a break in a local telephone line will keep two parties from connecting. (There are, unfortunately, not yet any human equivalents to wireless service.)
Estimates of the number of cells in a human brain vary widely, from a few hundred billion to several trillion. These cells include those that make up blood vessels and various connective tissues in the brain, but most of them are specialized brain cells. About 80 to 100 billion of these brain cells are neurons; the other cells (and the source of most of the uncertainty about the number of cells) are principally another class of cells referred to generally as glial cells.
Glial cells play many important roles in the brain, including, for example, producing and maintaining the myelin sheaths that insulate axons and serving as a special immune system for the brain. Traditionally, glial cells were often dismissed as “mere support staff”—their name comes from the Greek word for “glue,” which indicates the early understanding of their role. But emerging
evidence suggests that they may play a much larger role in mental processes than previously believed. Work pioneered by the late Ben Barres has shown that glial cells not only are crucial for holding neurons together, but are essential for forming, editing, and pruning neuronal connections (synapses). They also produce factors that, themselves, affect neurotransmitter concentrations and the firings of neurons. The full importance of glial cells is still being discovered, and another edition of this manual may devote more space to them. At this point, however, glial cells have not been used significantly in the kinds of neuroscience evidence that have been working their way into courts. So we will concentrate on neurons, the brain structures they form, and how those structures work.
Anatomists refer to the brain, the spinal cord, and a few other nerves directly connecting to the brain as the central nervous system. All the other nerves are part of the peripheral nervous system. This reference guide will not discuss the peripheral nervous system in any detail, despite its importance in, for example, assessing some aspects of personal injuries. We will also, less fairly, ignore parts of the central nervous system other than the brain, even though the spinal cord, in particular, plays an important role not just in passing messages going into and coming out from the brain, but in changing their strength and meanings.
The average adult human brain weighs about three pounds and fills a volume of about 1,300 cubic centimeters. Two standard wine bottles hold 750 cubic centimeters each, so, if liquid, a human brain would not quite fill up a double bottle (a magnum); it would fill a measuring bowl to about 5.4 cups. Living brains have a consistency similar to that of Jell-O. Despite their softness, brains are made up of regular shapes and structures that are generally consistent from person to person. Just as every nondamaged or nondeformed human face has two eyes, two ears, one nose, and one mouth with standard numbers of various kinds of teeth, every normal brain has the same set of identifiable structures, both large and small. And, like noses or eyes, these brain structures are not exactly identical from person to person.
Neuroscientists have long worked to describe and define particular regions of the brain. In some ways this is like describing parcels of land in property documents, and, like property descriptions, several different methods are used. At the largest scale, the brain is often divided into three parts: the brain stem, the cerebellum, and the cerebrum (Figure 3).7
7. The brain also is sometimes divided into the forebrain, midbrain, and hindbrain. This classification is useful for some purposes, particularly in describing the history and development of the vertebrate brain, but it does not entirely correspond with the categorization of cerebrum, brain stem, and cerebellum, and it is not used in this reference guide.
Source: Courtesy of Anthony Wagner, Stanford University.
The brain stem is found near the bottom of the brain and is, in some ways, effectively an extension of the spinal cord. Its various parts play crucial roles in controlling the body’s autonomic (involuntary or unconscious) functions, such as heart rate and digestion. The brain stem also contains important regions that regulate processing in the cerebrum, one example of the strong pattern of interconnections among brain regions. For instance, the substantia nigra and ventral tegmental area in the brain stem contain critical neurons that generate the neurotransmitter dopamine. While the substantia nigra is crucial for motor control, the ventral tegmental area is important for learning about rewards. The loss of neurons in the substantia nigra is associated with the movement problems of Parkinson’s disease.
The cerebellum, which is about the size and shape of a squashed tennis ball, is tucked away in the back of the skull. It plays a major role in fine motor control and seems to keep a library of learned motor skills, such as riding a bicycle. Although it is relatively small by volume, it is the region of the brain with the largest number of neurons. It was long thought that damage to the cerebellum had little to no effect on a person’s personality or cognitive abilities, but resulted primarily in unsteady gait, difficulty in making precise movements, and problems in learning movements. However, more recent studies of patients with cerebellar damage and functional brain imaging studies of healthy individuals indicate that the cerebellum also plays a role in more cognitive functions, including supporting aspects of working memory, attention, and language.
The cerebrum is the largest part of the human brain, making up about 85% of its volume. The cerebrum is found at the front, top, and much of the back of the human brain. The human brain differs from the brains of other mammals mainly in that it has a vastly enlarged cerebrum.
There are several different ways to identify parts of, or locations in, the cerebrum. First, the cerebrum is divided into two hemispheres—the famous left and right brain. These two hemispheres are connected by tracts of white matter—of axons—most notably the large connection called the corpus callosum. Oddly, in humans and most other vertebrates, the right hemisphere of the brain generally receives messages from and controls the movements of the left side of the body, while the left hemisphere receives messages from and controls the movements of the right side of the body.
Each hemisphere of the cerebrum is divided into four lobes (Figure 4): the frontal lobe in the front of the cerebrum (behind the forehead), the parietal lobe at the top and toward the back, the temporal lobe on the side (just behind and above the ears), and the occipital lobe at the back. Thus, one could describe a particular region as lying in the left frontal lobe—the frontal lobe of the left hemisphere.
Source: Adapted from https://commons.wikimedia.org/wiki/File:Lobes_of_the_brain_NL.svg.
The surface of the cerebrum consists of the cortex, which is a sheet of gray matter a few millimeters thick. The cortex is not a smooth sheet in humans, but rather is heavily folded with valleys, called sulci (“sulcus” in the singular), and bulges, called gyri (“gyrus”). The sulci and gyri have their own names, so a location can be described as in the inferior frontal gyrus in the left frontal lobe. These folds allow the surface area of the cortex, as well as the total volume of the cortex, to be much greater than in other mammals, while still allowing it to fit inside our skulls, similar to the way the many folds of a car’s radiator give it a very large surface area (for radiating away heat) in a relatively small space.
The cerebral cortex is extraordinarily large in humans compared with other species and is clearly centrally involved in much of what makes our brains special, but the cerebrum contains many other important subcortical structures that we share with other vertebrates. Some of the more important of these structures include the thalamus, the hypothalamus, the basal ganglia, and the amygdala. These areas all connect widely, with the cortex, with each other, and with other parts of the brain, to form complex networks.
The functions of all these areas are many, complex, and not fully understood, but some facts are known. The thalamus seems to act as a main relay that carries information to and from the cerebral cortex, particularly for vision, hearing, touch, and proprioception (one’s sense of the position of the parts of one’s body). It also is, importantly, involved in sleep, wakefulness, and consciousness. The hypothalamus has a wide range of functions, including the regulation of body temperature, hunger, thirst, and fatigue. The basal ganglia are a group of regions in the brain that are involved in motor control and learning, among other things. They seem to be strongly involved in selecting movements, as well as in learning through reinforcement (as a result of rewards). The amygdala appears to be important in emotional processing, including how we attach emotional significance to particular stimuli.
In addition, many other parts of the brain, in the cortex or elsewhere, have their own special names, usually with Latin or Greek roots that may or may not seem descriptive today. The hippocampus, for example, is named for the Greek word for seahorse. For most of us, these names will have no obvious rhyme or reason, but merely must be learned as particular structures in the brain—the superior colliculus, the tegmentum, the globus pallidus, the substantia nigra, the cingulate cortex, and more. All of these structures come in pairs, with one in the left hemisphere and one in the right hemisphere; only the pineal gland is unpaired. Brain atlases include scores of names for particular structures or regions in the brain and detailed information about those structures or regions.
Some of these smaller structures may have special importance to human behavior. The nucleus accumbens, for example, is a small subcortical region in each hemisphere of the cerebrum that appears important for reward processing and motivation. In experiments with rats that received stimulation of this region
in return for pressing a lever, the rats would press the lever almost to the exclusion of any other behavior, including eating. The nucleus accumbens in humans appears linked to appetitive motivation, responding in anticipation of primary rewards (such as pleasure from food or sex) and secondary rewards (such as money). Through interactions with the orbital frontal cortex and dopamine-generating neurons in the midbrain (including the ventral tegmental area), the nucleus accumbens is considered part of a “reward network.” With a hypothesized role in addictive behavior and in reward computations, more broadly, this putative reward network is a topic of considerable ongoing research.
All of these various locations, whether defined broadly by area or by the names of specific structures, can be further subdivided using directions: front and back, up and down, toward the middle or toward the sides. Unfortunately, the directions often are not expressed in a straightforward manner, and several different terminological conventions exist. Locations toward the front or back of the brain can be referred to as either anterior or posterior, or as rostral or caudal (literally, toward the nose, or beak, or toward the tail). Locations toward the bottom or top of the brain are termed inferior or superior or, alternatively, as ventral or dorsal (toward the stomach or toward the back). A location toward the middle of the brain is called medial; one toward the side is called lateral. Thus, different locations could be described, for example, as in the left anterior cingulate cortex, in the dorsal medial (or sometimes dorsomedial) prefrontal cortex, or in the posterior hypothalamus.
Finally, one other method often is used, a method created by Korbinian Brodmann in 1909. Brodmann, a neuroanatomist, divided the cerebral cortex into about forty-five different areas or regions. Each region was defined based on the kinds of neurons found there and how those neurons are organized. A location described by a Brodmann area (Figure 5) may or may not correspond closely with a structural location. Other organizational schemes exist, but Brodmann’s remains the most widely used to describe the approximate locations of findings in modern human brain imaging studies.
If these varied ways of naming different brain regions seem confusing, it is because they are. But think of them as being like U.S. Supreme Court decisions being cited to U.S. Reports, the Supreme Court Reporter, and Supreme Court Reports: Lawyers’ Edition. They are different methods of pointing to particular parts of the brain.
Source: Professor Mark Dubin, University of Colorado.
Most of neuroscience is dedicated to finding out how the brain works, and although much has been learned, considerably more remains unknown. What is known about how the brain works could be described in many different ways. This section will discuss a few important aspects of brain function and will make several general points about the localization and distribution of functions, as well as brain plasticity, before commenting on the effects of hormones and other chemical influences on the brain.
Some brain functions are localized in, or especially dependent on, particular regions of the brain. This has been known for many years as a result of studies of people who, through traumatic injury, stroke, or cancer, have lost, or lost the use
of, particular regions of their brains. For example, in the 1860s, French anatomist Paul Broca discovered through autopsies of patients that damage to a region in the left inferior frontal lobe (now known as Broca’s area) caused an inability to speak. It is now known that some functions cannot normally be performed when particular brain areas are damaged or missing. The visual cortex, located at the back of the brain in the occipital lobes, is as necessary for vision as the eyes are; the hippocampus is necessary for the creation of many kinds of memory; and the motor cortex is necessary for voluntary movements. The motor cortex and the parallel somatosensory cortex, which is essential for processing sensory information such as the sense of touch from the body, are further subdivided, with particular regions necessary for causing motion or sensing feelings from the legs, arms, fingers, face, and so on.
At the same time, the fact that a region is necessary to a particular class of sensations, behaviors, or cognition does not mean either that it is not involved in other brain functions or that other brain regions do not also contribute to these particular abilities. The amygdala, for example, is involved in our feelings of fear, but it is also involved broadly in emotional reactions, both positive and negative. It also modulates learning, memory, and even sensory perception. Although some functions are localized, others are widely distributed. For example, the visual cortex is essential to vision, but actual visual perception involves many parts of the brain in addition to the occipital lobes. Memories appear to be stored over much of the cortex. Networks of brain regions participate in many of these functions.
For example, if you touch something very hot with your left index finger, your spinal cord, through a reflex loop, will cause you to pull your finger back very quickly. Then the part of your right somatosensory cortex devoted to the index finger will be involved in receiving and initially interpreting the sensation. Other areas of your brain will recognize the stimulus as painful, your motor regions will be involved in waving your hand back and forth or bringing your finger to your mouth, widespread parts of your cortex may lead to you remembering other instances of burning yourself, and your hippocampus may play a role in making a new long-term memory of this incident. There is no single brain region dedicated to reacting when your finger burns; many regions, both specific and general, contribute to the brain’s response.
In addition, brains are at least somewhat “plastic” or changeable on both small and large scales. Anyone who can see has a working visual cortex, and it is always located in the back of the brain (in the occipital lobe), but its exact borders will vary slightly from person to person. Even within individuals, the shape of the brain and the size and location of its parts can change over time. This is most evident before birth, during childhood, and for many people well into their twenties. It is not true that adults make no new neurons, but we do not make many of them, and they are localized to two regions: the olfactory bulb, which plays a huge role in our senses of smell and taste, and the hippocampus, which is
essential for making most kinds of memories. Unfortunately, though, adults can and do lose neurons through the shrinkage or even destruction of parts of the brain, as a result of traumatic injury, disease, or aging.
Another kind of plasticity is ubiquitous in our brains’ lives. The shape or number of our neurons does not change much after childhood, but their connections and functions can change markedly. The brain can adjust and change in response to a person’s behavior or changes in that person’s anatomy. If a person loses an arm to amputation, the parts of the motor and somatosensory cortices that had dealt with that arm may be “taken over” by other body parts. In some cases, this brain plasticity can be extreme. Young children who have lost an entire hemisphere of their brains may grow up to have normal or nearly normal functionality as the remaining hemisphere takes on the tasks of the missing hemisphere. Unfortunately, the possibilities of this kind of extreme plasticity do diminish with age, but rehabilitation after stroke in adults sometimes shows changes in the brain functions undertaken by particular brain regions.
Both kinds of plasticity—physical changes in the brain and connective and functional changes in the brain—pose some important problems for neuroimaging evidence. A criminal defendant’s brain might have unusual and possibly detrimental physical or functional anomalies at or shortly before the time of trial, but those issues may not have existed at the time of the relevant events. This “time machine” problem is discussed further at pages 1244 and 1249.
The picture of the brain as a set of interconnected neurons that fire in networks or patterns in response to stimuli is useful but not complete. In addition to neuron firings, other factors affect how the brain works, particularly chemical factors.
Some of these factors are hormones, generated by the body either inside or outside the brain. They can affect how the brain functions, as well as how it develops. Sex hormones such as estrogen and testosterone can have both short-term and long-term effects on the brain. So can other hormones, such as cortisol, associated with stress, and oxytocin, associated with, among other things, trust and bonding. Endorphins, chemicals secreted by the pituitary gland in the brain, are associated with pain relief and a sense of well-being. Still other chemicals, brought in from outside the body, can have major effects on the brain, both in the short term and the long term. Examples include alcohol, caffeine, nicotine, morphine, and cocaine. These can trigger very specific brain reactions or can have broad effects.
Neuroscientists use many techniques to study the brain. Some of them have been used for centuries, such as autopsies and the observation of patients with
brain damage. Some, such as the intentional destruction of parts of the brain, can be used ethically only in research on nonhuman animals. Of course, research with nonhuman animals, while often helpful in understanding human brains, is of less value when examining behaviors that are uniquely developed among humans. The current revolution in neuroscience is largely the result of a revolution in the tools available to neuroscientists, as new methods have been developed to image and to intervene in living brains. These methods, particularly the imaging methods that allow more precise measurements of human brain structure and function in living people, are giving rise to increasing efforts to introduce neuroscientific evidence in court.
This section will focus on several kinds of neuroimaging—computerized axial tomography (CAT scans), positron emission tomography (PET scans), single photon emission computed tomography (SPECT scans), magnetic resonance imaging (MRI), diffusion tensor imaging (DTI), and functional near-infrared spectroscopy (fNIRS), as well as an older method, electroencephalography (EEG) and its close relative magnetoencephalography (MEG). Some of these methods show the structure of the brain, others show the brain’s functioning, and some do both. These are not the only important neuroscience techniques; several others will be discussed briefly at the end of this section. Genetic analysis provides yet another technique for increasing our understanding of human brains and behaviors, but this reference guide will not deal with the possible applications of human genetics to understanding behavior.
We go into these techniques at some length and in some detail, but we think that is crucial. A judge may be confronted with neuroscience evidence that has been developed using any of these methods. Having at least some background in what they are and their strengths and weaknesses may be invaluable. In the context of any given case, a judge may well choose to look at only the parts of this section relevant to that case rather than read the whole section.
Before digging into the techniques, though, it may be useful to discuss briefly the experts who will be testifying about them. All scientific evidence is presented through experts. In court cases involving brain or mind evidence, judges and juries regularly hear testimony from clinical psychologists and psychiatrists, often specialized as forensic psychologists or psychiatrists. Other kinds of specialist or subspecialist physicians may also testify, such as neurologists, neurosurgeons, neurointensivists, neuroradiologists, or neuropathologists.
Those experts may also be involved in presenting neuroscience evidence, but they will likely be joined or replaced by nonclinical experts in neuroscience, usually from universities or medical schools. Some will be located in departments of neuroscience, neurobiology, or some other term their employer uses for its basic brain science research department; others will be in psychology, psychiatry, neurology, or neurosurgery departments. Although some of these experts will be trained physicians and may even see some patients, most will have doctoral degrees and will focus their work on research. Rather than testifying about the condition
or history of a particular person, they will usually testify about neuroscience techniques or research establishing connections between the findings of those techniques and relevant human behaviors or mental conditions, based on peer-reviewed literature, often that they have published. They may or may not have examined the individual in question.
The distinctions among the types of experts who might testify about neuroscience findings are well discussed in a 2018 law review article.8
Although X-ray imaging has been used for over a century, it has not been very helpful in studying the brain. X-ray images are the shadows cast by dense objects. Not only is the brain surrounded by our very dense skulls, but there are no dense objects inside the brain to cast these shadows. Although a few features of the brain or its blood vessels could be seen through methods that involved the injection of air into some of the spaces in the brain or of contrast media into the blood, these provided limited information. The opportunity to see inside a living brain itself only goes back to about the 1970s, with the development of CAT scans. This ability has since exploded with the development of several new techniques. This section will discuss five of them: CAT scans (sometimes also called computer-assisted tomography or computed tomography), PET and SPECT scans, MRI, DTI, and fNIRS.
The CAT scan is a multidimensional, computer-assisted X-ray machine. Instead of taking one X-ray from a fixed location, in a CAT scan both the X-ray source and the X-ray detectors opposite the source rotate around the person being scanned. Traditional X-ray machines shine the radiation through an object to a photographic negative, producing an actual picture that shows where dense objects blocked the X-rays. In a CAT scan (Figure 6), the X-ray detectors produce data sufficient to reconstruct the scanned object in three dimensions. Computerized algorithms can then be used to produce an image of any particular slice through the object. The multiple angles and computer analysis make it possible to pick out the relatively small density differences within the brain that traditional X-ray technology could not distinguish and to use them to produce images of the soft tissue.
8. See Jane Campbell Moriarty & Daniel D. Langleben, Who Speaks for Neuroscience? Neuroimaging Evidence and Courtroom Expertise, 68 Case W. Rsrv. L. Rev. 783 (2018), https://perma.cc/ECK8-D7WS.
Source: https://en.wikipedia.org/wiki/File:Computed_tomography_of_human_brain_-_large.png.
The CAT scan provides a structural image of the brain. It is useful for showing some kinds of structural abnormalities, but it provides no direct information about the brain’s functioning. A CAT scan brain image is not as precise as the image produced from an MRI, but because the procedure is both quick and (relatively) inexpensive, CAT scanners are common in hospitals. Medically, brain CAT scans are used mainly to look for bleeding or swelling inside the brain, although they also will record sizable tumors or other large structural abnormalities. For neuroscience, the great advantage of the CAT scan was its ability, for the first time, to see some details inside the skull, an ability that has been largely superseded for research by MRI. CAT scans have been used for decades in litigation to provide evidence about the physical condition of the brain, often in cases about brain injuries or medical malpractice. But they have also been used to argue that structural changes in the brain, shown on the CAT scan, are evidence of insanity or other mental impairments. Perhaps their most notable use was in 1982 in the trial of John Hinckley Jr., for the attempted assassination of President Ronald Reagan. A CAT scan of Hinckley’s brain that showed widened sulci (the “valleys” in the surface of the brain) was introduced
into evidence to show that Hinckley suffered from organic brain damage in the form of shrinkage of his brain.9
Traditional X-ray machines and their more sophisticated descendant, the CAT scanner, project X-rays through the skull and create images based on how much of the X-rays are blocked or absorbed. PET scans and SPECT scans operate very differently. In these methods, a substance that emits radiation is introduced into the body. That radiation then is detected from outside the body in a way that can determine the location of the radiation source. These scans generally are used not for determining the brain’s structure but for understanding how it is functioning. One exception: they are particularly good at measuring a specific aspect of brain structure—the density of particular receptors, such as those for dopamine, at synapses in some areas of the brain, such as the frontal lobes.
Radioactive decay of atoms can take several forms, producing alpha, beta, or gamma radiation. PET scanners take advantage of isotopes of atoms that decay by giving off positive beta radiation. Beta decay usually involves the emission of an electron; positive beta decay involves the emission of a positron, the positively charged antimatter equivalent of an electron. When positrons (antimatter) meet electrons (matter), the two particles are annihilated and converted into two photons of gamma radiation with a known energy (511,000 electron volts) that follow directly opposite paths from the site of the annihilation. Inside the body, the collision between the positron and electron and the consequent production of the gamma radiation photons take place within a short distance (a millimeter or two) of the site of the initial radioactive decay that produced the positron.
PET scans, therefore, start with the introduction into a person’s body of a radioactive tracer that decays by giving off a positron. One common tracer is fluorodeoxyglucose (FDG), a molecule that is almost identical to the simple sugar glucose, except that one of the oxygen atoms in glucose is replaced by an atom of fluorine-18, an isotope of the element fluorine with nine protons and nine neutrons. Fluorine normally found in nature is fluorine-19, with nine protons and ten neutrons, and is stable. Fluorine-18 is very unstable and decays quickly, through positive beta decay, losing about half of its mass every 110 minutes (its half-life). The body treats FDG as though it were glucose, and so the FDG is concentrated where the body needs the energy supplied by glucose. A major clinical
9. The effects of this evidence on the verdict are unclear. See Lincoln Caplan, The Insanity Defense and the Trial of John W. Hinckley, Jr. (1984), for a discussion of the case and its consequences for the law.
use of PET scans derives from the fact that tumor cells use energy, and hence glucose, at much higher rates than normal cells.
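The decay arithmetic behind the 110-minute half-life of fluorine-18 mentioned above can be sketched in a few lines. This is an illustrative calculation only; the half-life figure comes from the text, and the function name is ours:

```python
# Fraction of an initial dose of fluorine-18 remaining after a given time,
# using the 110-minute half-life described in the text.
HALF_LIFE_MIN = 110.0

def fraction_remaining(minutes):
    """Exponential decay: half of the tracer is lost every half-life."""
    return 0.5 ** (minutes / HALF_LIFE_MIN)

# After one half-life (110 minutes), 50% remains; after two (220 minutes), 25%.
for t in (0, 110, 220, 440):
    print(f"after {t:3d} min: {fraction_remaining(t) * 100:.1f}% remains")
```

One practical consequence, noted in the text for oxygen-15 and carbon-11 as well: tracers with short half-lives must be produced close to the scanner, which raises the difficulty and cost of using them.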
After giving the FDG time to become concentrated in the body, which usually takes about an hour, the person is put inside the scanner itself. There, the person is entirely surrounded by a very sensitive radiation detector, tuned to respond to gamma radiation of the energy produced by annihilated positrons. When two “hits” are detected by two sensors at about the same time, the source is known to be located on a line connecting the two. Very small differences in the timing of when the radiation is detected can help determine where along that line the annihilation took place. In this way, as more gamma radiation from the decaying FDG is detected, the general location of the FDG within the body can be determined and, as a result, tissue that is using a lot of glucose, such as a tumor, can be located.
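The timing principle described above—that small differences in when the two paired photons arrive help locate the annihilation along the line between the two detectors—can be illustrated with a deliberately simplified, assumed geometry. Real scanners involve rings of many detectors and sophisticated reconstruction; this sketch shows only the arithmetic of so-called time-of-flight localization:

```python
# Simplified time-of-flight sketch: two detectors on opposite sides register
# the paired 511 keV photons, and the difference in their arrival times
# places the annihilation event along the line connecting them.

C = 299_792_458.0  # speed of light, meters per second

def offset_from_midpoint(delta_t_seconds):
    """Distance of the annihilation event from the midpoint of the detector
    pair. Each photon travels at the speed of light, so a timing difference
    of delta_t shifts the event by c * delta_t / 2 toward the detector that
    fired first."""
    return C * delta_t_seconds / 2.0

# A 100-picosecond timing difference corresponds to roughly 1.5 centimeters.
print(f"{offset_from_midpoint(100e-12) * 100:.2f} cm")
```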
In neuroscience research, PET scans also can be taken using different molecules that bind more specifically to particular tissues or cells. Some of these more specific ligands use fluorine-18, but others use a different radioactive tracer that also decays by emitting a positron—oxygen-15. This can be used to determine what parts of the brain are using more or less oxygen. Oxygen-15, however, has a much shorter half-life (two minutes) and so is more difficult and expensive to use than FDG. Similarly, carbon-11, with a half-life of twenty minutes, also can be used. Carbon-11 atoms can be introduced into various molecules that bind to important receptors in the brain, such as receptors for dopamine, serotonin, or opioids. This allows the study of the distribution and function of these receptors, both in healthy people and in people with various mental illnesses or neurological diseases.
The result of a PET scan is a record of the locations of positron decay events in the brain. Computer visualization tools can then create cross-sectional images of the brain, showing higher and lower rates of decay, with differences in magnitude typically depicted through the use of different colors.
PET scans are excellent for showing the location of various receptors in normal and abnormal brains (Figure 7). PET scans are also very good for showing areas of different glucose use and, hence, of different levels of metabolism. This can be very useful, for example, in detecting some kinds of brain damage, such as the damage that occurs with Alzheimer’s disease, where certain regions of the brain become abnormally inactive, or in brain regions that have been damaged by a stroke.
In addition, the comparison (subtraction) of two PET scan measurements—one scan when a person is engaged in a task that is thought to require particular brain functions and a second control (or baseline) scan that is not thought to require these functions—allows researchers indirectly to measure brain function. PET scans were initially used in this way in research to show what areas of the brain were used when people experienced various stimuli or performed particular tasks. It has been substantially superseded for this purpose by functional
Source: https://commons.wikimedia.org/wiki/File:PET-image.jpg.
MRI, which is less expensive, does not involve radiation exposure, provides better spatial resolution, and allows a longer period of testing.
SPECT scans are similar to PET scans. Each can produce a three-dimensional model of the brain and display images of any cross section through the brain. Like PET scans, they require the injection of a radioactive tracer material; unlike PET scans, the radioactive tracer directly emits gamma radiation rather than emitting positrons. These kinds of tracers are more stable, more accessible, and much cheaper than the positron-emitting tracers needed for PET scans. With a PET scan, the gamma detector entirely surrounds the person; with a SPECT scan, one to three gamma detectors are rotated around the body for about fifteen to twenty minutes. As with PET scans, the SPECT tracers can be used to measure brain metabolism or to attach to specific molecular receptors in the brain. The spatial resolution of a SPECT scan, however, is poorer than with a PET scan, with an uncertainty of about one centimeter.
Both PET and SPECT scans are most useful if coupled with good structural images. Contemporary PET and SPECT scanners often include a simultaneous CAT scan; there is some experimental work aimed at providing simultaneous PET and MRI scans.
MRI was developed in the 1970s, first came into wide use in the 1980s, and is currently the dominant neuroimaging technology for producing detailed images of the brain’s structure and for measuring aspects of brain function. MRI
operates on completely different principles than either CAT scans or PET or SPECT scans; it does not rely on X-rays passing through the brain or on the decay of radioactive tracer molecules inside the brain. Rather, MRI’s workings involve more complicated physics. This section will discuss the general characteristics of magnetic resonance imaging and then will focus on structural MRI, diffusion tensor imaging, and finally functional MRI.
The power of an MRI scanner is measured by the strength of its magnetic field, measured in units called tesla (T). The magnetic field of a small bar magnet is about 0.01T. The strength of the Earth’s magnetic field is about 0.00005T. The MRI machines used for clinical purposes use magnetic fields of between 0.2T and 3.0T, with 1.5T or 3.0T being the systems most commonly used today. MRI machines for human research purposes have reached 9.4T. In general, the stronger the magnetic field, the better the image, although higher fields also can create their own measurement difficulties, especially when imaging brain function. MRI machines achieve these high magnetic fields through the use of superconducting magnets, made by cooling the electromagnet with liquid helium to a temperature about four degrees above absolute zero. For this and other reasons, MRI systems are complicated, with higher initial and continuing maintenance costs compared to some other methods for functional imaging (e.g., electroencephalography; see below).
In most MRI systems, the individual, on an examination table, slides into a cylindrical opening in the machine so that the part of the body to be imaged is in the middle of the magnet (Figure 8). Depending on the kind of imaging performed, the examination or experiment can take from about thirty minutes to more than two hours; throughout the scanning process the individual being examined needs to stay as motionless as possible to avoid corrupting the images. The main sensations are the loud thumping and buzzing noises made by the machine, as well as the machine’s vibration.
MRI examinations appear to involve minimal risk. Unlike the other neuroimaging technologies discussed above, MRI does not involve any high-energy radiation. The magnetic field seems to be harmless, at least as long as magnetizable objects are kept away from it. MRI participants need to remove most metal objects; people with some kinds of implanted metallic devices, with tattoos with metal in their ink, or with fragments of ferrous metal anywhere in their bodies cannot be scanned because of the dangerous effects of the field on those bits of metal.
When an individual is positioned in the MRI scanner, the powerful field of the magnet causes the nuclei of atoms (usually the hydrogen nuclei of the body’s water molecules) to align with the direction of the main magnetic field of the magnet. Using a brief electromagnetic pulse, these aligned atoms are then “flipped” out of alignment from the main magnetic field, and, after the pulse stops, the nuclei then rapidly realign with the strong main magnetic field. Because the
Source: Courtesy of Anthony Wagner, Stanford University.
nuclei spin (like a top), they create an oscillating magnetic field that is measured by a receiver coil. During structural imaging, the strength of the signal generated partially depends on the relative density of hydrogen nuclei, which varies from point to point in the body according to the density of water. In this manner, MRI scanners can generate images of the body’s anatomy or of other scanned objects. Because an MRI scan can effectively distinguish between similar soft tissues, MRI can provide very high-resolution images of the brain’s anatomy, which is, after all, made up of soft tissue.
Structural MRI scans produce very detailed images of the brain (Figure 9). They can be used to spot abnormalities, large and small, as well as to see normal
Source: Courtesy of Anthony Wagner, Stanford University.
variation in the size and shape of brain features. Structural MRI can be used, for example, to see how brain features change as a person ages. Previously, getting that kind of detailed information about a brain required an autopsy or, at a minimum, extensive neurosurgery. This ability makes structural MRI both an important clinical tool and a very useful technique for research that tries to correlate human differences, normal and abnormal, to differences in brain structure, as well as for research that seeks to understand brain development.
Functional MRI (fMRI; Figure 10) harnesses the power of MRI in neuroscience for understanding brain function. While it is perhaps one of the most exciting technologies used in neuroscience, many findings from fMRI research have proven difficult to replicate, as discussed in the section titled “Replication” below. The technique shows what regions of the brain are more or less active in response to the performance of particular tasks or the presentation of particular stimuli. It does not measure brain activity (the firing of neurons) directly but, instead, looks at how blood flow changes in response to brain activity and uses those changes, through the so-called BOLD response (the blood-oxygen-level-dependent response), to allow the researcher to infer patterns of brain activity.
Structural MRI generally creates its images by flipping hydrogen nuclei with radiofrequency pulses and detecting signals that reflect the density of hydrogen atoms in the tissue being scanned. For fMRI, the scanner detects changes in the ratio of oxygenated hemoglobin (oxyhemoglobin) and deoxygenated hemoglobin (deoxyhemoglobin) in particular locations of the brain. Hemoglobin is the protein in red blood cells that carries oxygen from the lungs to the body. Based on metabolic demands, hemoglobin molecules supply oxygen for the body’s needs. Accordingly, “fresher” blood will have a higher ratio of oxyhemoglobin to deoxyhemoglobin than more “used” blood. Importantly, because deoxyhemoglobin (which is found at a higher level in “used” blood) causes the fMRI signal to decay, a higher ratio of oxyhemoglobin to deoxyhemoglobin will produce a stronger fMRI signal.
Neural activity is energy intensive for neurons, and neurons do not contain any significant reserves of oxygen or glucose. Therefore, the brain’s blood vessels respond quickly to increases in activity in any one region of the brain by sending more fresh blood to that area. This is the basis of the BOLD response, which measures changes in the ratio of oxyhemoglobin to deoxyhemoglobin in a brain region several seconds after activity in that region. In particular, when a brain region becomes more active, there is first, perhaps more intuitively, a decline in the ratio of oxyhemoglobin to deoxyhemoglobin immediately after activity in the region, apparently corresponding to the depletion of oxygen in the blood at the site of the activity. This decline, however, is very small and very hard to detect with fMRI. Immediately after this decrease, there is an infusion of fresh (oxyhemoglobin-rich) blood, which can take several seconds to reach maximum; it is this infusion that results in the increase in the oxy/deoxyhemoglobin ratio
that is measured in BOLD fMRI studies. Because even this subsequent increase is relatively small and variable, fMRI experiments typically involve many trials of the same task or class of stimuli in order to be able to see the signal amid the noise.
Thus, in a typical fMRI experiment, the participant will be placed in the scanner and the researchers will measure differences in the BOLD response throughout the brain under different conditions. A participant might, for example, be told to look at a video screen on which images of places alternate with images of faces. For purposes of the experiment, the computer will impose a spatial map on the participant’s brain, dividing it into thousands of little cubes, each a few cubic millimeters in size, referred to as “voxels.” Either while the data are being collected (so-called real-time fMRI10) or after an entire dataset has been gathered, a computerized program will compare the BOLD signal for each voxel when the screen was showing places with the signal when it was showing faces. It will identify the regions that showed a statistically significant increase in the BOLD response several seconds after a face appeared on the screen compared with several seconds after a place appeared. The researchers will infer that those regions were, in some way, involved in how the brain processes images of faces. The results will typically be shown as a structural brain image on which areas of more or less activation, as determined by a statistical test, are represented by different colors.11
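The voxel-by-voxel statistical comparison just described can be sketched with simulated data. Everything here is illustrative: the array shapes, the simulated effect size, and the bare threshold of t > 3.0 are our assumptions, and real analysis pipelines add steps such as motion correction, spatial smoothing, and corrections for multiple comparisons:

```python
import numpy as np

rng = np.random.default_rng(0)
n_trials, n_voxels = 40, 1000

# Simulated per-trial BOLD amplitudes for each voxel under two conditions.
faces = rng.normal(0.0, 1.0, (n_trials, n_voxels))
places = rng.normal(0.0, 1.0, (n_trials, n_voxels))
faces[:, :50] += 1.5  # pretend voxels 0-49 respond more strongly to faces

def two_sample_t(a, b):
    """Welch's t statistic, computed separately for each voxel (column)."""
    ma, mb = a.mean(0), b.mean(0)
    va, vb = a.var(0, ddof=1), b.var(0, ddof=1)
    return (ma - mb) / np.sqrt(va / len(a) + vb / len(b))

t = two_sample_t(faces, places)
active = np.flatnonzero(t > 3.0)  # crude threshold, for illustration only
print(f"{len(active)} voxels flagged as face-responsive")
```

In effect, each voxel gets its own small statistical test, and the colored “activation” maps shown in publications are pictures of which voxels passed that test.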
fMRI was first proposed in 1990, and the first research results using BOLD-contrast fMRI in humans were published in 1992. The first decade of this century saw an explosive increase in the number of research articles based on fMRI, with nearly 2,500 articles published in 2008 alone—compared to about 450 in 1998.12 The tide continued to rise; in 2015 alone, nearly 30,000 fMRI papers were published. MRI (functional and structural) is quite safe, and MRI machines are widespread in developed countries, largely for clinical use but increasingly for research use as well. Although fMRI research is subject to many questions and controversies (discussed below), this technique sparked a surge of
10. Use of this “real-time fMRI” has been increasing, but it is not yet clear whether the claims for it will stand up.
11. This example is actually a simplified version of experiments performed by Professor Nancy Kanwisher at MIT in the early 2000s that explored a region of the brain called the fusiform face area that is particularly involved in processing visions of faces. See Kathleen M. O’Craven & Nancy Kanwisher, Mental Imagery of Faces and Places Activates Corresponding Stimulus-Specific Brain Regions, 12 J. Cognitive Neuroscience 1013 (2000), https://doi.org/10.1162/08989290051137549.
12. For a census of fMRI articles from 1993 to 2008, see Carole A. Federico, Sofia Lombera & Judy Illes, Intersecting Complexities in Neuroimaging and Neuroethics, in Oxford Handbook of Neuroethics 377 (Judy Illes & Barbara J. Sahakian eds., 2012), https://doi.org/10.1093/oxfordhb/9780199570706.013.0098. This continued an earlier census from 1993 to 2001. Judy Illes, Matthew P. Kirschen & John D.E. Gabrieli, From Neuroimaging to Neuroethics, 6 Nature Neuroscience 205 (2003), https://doi.org/10.1038/nn0303-205.
Source: Courtesy of Anthony Wagner, Stanford University.
interest and commentary on the relationship between law and neuroscience, from questions about criminal responsibility and lie detection to how judges13 and jurors14 make decisions.
13. See, e.g., Anna Spain Bradley, The Disruptive Neuroscience of Judicial Choice, 9 U.C. Irvine L. Rev. 1 (2018), https://perma.cc/UVL3-EJEA (drawing from neuroscience evidence, the author argues that rather than based purely on rationality, judicial decision-making is influenced by bias, emotion, and empathy).
14. Edith Greene & Brian S. Cahill, Effects of Neuroimaging Evidence on Mock Juror Decision Making, 30 Behav. Scis. & L. 280 (2012), https://doi.org/10.1002/bsl.1993; Shelby Hunter, N. J. Schweitzer & Jillian M. Ware, Neuroscience and Jury Decision-Making, in Criminal Juries in the
DTI (diffusion tensor imaging) is a different application of MRI, one that uses the MRI data not to build a picture of the brain (structural MRI) or of changes in the ratio of oxygenated to deoxygenated hemoglobin over time in the brain (functional MRI) but to follow the ways water diffuses through brain tissue to map out the brain’s white matter. As noted above, neuronal tissue in the brain can be divided roughly into gray matter (the bodies of neurons) and white matter (neuronal axons that transmit signals over distance). DTI uses MRI to see what direction water diffuses through brain tissue. Tracts of white matter are made up of bundles of axons coated with fatty myelin. Water will diffuse through that white matter along the direction of the axons and not, generally, across them. This method can be used, therefore, to trace the location of these bundles of white matter and hence the long-distance connections between different parts of the brain.
These kinds of “wiring diagrams” can be very important in brain research, but they also have clinical applications. Abnormal patterns of these connections may be associated with various conditions, from traumatic brain injury and stroke to brain tumors, Alzheimer’s disease, and dyslexia, some of which may have legal implications.15
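One common summary statistic computed from DTI data is fractional anisotropy (FA), which captures how strongly water diffusion at a given location favors one direction, as it does along white-matter tracts. The sketch below applies the standard FA formula to made-up example tensors; real pipelines estimate a diffusion tensor at every voxel from many diffusion-weighted images:

```python
import numpy as np

def fractional_anisotropy(tensor):
    """Fractional anisotropy (0 = fully isotropic diffusion; values near 1 =
    diffusion strongly favoring one direction), computed from the
    eigenvalues of a 3x3 diffusion tensor."""
    lam = np.linalg.eigvalsh(tensor)
    mean = lam.mean()
    num = np.sqrt(((lam - mean) ** 2).sum())
    den = np.sqrt((lam ** 2).sum())
    return np.sqrt(1.5) * num / den

# Nearly equal diffusion in all directions: FA close to 0.
iso = np.diag([1.0, 0.95, 1.05])
# Diffusion much faster along one axis, as in a white-matter tract: FA near 1.
tract = np.diag([1.7, 0.2, 0.2])
print(f"isotropic FA = {fractional_anisotropy(iso):.2f}")
print(f"tract FA     = {fractional_anisotropy(tract):.2f}")
```

Maps of FA across the brain are one way researchers quantify the “wiring diagram” abnormalities mentioned above.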
Like functional magnetic resonance imaging (fMRI), functional near-infrared spectroscopy can detect changes in hemoglobin within the brain, but it does so by measuring differences in the absorption of light that occurs through blood flow to different regions of the brain. fNIRS is effective because biological tissue is relatively transparent to infrared light at wavelengths ranging from 650 to 925 nanometers (nm). Oxygenated and deoxygenated blood absorb light in this range differently: deoxygenated blood absorbs more strongly below 790 nm, and oxygenated blood absorbs more strongly above 790 nm.16 By detecting the changes in the relative concentrations of
21st Century: Psychological Science and the Law 221 (Cynthia Najdowski & Margaret Stevenson eds., 2018), https://doi.org/10.1093/oso/9780190658113.003.0011.
15. See Jennifer Christine van Velkinburgh, Mark D. Herbst & Stewart M. Casper, Diffusion Tensor Imaging in the Courtroom: Distinction Between Scientific Specificity and Legally Admissible Evidence, 11 World J. Clinical Cases 4477 (2023), https://perma.cc/LLV4-28L3; Andrew M. Lehmkuhl II, Diffusion Tensor Imaging: Failing Daubert and Fed. R. Evid. 702 in Traumatic Brain Injury Litigation, 87 U. Cin. L. Rev. 279 (2018), https://perma.cc/GW32-7BUX.
16. Wei-Liang Chen et al., Functional Near-Infrared Spectroscopy and Its Clinical Application in the Field of Neuroscience: Advances and Future Directions, 14 Frontiers Neuroscience 1 (2020), https://doi.org/10.3389/fnins.2020.00724.
different light-absorbing molecules, fNIRS can measure energy metabolism in the brain, including regional changes in brain activity.
fNIRS is portable, noninvasive, and cost-effective, and it allows real-time temporal tracking of brain activity. For these reasons, it is often used in clinical and research-based settings. In the pediatric ICU, for example, where long-term, real-time monitoring of brain activity is necessary, brain oxygenation levels are regularly monitored using fNIRS.17 Its disadvantages are that it cannot be used on parts of the brain more than four centimeters (a bit more than one and a half inches) below the skull and that its spatial resolution is low compared with that of fMRI, although higher than that of EEG, to which we turn next.
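The two-wavelength principle underlying fNIRS can be sketched as a small linear system: absorbance measured at one wavelength below and one above 790 nm is inverted to recover the relative concentrations of oxy- and deoxyhemoglobin. The extinction coefficients and concentrations below are invented for illustration; real analyses use measured coefficients and a modified Beer-Lambert model.

```python
import numpy as np

# Hypothetical extinction coefficients (rows: wavelengths, columns:
# [oxyhemoglobin, deoxyhemoglobin]); the numbers are illustrative only.
E = np.array([[0.7, 1.5],   # 760 nm: absorbed more strongly by deoxy-Hb
              [1.2, 0.6]])  # 850 nm: absorbed more strongly by oxy-Hb

true_conc = np.array([2.0, 1.0])   # assumed concentrations (arbitrary units)
absorbance = E @ true_conc         # what the two detectors would measure

# Invert the 2x2 system to recover the two concentrations
recovered = np.linalg.solve(E, absorbance)
print(recovered)   # recovers the assumed concentrations, 2.0 and 1.0
```

Because the two hemoglobin species absorb differently at the two wavelengths, the matrix is invertible and the two concentrations can be separated from just two measurements.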
EEG (electroencephalography) is the measurement of the brain’s electrical activity as exhibited on the scalp; MEG (magnetoencephalography) is the measurement of the small magnetic fields generated by the brain’s electrical activity. The roots of EEG go back to the nineteenth century, but its use increased dramatically in the 1930s and 1940s.
The process for medical or research-grade EEG uses electrodes touching the participant’s head, applied with an electrically conductive substance (a paste or a gel) to record electrical currents on the surface of the scalp. Multiple electrodes are used; for clinical purposes, 16 to 25 electrodes are commonly used, although arrays of more than 200 electrodes can be used. In MEG, superconducting “SQUIDS”18 are positioned over the scalp to detect the brain’s tiny magnetic signals. The electrical currents are generated by the neurons throughout the brain, although EEG is more sensitive to currents emerging from neurons closer to the skull. It is therefore more challenging to use EEG to reveal the functioning of structures deep in the brain.
Because EEG and MEG directly measure neural activity, in contrast to the measures of blood flow in fMRI and fNIRS, the timing of the neural activity can be measured with great precision (the temporal resolution), down to milliseconds. On the other hand, in comparison to fMRI, EEG and MEG are poor at determining the location of the sources of the currents (the spatial resolution). Any one pattern of EEG or MEG signal at the scalp has an infinite number of possible source patterns, making the problem of determining the brain source of measured EEG/MEG signal particularly challenging and the results less precise.
17. Id.
18. SQUID stands for superconducting quantum interference device (and has nothing to do with the marine animal). This device can measure extremely small magnetic fields, including those generated by various processes in living organisms, and so is useful in biological studies.
The EEG/MEG signal is also a summation of the activity of thousands to millions of neurons at any one time.
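The source-localization problem can be illustrated with a toy linear forward model (all numbers invented for illustration): when there are more sources than sensors, the "lead field" matrix mapping sources to sensors has a null space, so different source patterns produce identical scalp signals.

```python
import numpy as np

# Toy forward model: 2 scalp sensors observing 3 brain sources.
# Real EEG/MEG involves far more sources than sensors, so this
# under-determination arises on a much larger scale.
lead_field = np.array([[1.0, 0.5, 0.25],
                       [0.25, 0.5, 1.0]])

sources_a = np.array([1.0, 0.0, 0.0])
# [1.0, -2.5, 1.0] lies in the null space of the lead-field matrix,
# so adding it changes the sources without changing the sensor readings.
sources_b = sources_a + np.array([1.0, -2.5, 1.0])

signal_a = lead_field @ sources_a
signal_b = lead_field @ sources_b

# The two source patterns differ, yet the measured signals are identical,
# so no algorithm could tell them apart from the scalp data alone.
print(np.allclose(signal_a, signal_b))    # True
print(np.allclose(sources_a, sources_b))  # False
```

This is why EEG/MEG source estimates always rest on additional modeling assumptions rather than on the scalp measurements alone.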
The results of clinical EEG and MEG tests can be very useful for detecting some kinds of brain conditions, notably epilepsy, and are also part of the process of diagnosing brain death. EEG and MEG are also used for research, particularly in the form of event-related potentials, which correlate the size or pattern of the EEG or MEG signal with the performance of particular tasks or the presentation of particular stimuli. Thus, as with the hypothetical fMRI experiment described above, one could look for any consistent changes in the EEG or MEG signal when a participant sees faces rather than a blank screen. Apart from the determination of brain death, where EEG is already used, the most discussed possible legally relevant uses of EEG have been lie detection and memory detection.
EEG is safe, cheap, quiet, and portable. MEG is safe and quiet, but the technology is considerably more expensive than EEG and is not easily portable. EEG methods can tolerate much more head movement by the participant than PET or MRI techniques, although movement is often a challenge for MEG. EEG and MEG have good temporal resolution, distinguishing between milliseconds, which makes them very attractive for research, but their spatial resolution is inadequate for many research questions. As a result, some researchers use a combination of methods, integrating MRI and EEG or MEG data (acquired simultaneously or at different times) using sophisticated data analysis techniques.
Although EEG has most often been used in clinical and research-based settings, an increasing number of companies offer consumer-based devices with EEG electrodes. These devices have far fewer electrodes than clinical-grade EEG devices, and so offer much lower resolution and much noisier data. Large mainstream technology companies from Meta to Microsoft are developing consumer-based neural interface devices to enable consumers to meditate, play brain-controlled games, monitor their brain health, or even one day replace peripheral devices like keyboards and mice to interact with other technology. Widespread adoption of these consumer-based neural interface devices could significantly increase the amount of neural data being passively recorded in everyday life and potentially introduced into the legal system. The quality of the data, and of the algorithms used to process them, will increasingly be at issue as these data are introduced into legal settings.
Several other neuroscience techniques may also have legal applications. This section will briefly describe four other methods that may be discussed in court: lesion studies, transcranial magnetic stimulation, deep brain stimulation, and implanted microelectrode arrays.
One powerful way to test whether particular brain regions are associated with particular mental processes is to study mental processes after those brain regions have been destroyed or damaged. Observations of the consequences of such lesions, created by accidents or disease, were, in fact, the main way in which localization of brain function was originally understood.
For ethical reasons, the experimental destruction of brain tissue is limited to nonhuman animals. Nonetheless, in addition to accidental damage, on occasion, human brains will need to be intentionally damaged for clinical purposes. Tumors may have to be removed or, in some cases, epilepsy may have to be treated by removing the region of the brain where the seizures began. Valuable knowledge may be gained from following these individuals.
Our understanding of the role of the hippocampus in creating memories, as one example, was greatly aided by study of a patient known as H.M.19 When he was twenty-seven years old, H.M. was treated for intractable epilepsy, undergoing an experimental procedure that surgically removed his left and right medial temporal lobes, including most of his two hippocampi. The surgery was successful, but from that time until his death in 2008, H.M. could not form new long-term memories, either of events or of facts. His short-term memory, also known as working memory, was intact, and he could learn new motor, perceptual, and (some) cognitive skills (his “procedural memory” still functioned). He also could remember his life’s events from before his surgery, although his memories were weaker the closer the events were to the surgery. Those brain regions were clearly involved in making new long-term memories for facts or events, but not in storing old ones.
Transcranial magnetic stimulation (TMS) is a noninvasive method of creating a temporary, reversible, functional brain lesion. Using this technique, researchers disrupt the organized activity of the brain’s neurons by applying an electrical current. The current is formed by a rapidly changing magnetic field that is generated by a coil held next to the participant’s skull. The field penetrates the scalp and skull easily and causes a small current in a roughly conical portion of the brain below the coil. This current induces a change in the typical responses of the neurons, which can block the normal functioning of that part of the brain.
19. H.M.’s name, not publicly released until his death, was Henry Gustav Molaison. Details of his life can be found in several obituaries, including Benedict Carey, H.M., An Unforgettable Amnesiac, Dies at 82, N.Y. Times, Dec. 4, 2008, at A1, https://www.nytimes.com/2008/12/05/us/05hm.html, and H.M., A Man Without Memories, Economist, Dec. 18, 2008, https://perma.cc/7CHX-UQNH. The first scientific report of his case was William Beecher Scoville & Brenda Milner, Loss of Recent Memory After Bilateral Hippocampal Lesions, 20 J. Neurology Neurosurgery & Psychiatry 11 (1957), https://doi.org/10.1136/jnnp.20.1.11.
TMS can be done in a number of ways. In some approaches, TMS happens while the participant performs the task to be studied. In these approaches, while the task is performed, TMS can be delivered as single pulses, paired pulses, or repetitive, rapid (more than once per second) pulses. Another method uses TMS for an extended period, often several minutes, before the task is performed. This sequential TMS uses slow (less than once per second) repetitive TMS.
The effects of single-pulse/paired-pulse and concurrent repetitive TMS are present while the coil is generating the magnetic field, and can extend for a few tens of milliseconds after the stimulation is turned off. By contrast, the effects of pretask repetitive TMS are thought to last for a few minutes (about half as long as the actual stimulation). When TMS is repeated regularly in nonhumans, long-term effects have been observed. Therefore, guidelines regarding how much stimulation can be applied in humans have been established.
The Food and Drug Administration (FDA) has approved TMS as a treatment for otherwise untreatable depression. For neuroscience research, the value of TMS lies in its ability to alter brain function in a relatively small area (about two centimeters across) of an otherwise healthy brain, allowing targeted testing of the role of a particular brain region in a particular class of cognitive abilities. By blocking the normal functioning of the affected neurons, TMS can create what is, in effect, a temporary lesion of that area of the brain. TMS appears to have minimal risks, but its long-term effects are not known.
Deep brain stimulation (DBS) is an FDA-approved treatment for several neurological conditions affecting movement, notably Parkinson’s disease, essential tremor, and dystonia. The device used in DBS includes a lead that is implanted into a specific brain region, a pulse generator (generally implanted under the shoulder or in the abdomen), and a wire connecting the two. The pulse generator sends an electric current to the electrodes in the lead, which in turn affect the functioning of neurons in an area around the electrodes.
The precise manner by which DBS affects brain function remains unclear. Even for Parkinson’s disease, for which it is widely used, individual patients sometimes benefit in unpredictable ways from placement of the lead in different locations and from different frequency or power of the stimulation.
Researchers are continuing to experiment with DBS for other conditions, such as depression, minimally conscious state, chronic pain, and overeating that leads to morbid obesity. The results are sometimes surprising. In a Canadian trial of DBS for appetite control, the obese patient did not ultimately lose weight but did suddenly develop a remarkable memory. That research group quickly started
a trial of DBS for dementia (although thus far without any obvious success).20 Other surprises include some observed negative side effects from DBS, such as compulsive gambling, hypersexuality, and hallucinations. These kinds of unexpected consequences from DBS make it of continuing broader research interest.
Ultimately, to understand the brain fully, one would like to know what each of its 100 billion neurons is doing at any given time, analyzed in terms of their collective patterns of activity.21 No current technology comes close to that kind of resolution. Although fMRI has a voxel size of a few cubic millimeters, for example, it is looking at the blood flow responding to thousands or millions of neurons at each point in the brain. Conversely, while direct electrical recordings allow individual neurons to be examined and manipulated, it is not easy to record from many neurons at once. Recent developments, though still on a relatively small scale, offer one method for recording from multiple neurons simultaneously: the implanted microelectrode array.
A chip containing many tiny electrodes can be implanted directly into brain tissue. Some of those electrodes will make usable connections with neurons and can then be used either to record the activity of that neuron (when it is firing or not) or to stimulate the neuron to fire. These kinds of implants have been used in research on motor function, both in monkeys and occasionally in human patients. The research has been aimed at better understanding which patterns of neuronal activity lead to movement and hence, in the long run, perhaps at a method of treating quadriplegia or other movement disorders.
These arrays have several disadvantages as research tools. Arrays require neurosurgery for their implantation, with all of its consequent risks of infection or damage. They also have a limited lifespan, because the brain’s defenses eventually degrade the electrical connection between the electrode and the neuron, usually over the span of a few months. Finally, the arrays can reach only a tiny number of the billions of neurons in the brain; current arrays have about one hundred microelectrodes. But the field and the devices in use are changing rapidly. Nature Electronics declared brain-computer interface technology the “Technology of the Year” in 2023, and dedicated a volume to its current state of development.22
20. See Clement Hamani et al., Memory Enhancement Induced by Hypothalamic/Fornix Deep Brain Stimulation, 63 Ann. Neurology 119 (2008), https://doi.org/10.1002/ana.21295. See also small and mixed results reported in Andres M. Lozano et al., A Phase II Study of Fornix Deep Brain Stimulation in Mild Alzheimer’s Disease, 54 J. Alzheimer’s Disease 777 (2016), https://pubmed.ncbi.nlm.nih.gov/27567810.
21. See discussion in Emily R. Murphy & Henry T. Greely, What Will Be the Limits of Neuroscience-Based Mindreading in the Law?, in The Oxford Handbook of Neuroethics 635 (Judy Illes & Barbara J. Sahakian eds., 2011).
Lawyers trying to introduce neuroscience evidence will almost always be arguing that, when interpreted in the light of some preexisting research study, some kind of neuroscience-based test of the brain of a person in the case—usually a party, though sometimes a witness—is relevant to the case. It might be a claim that a PET scan shows that a criminal defendant was likely to have been legally insane at the time of the crime; it could be a claim that an fMRI of a witness demonstrates that they are lying. The judge will have to determine whether the scientific evidence is admissible at all under the Federal Rules of Evidence, and particularly under Rule 702. If the evidence is admissible, the finder of fact will need to consider the validity and strength of the underlying scientific finding, the accuracy of the particular test performed on the party or witness, and the application of the former to the latter.
Neuroscience-based evidence will commonly raise several scientific issues relevant to both the initial admissibility decision and the eventual determination of the weight to be given the evidence. This section will examine seven of these issues: replication, experimental design, participant selection and number, group averages, technical accuracy, statistical issues, and countermeasures. The discussion will focus on fMRI-based evidence, as that seems likely to be the method that will be used most frequently in the coming years, but most of the issues apply more broadly.
One general point is absolutely crucial. The various techniques discussed in the section titled “Some Common Neuroscience Techniques” above are generally accepted scientific procedures, for use both in research and, in most cases, in clinical care. Each one is a good scientific tool in general. The crucial issue is not likely to be whether the techniques meet the requirements for admissibility when used for some purposes, but whether the techniques—when used for the purpose for which they are offered—meet those requirements. Sometimes proponents of fMRI-based lie detection, for example, argue that the technique should be accepted because fMRI is the subject of tens of thousands of peer-reviewed publications. That is true, but irrelevant—the question is the application of fMRI to lie detection, which is the subject of far fewer, and much less definitive, publications.
22. Editorial, An Interface Connects, 6 Nature Elecs. 89 (2023), https://doi.org/10.1038/s41928-023-00938-8.
A good general rule of thumb in science is never to rely on any experimental finding until it has been independently replicated. This may be particularly true with fMRI experiments, not because of fraud or negligence on the part of the experimenters, but because, for reasons discussed below, these experiments are very complicated. Replication builds confidence that those complications have not led to false results.
In many scientific fields, including much of fMRI research, replication is sometimes not as common as it should be. A scientist often is not rewarded for replicating (or failing to replicate) another’s work. Grants, tenure, and awards tend to go to people doing original research. The rise of fMRI has meant that such original experiments are easy to conceive and to attempt—any researchers with experimental expertise, access to research participants (often undergraduates), and access to an MRI scanner (found at any major medical facility) can try their own experiments and, if the study design and logic are sound and the results are statistically significant, may well end up with published results. Experiments replicating, or failing to replicate, another’s work are neither as exciting nor as publishable.
For example, as discussed in more detail below, more than fifteen different laboratories have collectively published twenty to thirty peer-reviewed articles finding some statistically significant relationship between fMRI-measured brain activity and deception. None of the studies is an independent replication of another laboratory’s work. Each laboratory used its own experimental design, its own scanner, and its own method of analysis. Interestingly, the published results implicate many different areas of the brain as being activated when a participant lies. A few of the brain regions are found to be important in most of the studies, but many of the other brain regions showing a correlation with deception differ from publication to publication. Only a few of the laboratories have published replications of their own work; some of those laboratories have actually published findings with different results from those in their earlier publications.
That a finding has been replicated does not mean it is correct; different laboratories can make the same mistakes. Neither does failure of replication mean that a result is wrong. Nonetheless, the existence of independent replication is important support for a finding.
The most important part of an fMRI experiment is not the MRI scanner, but the design of the underlying experiment being examined in the scanner. A poorly designed experiment may yield no useful information, and even a well-designed experiment may lead to information of uncertain relevance.
A well-designed experiment must focus on the particular mental state or brain process of interest while minimizing any systematic biases. This can be especially difficult with fMRI studies. After all, these studies are measuring blood flow in the brain associated with neuronal responses in particular regions. If, for example, in an experiment trying to assess how the brain reacts to pain, the experimental participants are consistently distracted at one point in the experiment by thinking about something else, the areas of brain activation will include the areas activated by the distraction. One of the earliest published lie-detection experiments was designed so that the experimental participants pushed a button for “yes” only when saying (honestly) that they held the card displayed; they pushed the “no” button both when they did not hold the card displayed and when they did hold it but were following instructions to lie. They were to say “yes” only twenty-four times out of 432 trials.23 The resulting differences might have come from the differences in thinking about telling the truth or telling a lie—but they also may have come from the differences in thinking about pressing the “no” button (the most common action) and pressing the “yes” button (the less frequent response). The results themselves cannot distinguish between the two explanations.
Designing good experiments is difficult, but in some respects the better the experiment, the less relevant it may prove to a real situation. A laboratory experiment attempts to minimize distractions and differences among participants, but such factors will be common in real-world settings. Perhaps more importantly, for some kinds of experiments it will be difficult, if not impossible, to reproduce in the laboratory the conditions of interest in the real world. As an extreme example, if one is interested in how a murderer’s brain functions during a murder, one cannot conduct an experiment that involves having the participant commit a murder in the scanner. For ethical reasons, that condition of interest cannot be tested in the experiment.
The problem of trying to detect deception provides a different example. All published laboratory-based experiments involve people who know that they are taking part in a research project. Most of them are students and are being paid to participate in the project. They have received detailed information about the experiment and have signed a consent form. Typically, they are instructed to “lie” about a particular matter. Sometimes they are told what the lie should be (to deny that they see a particular playing card, such as the seven of clubs, on a screen in the scanner); sometimes they are told to make up a lie (about their most recent vacation, for example). In either case, they are following instructions—doing what they should be doing—when they tell the “lie.”
23. Daniel D. Langleben et al., Telling Truth from Lie in Individual Subjects with Fast Event-Related fMRI, 26 Hum. Brain Mapping 262 (2005), https://doi.org/10.1002/hbm.20191. See discussion in Nancy Kanwisher, The Use of fMRI in Lie Detection: What Has Been Shown and What Has Not, in Emilio Bizzi et al., Using Imaging to Identify Deceit: Scientific and Ethical Questions 7, 10 (2009); see also discussion in Anthony Wagner, Can Neuroscience Identify Lies?, in A Judge’s Guide to Neuroscience: A Concise Introduction 13, supra note 5.
This situation is different from the realistic use of lie detection, when a guilty person needs to tell a convincing story to avoid a high-stakes outcome, such as arrest or conviction—and even an innocent person will be genuinely nervous about the possibility of an incorrect finding of deception. In an attempt to parallel these real-world characteristics, some laboratory-based studies have tried to give participants some incentive to lie successfully; for example, the participants may be told (falsely) that they will be paid more if they “fool” the experimenters. Although this may increase the perceived stakes, it seems unlikely that it creates a realistic level of stress. These differences between the laboratory and the real world do not mean that the experimental results of laboratory studies are unquestionably different from the results that would exist in a real-world situation, but they do raise serious questions about the extent to which the experimental data bear on detecting lies in the real world.
Few judges will be expert in the difficult task of designing valid experiments. Although judges may be able themselves to identify weaknesses in experimental design, more often they will need experts to address these questions. Judges will need to pay close attention to that expert testimony and the related argument, as details of experimental design may turn out to be absolutely crucial to the value of the experimental results.
Doing fMRI scans is expensive. The total cost of performing an hour-long research scan of a participant ranges from about $300 to $1,000. Much fMRI research, particularly work without substantial medical implications, is not richly funded. As a result, studies tend to use only a small number of participants—many fMRI studies use ten to twenty participants, and some use even fewer. In the lie-detection literature, for example, the number of participants used ranges from four to about thirty.
It is unclear how representative such a small group would be of the general population. This is particularly true of the many studies that use university students as research participants. Students typically are from a restricted age range, are likely to be of above-average intelligence and socioeconomic background, may not accurately reflect the country’s ethnic diversity, and typically will underrepresent people with serious mental conditions. In order to limit possible confounding variables, it can make sense for a study design to select, for example, only healthy, right-handed, native-English-speaking male undergraduates who are not using drugs. But the very process of selecting such a restricted group raises questions about whether the findings will be relevant to other groups of
people. They may be directly relevant, or they may not be. At the early stages of any fMRI research, it may not be clear what kinds of differences among participants will or will not be important.
Most fMRI-based research looks for statistically significant associations between particular tasks or mental states and particular patterns of brain activation across a number of participants. It is highly unlikely that any fMRI pattern will be found always to occur under certain circumstances in every person tested, or even that it will always occur under those circumstances in any one person. Human brains and their responses are too complicated for that. Research is highly unlikely to show that brain pattern “A” follows stimulus “B” each and every time and in every single person, although it may show that A follows B most of the time.
Consider an experiment with ten participants that examines how brain activation varies with the sensation of pain. A typical approach to analyzing the data is to take the average brain activation patterns of all ten participants combined, looking for the regions that, across the group, have the greatest changes—the most statistically significant changes—when the painful stimulus is applied compared to when it is absent. Importantly, though, the region showing the most significant increase in activation on average may not be the region with the greatest increase in activation in any one of the ten participants. Instead, it may be the region that was most consistently active across the brains of the ten participants, even if the response was small in each person.
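A small numerical sketch, using made-up activation values, shows how this can happen: a region with a weak but consistent response scores far higher on a standard one-sample t statistic than a region with large responses in only a few participants.

```python
import numpy as np

# Hypothetical activation changes (arbitrary units) for ten participants
# in two brain regions. Region A responds weakly but consistently;
# region B responds strongly in three participants and not at all in the rest.
region_a = np.array([0.50, 0.45, 0.55, 0.48, 0.52, 0.46, 0.54, 0.50, 0.49, 0.51])
region_b = np.array([3.0, 2.5, 0.0, 0.0, 0.0, 0.0, 2.8, 0.0, 0.0, 0.0])

def t_stat(x):
    """One-sample t statistic: mean divided by its standard error."""
    return x.mean() / (x.std(ddof=1) / np.sqrt(len(x)))

# Region B has the larger average change (about 0.83 vs. 0.5)...
print(region_a.mean(), region_b.mean())
# ...but region A is far more "statistically significant," because
# the t statistic rewards consistency across participants.
print(t_stat(region_a), t_stat(region_b))
```

So the group map can highlight a region in which no single participant showed the largest response, simply because its response was the most reliable.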
Although group averages are appropriate for many scientific questions, the problem is that the law, for the most part, is not concerned with “average” people, but with individuals. If these “averaged” brains show a particular pattern of brain activation in fMRI studies and a defendant’s brain does not, what, if anything, does that mean?
It may or may not mean anything—or, more accurately, the chances that it is meaningful will vary. The findings will need to be converted into an assessment of an individual’s likelihood of having a particular pattern of brain activation in response to a stimulus, and that likelihood can be measured in various ways.24
Consider the following simplified example. Assume that 1,000 people have been tested to see how their brain responds to a particular painful stimulus. Each is scanned twice, once when touched by a painfully hot metal rod and once when the rod is at room temperature. Assume that all of them feel pain from the heated rod and that no one feels pain from the room-temperature rod. And, finally, assume that 900 of the 1,000 show a particular pattern of brain activation when touched with the hot rod, but only 50 of the 1,000 show the same pattern when touched with the room-temperature rod.
24. Issues of sensitivity and specificity are discussed in more detail in David H. Kaye and Hal S. Stern, Reference Guide on Statistics and Research Methods, in this manual.
For these 1,000 people, using the fMRI activation pattern as a test for the perception of this pain would have a sensitivity of 90% (90% of the 1,000 who felt the pain would be correctly identified; the other 10% would be false negatives). The test would have a specificity of 95% (95% of those who did not feel pain would be correctly identified as negative; only 5% would be false positives). Now ask, of all those who showed a positive test result, how many actually felt pain? This percentage, the positive predictive value, would be 94.7%—900 out of 950. Depending on the planned use of the test, one might care more about one of these measures than another, and there are often trade-offs among them. Making a test more sensitive (so that it misses fewer people with the sought characteristic) often means making it less specific (so that it picks up more people who do not have the characteristic in question). In any event, the more people tested, the more accurate these estimates of sensitivity, specificity, and positive predictive value become.
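The arithmetic of the hot-rod example can be checked directly:

```python
# Numbers from the hot-rod example: all 1,000 people feel pain from the
# hot rod, and 900 of them show the activation pattern; no one feels pain
# from the room-temperature rod, but 50 still show the pattern.
true_positives = 900    # felt pain, showed the pattern
false_negatives = 100   # felt pain, did not show the pattern
false_positives = 50    # no pain, showed the pattern anyway
true_negatives = 950    # no pain, no pattern

sensitivity = true_positives / (true_positives + false_negatives)
specificity = true_negatives / (true_negatives + false_positives)
positive_predictive_value = true_positives / (true_positives + false_positives)

print(sensitivity)                # 0.9
print(specificity)                # 0.95
print(positive_predictive_value)  # about 0.947 (900 out of 950)
```

Note that the positive predictive value, unlike sensitivity and specificity, also depends on how common the sought characteristic is in the group tested; here half the scans involved pain, which is far from guaranteed in real-world uses.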
There are other ways of measuring the accuracy of a test of an individual, but the important point is that some such conversion is essential. A research paper that reveals that the average participant’s brain (more accurately, the “averaged participants’ brain”) showed a particular reaction to a stimulus does not, in itself, say anything useful about how likely any one person is to have the same reaction to that stimulus. Further analyses are required to provide that information. Researchers, who are often more interested in identifying possible mechanisms of brain action than in creating diagnostic tests, will not necessarily have analyzed their data in ways that allow them to be usefully applied to individuals—or might not have even obtained enough data for that to be possible. At least in the near future, this is likely to be a major issue for applying fMRI studies to individuals, in the courtroom or elsewhere.
MRI machines are variable, complicated, and finicky. The machines come in several different field strengths, with machines used for clinical purposes ranging from 0.2T to 3.0T and research scanners going as high as 9.4T, as noted earlier. Three companies dominate the market for MRI machines—General Electric, Siemens, and Philips—although several other companies also make the machines. Both the power and the manufacturer of an MRI system can make a substantial difference in the resulting data (and images).
These variations can be more important with functional MRI (though they also apply to structural MRI), so that a result seen on a 1.5T Siemens scanner might not appear on a 3.0T General Electric machine. Similarly, results from one 3.0T General Electric machine may differ from those produced by another machine of the identical model.
Even the exact same MRI machine may behave differently from day to day or month to month. The machines frequently need maintenance or adjustments and sometimes can be inoperable for days or even weeks at a time. Comparing results from even the same machine before and after maintenance—or a system upgrade—can be difficult. This can make it hard to compare results across different studies or between the group average of one study and results from an individual participant.
These issues concern not only the quality of the scans done in research, but, even more importantly, the credibility of the individual scan sought to be introduced at trial. If different machines were used, care must be taken to ensure that the results are comparable. The individual scans also can have other problems. Any one scan is subject not only to machine-derived artifacts and other problems noted above, but also to human-generated artifacts, such as those caused by the participant’s movements during the scan.
Finally, another technical problem of a different kind comes from the nature of fMRI research itself. The scanner will record changes in the relative levels of oxyhemoglobin to deoxyhemoglobin for thousands of voxels throughout the brain. During data analysis, these signal changes will be tested to see if they show any change in the response between the experimental condition and the baseline (or control) condition. Importantly, with fMRI, there is no definitive way to quantify precisely how large a change there was in the neural response compared to baseline; hence, the researcher must set a somewhat arbitrary statistical cutoff value (a threshold) for saying that a voxel was activated or deactivated. A researcher who wants to look only at strong effects will require a large change from baseline; a researcher who wants to see a wide range of possible effects will allow smaller changes from baseline to count.
Neither way is “right”—we do not know whether there is some minimum change in the BOLD response that means an “important” amount of brain activation has taken place, and if such a true value exists, it is likely to differ across brain regions, across tasks, and across experimental contexts. What this means is that different choices of statistical cutoff values can produce enormous differences in the apparent results. And, of course, for the results to be comparable, the cutoff values used in the underlying studies and in the scan of the individual of interest must be consistent, and whether they were may often not be known.
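The effect of the threshold choice can be illustrated with a small simulation. All numbers here are illustrative assumptions, not values from any actual study: 50,000 voxels of noise, 500 of which carry a genuine but moderate effect, examined under two arbitrary cutoffs:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: z-statistics for 50,000 voxels, most showing no
# effect, a small subset showing a modest true activation.
z = rng.normal(0.0, 1.0, 50_000)
z[:500] += 2.5  # 500 voxels with a genuine but moderate effect

# The very same data under two arbitrary statistical cutoffs:
lenient, strict = 2.0, 3.5
print("voxels 'activated' at z > 2.0:", int((z > lenient).sum()))
print("voxels 'activated' at z > 3.5:", int((z > strict).sum()))
```

The lenient cutoff catches most of the true effects but also sweeps in many noise voxels; the strict cutoff keeps out almost all noise but misses most genuine activations. Neither count is the "real" number of activated voxels.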
Interpreting fMRI results requires the application of complicated statistical methods. These methods are particularly difficult, and sometimes controversial,
for fMRI studies, partly because of the thousands of voxels being examined. Fundamentally, most fMRI experiments look at many thousands of voxels and try to determine whether any of them are, on average, activated or deactivated as a result of the task or stimulus being studied. A simple test for statistical significance asks whether a particular result might have arisen by chance more than one time in twenty (or 5%): Is it significant at the 0.05 level? If a researcher is looking at the results for thousands of different voxels, it is likely that a number of voxels will show an effect above the threshold just by chance. There are statistical ways to control the rate of these false positives, but they need to be applied carefully. At the same time, rigid control of false positives through statistical correction (or the use of a very conservative threshold) can create another problem—an increase in the false negative rate, which results in failing to detect true brain responses that are present in the data but that fall below the statistical threshold. The community of fMRI researchers recognizes that these issues of statistical significance are difficult to resolve.
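The multiple-comparisons problem can be illustrated with a simulation of pure noise. The voxel count and the Bonferroni correction used here are illustrative assumptions; real fMRI analyses use a variety of correction methods:

```python
import numpy as np
from statistics import NormalDist

rng = np.random.default_rng(1)
n_voxels = 50_000

# Pure noise: no voxel truly responds to the task.
z = rng.normal(0.0, 1.0, n_voxels)

# One-sided cutoffs: uncorrected p < .05, and a Bonferroni-corrected
# cutoff that divides .05 by the number of voxels tested.
uncorrected = NormalDist().inv_cdf(1 - 0.05)
bonferroni = NormalDist().inv_cdf(1 - 0.05 / n_voxels)

# Roughly 5% of 50,000 noise voxels cross the uncorrected cutoff.
print("false positives, uncorrected:", int((z > uncorrected).sum()))
print("false positives, Bonferroni:", int((z > bonferroni).sum()))
```

Thousands of noise voxels pass the uncorrected threshold, while the corrected threshold admits essentially none; the trade-off is that so conservative a cutoff would also miss many true but modest effects, raising the false negative rate described above.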
Over the past decade, other statistical techniques have increasingly been used in neuroimaging research, including techniques that do not look at the statistical significance of changes in the BOLD response in individual voxels, but that instead examine changes in the distributed patterns of activation across many voxels in a region of the brain or across the whole brain. These techniques include principal component analysis, multivariate analysis, machine learning algorithms, and artificial intelligence. These methods, the details of which are not reviewed in this reference guide, are producing some of the most interesting results in the field. They are complex, however, and how to interpret their results can be controversial. Thus, these methods alone may require substantial and potentially confusing expert testimony in addition to all the other expert testimony about the underlying neuroscience evidence.
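A minimal sketch of the pattern-based approach is a nearest-centroid classifier run on simulated voxel patterns. All data and parameters here are hypothetical, and real multivariate analyses are far more sophisticated, but the sketch shows the core idea: the distributed pattern, not any single voxel, carries the signal:

```python
import numpy as np

rng = np.random.default_rng(2)
n_voxels = 200

# Hypothetical "true" activation patterns over 200 voxels for two
# conditions (say, stimulus A vs. stimulus B). No single voxel differs
# much between them, but the distributed pattern does.
pattern_a = rng.normal(0.0, 1.0, n_voxels)
pattern_b = pattern_a + rng.normal(0.0, 0.5, n_voxels)

def simulate(pattern, n_trials):
    """Noisy single-trial measurements of a pattern."""
    return pattern + rng.normal(0.0, 1.0, (n_trials, n_voxels))

train_a, train_b = simulate(pattern_a, 40), simulate(pattern_b, 40)
test_a = simulate(pattern_a, 20)  # held-out trials, all from condition A

# Nearest-centroid decoding: classify each held-out trial by which
# condition's average training pattern it lies closer to.
centroid_a, centroid_b = train_a.mean(axis=0), train_b.mean(axis=0)
dist_a = np.linalg.norm(test_a - centroid_a, axis=1)
dist_b = np.linalg.norm(test_a - centroid_b, axis=1)
accuracy = (dist_a < dist_b).mean()
print(f"decoding accuracy on held-out trials: {accuracy:.0%}")
```

Decoding accuracy well above chance on held-out trials, despite weak voxel-by-voxel differences, is the kind of result these distributed-pattern methods produce; interpreting what such above-chance decoding means about the brain is where the controversy noted above arises.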
When neuroimaging is being used to compare the brain of one individual—a defendant, plaintiff, or witness, for example—to others, the individual undergoing neuroimaging might be able to use countermeasures to make the results unusable or misleading. And at least some of those countermeasures may prove especially hard to detect.
Individuals can disrupt almost any kind of scanning, whether done for structural or functional purposes, by moving in the scanner. Unwilling participants could ruin scans by moving their bodies or heads, or possibly even by moving their tongues. Blatant movements to disrupt the scan would be apparent, both from watching the individual in the scanner and from seeing the results, leading
to a possible negative inference that the person was trying to interfere with the scan. Nonetheless, that scan itself would be useless.
More interesting are possible countermeasures for functional scans. Polygraphy may provide a useful comparison. Countermeasures have long been tried in polygraphy with some evidence of efficacy. Polygraphy typically looks at the differences in physiological measurements of the participant when asked anxiety-provoking questions or benign control questions. Individuals can use drugs or alcohol to try to dampen their body reactions when asked anxiety-provoking questions. They can try to use mental measures to control or affect their physiological reactions, calming themselves during anxiety-provoking questions and increasing their emotional reaction to control questions. And, when asked control questions, they can try to increase the physiological signs the polygraph measures through physical means. For example, individuals might bite their tongues, step on tacks hidden in their shoes, or tighten various muscles to try to increase their blood pressure, galvanic skin response, and so on.
Basic science and polygraph research give reason for concern that polygraph test accuracy may be degraded by countermeasures, particularly when used by major security threats who have a strong incentive and sufficient resources to use them effectively. If these measures are effective, they could seriously undermine any value of polygraph security screening.25
Some of the countermeasures used by a polygraph participant can be detected by, for example, drug or alcohol tests or by carefully watching the individual’s body. But purely mental actions cannot be detected. These kinds of countermeasures may be especially useful to individuals seeking to beat neuroscience-based lie detection. For example, some argue that deception produces different activation patterns than telling the truth because it is mentally harder to tell a lie—more of the brain needs to work to decide whether to lie and what lie to tell. If so, two mental countermeasures immediately suggest themselves: Make the lie easier to tell (through, perhaps, memorization or practice) or make the brain work harder when telling the truth (through, perhaps, counting backward from one hundred by sevens).
Countermeasures are not, of course, potentially useful only in the context of lie detection. A neuroimaging test to determine whether a person was having the subjective feeling of pain might be fooled by the participant remembering, in great detail, past experiences of pain. The possible uses of countermeasures in neuroimaging have yet to be extensively explored, but at this point they cast additional doubt on the reliability of neuroimaging in investigations or in litigation.
25. See Nat’l Rsch. Council, The Polygraph and Lie Detection 5 (2003), https://doi.org/10.17226/10420. This report is an invaluable resource for discussions of not just the scientific evidence about the reliability of the polygraph, but also for general background about the application of science to lie detection.
The admissibility of neuroscience evidence will depend on many issues, some of them arising from rules of evidence, some from the U.S. Constitution, and some from other legal provisions. Another often-overlooked reality is that judges may have to decide whether to order the creation of this kind of evidence. Certainly, judges may be called upon to rule on requests by criminal defendants (or convicts seeking postconviction relief) for access to neuroimaging. They may also have to decide motions in civil or criminal cases to compel neuroimaging. One could even imagine requests for warrants to “search the brains” of possible witnesses for evidence, or efforts to subpoena neural data collected by third parties who manufacture or supply consumer-based wearable EEG or MEG devices. This reference guide will not seek to resolve any of these questions, but will point out some of the problems that are likely to be raised regarding admitting neuroscience evidence in court.
This discussion looks at the main evidentiary issues that are likely to be raised in cases involving neuroscience evidence. Note, though, that judges will not always be governed by rules of evidence. In criminal sentencing or in probation hearings, among other things, the Federal Rules of Evidence do not apply26 (although federal sentencing guidelines do require that evidence used in sentencing meet a lower bar that “has sufficient indicia of reliability to support its probable accuracy”27). The Rules do apply, with limitations, in other contexts.28 Nonetheless, even when the Rules do not directly apply, many of the principles behind them, discussed below, will be important.
The starting point for all evidentiary questions must be relevance. If evidence is not relevant to the questions at hand, no other evidentiary concerns matter.
26. Fed. R. Evid. 1101(d).
27. U.S. Sentencing Guidelines Manual § 6A1.3(a).
28. Fed. R. Evid. 1101(e).
This basic reminder may be particularly useful with respect to neuroscience evidence. Evidence admitted, for example, to demonstrate that a criminal defendant had suffered brain damage sometime before the alleged crime is not, in itself, relevant. The proffered fact of the defendant’s brain damage must be relevant. It may be relevant, for example, to whether the defendant could have formed the necessary criminal intent, to whether the defendant should be found not guilty by reason of insanity, to whether the defendant is currently competent to stand trial, or to mitigation in sentencing. It must, however, be relevant to something in order to be admissible at all, and specifying its relevance will help focus the evidentiary inquiry. The question, for example, would not be whether PET scans meet the evidentiary requirements to be admitted to demonstrate brain damage, but whether they have “any tendency to make a fact more or less probable than it would be without the evidence.”29 The brain damage may be relevant to a fact, but that fact must be “of consequence in determining the action.”30
Neuroscience evidence will almost always be “scientific . . . knowledge” governed by Rule 702 of the Federal Rules of Evidence, as interpreted in Daubert v. Merrell Dow Pharmaceuticals31 and its progeny, both before and after the amendments to Rule 702 in 2023.32 Rule 702 allows the testimony of a qualified expert if:
A witness who is qualified as an expert by knowledge, skill, experience, training, or education may testify in the form of an opinion or otherwise if the proponent demonstrates to the court that it is more likely than not that:
- the expert’s scientific, technical, or other specialized knowledge will help the trier of fact to understand the evidence or to determine a fact in issue;
- the testimony is based on sufficient facts or data;
- the testimony is the product of reliable principles and methods; and
- the expert’s opinion reflects a reliable application of the principles and methods to the facts of the case.
29. Fed. R. Evid. 401(a).
30. Fed. R. Evid. 401(b).
31. Daubert v. Merrell Dow Pharms., Inc., 509 U.S. 579 (1993).
32. Rule 702 is a subject covered by Liesa L. Richter and Daniel J. Capra in The Admissibility of Scientific Evidence, in this manual. That reference guide should be consulted for details about Rule 702 and its application. This reference guide merely briefly lays out the Rule and its application to neuroscience evidence.
In Daubert, the U.S. Supreme Court listed several nonexclusive guidelines for trial court judges considering testimony under Rule 702. The committee that proposed the 2000 amendments to Rule 702 summarized these factors as follows:
The specific factors explicated by the Daubert Court are (1) whether the expert’s technique or theory can be or has been tested—that is, whether the expert’s theory can be challenged in some objective sense, or whether it is instead simply a subjective, conclusory approach that cannot reasonably be assessed for reliability; (2) whether the technique or theory has been subject to peer review and publication; (3) the known or potential rate of error of the technique or theory when applied; (4) the existence and maintenance of standards and controls; and (5) whether the technique or theory has been generally accepted in the scientific community.33
The 2023 amendments were similarly summarized by the advisory committee as follows:
First, the rule has been amended to clarify and emphasize that expert testimony may not be admitted unless the proponent demonstrates to the court that it is more likely than not that the proffered testimony meets the admissibility requirements set forth in the rule. . . . Rule 702(d) has also been amended to emphasize that each expert opinion must stay within the bounds of what can be concluded from a reliable application of the expert’s basis and methodology.34
The tests laid out in Daubert and in the evidentiary rules governing expert testimony have been the subjects of enormous discussion, both by commentators and by courts. And, to the extent that some neuroscience evidence has been admitted in federal courts (and the courts of states that follow Rule 702 or Daubert), it has passed those tests. We will not analyze those tests in detail here, but will merely point out a few aspects that seem especially relevant to neuroscience evidence.
Neuroscience evidence should often be subject to tests, as long as the point of the neuroscience evidence is kept in mind. An fMRI scan might provide evidence that someone was having auditory hallucinations, but it could not prove that someone was not guilty by reason of insanity. The latter is a legal conclusion, not a scientific finding. The evidence might be relevant to the question of insanity, but one cannot plausibly conduct a scientific test of whether a particular pattern of brain activation is always associated with legal insanity. One might offer neuroimaging evidence about whether a person is likely to have unusual difficulty controlling his or her impulses, but that is not, in itself, proof that the person acted recklessly. The idea of testing helps separate the conclusions that
33. Fed. R. Evid. 702 advisory committee’s notes.
34. Id.
neuroscience might be able to reach from the legal conclusions that will be beyond it.
Daubert’s stress on the presence of peer review and publication corresponds nicely to scientists’ perceptions. In many scientific fields, findings that are not published in a peer-reviewed journal are deeply discounted. In those fields, scientists usually begin to have confidence in findings only after peers, both those involved in the editorial process and, more importantly, those who read the publication, have had a chance to dissect them and to search intensively for errors either in theory or in practice. It is crucial, however, to recognize that publication and peer review are not in themselves enough. The publications need to be compared carefully to the evidence that is proffered.
First, the published, peer-reviewed articles must establish the specific scientific fact being offered. An (accurate) assertion that fMRI has been the basis of more than 12,000 peer-reviewed publications will help establish that fMRI can be used in ways that the scientific community finds reliable. By themselves, however, those publications do not establish any particular use of fMRI. If fMRI is being offered as proof of deception, the twenty or thirty peer-reviewed articles concerning its ability to detect deception are most important, not the 11,980 articles involving fMRI for other purposes.
Second, the existence of several peer-reviewed publications on the same general method does not support the accuracy of any one approach if those publications are mutually inconsistent. There are now about twenty to thirty peer-reviewed publications that, using fMRI, find statistically significant differences in patterns of brain activation depending on whether the participants were telling the truth or (typically) telling a lie when instructed to do so. Many of those publications find patterns that are different from, and often inconsistent with, the patterns described in the other publications. Multiple inconsistent publications do not add weight to a scientific method or theory; indeed, they may subtract from it.
Third, the peer-reviewed publication needs to describe in detail the method about which the expert plans to testify. A commercial firm might, for example, claim that its method is “based on” some peer-reviewed publications, but unless the details of the firm’s methods were included in the publication, those details were neither published nor peer-reviewed. A proprietary algorithm used to generate a finding published in the peer-reviewed literature is not adequately supported by that literature.
The error rate is also crucial to most neuroscience evidence, in two different senses. One is the degree to which the machines used to produce the evidence make errors. While these kinds of errors may balance out in a large sample used in published literature, any scan of any one individual may well be affected by errors in the scanning process. Second, and more importantly, neuroscience evidence will almost never give an absolute answer, but will give a probabilistic one. For example, a certain brain structure or activation pattern will be found in some percentage of people with a particular mental condition or state. These
group averages will have error rates when they are applied to individuals. Those rates need to be known and presented.
The issue of standards and controls also is important in neuroscience. This area is new and has not undergone the kind of standardization seen, for example, in forensic DNA analysis. When trying to apply neuroscience findings to an individual, evidence from the individual needs to have been acquired in the same way, with the same standards and conditions, as the evidence from which the scientific conclusions were drawn—or, at least, in ways that can be made readily comparable. For example, there is no one standard in fMRI research for what statistical threshold should be used for a change in the BOLD signal to “count” as a meaningful activation or deactivation. An individual’s scan would need to be analyzed under the same definition for activation as was used in the research supporting the method, and the effects of the chosen threshold on finding a false positive or false negative must be considered.
The final consideration—general acceptance in the scientific community—also needs to be applied carefully. There is clearly general acceptance in the scientific community that fMRI can provide scientifically and sometimes clinically useful information about the workings of human brains, but that does not mean there is general acceptance of any particular fMRI application. Similarly, there may be general acceptance that fMRI can provide some general information about the physical correlates of a particular mental state, but without general acceptance that it can do so reliably in an individual case.
Rule 702 is not the only test that neuroscience evidence will need to pass to be admitted in court. Even evidence admissible under that rule must still escape the exclusion provided by Rule 403: Although relevant, evidence may be excluded “if its probative value is substantially outweighed by a danger of . . . unfair prejudice, confusing the issues, misleading the jury, undue delay, wasting time, or needlessly presenting cumulative evidence.”35
As discussed in detail in one article,36 Rule 403 may be particularly important with some attempted applications of neuroscience evidence because of the balance it requires between the value of evidence to the decision-maker and its costs.
The probative value of such evidence may often be questioned. Neuroscience evidence will rarely, if ever, be definitive. It is likely to have a range of
35. Fed. R. Evid. 403.
36. Teneille Brown & Emily Murphy, Through a Scanner Darkly: Functional Neuroimaging as Evidence of a Criminal Defendant’s Past Mental States, 62 Stan. L. Rev. 1119 (2010), https://perma.cc/C2BL-VWHV.
uncertainties, from the effectiveness of the method in general, to questions of its proper application in this case, to whether any given individual’s reactions are the same as those previously tested.
The other side of Rule 403, however, is even more troublesome. The time necessary to introduce such evidence, and to educate the jury (and judge) about it, will usually be extensive. The possibilities for confusion are likely to be great. And there is at least some evidence that jurors (or, to be precise, “mock jurors”) are particularly likely to overestimate the power of neuroscience evidence.37 A high-tech “picture” of a living brain, complete with brain regions shown in bright orange and deep purple (colors not seen in an actual brain), may have an unjustified appeal to a jury. In each case, judges will need to weigh possibilities of confusion or prejudice, along with the near certainty of lengthy testimony, against the claimed probative value of the evidence.
Neuroscience evidence will, of course, be subject in individual cases to all evidentiary rules (in the Federal Rules of Evidence or otherwise) and could be affected by many of them. Four examples follow in which the application of such rules to this kind of evidence may raise interesting issues; there are undoubtedly many others.
First, in June 2009 the U.S. Supreme Court decided Melendez-Diaz v. Massachusetts, in which the five-justice majority held that the Confrontation Clause required the prosecution to present the testimony at trial of state laboratory analysts who had identified a substance as cocaine.38 This would seem to apply to any use by the prosecution in criminal cases of neuropsychological testing or neuroimaging of a criminal defendant or witness, although it is unclear to whom it would apply. Would testimony be required from the person who observed the procedure, the person who analyzed the results of the procedure, an AI system that analyzed the results, or the programmers of that AI? These are not questions unique to neuroscience evidence, but they are provoked by Melendez-Diaz.
37. See Deena Skolnick Weisberg et al., The Seductive Allure of Neuroscience Explanations, 20 J. Cognitive Neuroscience 470 (2008), https://doi.org/10.1162/jocn.2008.20040; David P. McCabe & Alan D. Castel, Seeing Is Believing: The Effect of Brain Images on Judgments of Scientific Reasoning, 107 Cognition 343 (2008), https://doi.org/10.1016/j.cognition.2007.07.017. These articles are discussed in Brown & Murphy, supra note 36, at 1199–1202. But see Nicholas J. Schweitzer et al., Neuroimages As Evidence in a Mens Rea Defense: No Impact, 17 Psych. Pub. Pol’y & L. 357 (2011), https://doi.org/10.1037/a0023581 (explaining that the experimental results seem to indicate that showing neuroimages to mock jurors does not affect their decisions).
38. 557 U.S. 305 (2009).
Second, the Federal Rules of Evidence put special limits on the admissibility of evidence of character and, in some cases, of predisposition.39 In some cases, neuroscience evidence offered for the purpose of establishing a regular behavior of the person might be viewed as evidence of character40 or predisposition (or lack of predisposition). Whether such evidence could be admitted might hinge on whether it was offered in a civil case or a criminal case, and, if in a criminal case, by the prosecution or the defendant.
Third, Federal Rule of Evidence 406 allows the admission of evidence about a habit or routine practice to prove that the relevant person’s actions conformed to that habit or routine practice. It is conceivable that neuroscience evidence might be used to describe “habits of mind” and thus be offered under this rule.
The fourth example applies to neuroscience-based lie detection. Although New Mexico is the only U.S. jurisdiction that generally allows the introduction of polygraph evidence,41 several jurisdictions allow polygraph evidence in two specific situations. First, polygraph evidence is sometimes allowed when both parties have stipulated to its admission in advance of the performance of the test. (This does lead one to wonder whether a court would allow evidence from a psychic or from a fortune-telling toy, like the Magic 8 Ball, if both parties stipulated to it.) Second, polygraph evidence is sometimes allowed to impeach or to
39. Fed. R. Evid. 404, 405, 412–15, 608.
40. Evidence about lie detection has sometimes been viewed as “character evidence,” introduced to bolster a witness’s credibility. The Canadian Supreme Court has held that polygraph evidence is inadmissible in part because it violates the rule limiting character evidence:
What is the consequence of this rule in relation to polygraph evidence? Where such evidence is sought to be introduced, it is the operator who would be called as the witness, and it is clear, of course, that the purpose of his evidence would be to bolster the credibility of the accused and, in effect, to show him to be of good character by inviting the inference that he did not lie during the test. In other words, it is evidence not of general reputation but of a specific incident, and its admission would be precluded under the rule. It would follow, then, that the introduction of evidence of the polygraph test would violate the character evidence rule. R. v. Béland, [1987] 2 S.C.R. 398, 414. The Canadian court also held that polygraph evidence violated another rule concerning character evidence, the rule against “oath-helping”:
From the foregoing comments, it will be seen that the rule against oath-helping, that is, adducing evidence solely for the purpose of bolstering a witness’s credibility, is well grounded in authority. It is apparent that, since the evidence of the polygraph examination has no other purpose, its admission would offend the well-established rule. Id. at 408 (the court also ruled against polygraph evidence as violating the rule against prior consistent statements and, because the jury needs no help in assessing credibility, the rule on the use of expert witnesses).
41. Lee v. Martinez, 96 P.3d 291 (N.M. 2004).
corroborate a witness’s testimony.42 If a neuroscience-based lie-detection technique were found to be as reliable as the polygraph, presumably those jurisdictions would have to consider whether to extend these exceptions to such neuroscience evidence.
In many contexts, courts will be asked to admit neuroscience evidence or to order, allow, or punish its creation. Such actions may implicate a number of constitutional rights, international law, and international human rights, as well as other substantive legal and regulatory provisions. While this section will not seek to discuss all possible such claims or to resolve any of them, it raises some of the most interesting issues.
Much of the discussion that follows presumes the compulsory creation of neuroscience data, or compulsory submission to neuroimaging or other testing that would generate neuroscience data. As brain sensors become more widely integrated into everyday headphones, earbuds, and even wrist-worn devices to detect motor neuron activity, neuroscience data may be increasingly created passively, without compulsion, which would reshape the legal analysis that would follow. Passively created data may be subject to discovery through subpoenas to third parties who operate the devices or consumer applications that collect the data generated by these devices.
Could a person be required to submit to neuroimaging to gather neuroscience data from their brain, or would that violate the privilege against self-incrimination? Legal scholars have discussed this question for more than a decade, because neuroscience evidence could be seen as “real” physical evidence rather than “testimonial” evidence: it measures brain activity and blood flow rather than an individual’s spoken words. The real-world use in the United States of such evidence has been quite limited, so the question remains a theoretical one. For what purpose might a person be asked to
42. See, e.g., United States v. Piccinonna, 885 F.2d 1529 (11th Cir. 1989) (en banc). See also United States v. Allard, 464 F.3d 529 (5th Cir. 2006); Thornburg v. Mullin, 422 F.3d 1113 (10th Cir. 2005).
submit to such examination? To gather identifying evidence, automatically processed memories, or silently uttered responses in their brain.43
If neuroscience evidence were held to be “testimonial” and its creation compelled, it would be subject to the privilege; if it were nontestimonial “real” physical evidence, or were created without compulsion, under current law it would not be. Examples of nontestimonial evidence for purposes of the privilege against self-incrimination include a blood alcohol test or medical X-rays. Examples of evidence created without compulsion include a private diary or business documents created without government instruction to do so.
It is not the type of neuroimaging that determines whether the privilege against self-incrimination is implicated, but the nature of the evidence collected, the inferences that can be drawn from it, and whether a person is compelled to create it. How the evidence from the brain is generated (passively or in response to questions, for example) and the nature of the neuroscience evidence sought (evidence of brain damage, for example, versus silently uttered responses to questions or recalled memories from which unspoken thoughts could be inferred) will be more relevant to deciding whether its use runs afoul of the privilege against self-incrimination.
It is possible, however, that conscious answers may not be necessary; the brain might be queried for memories or other information nonetheless. Two EEG-based systems claim to be able to determine whether a person either recognizes or has “experiential knowledge” of an event (a memory derived from experience as opposed to being told about it).44 Very substantial scientific questions exist about each system, but, assuming they were to be admitted as reliable, they would raise this question more starkly because they do not require the participant in the procedure to consciously create answers or respond to questions. The participant is shown photographs of relevant locations or read a description of the events while hooked up to an EEG. The brain waves, it is
43. See discussion of “spectrum of evidence” in Nita A. Farahany, Incriminating Thoughts, 64 Stan. L. Rev. 351 (2012), https://perma.cc/2VX7-N6P7.
44. The first system is the so-called Brain Fingerprinting, developed by Dr. Larry Farwell. This method was introduced successfully in evidence at the trial court level in a postconviction relief case in Iowa; the use of the method in that case is discussed briefly in the Iowa Supreme Court’s decision on appeal. Harrington v. State, 659 N.W.2d 509, 516 n.6 (Iowa 2003). The court expressed no view on whether that evidence was properly admitted. See id. at 516. The method is discussed on the website of Farwell’s company, Brain Fingerprinting Laboratories. It is discussed from a scientific perspective in J. Peter Rosenfeld, ‘Brain Fingerprinting’: A Critical Analysis, 4 Sci. Rev. Mental Health Prac. 20 (2005), https://perma.cc/R4KF-92AS. See also the brief discussion in Henry T. Greely & Judy Illes, Neuroscience-Based Lie Detection: The Urgent Need for Regulation, 33 Am. J.L. & Med. 377, 387–88 (2007), https://doi.org/10.1177/009885880703300211.
The second system is called Brain Electrical Oscillation Signature (BEOS) and was developed in India, where it has been introduced in trials and has been important in securing criminal convictions. See Anand Giridharadas, India’s Novel Use of Brain Scans in Courts Is Debated, N.Y. Times, Sep. 14, 2008, at A10, https://www.nytimes.com/2008/09/15/world/asia/15brainscan.html.
asserted, demonstrate whether the individual recognizes the photographs or has “experiential knowledge” of the events—no volitional communication is necessary. It might be harder to classify these automatically processed responses as “testimonial” than silent utterances made in response to questions.45
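The logic of such recognition tests can be illustrated with a toy sketch. Everything below is a hypothetical simplification, not either vendor’s actual method: it assumes a P300-style protocol in which averaged EEG responses (“event-related potentials”) to crime-relevant “probe” stimuli are compared with responses to irrelevant stimuli, with a markedly larger positive deflection roughly 300–600 ms after the stimulus taken as a sign of recognition. The sampling rate, analysis window, amplitudes, and synthetic data are all invented for illustration.

```python
import numpy as np

# Hypothetical sketch of a P300-style "recognition" analysis.
# All parameters (sampling rate, window, amplitudes) are invented
# for illustration; this is not either commercial system's method.

FS = 250                                      # assumed sampling rate (Hz)
WINDOW = slice(int(0.3 * FS), int(0.6 * FS))  # 300-600 ms post-stimulus

def mean_erp(epochs):
    """Average single-trial epochs (trials x samples) into one ERP."""
    return np.asarray(epochs).mean(axis=0)

def recognition_score(probe_epochs, irrelevant_epochs):
    """Mean amplitude difference (probe - irrelevant) in the P300 window.

    In these protocols, a markedly positive score is taken as a sign that
    the probe stimulus is "recognized"; any threshold is a policy choice.
    """
    probe = mean_erp(probe_epochs)[WINDOW].mean()
    irrelevant = mean_erp(irrelevant_epochs)[WINDOW].mean()
    return probe - irrelevant

# Synthetic demonstration: probe trials carry an added positive deflection.
rng = np.random.default_rng(0)
n_samples = FS  # one-second epochs
p300 = np.zeros(n_samples)
p300[WINDOW] = 5.0  # injected "recognition" response (arbitrary units)

probe_trials = rng.normal(0.0, 1.0, (40, n_samples)) + p300
irrelevant_trials = rng.normal(0.0, 1.0, (40, n_samples))

score = recognition_score(probe_trials, irrelevant_trials)
print(round(score, 2))  # clearly positive for the synthetic probe trials
```

Actual systems add artifact rejection, bootstrapped statistics, and calibrated decision rules; the scientific disputes noted above concern precisely whether such classifications are reliable enough for courtroom use.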
Even if the privilege against self-incrimination applies to neuroscience methods of obtaining evidence, it applies only where someone invokes it. The courts and other government bodies force people to answer questions all the time, often under penalty of criminal or civil sanctions or of the court’s contempt power. And the government can immunize a person from criminal prosecution and compel responses without running afoul of the privilege against self-incrimination. For example, a plaintiff in a civil case alleging damage to their health can be compelled to undergo medical testing at a defendant’s appropriate request. The plaintiff can refuse, but only at the risk of seeing their case dismissed. Presumably, a litigant could similarly demand that a party, or a witness, undergo a neuropsychological or neuroimaging examination, looking for either structural or functional aspects of the person’s brain or mental status relevant to the case or, for example, relevant to the accuracy of their memories.46 If the privilege against self-incrimination is not available, or is available but not attractive, could the person asked have any other protection?
The answer is not clear. One might try to argue, along the lines of Rochin v. California,47 that such a procedure violates the Due Process Clause of the Fifth and Fourteenth Amendments because it intrudes on the person in a manner that “shocks the conscience.” One might try to use the concept of “freedom of thought” referenced in some U.S. Supreme Court First Amendment cases to argue that the First Amendment’s freedoms of religion, speech, and the press encompass a broader protection of the contents of the mind through a right to cognitive liberty.48 Until recently, governments could not directly access our thoughts49—which is why freedom of thought as a fundamental right has largely been glossed over in history. Justices Murphy, Black, and Douglas, along with Chief Justice Stone, reflected this prevailing view in their 1942 dissenting and
45. Farahany, supra note 43; Farahany, supra note 2.
46. See, e.g., Stalcup v. State, 311 P.3d 104 (Wyo. 2013).
47. Rochin v. California, 342 U.S. 165 (1952).
48. See Paul Root Wolpe, Is My Mind Mine? Neuroethics and Brain Imaging, in The Penn Center Guide to Bioethics 86 (Arthur L. Caplan, Autumn Fiester & Vardit Ravitsky eds., 2009).
49. Simon McCarthy-Jones, The Autonomous Mind: The Right to Freedom of Thought in the Twenty-First Century, 2 Frontiers A.I. 1 (2019), https://doi.org/10.3389/frai.2019.00019.
concurring opinions in the U.S. Supreme Court case of Jones v. City of Opelika, when they argued that freedom of thought is absolute, as even “the most tyrannical government is powerless to control the inward workings of the mind.”50 Now that there are ways to infer thought using neurotechnology, we find ourselves with very little history or theoretical work to help define its scope.
Scholars have advanced a theory of “cognitive liberty” as a human right that would include the right to mental privacy as part of the human right to privacy under the Universal Declaration of Human Rights (UDHR), the right to keep one’s thoughts private and not be punished for one’s thoughts under the right to freedom of thought in the UDHR, and a right to self-determination over one’s brain and mental experiences.51
That right, as one of this reference guide’s authors has argued, has been implicitly recognized in tort cases in the United States, which do not extend the duty to mitigate one’s injuries to measures that would tamper with memories or reduce one’s psychological injuries, even though a civil plaintiff’s failure to mitigate physical injuries will reduce their damages.52 And it is implicitly recognized in First Amendment cases, where freedom of thought is recognized as a precursor to freedom of speech.
But all of these rights and their applications and implications for neuroscience evidence remain speculative until courts grapple with new forms of neuroscience evidence and challenges to their creation or introduction in legal cases.
At least one form of possible neuroscience evidence may already be covered by statutory provisions limiting its creation and use—lie detection. In 1988, Congress passed the federal Employee Polygraph Protection Act (EPPA).53 Under this Act, almost all employers are forbidden to “directly or indirectly . . . require, request, suggest, or cause any employee or prospective employee to take or submit to any lie detector test” or to “use, accept, refer to, or inquire concerning the results of any lie detector test of any employee or prospective employee.”54 The Act defines a “lie detector” broadly, as “a polygraph, deceptograph, voice stress
50. 316 U.S. 584, 618 (1942) (Murphy, J., dissenting).
51. See, e.g., Farahany, supra note 2.
52. See, e.g., Nita A. Farahany, The Costs of Changing Our Minds, 69 Emory L.J. 75 (2019), https://perma.cc/L488-XMBW.
53. Employee Polygraph Protection Act of 1988, Pub. L. No. 100–347, 102 Stat. 646 (codified at 29 U.S.C. §§ 2001–2009). See generally the discussion of federal and state laws in Greely & Illes, supra note 44, at 405–10, 421–31.
54. 29 U.S.C. § 2002 (1)–(2) (The section also prohibits employers from taking action against employees because of their refusal to take a test, because of the results of such a test, or for asserting their rights under the Act.); see id. § 2001 (3)–(4) (definitions).
analyzer, psychological stress evaluator, or any other similar device (whether mechanical or electrical) that is used, or the results of which are used, for the purpose of rendering a diagnostic opinion regarding the honesty or dishonesty of an individual.”55 The Department of Labor can punish violators with civil fines, and those injured have a private right of action for damages.56 The Act does provide narrow exceptions for polygraph tests in some circumstances.57
In addition to federal statutes, many states passed their own versions of the EPPA, either before or after the federal act. The laws passed after the EPPA generally apply similar prohibitions to some employers not covered by the federal act (such as state and local governments), but with their own idiosyncratic set of exceptions. Many states have also passed laws regulating lie-detection services. Most of these seem clearly aimed at polygraphy, but, in some states, the language used is quite broad and may well encompass neuroscience-based lie detection.58
States also may provide protection against neuroscience evidence that goes beyond lie detection and could prevent involuntary neuroscience procedures. Some states have constitutional or statutory rights of privacy that could be read to include a broad freedom for mental privacy. And in some states, such as California, such privacy rights apply not just to state action but to private actors as well.59 Most employment cases would be covered by the EPPA and its state equivalents, but such state privacy protections might be used to help decide whether courts could compel neuroimaging scans or whether they could be required in nonemployment relationships, such as school/student or parent/child.
One might also argue that some kinds of neuroscience evidence could be excluded from evidence as a result of the federal constitutional rights to trial by
55. Id. § 2001(3).
56. Id. § 2005.
57. Id. § 2006.
58. See generally Greely & Illes, supra note 44, at 409–10, 421–31 (for both state laws on employee protection and state laws more broadly regulating polygraphy).
59. “All people are by nature free and independent and have inalienable rights. Among these are enjoying and defending life and liberty, acquiring, possessing, and protecting property, and pursuing and obtaining safety, happiness, and privacy.” Calif. Const., art. I, § 1 (emphasis added). The words “and privacy” were added by constitutional amendment in 1972. The California Supreme Court had applied these privacy protections in suits against private actors: “In summary, the Privacy Initiative in article I, section 1 of the California Constitution creates a right of action against private as well as government entities.” Hill v. Nat’l Collegiate Athletic Ass’n, 865 P.2d 633, 644 (Cal. 1994).
jury in criminal and most civil cases. In United States v. Scheffer,60 the U.S. Supreme Court upheld an express ban in the Military Rules of Evidence on the admission of any polygraph evidence against a criminal defendant’s claimed Sixth Amendment right to introduce the evidence in their defense. Justice Thomas wrote the opinion of the Court, holding that the ban was justified by the questionable reliability of the polygraph. Justice Thomas continued, however, in a portion of the opinion joined only by Chief Justice Rehnquist and Justices Scalia and Souter, to hold that the Rule could also be justified by an interest in the role of the jury:
It is equally clear that Rule 707 serves a second legitimate governmental interest: Preserving the jury’s core function of making credibility determinations in criminal trials. A fundamental premise of our criminal trial system is that “the jury is the lie detector.” Determining the weight and credibility of witness testimony, therefore, has long been held to be the “part of every case [that] belongs to the jury, who are presumed to be fitted for it by their natural intelligence and their practical knowledge of men and the ways of men.”61
The other four justices in the majority, and Justice Stevens in dissent, disagreed that the role of the jury justified this rule, but the question remains open. Justice Thomas’s opinion did not argue that exclusion was required as part of the rights to jury trials in criminal and civil cases under the Sixth and Seventh Amendments, respectively, but one might try to extend his statements of the importance of the jury as “the lie detector” to such an argument.62
In one of many ways in which “death is different,” in Lockett v. Ohio,63 the U.S. Supreme Court held that the Eighth Amendment guarantees a convicted defendant in a capital case a sentencing hearing in which the sentencing authority is able to consider any mitigating factors. In Rupe v. Wood,64 the Ninth Circuit, in
60. 523 U.S. 303 (1998).
61. Id. at 312–13 (internal citation omitted).
62. The Federal Rules of Criminal Procedure effectively give the prosecution a right to a jury trial, by allowing a criminal defendant to waive such a trial only with the permission of both the prosecution and the court. Fed. R. Crim. P. 23(a). Many states allow a criminal defendant to waive a jury trial unilaterally, thus depriving the prosecution of an effective “right” to a jury.
63. 438 U.S. 586 (1978).
64. 93 F.3d 1434, 1439–41 (9th Cir. 1996). But see United States v. Fulks, 454 F.3d 410, 434 (4th Cir. 2006). See generally Christopher Domin, Comment, Mitigating Evidence? The Admissibility of
an appeal from the defendant’s successful habeas corpus proceeding, applied that holding to find that a capital defendant had a constitutional right to have polygraph evidence admitted as mitigating evidence in his sentencing hearing. The court agreed that totally unreliable evidence, such as astrology, would not be admissible, but concluded that the district court had properly ruled that polygraph evidence was not that unreliable. (The Washington Supreme Court had previously decided that polygraph evidence should be admitted in the penalty phase of capital cases under some circumstances.65) Thus, capital defendants may argue that they have the right to present neuroscience evidence as mitigation even if it would not be admissible during the guilt phase.
The Scheffer case arose in the context of another right guaranteed by the Sixth Amendment, the right of a criminal defendant to present a defense. It seems likely that neuroimaging will first be offered by parties who have been its voluntary participants and who will argue that it strengthens their cases, just as other kinds of neuroscience evidence have. In fact, the main use of neuroimaging in the courts so far, at least in criminal cases, has been by defendants seeking to demonstrate a defense or mitigation of sentencing. If jurisdictions were to exclude such evidence categorically, they might face a similar Sixth Amendment challenge.
The U.S. Supreme Court has held that some prohibitions on evidence in criminal cases violate the right to present a defense. Thus, in Rock v. Arkansas,66 the Court struck down a per se rule in Arkansas against the admission of hypnotically refreshed testimony, holding that it was “arbitrary or disproportionate to the purposes [it is] designed to serve.” The Scheffer case probably provides the model for how arguments about exclusions of neuroscience evidence would play out. Eight of the Justices in Scheffer agreed that the reliability of polygraphy was sufficiently questionable as to justify the per se ban on its use. Justice Stevens, however, dissented, finding polygraphy sufficiently reliable to invalidate its per se exclusion.
The Fourth Amendment raises some particularly interesting questions. It provides, of course, that, “The right of the people to be secure in their persons,
Polygraph Results in the Penalty Phase of a Capital Trial, 43 U.C. Davis L. Rev. 1461 (2010), https://perma.cc/JA6C-DLGH (arguing that the Supreme Court should resolve the resulting circuit split by adopting the Ninth Circuit’s position).
65. State v. Bartholomew, 683 P.2d 1079, 1083–84 (Wash. 1984).
66. 483 U.S. 44, 56 (1987).
houses, papers, and effects, against unreasonable searches and seizures, shall not be violated, and no Warrants shall issue, but upon probable cause, supported by Oath or affirmation, and particularly describing the place to be searched, and the persons or things to be seized.” On the one hand, an involuntary neuroscience examination would seem to be a search or seizure, and “unreasonable” neuroscience examinations would thus be prohibited. To that extent, the Fourth Amendment would appear to be a protection against compulsory neuroscience testing.
On the other hand, neuroimaging may be considered noninvasive in that one does not have to physically invade a person’s body or effects to obtain information from the brain. A suspect can be compelled to undergo urine or blood tests, for example, and might similarly be compelled to undergo neuroimaging without it being considered an “unreasonable” search under the Fourth Amendment. Not all evidence obtained from the brain will be equally sensitive—one could obtain identifying information, automatic functioning information such as the processing of alcohol, memories from the brain, or silent utterances. Courts will be confronted with having to parse not just whether a search has occurred, but whether that search is unreasonable when neuroscience evidence is compelled. And the answers will likely turn on the nature of the neuroimaging used, whether the information is compelled or passively collected from a wearable device, and the nature of the evidence being sought and the privacy interest implicated as a result.67
Most of what follows deals with criminal cases. That is the area where the interest in neuroscience evidence has been strongest, and the scholarly literature, in both neuroscience and in law, has largely reflected that focus. But it is important to recognize that in U.S. civil cases, neuroscience evidence has been used to challenge or support the veracity of eyewitness memory, the reality and intensity of pain, and the mental capacity and competence of parties, witnesses, and testators.
For example, neuroscience has increasingly played an evidentiary role in civil cases involving allegations of brain injury, such as traumatic brain injury (TBI), stroke, or dementia.68 In a product liability case involving a tractor
67. See, e.g., Nita A. Farahany, Searching Secrets, 160 U. Pa. L. Rev. 1239 (2012), https://perma.cc/DU22-SDBU.
68. See, e.g., Dickson v. United Airlines, No. 4:20-CV-5014-RMP, 2021 WL 4199954 (E.D. Wash. Aug. 3, 2021) (dispute between experts on neuroimaging findings and their consistency with TBI).
accident that resulted in the plaintiff’s facial and traumatic brain injury, neuroscience evidence played a central role. The plaintiff’s expert testimony, partially based on magnetic resonance imaging (MRI) and diffusion tensor imaging (DTI), was challenged by the defendant. However, the court upheld the admissibility of this evidence under Rule 702 of the Federal Rules of Evidence, recognizing DTI as a reliable methodology for identifying TBI.69 Other decisions that have excluded DTI as scientifically unreliable have been challenged by scholars in the field.70
Neuroscience has also been used in civil cases involving claims of chronic pain—such as fibromyalgia, complex regional pain syndrome (CRPS), or phantom limb pain—to challenge or corroborate the subjective reports of pain by the plaintiffs, including in claims and challenges for long-term disability payments.71 However, a consensus statement by neuroscientists, ethicists, and legal scholars in Nature Reviews Neurology opined that neuroscience evidence of chronic pain should be used to educate decision-makers about the underlying pathophysiology of the pain, but that “brain imaging is not yet sufficiently reliable to be used as a pain detector to either support or contradict an individual’s self-report.”72
Civil cases involving issues of memory, such as false or recovered memories, eyewitness identification, or consent, have also increasingly involved neuroscientific studies, often to explain or challenge the accuracy and reliability of witness or party testimony. However, this trend has raised scholarly concerns that neuroimaging can disproportionately influence juror assessments, particularly in determining awards in cases involving claims of pain and suffering.73 All of these civil uses, and more, are likely to expand in coming years.
But we have some data about criminal cases. Over the past two decades, neuroscience evidence has increasingly been introduced by criminal defendants to challenge their competency and mental state, to put into question the voluntariness of their conduct, to support claims of insanity and mental diseases or defects, and to mitigate their punishment. Neuroscience evidence has been used to challenge the memory of eyewitnesses. And it could increasingly be used to challenge or interrogate the memory of witnesses, criminal suspects, or litigants
69. White v. Deere & Co., No. 13-CV-02173-PAB-NYW, 2016 WL 462960 (D. Colo. Feb. 8, 2016).
70. See, e.g., van Velkinburgh, Herbst & Casper, supra note 15.
71. See, e.g., Gilbert v. Principal Life Ins. Co., No. 8-CV-TDC-21-0128, 2022 WL 3369537 (D. Md. Aug. 16, 2022).
72. Karen D. Davis et al., Brain Imaging Tests for Chronic Pain: Medical, Legal and Ethical Issues and Recommendations, 13 Nature Revs. Neurology 624, 635 (2017), https://doi.org/10.1038/nrneurol.2017.122.
73. See generally Hannah J. Phalen, Jessica M. Salerno & Nicholas J. Schweitzer, Can Neuroimaging Prove Pain and Suffering?: The Influence of Pain Assessment Techniques on Legal Judgments of Physical Versus Emotional Pain, 45 L. & Hum. Behav. 393 (2021), https://doi.org/10.1037/lhb0000460.
in civil and criminal cases. There are very few cases, civil or criminal, where the mental states of the parties are not at least theoretically relevant on issues of competency, intent, motive, recklessness, negligence, good or bad faith, memory, or other issues. And even if the parties’ own mental states were not relevant, the mental states of witnesses almost always will be potentially relevant—are they telling the truth? Are they biased against one party or another? Is their memory as reliable as they claim it to be? The mental states and memory of jurors, and even of judges, occasionally may be called into question.74
There are some important caveats on understanding the use of neuroscience in legal proceedings. First, while neuroscience evidence has been increasingly used, the evidence is usually based on neuropsychological testing rather than neuroimaging. In criminal cases, neuroscience evidence has most often been introduced when the defendant is charged with a serious felony rather than a minor misdemeanor, because it is expensive to investigate the neurological status of the defendant and thus impractical to introduce in misdemeanor cases. The garden-variety breach of contract or assault and battery is also not likely to provide a plausible context for convincing neuroscience evidence, especially if there is no evidence that the actor or the actions were odd or bizarre. And many cases will not provide, or justify, the resources necessary for a neurological evaluation or neuroimaging.
Second, even when neuroscience evidence is introduced, it often has a “time machine” problem. The neurological examination of a criminal defendant may be taking place months or years after the offense. Neuroscience-based testing after the crime is unlikely to discern a person’s state of mind in the past. Unless the legally relevant action took place inside an MRI scanner or while a defendant was using wearable brain sensors, the best it may be able to do is to say that, based on the defendant’s current mental condition or state, as shown by the current brain structure or functioning, the defendant is more or less likely than average to have had a particular mental state or condition at the time of the relevant event. If the time of the relevant event is the time of trial (or shortly before trial)—as would be the case with the truthfulness of testimony, the existence of bias, or the existence of a particular memory—that would not be a problem, but otherwise it would be.
The use of real-time recordings of brain activity as evidence has reportedly already been attempted. A defendant sought to introduce his brain activity, recorded through electrodes implanted to address epileptic seizures, as evidence that he was suffering from a seizure at the time of an alleged attack.75 Fitbit
74. See Emily Murphy, Evidence: Evidence of Memory from Brain Data, 5 Judges’ Book 67 (2021), https://perma.cc/69VY-EENM; Emily R.D. Murphy & Jesse Rissman, Evidence of Memory from Brain Data, 7 J.L. & Biosciences 1 (2020), https://doi.org/10.1093/jlb/lsaa078.
75. Jessica Hamzelou, How Your Brain Data Could Be Used Against You, MIT Tech. Rev., Feb. 24, 2023, https://perma.cc/68PP-HYKW.
data have similarly been used in several cases as circumstantial evidence to corroborate or disprove criminal case evidence.76
Neuroscience evidence has been increasingly introduced into criminal cases and accepted in a number of them. More than 2,800 judicial opinions from 2005 to 2015 discuss the use of neuroscience evidence by a criminal defendant as part of their defense.77 In general, this follows an evaluation of the defendant for specific neurological abnormalities that may at least in part explain, or partially excuse or mitigate, their behavior. The evidence includes medical history, neuropsychological testing, brain scanning, or unsubstantiated claims of a past history of head trauma or brain injury. Since 2015, hundreds more cases per year have discussed similar claims.
In the majority of these criminal cases, the most serious charge against the criminal defendant is some degree of homicide, but in about 40% of the cases, the most serious charge is a felony other than homicide. In only about 10% of the cases do the judicial opinions discuss the use of neuroimaging as part of the neuroscience evidence introduced.78
In some cases, rather than addressing the unique issues relevant to a specific person’s brain, neuroscience evidence has been used to inform broader policy choices about categories of individuals. This is particularly true for juvenile defendants and those under the age of twenty-six, who argue that their developing brains, as a class of individuals, warrant treating them differently than adults. Developmental neuroscience, which has shown that adolescents have developing brains that make conforming with the law more difficult than it is for adults, has been the empirical basis for recent constitutional prohibitions against the execution of—or punishment of life without the possibility of parole for—juveniles.79 And an increasing number of criminal defendants have argued, albeit largely unsuccessfully, that the same logic ought to extend to them until at least twenty-six years of age, when the brain has by and large fully developed.80 Three of the handful of published cases in which fMRI evidence was offered in court concerned challenges to state laws requiring warning labels on violent video games.81 The states
76. Farahany, supra note 2, at 83.
77. Henry T. Greely & Nita A. Farahany, Neuroscience and the Criminal Justice System, 2 Ann. Rev. Criminology 451, 453 (2019), https://doi.org/10.1146/annurev-criminol-011518-024433.
78. Id. at 454–55.
79. Graham v. Florida, 560 U.S. 48 (2010); Miller v. Alabama, 567 U.S. 460 (2012); Roper v. Simmons, 543 U.S. 551 (2005).
80. Francis X. Shen et al., Justice for Emerging Adults After Jones: The Rapidly Developing Use of Neuroscience to Extend Eighth Amendment Miller Protections to Defendants Ages 18 and Older, 97 N.Y.U. L. Rev. 101 (2022), https://perma.cc/4GZV-R5E5.
81. Ent. Software Ass’n v. Hatch, 443 F. Supp. 2d 1065 (D. Minn. 2006); Ent. Software Ass’n v. Blagojevich, 404 F. Supp. 2d 1051 (N.D. Ill. 2005); Ent. Software Ass’n v. Granholm, 404 F. Supp. 2d 978 (E.D. Mich. 2005). Each of the three courts held that the state statutes violated the First Amendment.
sought, without success, to use fMRI studies of the effects of violent video games on the brains of children playing the games to support their statutes.82
These “wholesale” uses of neuroscience may (or may not) end up affecting the law, but the courts would be more affected if various “retail” uses of neuroscience were to become common, where a party or a witness is subjected to neuroscience procedures to determine something relevant only to that particular case. An incomplete list of some of the most plausible categories for such retail uses includes the following:
Many, but not all, of these issues have begun to be discussed in the literature. A few of them, such as criminal responsibility, mitigation, memory detection, and lie detection, are appearing in courtrooms; others, such as pain detection, have been introduced in a number of civil cases. This reference guide does not discuss all of these topics, and does not discuss any of them in great depth, but it will discuss three of them substantially—criminal responsibility, detection of pain, and lie detection—to provide a flavor of the possibilities, and will present a short discussion of a fourth: addiction.
82. The courts, sitting in equity and so without juries, all considered the scientific evidence and concluded that it was insufficient to sustain the statutes’ constitutionality. In Blagojevich the court heard testimony for the state directly from Dr. Kronenberger, the author of some of the fMRI-based articles on which the state relied, as well as from Dr. Howard Nusbaum, for the plaintiffs, who attacked Dr. Kronenberger’s studies. After a substantial discussion of the scientific arguments, the district court judge, Judge Matthew Kennelly, found that “Dr. Kronenberger’s studies cannot support the weight he attempts to put on them via his conclusions,” and did not provide a basis for the statute. Blagojevich, 404 F. Supp. at 1063–67. Judge Kennelly’s discussion of this point may be a good example of the kind of analysis neuroscience evidence may force upon judges.
Neuroscience may raise some deep questions about criminal responsibility. Assume we had excellent scientific evidence that a defendant could not help but commit the criminal acts because of a specific brain abnormality.83 Should that affect the defendant’s guilt and, if so, how? Should it affect their sentence or other subsequent treatment? The moral questions may prove daunting. Currently, the law assumes “legal” free will rather than engaging with the “philosophical” debates over free will.84 The criminal law in all U.S. jurisdictions presumes that individuals are responsible agents capable of making choices and intending the natural consequences of their actions. While some commentators believe that neuroscience should change that assumption, the criminal law has long followed the view that “[a]cts should be judged by their tendency under known circumstances, not by the actual intent which accompanies them.”85 The criminal defendant may nonetheless evade, or receive a diminution in, criminal responsibility through a successful presentation of a justification or excuse.
A finding of criminal liability requires the government to prove that the actor voluntarily engaged in a harmful or threatening act proscribed by criminal law and did so with the requisite mental awareness of the circumstances of fact that made the conduct criminal. A conviction generally requires both an actus reus and a mens rea—a “guilty act” and a “guilty mind.” An unconscious person cannot “act,” but even a conscious act is often not enough. Specific crimes often require specific intents, such as acting with a particular purpose or in a knowing, reckless, or negligent way. Some crimes require even more defined mental states, such as specific intent not only to commit a homicide but to do so with knowledge of the victim’s special status, such as being a police officer. And almost all crimes can be excused by legal insanity. In these and other ways, the mental state of the defendant may be relevant to a criminal case.
Neuroscience may provide evidence in some cases to support a defendant’s claim of a justification or excuse by helping to establish that they suffer a mental disease or disorder. For example, a defendant who claims to have been insane at the time of the crime might try to support their claim by alleging that they are suffering from frontotemporal dementia, which prevented them from knowing the difference between right and wrong. Neuroimaging may be able to provide some evidence about whether the defendant is, in fact, suffering from frontotemporal dementia and how that affects the individual, at least at the time they
83. See Henry T. Greely, Neuroscience and Criminal Responsibility: Proving “Can’t Help Himself” as a Narrow Bar to Criminal Liability, in 13 Law & Neuroscience, Current Legal Issues 61 (Michael Freeman ed., 2011), https://doi.org/10.1093/acprof:oso/9780199599844.003.0005.
84. Nita A. Farahany & James E. Coleman, Jr., Genetics, Neuroscience, and Criminal Responsibility, in The Impact of Behavioral Sciences on Criminal Law 183 (Nita A. Farahany ed., 2009), https://doi.org/10.1093/acprof:oso/9780195340525.003.0007.
85. Oliver Wendell Holmes, Jr., The Common Law 66 (1881).
are undergoing the neurological evaluation.86 Such an evaluation might even show that the defendant had a stroke or tumor in a particular part of the brain, which could then be used to argue against the defendant’s criminal responsibility.87
Neuroimaging has been used more broadly in some criminal cases. For example, in the trial of John Hinckley Jr. for the attempted assassination of President Reagan, the defense used CAT scans of Hinckley’s brain to support the argument, based largely on his bizarre behavior, that he suffered from schizophrenia. The scientific basis for that conclusion, offered early in the history of brain CAT scans, was questionable at the time and has become even weaker since, but Hinckley was found not guilty by reason of insanity. In November 2009, testimony about an fMRI scan was introduced in the penalty phase of a capital case as mitigating evidence that the defendant suffered from psychopathy. The defendant was sentenced to death, but after longer jury deliberations than defense counsel expected.88 (This appears to have been the first time fMRI results were introduced in a criminal case.89) To date, the most common types of neuroimaging introduced to support a defendant’s claim of a brain abnormality or difference that at least in part explains their criminal conduct have been MRI or PET scans, rather than other techniques such as EEG, SPECT, or fMRI scanning.90
Neuroscience evidence also may be relevant in wider arguments about criminal justice. Evidence about the development of adolescent brains has been referred to in appellate cases concerning the punishments appropriate for people who committed crimes while underage, including, as noted above, U.S. Supreme Court decisions. More broadly, some have urged that neuroscience will undercut much of the criminal justice system. The argument is that neuroscience
86. See, e.g., Nita A. Farahany, Neuroscience and Behavioral Genetics in US Criminal Law: An Empirical Analysis, 2 J.L. & Biosciences 485 (2015), https://doi.org/10.1093/jlb/lsv059.
87. In at least one fascinating case, a man who was convicted of sexual abuse of a child was found to have a large tumor pressing into his brain. When the tumor was removed, his criminal sexual impulses disappeared. When his impulses returned, so had his tumor. The tumor was removed a second time and, again, his impulses disappeared. Jeffrey M. Burns & Russell H. Swerdlow, Right Orbitofrontal Tumor with Pedophilia Symptom and Constructional Apraxia Sign, 60 Archives Neurology 437 (2003), https://perma.cc/X2VU-VHWC; Doctors Say Pedophile Lost Urge After Tumor Removed, USA Today, July 28, 2003, https://perma.cc/NY89-2GQT; see also Greely, supra note 83 (offering a longer discussion of this case).
88. The defendant in this Illinois case, Brian Dugan, confessed to the murder but sought to avoid the death penalty. See Virginia Hughes, Science in Court: Head Case, 464 Nature 340 (2010), https://doi.org/10.1038/464340a (providing an excellent discussion of this case).
89. It should be noted that other forms of neuroimaging, particularly PET and structural MRI scans, have been more widely used in criminal cases. Dr. Ruben Gur at the University of Pennsylvania estimates that he has used neuroimaging in testimony for criminal defendants about thirty times. Id.
90. Farahany, supra note 86.
ultimately will prove that no one—not even the sanest defendant—has free will and that this will fatally weaken the retributive aspect of criminal justice.91
The application of neuroscience evidence to individual claims of a lack of criminal responsibility has proven challenging.92 Such claims suffer from the time machine problem—the neuropsychological testing or imaging will almost always be from after, usually long after, the crime was committed and so cannot directly show a defendant’s brain state (and hence, by inference, his or her mental state) at or before the time of the crime. While this could change with more passively collected brain data if consumer neurotechnology becomes widespread, it remains to be seen how much we will be able to infer from such data, or how robust or reliable the data will be when collected in environments far less controlled than a laboratory or clinical setting.
Careful neuroscience studies, either structural or functional, of the brains of criminals are rare. It seems highly unlikely that a “responsibility region” will ever be found, one that is universally activated in law-abiding people and that is deactivated in criminals (or vice versa). At most, the evidence is likely to show that people with particular brain structures or patterns of brain functioning commit crimes more frequently than people without such structures or patterns. Applying this group evidence to individual cases will be difficult, if not impossible. All of the problems of technical and statistical analysis of neuroimaging data, discussed above in the section titled “Issues in Interpreting Study Results,” apply. And it is possible that the to-be-scanned defendants will be able to implement countermeasures to “fool” the expert analyzing the scan.
The use of neuroscience to undermine criminal responsibility faces another problem: identifying a specific legal argument. It is not generally a defense to a criminal charge to assert that one has a predisposition to commit a crime, or even, as a result of social and demographic variables, a very high statistical likelihood of committing one. It is not clear whether neuroscience would, in any
91. See, e.g., Robert M. Sapolsky, The Frontal Cortex and the Criminal Justice System, in Law and the Brain 227 (Semir Zeki & Oliver Goodenough eds., 2006), https://doi.org/10.1093/oso/9780198570103.003.0012; Joshua Greene & Jonathan Cohen, For the Law, Neuroscience Changes Nothing and Everything, in id. at 207, https://doi.org/10.1093/oso/9780198570103.003.0011. These arguments have been forcefully made by Professor Stephen Morse. See, e.g., Stephen J. Morse, Determinism and the Death of Folk Psychology: Two Challenges to Responsibility from Neuroscience, 9 Minn. J.L. Sci. & Tech. 1 (2008), https://perma.cc/985T-B4MR; Stephen J. Morse, The Non-Problem of Free Will in Forensic Psychiatry and Psychology, 25 Behav. Sci. & L. 203 (2007), https://doi.org/10.1002/bsl.744; Stephen J. Morse, Moral and Legal Responsibility and the New Neuroscience, in Neuroethics: Defining the Issues in Theory, Practice, and Policy 33 (Judy Illes ed., 2d ed. 2006), https://doi.org/10.1093/acprof:oso/9780198567219.003.0003; Stephen J. Morse, Brain Overclaim Syndrome and Criminal Responsibility: A Diagnostic Note, 3 Ohio St. J. Crim. L. 397 (2006).
92. A good short discussion of these challenges can be found in A Judge’s Guide to Neuroscience: A Concise Introduction 37, supra note 5.
more than a very few cases,93 provide evidence that was not equivalent to predisposition evidence. (And, of course, prudent defense counsel might think twice before presenting evidence to the jury that their client was strongly predisposed to commit crimes.)
We are at an early stage in our understanding of the brain and of the brain states related to the mental states involved in criminal responsibility. At this point, about all that can be said is that at least some criminal defense counsel, seeking to represent their clients zealously, will watch neuroscience carefully for arguments they could use to relieve their clients from criminal responsibility.
The use of neuroscience methods for lie detection probably has received more attention than any other issue raised in this reference guide.94 This is due in part
93. See Greely, supra note 83 (arguing for a very narrow neuroscience-based defense).
94. For a technology whose results have yet to be admitted in court, the legal and ethical issues raised by fMRI-based lie detection have been discussed in an amazingly long list of scholarly publications from 2004 to the present. An undoubtedly incomplete list follows: Brown & Murphy, supra note 36; Wagner, supra note 23; Frederick Schauer, Can Bad Science Be Good Evidence? Neuroscience, Lie-Detection, and Beyond, 95 Cornell L. Rev. 1191 (2010); Frederick Schauer, Neuroscience, Lie-Detection, and the Law: A Contrarian View, 14 Trends Cognitive Scis. 101 (2010), https://doi.org/10.1016/j.tics.2009.12.004; Bizzi et al., supra note 23; Joëlle Anne Moreno, The Future of Neuroimaged Lie Detection and the Law, 42 Akron L. Rev. 717 (2009), https://perma.cc/E58C-H7PQ; Julie Seaman, Black Boxes: fMRI Lie Detection and the Role of the Jury, 42 Akron L. Rev. 931 (2009), https://perma.cc/JHW9-RGP2; Jane Campbell Moriarty, Visions of Deception: Neuroimages and the Search for Truth, 42 Akron L. Rev. 739 (2009), https://perma.cc/SY9C-N24G; Benjamin Holley, It’s All in Your Head: Neurotechnological Lie Detection and the Fourth and Fifth Amendments, 28 Devs. Mental Health L. 1 (2009), https://perma.cc/2GWS-WWPZ; Brian Reese, Comment, Using fMRI as a Lie Detector—Are We Lying to Ourselves?, 19 Alb. L.J. Sci. & Tech. 205 (2009), https://perma.cc/XJ2D-4QQF; Cooper Ellenberg, Lie Detection: A Changing of the Guard in the Quest for Truth in Court?, 33 L. & Psych. Rev. 139 (2009), https://perma.cc/R58K-58ZX; Julie A. Seaman, Black Boxes, 58 Emory L.J. 427 (2008), https://perma.cc/PAL6-B4BY; Greely & Illes, supra note 44; Mark Pettit, FMRI and BF Meet FRE: Brain Imaging and the Federal Rules of Evidence, 33 Am. J.L. & Med. 319 (2007), https://doi.org/10.1177/009885880703300208; Jonathan H. Marks, Interrogational Neuroimaging in Counterterrorism: A “No-Brainer” or a Human Rights Hazard?, 33 Am. J.L. & Med. 
483 (2007), https://perma.cc/R2TJ-WNCS; Leo Kittay, Admissibility of fMRI Lie Detection: The Cultural Bias Against “Mind Reading” Devices, 72 Brook. L. Rev. 1351, 1355 (2007), https://perma.cc/33D7-86TA; Jeffrey Bellin, The Significance (If Any) for the Federal Criminal Justice System of Advances in Lie Detector Technology, 80 Temp. L. Rev. 711 (2007), https://perma.cc/XNV3-RE36; Henry T. Greely, The Social Consequences of Advances in Neuroscience: Legal Problems, Legal Perspectives, in Neuroethics: Defining the Issues in Theory, Practice, and Policy 245, supra note 91, https://doi.org/10.1093/acprof:oso/9780198567219.003.0017; Charles N.W. Keckler, Cross-Examining the Brain: A Legal Analysis of Neural Imaging for Credibility Impeachment, 57 Hastings L.J. 509 (2006), https://perma.cc/QVY5-7ZSR; Archie Alexander, Functional Magnetic Resonance Imaging Lie Detection: Is a “Brainstorm” Heading
to the cultural interest in lie detection, dating back in its technological phase nearly a hundred years to the invention of the polygraph.95 But it is also due to the fact that for several years two commercial firms offered fMRI-based lie-detection services for sale in the United States: Cephos and No Lie MRI.96
Beyond the scientific validity of these techniques lies a host of legal questions. How accurate is accurate enough for admissibility in court or for other legal system uses? What are the implications of admissible and accurate lie detection for the Fourth, Fifth, Sixth, and Seventh Amendments? Would jurors be allowed to consider the failure, or refusal, of a party to take a lie-detector test? Would lie detection be available in discovery? Would each side get to do its own tests—and who would pay?
Accurate lie detection could make the justice system much more accurate. Incorrect convictions might become rare; so might incorrect acquittals. Accurate lie detection also could make the legal system much more efficient. It seems likely that far fewer cases would go to trial if the witnesses could expect to have their veracity accurately determined.
Inaccurate lie detection, on the other hand, holds the potential of ruining the innocent and immunizing the guilty. It is at least daunting to remember some of the failures of the polygraph, such as the case of Aldrich Ames, a Soviet (and then Russian) mole in the Central Intelligence Agency, who passed two Agency polygraph tests while serving as a paid spy.97 The courts already have begun to decide whether and how to use these new methods of lie detection in
Toward the “Gatekeeper”?, 7 Hous. J. Health L. & Pol’y 1 (2007), https://perma.cc/H4UR-HBR2; Paul Root Wolpe, Kenneth R. Foster & David D. Langleben, Emerging Neurotechnologies for Lie-Detection: Promises and Perils, 5 Am. J. Bioethics 39, 42 (2005), https://doi.org/10.1080/15265160590923367; Henry T. Greely, Premarket Approval Regulation for Lie Detection: An Idea Whose Time May Be Coming, 5 Am. J. Bioethics 50 (2005), https://doi.org/10.1080/15265160590960988; Sean Kevin Thompson, Note, The Legality of the Use of Psychiatric Neuroimaging in Intelligence Interrogation, 90 Cornell L. Rev. 1601 (2005), https://perma.cc/DC5N-PFBK; Henry T. Greely, Prediction, Litigation, Privacy, and Property: Some Possible Legal and Social Implications of Advances in Neuroscience, in Neuroscience and the Law 114–56, supra note 5, https://perma.cc/9JXM-PVMV; Judy Illes, A Fish Story? Brain Maps, Lie Detection, and Personhood, 6 Cerebrum 73 (2004), https://perma.cc/8ZY4-V7X6.
95. An interesting history of the polygraph can be found in Ken Alder, The Lie Detectors: The History of an American Obsession (2007). Perhaps the best overall discussion of the polygraph, including some discussion of its history, is found in the National Research Council report, commissioned in the wake of the Wen Ho Lee case, on the use of the technology for screening. Nat’l Rsch. Council, supra note 25.
96. Cephos stopped providing fMRI-based lie-detection services sometime after 2012; it seems to have some continuing existence with respect to DNA forensics. See Cephos, https://perma.cc/F3JY-9D73. No Lie MRI appears to have changed its name to Truthful Brain Corporation and to have created a nonprofit venture called Medicine for Law. It is not clear whether either of those is still active, although the second does have a website: https://perma.cc/V78N-EB3F.
97. See Senate Select Committee on Intelligence, An Assessment of the Aldrich H. Ames Espionage Case and Its Implications for U.S. Intelligence (1994), https://perma.cc/26TT-55SK.
the judicial process; the rest of society also will soon be forced to decide on their uses and limits.
But, of course, lie detection might have applications to litigation without ever being introduced in trials. As is the case today with the polygraph, the fact that it is not generally admissible in court might not stop the police or the prosecutors from using it to investigate alleged crimes. Similarly, defense counsel might well use it to attempt to persuade the authorities that their clients should not be charged or should be charged with lesser offenses. One could imagine the same kinds of pretrial uses of lie detection in civil cases, as the parties seek to affect each other’s perceptions of the merits of the case.
Such lie-detection efforts could also affect society, and the law, outside of litigation. One could imagine prophylactic lie detection at the beginning of contractual relations, seeking to determine whether the other side honestly had the present intention of complying with the contract’s terms. One can also imagine schools using lie detection as part of investigations of student misconduct, or parents seeking to use lie detection on their children. The law more broadly may have to decide whether and how private actors can use lie detection, determining whether, for example, to extend to other contexts—or to weaken or repeal—the Employee Polygraph Protection Act.98
Efforts to use fMRI-based lie detection in the courts have faded in recent years, but another method of “recognition detection” may soon reach the courts. This section will discuss each approach, both to help judges if such evidence reaches their courts but also, at least for fMRI-based lie detection, as a case study of the legal system’s response to such evidence.
Currently, as far as we know, evidence from fMRI-based lie detection has not been admitted into evidence in any court, but it was offered—and rejected—in at least three cases, United States v. Semrau,99 Wilson v. Corestaff Services L.P.,100 and a widely publicized murder trial, Maryland v. Smith,101 in the early 2010s.102
98. See supra discussion accompanying note 54.
99. 693 F.3d 510 (6th Cir. 2012), aff’g, No. 1:07CR10074-01-MI, 2011 WL 1114441 (W.D. Tenn. Mar. 24, 2011).
100. 900 N.Y.S.2d 639 (N.Y. Sup. Ct. 2010).
101. Michael Laris, Debate on Brain Scans as Lie Detectors Highlighted in Maryland Murder Trial, Wash. Post, Aug. 26, 2012, https://perma.cc/EG4Z-FZ84.
102. In early 2009, a motion to admit fMRI-based lie-detection evidence, provided by No Lie MRI, was made, and then withdrawn, in a child custody case in San Diego. The case is discussed in a prematurely titled article by Alexis Madrigal, MRI Lie Detection to Get First Day in Court, WIRED Sci., Mar. 16, 2009, https://perma.cc/29SF-VKFH. A somewhat similar method of using EEG to look for signs of “recognition” in the brain was admitted into one state court hearing for
Since then, efforts to introduce evidence from this technology have largely disappeared—we find no reported cases about such attempted uses since 2012—but the story of its rise and fall provides a good example of a path for new neuroscience evidence. And the underlying science still exists and could well end up in court again. This section will begin by analyzing the issues raised for courts by this technology and then will discuss these three cases, before ending with a quick look at possible uses for this kind of technology outside the courtroom.
Published research on fMRI and detecting deception dates back to about 2001.103 As noted above, to date between twenty and thirty peer-reviewed articles from about fifteen laboratories have appeared claiming to find statistically significant correlations between patterns of brain activation and deception. Only a handful of the published studies have looked at the accuracy of determining deception in individual participants as opposed to group averages. Those studies generally claim accuracy rates of between about 75% and 90%. No Lie MRI has licensed the methods used by one laboratory, that of Dr. Daniel Langleben at the University of Pennsylvania; Cephos has licensed the method used by another laboratory, that of Dr. Frank A. Kozel, first at the Medical University of South Carolina and then at the University of Texas Southwestern Medical Center. (The method used by a British researcher, Dr. Sean Spence, has been used on a British reality television show.)
All of these studies rely on research participants, typically but not always college students, who are recruited for a study of deception. They are instructed to answer some questions truthfully in the scanner and to answer other questions inaccurately.104 In the Langleben studies, for example, undergraduates were shown images of playing cards while in the scanner and asked to indicate whether they saw a particular card. They were instructed to answer truthfully except when they saw one particular card. Some of Kozel’s studies used a different experimental paradigm, in which the participants were put in a room and told to take either a watch or a ring. When asked in the scanner separately whether they
postconviction relief at the trial court level in Iowa in 2001, and both it and another EEG-based method have been used in India. As far as we know, evidence from the use of EEG for lie detection has not been admitted in any other U.S. cases. See supra note 45.
103. The most recent reviews of the scientific literature on this subject are Wagner, supra note 23, and Shawn E. Christ et al., The Contributions of Prefrontal Cortex and Executive Control to Deception: Evidence from Activation Likelihood Estimate Meta-Analyses, 19 Cerebral Cortex 1557 (2009), https://doi.org/10.1093/cercor/bhn189. See also Greely & Illes, supra note 44 (for discussion of the articles through early 2007). The following discussion is based largely on those sources.
104. At least one fMRI study has attempted to investigate self-motivated lies, told by participants who were not instructed to lie, but who chose to lie for personal gain. Joshua D. Greene & Joseph M. Paxton, Patterns of Neural Activity Associated with Honest and Dishonest Moral Decisions, 106 PNAS 12506 (2009), https://doi.org/10.1073/pnas.0900152106. The experiment was designed to make it easy for participants to realize they would be given more money if they lied about how many times they correctly predicted a coin flip. Investigators could not, however, determine if a participant lied in any particular trial.
had taken the watch and then whether they had taken the ring, they were to reply “no” in both cases—truthfully once and falsely the other time. When analyzed in various ways, the fMRI results showed statistically different patterns of brain activation (small changes in BOLD response) when the participants were lying and when they were telling the truth.
In general, these studies are not guided by a consistent hypothesis about which brain regions should be activated or deactivated during truth-telling or deception. The results are empirical: the researchers simply observe particular patterns of activation that differ between the truth state and the lie state. Some have argued that the patterns reflect greater mental effort when deception is involved; others have argued that they reflect greater impulse control when lying.
Are fMRI-based lie-detection methods accurate? As a class of experiments, these studies are subject to all the general problems discussed above in the section “Issues in Interpreting Study Results” regarding fMRI scans that might lead to neuroscience evidence. So far, there are only a few studies involving a limited number of participants. (The method used by No Lie MRI seems ultimately to have been based on the responses of four right-handed, healthy, male University of Pennsylvania undergraduates.105) There have been, to date, no independent replications of any group’s findings.
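The claimed individual-level accuracy rates of roughly 75% to 90% are also easy to misread, because the meaning of any single “deception” result depends on how often deception actually occurs among those tested. The following sketch is a purely hypothetical illustration (the sensitivity, specificity, and base rates are assumptions for exposition, not figures from the studies); it applies Bayes’ theorem to show how the evidentiary value of a positive result erodes when lying is uncommon:

```python
def positive_predictive_value(sensitivity, specificity, base_rate):
    """Probability that a 'deception' result reflects an actual lie,
    computed via Bayes' theorem. All inputs are fractions in [0, 1]."""
    true_positives = sensitivity * base_rate
    false_positives = (1 - specificity) * (1 - base_rate)
    return true_positives / (true_positives + false_positives)

# Hypothetical illustration: assume the test flags 90% of lies
# (sensitivity) and clears 90% of truthful answers (specificity),
# the top of the 75-90% range reported in individual-level studies.
for base_rate in (0.5, 0.1):
    ppv = positive_predictive_value(0.90, 0.90, base_rate)
    print(f"base rate {base_rate:.0%}: P(lying | flagged) = {ppv:.0%}")
```

Under these assumptions, a flagged answer is 90% likely to be a lie when half the examinees are lying, but only 50% likely when one in ten is, even though the test’s accuracy itself never changed.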
The experience of the research participants in these fMRI studies of deception seems to be different from “lying” as the court system would perceive it. The participants knew they were involved in research, they were following orders to lie, and they knew that the most harm that could come to them from being detected in a lie might be lesser payment for taking part in the experiment. This seems hard to compare to a defendant lying about participating in a murder.106 More fundamentally, it is not clear how one could conduct ethical but realistic experiments with lie detection. Research participants cannot credibly be threatened with jail if they do not convince the researcher of the truth of their lies.
Only a handful of researchers have published studies showing reported accuracy rates with individual participants and only with a small number of participants.107 Some of the studies used complex and somewhat controversial
105. Langleben et al., supra note 23, at 267.
106. Michael Pardo has forcefully made this point by dissecting, philosophically, the meaning of the term lie. In 2018, in one of the latest scholarly discussions of fMRI lie detection, he argued that the studies are not detecting “lying” because they examine participants who are doing what they have been instructed to do. Michael S. Pardo, Lying, Deception, and fMRI: A Critical Update, in Neurolaw and Responsibility for Action 143 (Bebhinn Donnelly-Lazarov ed., 2018), https://perma.cc/FLD9-PA5U. This followed earlier work, Michael S. Pardo & Dennis Patterson, Minds, Brains, and Law: The Conceptual Foundations of Law and Neuroscience 105–06 (2013). In the 2018 chapter, he discusses five studies published after the 2013 book, all of which he finds unhelpful.
107. See discussion in Wagner, supra note 23, at 29–35. Wagner analyzes eleven peer-reviewed, published papers. Seven come from Kozel’s laboratory; three come from Langleben’s. The only exception is a paper from John Cacioppo’s group, which concludes “[A]lthough fMRI may permit
statistical techniques. And although participants in at least one experiment were invited to try to use countermeasures against being detected, no specific countermeasures were tested.
United States v. Semrau is the most significant case involving fMRI lie detection. On June 1, 2010, U.S. Magistrate Judge Tu M. Pham of the Western District of Tennessee issued an amended thirty-nine-page report and recommendation on the prosecution’s motion to exclude evidence from an fMRI-based lie-detection report by Cephos.108 The report came after a hearing on May 13–14 featuring testimony from Steve Laken, CEO of Cephos, in favor of admission, and from two experts arguing against it. The district judge adopted the magistrate judge’s report in its entirety during the trial and, in September 2012, the Sixth Circuit, in a lengthy opinion that relied heavily on the magistrate judge’s report and recommendation, affirmed that the district court had not abused its discretion in excluding the evidence under either Rule 702 or Rule 403.109
The defendant in this case, Dr. Semrau, a health professional accused of defrauding Medicare, offered as evidence a report from Cephos stating that he was being truthful when he answered a set of questions about his actions and knowledge concerning the alleged crimes.
Judge Pham first analyzed the motion under Rule 702, using the Daubert criteria. He concluded that the technique was testable and had been the subject of peer-reviewed publications. On the other hand, he concluded that the error rates for its use in realistic situations were unknown. Furthermore, he found there were no standards for its appropriate use. To the extent that the publications relied on by Cephos to establish its reliability constituted such standards, those standards had not actually been followed in the tests of the defendant. Cephos actually scanned Dr. Semrau two times on one day, asking questions about one aspect of the criminal charges during the first scan and then about another aspect in the second scan. The company’s subsequent analysis of those scans indicated that the defendant had been truthful in the first scan but deceptive in the second scan. Cephos then scanned him a third time, several days later, on the second subject but with revised questions, and concluded that he was telling the truth that time. Nothing in the publications relied upon by Cephos indicated that the third scan
investigation of the neural correlates of lying, at the moment it does not appear to provide a very accurate marker of lying that can be generalized across individuals or even perhaps across types of lies by the same individuals.” George T. Monteleone et al., Detection of Deception Using fMRI: Better Than Chance, But Well Below Perfection, 4 Soc. Neuroscience 528 (2009), https://doi.org/10.1080/17470910801903530. However, that study only looked at one brain region at a time, and it did not test combinations or patterns, which might have improved the predictive power.
108. United States v. Semrau, No. 07–10074, 2010 WL 6845092 (W.D. Tenn. June 1, 2010). The district court judge assigned to the case had a scheduling conflict on the date of the hearing on the prosecution’s motion, so the hearing was held before a magistrate judge from that district.
109. 693 F.3d 510, 523–24 (6th Cir. 2012).
was appropriate. Finally, Judge Pham found that the method was not generally accepted in the relevant scientific community as sufficiently reliable for use in court, citing several publications, including some written by the authors whose methods Cephos used.
The magistrate judge then examined the motion under Rule 403 and found that the potential prejudicial effect of the evidence outweighed its probative value. He noted that the test had been conducted without the government’s knowledge or participation, in a context where the defendant risked nothing by taking the test—a negative result would never be disclosed. He noted the jury’s central role in determining credibility and considered the likelihood that the lie-detection evidence would be a lengthy and complicated distraction from the jury’s central mission. Finally, he noted that the probative value of the evidence was greatly reduced because the report only gave a result concerning the defendant’s general truthfulness when responding to more than ten questions about the events but did not even purport to say whether the defendant was telling the truth about any particular question.110
The Sixth Circuit panel affirmed unanimously, holding that the district court did not abuse its discretion in excluding the fMRI evidence pursuant to Rule 403 in light of (1) the questions surrounding the reliability of fMRI lie-detection tests in general and as performed on Dr. Semrau, (2) the failure to give the prosecution an opportunity to participate in the testing, and (3) the test result’s inability to corroborate Dr. Semrau’s answers as to the particular offenses for which he was charged.111
In the same month as the magistrate judge’s report in Semrau, a state trial court judge in Brooklyn excluded another Cephos lie-detection report in a civil case, Wilson v. Corestaff Services L.P.112 This case involved a claim under state law by a former employee that she had been subject to retaliation for reporting sexual harassment. The plaintiff offered evidence from a Cephos report finding that
110. Interestingly, the Sixth Circuit panel stated, in a footnote, that “the prospect of introducing fMRI lie detection results into criminal trials is undoubtedly intriguing and, perhaps, a little scary.” Id. at 524 n.12. See Daniel S. Goldberg, Against Reductionism in Law & Neuroscience, 11 Hous. J. Health L. & Pol’y 321, 324 & n.6 (2011), https://perma.cc/K4E4-6J4Q (reviewing literature that “challenges the very idea that fMRI or other novel neuroimaging techniques either can or should be used as evidence in criminal proceedings”).
There may well come a time when the capabilities, reliability, and acceptance of fMRI lie detection—or even a technology not yet envisioned—advances to the point that a trial judge will conclude, as did Dr. Laken in this case: “I would subject myself to this over a jury any day.” Though we are not at that point today, we recognize that as science moves forward the balancing of Rule 403 may well lean toward finding that the probative value for some advancing technology is sufficient. Semrau, 693 F.3d at 524 n.12.
111. Id. at 524.
112. 900 N.Y.S.2d 639 (N.Y. Sup. Ct. 2010).
her main witness was truthful when he described how the defendant’s management said it would retaliate against the plaintiff.
That case did not involve an evidentiary hearing or, indeed, any expert testimony. The judge decided the lie-detection evidence was not appropriate under New York’s version of the Frye test, noting that, in New York, “courts have advised that the threshold question under Frye in passing on the admissibility of expert’s testimony is whether the testimony is ‘within the ken of the typical juror.’”113 Because credibility is a matter for the jury, the judge concluded that this kind of evidence was categorically excluded under New York’s version of Frye. He also noted that “even a cursory review of the scientific literature demonstrates that the plaintiff is unable to establish that the use of the fMRI test to determine truthfulness or deceit is accepted as reliable in the relevant scientific community.”114
Finally, also in 2012, a Maryland trial court excluded a No Lie MRI fMRI-based lie-detection report in a well-publicized murder case. The judge held three days of pretrial hearings on the evidence before concluding there was no consensus among the experts: “These are brilliant people, and they don’t agree.”115 No published opinion followed.116
After 2012, fMRI-based lie detection continued to be discussed in the press and the academic literature for several years,117 but without any other known efforts to introduce it. (Semrau, for example, has yet to be cited in any opinions involving lie detection.) Nonetheless, the underlying science continues to develop and further attempts may be made.
The current fMRI-based methods of lie detection do provide one kind of protection for possible participants—they are obvious. No one is going to be put into an MRI for an hour and asked to respond, repeatedly, to questions without realizing something important is going on. Should researchers develop less obtrusive or obvious methods of neuroscience-based lie detection, we will have to deal with the possibilities of involuntary and, indeed, surreptitious lie detection.
113. Id. at 642 (citing People v. Cronin, 60 N.Y.2d 430, 433 (1983)).
114. See id.
115. Michael Laris, Debate on Brain Scans as Lie Detectors Highlighted in Maryland Murder Trial, Wash. Post, Aug. 26, 2012, https://perma.cc/X5G9-GVPT.
116. A subsequent appeal was published, Smith v. State, 98 A.3d 444 (Md. Ct. Spec. App. 2014), but the lie-detection evidence had not been a ground for that appeal and was not mentioned.
117. Notable additions to the scholarly literature, in addition to the Pardo works cited supra note 106, include Daniel D. Langleben & Jane Campbell Moriarty, Using Brain Imaging for Lie Detection: Where Science, Law, and Policy Collide, 19 Psych., Pub. Pol’y & L. 222 (2013), https://doi.org/10.1037/a0028841; Martha J. Farah et al., Functional MRI-based Lie Detection: Scientific and Societal Challenges, 15 Nature Revs. Neuroscience 123 (2014), https://doi.org/10.1038/nrn3665; Jane Campbell Moriarty, Neuroimaging Evidence in US Courts, in Law and Mind: A Survey of Law and the Cognitive Sciences 370 (Bartosz Brożek, Jaap Hage & Nicole A. Vincent eds., 2021), https://perma.cc/YR36-LGYU.
A different approach uses EEG to detect a characteristic neural signature associated with the recognition, salience, or congruence of facts or events, and thereby to interrogate the suspect’s brain.118 While criminal suspects wear an EEG headset, they are shown images, words, or sounds while their brainwave activity is recorded. Images of a murder weapon, other objects from the crime scene, the victim, or the victim’s clothing might all serve as triggering stimuli. An analyst then interprets the brainwave activity to see whether a characteristic EEG signal registers in response to any of the images.
The P300 wave was the first to be used for this purpose. It is an event-related potential (ERP) measure of brain activity whose amplitude differs depending on whether a participant recognizes a previously encountered fact.119 Whereas a traditional polygraph asks a person a series of yes-or-no questions and then looks at physiological responses to infer whether the person is lying, this approach is designed to reveal “recognition” of salient facts of a crime.120
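The logic of a P300-based concealed-information test can be sketched with a toy computation (purely illustrative; the sampling rate, amplitudes, and analysis window below are hypothetical, and real protocols add artifact rejection and formal statistics): average many EEG epochs time-locked to crime-relevant “probe” stimuli and to irrelevant stimuli, then compare the amplitude of the averaged waveforms in the P300 window.

```python
# Toy sketch of the P300 concealed-information logic (illustrative only;
# real protocols involve artifact rejection, many trials, and statistical
# tests such as bootstrapped amplitude comparisons).
import numpy as np

rng = np.random.default_rng(0)
fs = 250                       # sampling rate in Hz (hypothetical)
t = np.arange(0, 0.8, 1 / fs)  # 800 ms epoch after each stimulus

def simulate_epochs(n_trials, p300_amplitude):
    """Simulate EEG epochs: noise plus a P300-like bump near 400 ms."""
    bump = p300_amplitude * np.exp(-((t - 0.4) ** 2) / (2 * 0.05 ** 2))
    return rng.normal(0, 5, (n_trials, t.size)) + bump

# A participant who recognizes the probe shows a larger P300 to it.
probe = simulate_epochs(40, p300_amplitude=8.0)
irrelevant = simulate_epochs(40, p300_amplitude=1.0)

# Average across trials, then compare mean amplitude in the P300 window.
window = (t >= 0.3) & (t <= 0.6)
probe_erp = probe.mean(axis=0)
irrelevant_erp = irrelevant.mean(axis=0)
difference = probe_erp[window].mean() - irrelevant_erp[window].mean()
print(f"probe-minus-irrelevant P300 amplitude: {difference:.2f} (arb. units)")
```

On this toy data, the probe ERP shows a markedly larger amplitude in the 300–600 ms window; a difference of roughly this kind is what an analyst would look for when inferring “recognition.”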
Although this EEG-based technique was initially heralded as an extraordinary scientific breakthrough, the broader scientific community raised serious doubts about the validity of the approach.121 More recently, however, some scientists have started to test the technique independently and are encouraged that it could be applied more reliably.122 In particular, Dr. J. Peter Rosenfeld has rigorously tested this approach, including looking for countermeasures—and countermeasures to countermeasures.123
Another scientifically promising approach uses a different ERP brain response called the N400 signal, which seems to respond to congruence rather than salience. As an author of this reference guide explained:
You could show a suspect a series of faces that includes their suspected co-conspirators. Their N400 brain signals would be more negative for “incongruent” faces that didn’t belong than for “congruent” faces that did. Similarly, you could pair words together like “body” and “lake” versus “body” and “basement” to try to find out where a murder victim’s corpse is hidden.124
118. See Farahany, supra note 2.
119. Tim Stelloh, Larry Farwell Claims His Lie Detector System Can Read Your Mind. Is He a Scam Artist, or a Genius?, Medium, Jan. 6, 2021, https://perma.cc/N9S7-YU9K.
120. Farahany, supra note 43.
121. Stelloh, supra note 119.
122. Id.
123. See J. Peter Rosenfeld, P300 in Detecting Concealed Information and Deception: A Review, 57 Psychophysiology e13362 (2020), https://doi.org/10.1111/psyp.13362.
124. Farahany, supra note 2, at 82. See also K. V. Dijkstra, J. D. R. Farquhar & P. W. M. Desain, The N400 for Brain Computer Interfacing: Complexities and Opportunities, 17 J. Neural Eng’g 022001 (2020), https://doi.org/10.1088/1741-2552/ab702e.
Both the P300 and the N400 signals are detected with EEG, a cheap and easily used technology. Some of the earliest attempts in the United States to use the P300 technology, oddly referred to as “brain fingerprinting,” came from criminal defendants themselves, asking for the results of the test to be used to validate their claims of innocence.125 More commonly, it is the police, not criminal suspects, who have sought out the P300 technology.126 The practice has not become widespread in the United States, likely because criminal defendants will only rarely submit to the test voluntarily and cannot now be forced to do so. It is, however, finding some success overseas. P300 has been used in India since 2003; Singapore’s police services purchased the technology in 2013; and one source says police in Florida signed a contract in 2014 to use the approach.127 A firm now named Brainwave Science has been marketing this approach, calling it “iCognative,” with some success around the world. Australian counterterrorism authorities are currently looking into the potential use of brain fingerprinting to determine whether individuals returning from war zones were illegally involved in conflict when they claim to have been doing humanitarian work.128
Detection of pain is likely to be another area for use of neuroscience evidence, although this will be mainly in civil cases or administrative appeals (often from denial of government disability payments). No matter where an injury occurs and no matter where it seems to hurt, pain is felt in the brain.129 Without sensory nerves leading to the brain from a body region, there is usually no experience of pain. Without the brain machinery and functioning to process the signal, no pain is perceived.
Pain turns out to be complicated—even the common pain experienced from an acute injury to, say, an arm. Neurons called nociceptors near the site of the injury transmit the pain signal to the spinal cord, which relays it to the brain. But other neurons near the site of the injury will, over time, adapt in ways that affect the pain signal. Cells in the spinal cord can also modulate the pain signal sent to the brain, making it stronger or weaker. The brain, in turn,
125. State v. Harrington, 284 N.W.2d 244 (Iowa 1979); e.g., Johnson v. State, 730 N.W.2d 209 (Table) (Iowa Ct. App. 2007); People v. Dorris, 2013 IL App (4th) 120699-U (Ill. App. Ct. 2013).
126. Jayanth Murali, Cool Tool for Police Investigation?, Deccan Chron., Sept. 2, 2018, https://perma.cc/J2YY-X6SB.
127. David Cox, Can Your Brain Reveal You Are a Liar?, BBC, Jan. 25, 2016, https://perma.cc/NWR7-PAEV.
128. Id.
129. See Brain Facts, supra note 6, at 19–21, 49–50, which includes a useful brief description of the neuroscience of pain.
sends signals down to the spinal cord that cause, or at least affect, these modulations. And the actual sensation of pain—the “ouch”—takes place in the brain.
The immediate and localized sensation is processed in the somatosensory cortex, the brain region that takes sensory inputs from different body parts (with each body part getting its own portion of the somatosensory cortex) and processes them into a perceived sensation. The added knowledge that the sensation is painful seems to require the participation of other regions of the brain. Using fMRI and other techniques, some researchers have identified what they call the “pain matrix” in the brain, regions that are activated when experimental participants, in scanners, are exposed to painful stimuli. The brain regions identified as part of the so-called pain matrix vary from researcher to researcher, but generally include the thalamus, the insula, parts of the anterior cingulate cortex, and parts of the cerebellum.130
Researchers have run experiments with participants in the scanner receiving painful or nonpainful stimuli and have attempted to find activation patterns that appear when pain is perceived and that do not appear when pain is absent. (The participants usually are given nonharmful painful stimuli such as having their skin touched with a hot metal rod or coated with a pepper-derived substance that causes a burning sensation.) Some have reported substantial success, detecting pain in more than 80% of the cases.131 Other studies have found a positive correlation between the degree of activation in the pain matrix and the degree of subjective pain, both as reported by the participants and as possibly indicated by the heat of the rod or the amount of the painful substance—the higher the temperature or the concentration of the painful substance, the greater the average activation in the pain matrix.132
Other neuroscience studies of individual pain look not at brain function during painful episodes but at brain structure. Some researchers, for example, claim that different regions of the brain have different average size and neuron densities in patients who have had long-term chronic pain than in those who have not had such pain.133
130. A good recent review of this work can be found in Maite M. van der Miesen, Martin A. Lindquist & Tor D. Wager, Neuroimaging-based Biomarkers for Pain: State of the Field and Current Directions, 4 PAIN Reps. e751 (2019), https://doi.org/10.1097/PR9.0000000000000751.
131. See, e.g., Irene Tracey, Imaging Pain, 101 Brit. J. Anaesthesia 32 (2008), https://doi.org/10.1093/bja/aen102.
132. See, e.g., Robert C. Coghill, John G. McHaffie & Yi-Fen Yen, Neural Correlates of Interindividual Differences in the Subjective Experience of Pain, 100 PNAS 8538 (2003), https://doi.org/10.1073/pnas.1430684100.
133. See, e.g., A. Vania Apkarian et al., Chronic Back Pain Is Associated with Decreased Prefrontal and Thalamic Gray Matter Density, 24 J. Neuroscience 10410 (2004), https://doi.org/10.1523/JNEUROSCI.2541-04.2004; see also Arne May, Chronic Pain May Change the Structure of the Brain, 137 Pain 7 (2008), https://doi.org/10.1016/j.pain.2008.02.034; Karen D. Davis, Recent Advances and Future Prospects in Neuroimaging of Acute and Chronic Pain, 1 Future Neurology 203 (2006), https://doi.org/10.2217/14796708.1.2.203.
Pain is clearly complicated. Placebos, distractions, or great need can sometimes cause people not to sense, or perhaps not to notice, pain that could otherwise be overwhelming. Similarly, some people can become hypersensitive to pain, reporting severe pain when the stimulus normally would be benign. Amputees with phantom pain—the feeling of pain in a limb that has been gone for years—have been scanned while reporting this phantom pain. They show activation in the pain matrix. In some fMRI studies, people who have been hypnotized to feel pain, even when there is no painful stimulus, show activation in the pain matrix.134 And in one fMRI study, participants who reported feeling emotional distress, as a result of apparently being excluded from a “game” being played among research participants, also showed, on average, statistically significant activation of the pain matrix.135
Pain also plays an enormous role in the legal system.136 The existence and extent of pain is a matter for trial in hundreds of thousands of injury cases each year. Perhaps more importantly, pain figures into uncounted workers’ compensation claims and Social Security disability claims. Pain is often difficult to prove, and the uncertainty of a jury’s response to claimed pain probably keeps much litigation alive. We know that the tests for pain currently presented to jurors, judges, and other legal decision-makers are not perfect. Both anecdotes and the assessments of pain experts suggest that some nontrivial percentage of successful claimants are malingering and only pretending to feel pain; a much greater percentage may be exaggerating their pain.
A good test for whether a person is feeling pain, and, even better, a “scientific” way to measure the amount of that pain—at least compared to other pains felt by that individual, if not to pain as perceived by third parties—could help resolve a huge number of claims each year. If such pain detection were reliable, it would make justice both more accurate and more certain, leading to faster and cheaper resolution of many claims involving pain. The legal system, as well as the honest plaintiffs and defendants within it, would benefit.
134. Stuart W. Derbyshire et al., Cerebral Activation During Hypnotically Induced and Imagined Pain, 23 NeuroImage 392 (2004), https://doi.org/10.1016/j.neuroimage.2004.04.033.
135. Naomi I. Eisenberger et al., Does Rejection Hurt? An fMRI Study of Social Exclusion, 302 Sci. 290 (2003), https://doi.org/10.1126/science.1089134.
136. For analyses of the legal and ethical implications of using neuroimaging to detect pain, see Amanda C. Pustilnik, Status and Surveillance in the Use of Brain-Based Pain Imaging in the Law, in 1 Developments in Neuroethics & Bioethics: Pain Neuroethics and Bioethics 59, 61 (Daniel Z. Buchman & Karen D. Davis eds., 2018), https://doi.org/10.1016/bs.dnb.2018.08.004; Davis et al., supra note 72; Henry T. Greely, Neuroscience, Mindreading, and the Courts: The Example of Pain, 18 J. Health Care L. & Pol’y 171 (2015), https://perma.cc/6KXM-PQQB. The earliest, but still useful, discussion is found in Adam J. Kolber, Pain Detection and the Privacy of Subjective Experience, 33 Am. J.L. & Med. 433 (2007), https://doi.org/10.1177/009885880703300212. Kolber expands on that discussion in interesting ways in Adam J. Kolber, The Experiential Future of the Law, 60 Emory L.J. 585, 596–601 (2010), https://perma.cc/P9QQ-AYWE.
A greater understanding of pain also might lead to broader changes in the legal system. For example, emotional distress often is treated less favorably than direct physical pain. If neuroscience were to show that, in the brain, emotional distress is essentially the same as physical pain, the law might change. That change might be more likely if neuroscience could provide assurance that sincere emotional distress could be detected and faked emotional distress would not be rewarded. Others have argued that even our system of criminal punishment might change if we could measure, more accurately, how much pain different punishments caused defendants, allowing judges to let the punishment fit the criminal, if not the crime.137 A “pain detector” might even change the practice of medicine in legally relevant ways, by giving physicians a more certain way to check whether their patients are seeking controlled substances to relieve their own pain or are seeking them to abuse or to sell for someone else to abuse.
In at least one case, a researcher who studies the neuroscience of pain was retained as an expert witness to testify regarding whether neuroimaging could provide evidence that a claimant was, in fact, feeling pain. The case settled before the hearing.138 In another case, a prominent neuroscientist was approached about being a witness against the admissibility of fMRI-based evidence of pain, but, before she had decided whether to take part, the party seeking to introduce the evidence changed its mind. This issue has not, as of this writing, reached the courts, but lawyers clearly are thinking about these uses of neuroscience. (And note that in some administrative contexts, the evidentiary rules will not apply in their full rigor, possibly making the admission of such evidence more likely.)
Do either functional or structural methods of detecting pain work and, if so, how well? We do not know. These studies share many of the problems outlined above in the section titled “Issues in Interpreting Study Results.” The studies are few in number, with few participants (and usually sets of participants that are not very diverse). The experiments—usually involving giving college students a painful stimulus—are different from the experience of, for example, older people who claim to have lower back pain. Independent replication is rare, if it exists at all. The experiments almost always report that, on average, the group shows a statistically significant pattern of activation that differs depending on whether they are receiving the painful stimulus, but the group average does not in itself tell us about the sensitivity or specificity of such a test when applied to individuals. And the statistical and technical issues are daunting.
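The last point, that a statistically significant group-average difference need not translate into a useful individual test, can be illustrated with a toy simulation (all numbers hypothetical): two groups whose “activation scores” differ by a full standard deviation on average still overlap enough that a simple threshold classifier achieves only modest sensitivity and specificity.

```python
# Toy illustration (hypothetical numbers): a reliable group-average
# difference in "pain matrix" activation need not yield an accurate
# test for individuals, because the individual distributions overlap.
import numpy as np

rng = np.random.default_rng(1)
n = 1000
pain = rng.normal(1.0, 1.0, n)     # activation scores during painful stimuli
no_pain = rng.normal(0.0, 1.0, n)  # activation scores without pain

# The group means differ by a full standard deviation -- a large effect
# that is easily "statistically significant" with this many observations.
print(f"group means: pain {pain.mean():.2f}, no pain {no_pain.mean():.2f}")

# But classify individuals with the midpoint threshold:
threshold = 0.5
sensitivity = (pain > threshold).mean()      # true-positive rate
specificity = (no_pain <= threshold).mean()  # true-negative rate
print(f"sensitivity {sensitivity:.0%}, specificity {specificity:.0%}")
# With a 1-standard-deviation separation, both rates come out around 70%.
```

The point for courts is that individual-level error rates, not group-level p-values, determine how much weight such a test deserves in a particular case.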
137. Adam J. Kolber, How to Improve Empirical Desert, 75 Brook. L. Rev. 433 (2009), https://perma.cc/X56D-V8UR.
138. Greg Miller, Brain Scans of Pain Raise Questions for the Law, 323 Sci. 195 (2009), https://doi.org/10.1126/science.323.5911.195.
In the area of pain, the issue of countermeasures may be the most interesting, particularly in light of the experiments conducted with hypnotized participants. Does remembered pain look the same in an fMRI scan as currently experienced pain? Does the detailed memory of kidney stone pain look any different from the present sensation of lower back pain? Can individuals effectively convince themselves that they are feeling pain and so appear to the scanner to be experiencing pain? The answer to these questions is clear: We do not yet know.
Pain detection also would raise legal questions. Could a plaintiff be forced to undergo a “pain scan”? If a plaintiff offered a pain scan in evidence, could the defendant compel the plaintiff to undergo such a scan with the defendant’s machine and expert? Would it matter if the scan were itself painful or even dangerous? Who would pay for these scans and for the experts to interpret them?
Detecting pain would be a form of neuroscience evidence with straightforward and far-reaching applications to the legal system. Whether it can be done, and, if so, how accurately, remain to be seen. So does the legal system’s reaction to this possibility.
In addition to these three areas, we want to note another topic of great legal interest where neuroscience evidence may eventually play a major role: addiction. We are not giving it substantial discussion, because the research seems further from the courtroom at this point than with the other topics, but the large number of criminal cases involving addiction, in both federal and state courts, suggests its possible importance.
In 1997, Alan Leshner, then director of the National Institute on Drug Abuse, published an influential article in Science titled “Addiction Is a Brain Disease, and It Matters.”139 He argued that addiction is “a chronic, relapsing brain disorder,” and, therefore, a successful treatment would manage the illness but not be a “cure.” And he drew conclusions about both the criminal justice system (there should be less incarceration, more treatment) and society as a whole. “If the brain is the core of the problem, attending to the brain needs to be a core part of the solution.”140
This vision of addiction as a brain disease is now over twenty-five years old and remains controversial.141 Not surprisingly, neuroscience research has probed
139. Alan I. Leshner, Addiction Is a Brain Disease, and It Matters, Sci., Oct. 3, 1997, at 45, https://doi.org/10.1126/science.278.5335.45.
140. Id.
141. See, e.g., Neil Levy, Addiction Is Not a Brain Disease (and It Matters), 4 Frontiers Psychiatry 24 (2013), https://doi.org/10.3389/fpsyt.2013.00024; Markus Heilig et al., Addiction as a Brain Disease Revised: Why It Still Matters, and the Need for Consilience, 46 Neuropsychopharmacology 1715
many issues of addiction extensively, and much has been learned about pathways associated with addictive behavior, whether regarding illegal drugs, alcohol, or nicotine.142 Neuroscience has made progress in understanding how the brain acts during the three stages of addiction: binge/intoxication, withdrawal/negative affect, and preoccupation/anticipation. It has highlighted the important role of the neurotransmitter dopamine in many aspects of addiction. It has not, however, produced useful answers to predicting, preventing, or treating addiction, or to distinguishing between addiction and other kinds of uses of addictive substances. As is true with much of brain science, currently neuroscience can mainly tell us that “it is very complicated.”
That should change with increasing research, and, when it does, courts may be affected in at least two ways: “wholesale” and “retail.” The wholesale changes may be legal or policy shifts in dealing with addiction and addictive drugs, whether driven by legislatures, agencies, or the courts, similar to the changes in the constitutionality of punishment for some crimes committed by juveniles.
The U.S. Supreme Court long ago addressed some of the issues around addiction in two cases asking whether the Eighth Amendment’s prohibition on cruel and unusual punishment was violated by convictions for narcotics addiction (Robinson v. California, 370 U.S. 660 (1962)) or for being drunk in a public place (Powell v. Texas, 392 U.S. 514 (1968)). A 6–2 majority answered “yes” in Robinson; six years later, five justices answered “no” in Powell, although without being able to muster a majority opinion. As to crimes of addiction, Robinson has largely become a dead letter, but new neuroscience evidence might revive that doctrine.
For most judges, the issues of applying addiction neuroscience in cases will more likely be at the “retail” level. Strong neuroscience evidence that could distinguish “addiction” from lesser attachments might prove relevant in some cases, as could powerful evidence of a significant predisposition to addiction. New, well-proven treatments for addiction to various substances could spur changes in sentencing, parole, or probation conditions. Effective methods to block the effects of addictive substances, such as anti-drug vaccines that have been studied, might also affect individual cases. At this stage, these applications of
(2021), https://doi.org/10.1038/s41386-020-00950-y; John Davies, Addiction Is Not a Brain Disease, 26 Addiction Res. & Theory 1 (2017), https://doi.org/10.1080/16066359.2017.1321741.
142. See, e.g., George R. Uhl, George F. Koob & Jennifer Cable, The Neurobiology of Addiction, 1451 Ann. N.Y. Acad. Scis. 5 (2019), https://doi.org/10.1111/nyas.13989; Nora D. Volkow, Michael Michaelides & Ruben Baler, The Neuroscience of Drug Reward and Addiction, 99 Physiological Rev. 2115 (2019), https://doi.org/10.1152/physrev.00014.2018; Nora D. Volkow & Maureen Boyle, Neuroscience of Addiction: Relevance to Prevention and Treatment, 175 Am. J. Psychiatry 729 (2018), https://doi.org/10.1176/appi.ajp.2018.17101174. Koob has been director of the NIH’s National Institute on Alcohol Abuse and Alcoholism since 2014; Volkow (irrelevantly but interestingly a great-granddaughter of Leon Trotsky) has directed the NIH’s National Institute on Drug Abuse since 2003.
addiction neuroscience remain speculative; that might change before this manual reaches its fifth edition.
Atomic physicist Niels Bohr is often credited with having said, “It is always hard to predict things, especially the future.”143 It seems highly likely that the massively increased understanding of the human brain that neuroscience is providing will have significant effects on the law and, more specifically, on the courts. Just what those effects will be cannot be accurately predicted, but we hope that this reference guide will provide some useful background to help judges cope with whatever neuroscience evidence comes their way.
143. This quotation has been attributed to many people, including Yogi Berra, but Bohr seems to be the most common nominee. Recently, one of us tracked down evidence that it actually was coined in the 1940s or perhaps 1930s by a Danish cartoonist, author, and inventor named Robert Storm Petersen. See Robert Storm Petersen, Wikipedia, https://perma.cc/FR98-A6WA. For a few more details, see Henry T. Greely, The Death of Roe and the Future of Ex Vivo Embryos, 9 J.L. & Biosciences 1, 2 & n.3 (2022), https://doi.org/10.1093/jlb/lsac019.
axon. Portion of a neuron that transmits messages in the form of electrical impulses to dendrites of other neurons.
BOLD response (the blood-oxygen-level-dependent response). A measure used in functional MRI (fMRI) to allow an indirect measure of brain activity by examining how such activity affects blood flow.
brain stem. One of three major parts of the brain, joining the brain to the spinal cord and regulating autonomic functions such as heart rate and digestion, and, to an extent, the neural processing of the cerebellum.
central nervous system. The brain, the spinal cord, and other nerves directly connecting to the brain.
cerebellum. A brain structure at the top of the brain stem that keeps a library of learned motor skills.
cerebrum. Largest area of the brain (composed of two hemispheres) that controls cognitive functions.
computerized axial tomography (CAT scans). A computer-assisted X-ray technique that reveals the structure of the brain in three dimensions.
corpus callosum. The bundle of axons that connects the two hemispheres of the brain.
cortex (cerebral cortex). Folded outer surface of the cerebrum, consisting of gray matter; it lies above subcortical structures such as the thalamus, the hypothalamus, the basal ganglia, and the amygdala.
cortisol. A hormone produced by the adrenal glands that is released in response to stress.
deep brain stimulation (DBS). A technique in which an electrical lead is implanted into a specific region of the brain and an electrical pulse is generated that affects the functioning of neurons in the nearby area.
dendrite. Portion of a neuron that receives messages from other neurons and relays them to the cell body.
diffusion tensor imaging (DTI). A technique that examines the way water diffuses through brain tissue to indicate the location of white matter.
electroencephalography (EEG). A technique that uses electrodes placed on the head to measure neural activity near the scalp.
endorphins. Hormones secreted by the pituitary gland in the brain that are associated with pain relief and a sense of well-being.
frontal lobe. Portion of the cerebrum near the forehead, associated with higher cognitive processes such as decision-making, reasoning, and planning.
functional near-infrared spectroscopy (fNIRS). A technique that detects changes in hemoglobin within the brain by measuring differences in the absorption of light that occurs through blood flow to different regions of the brain.
glial cells. Cells of the brain that produce and maintain the myelin sheaths insulating axons and that serve as a special immune system for the brain.
hippocampus. Area of the brain that is essential for making most kinds of memories and learning.
hypothalamus. A small structure at the base of the brain that regulates body temperature, hunger, thirst, and fatigue.
magnetic resonance imaging (MRI). The dominant neuroimaging technology, which uses a strong magnetic field to produce detailed images of the brain’s structure and function.
myelin. Fatty substance encasing longer axons that helps insulate the axon and increases the strength and efficiency of the electrical signal.
neuron. Cells of the nervous system that transmit and receive nerve impulses; their complex interconnections are responsible for brain function.
neurotransmitters. Chemical molecules that are released from the neuron into the synaptic cleft when a nerve impulse reaches the end of an axon. Some of the best-known neurotransmitters are dopamine, serotonin, glutamate, and acetylcholine.
nucleus accumbens. A small subcortical region in each hemisphere of the cerebrum that releases dopamine in response to rewarding experiences.
occipital lobe. Portion of the cerebrum at the back of the skull primarily concerned with vision.
olfactory bulb. Area of the brain that makes new neurons and plays a major role in our senses of smell and taste.
oxytocin. A hormone that works as a neurotransmitter and has been linked with trust and bonding.
parietal lobe. Portion of the cerebrum at the top and toward the back of the skull that is concerned with reception and processing of sensory information from the body.
peripheral nervous system. Neural cells that are not part of the central nervous system.
positron emission tomography (PET scans). A brain imaging technique that uses a radioactive tracer substance to measure features of the brain such as the density of neural receptors.
single photon emission computed tomography (SPECT scans). A technique similar to PET scans that uses a different form of radioactive tracer.
substantia nigra. Area of the brain stem that generates the neurotransmitter dopamine as part of the brain’s reward system.
synaptic cleft. The small space between two neurons where neurotransmitters are released.
temporal lobe. Portion at the sides of the cerebrum involved in hearing, language, memory storage, and emotion.
tesla (T). Measure of the strength of the magnetic field of an MRI scanner.
thalamus. A brain structure at the top of the brain stem that carries information to and from the cerebrum and other brain structures. This structure is especially important for transmission of sensory information such as vision, hearing, touch, and proprioception (one’s sense of the position of the parts of one’s body).
transcranial magnetic stimulation (TMS). A technique that generates a magnetic field near the skull that disrupts neural activity and alters brain function in a nearby region.