Digital People: From Bionic Humans to Androids (2004)

Chapter: Part II: How Far Along Are We? - 5 Mind-Body Problems

Previous Chapter: 4 We Have Always Been Bionic
Suggested Citation: "Part II: How Far Along Are We? 5 Mind-Body Problems." Sidney Perkowitz. 2004. Digital People: From Bionic Humans to Androids. Washington, DC: Joseph Henry Press. doi: 10.17226/10738.

Part II
How Far Along Are We?

Can machines live? The answer is yes in the virtual history of artificial beings, but we don’t yet know the answer in reality. We have progressed enormously in building artificial bodies, sensory apparatus, and brains, and the progress is accelerating. To understand where we now stand requires insight into today’s technology, but first, it requires consideration of an issue beyond engineering: What do we mean by the brain and the mind, and how do they connect to the body?

5
Mind–Body Problems

In classical philosophy, there is only one Mind–Body Problem, one important enough to be capitalized, but in the world of artificial beings there are several. The philosophical version is an old metaphysical issue, easy to state and hard to resolve: What is the nature of the mind, and how does its apparent insubstantiality relate to the materiality of the body? We know they are connected, because each of us continually experiences their interaction within our own private consciousness. Formulate in your mind the intention to pick up a glass of wine, and your hand carries out the action even as you think it; kick a wall in frustration, and your mind registers the sensation of pain. But how does the immaterial mind cause your hand to move as you desire? Why does it turn a neural signal from your foot into the feeling “it hurts”? Indeed, what is it in you that wishes to drink that wine, and directs your body to act accordingly?

For a long time this problem of consciousness was the province only of philosophers. Because of its internal, subjective nature, consciousness has seemed a difficult subject for scientific study, although some efforts were made in the nineteenth century. Writing in 1890, William James, a founder of modern psychology, concluded that consciousness is a process requiring both memory and the selective placing of attention. But for decades after, psychology was dominated by the objective study of behavior—that is, measuring responses to stimuli—rather than the examination of inner states, and insights like James’s were not translated into programs of scientific research. Now, however, with new techniques to simulate the brain and to examine it as it thinks, we might be able to understand consciousness on a scientific basis.

The makers of artificial beings are typically neither cognitive scientists nor philosophers, but aspects of the old mind–body query appear in what they do. One issue of immediate practical importance is the actual link between mind and body. Not that the linkage is always necessary: A body alone, or a brain alone, is enough for some purposes. The designers of programmed animatronic entertainment robots need only bodies that can be fully controlled without any built-in intelligence; researchers in AI would be delighted to produce a brain that shows a high level of intelligence without bodily attachments. But a fully functional artificial creature needs both brain and body, connected so that the brain controls the body and the body informs the brain, and bionic humans need linkages between their brains and artificial limbs or other devices.

THERE ARE NO EASY ANSWERS

Connecting the mental to the physical adds a layer of complexity. The engineering of such connections is the first mind–body problem for artificial beings, and in a way, the least troubling—not that the solutions are easy, but at least there can be agreement about the need to design appropriate interfaces between artificial brain and artificial body, or between a human brain and a mechatronic system (as defined earlier, a device that combines mechanical and electronic elements) such as a prosthetic limb. The human–mechatronic interfaces are the more difficult and involve medical considerations as well, but although there are practical and ethical issues, they do not seem to represent deep philosophical divides.

There are, however, profound differences of opinion on two other questions about artificial creatures that are linked to the mind–body quandary. They generate considerable controversy, and the answers might determine the eventual success of the entire enterprise of building artificial creatures. The questions are:

  • Can an artificial brain support a conscious artificial mind, as the human brain does human consciousness?

  • Is it necessary to embed an artificial brain in a body for the brain to become fully intelligent, functional, and perhaps conscious? As a corollary, might a synthetic body be enough to imbue an artificial mind with a high order of intelligence?

Both questions arise because in the recipe for an artificial being, which reads “one part physical, well mixed with one part mental,” we know little about the second ingredient compared to the first, and hardly know how to stir the ingredients together, because we do not know our own recipe—though we’ve sought it for a long time. One formula goes back to René Descartes in the seventeenth century. He made consciousness central when he stated, “I think, therefore I am,” and went on to reason that humans have a dual nature. People, he wrote, are like animals in that both are flesh machines built of matter, which is defined by its extension in three dimensions; but humans have an additional facet, mind, defined as the ability to think. What Descartes could not explain to anyone’s full satisfaction, however, was how the two categories interrelate, although he attempted to localize that interaction in the pineal gland.

The dualistic idea that human existence includes an intangible part still carries power in religious and spiritual traditions that hold that an immaterial soul survives the death of the body. And it carries enormous weight for each person. Each of us, looking within, feels that something is going on internally that has a different character than the physical operations of the body—call it soul, personal identity, or what you will, it is the core from which each of us gazes out into reality.

However, most contemporary cognitive and neural scientists would say that the mind is the result of physical processes in the brain and hence has a material basis. The Nobel Laureate Francis Crick, who codiscovered the structure of DNA with James Watson and Maurice Wilkins, represents this view. His 1994 book The Astonishing Hypothesis: The Scientific Search for the Soul opens with,

The Astonishing Hypothesis is that “You,” your joys and your sorrows, your memories and your ambitions, your sense of personal identity and free will, are in fact no more than the behavior of a vast assembly of nerve cells and their associated molecules.

While scientists accept that the mind arises from the material operations of the brain, this does not solve the classic Mind–Body Problem, but it does change its formulation. In modern terms, the question becomes, How can we understand consciousness in scientific terms? Or to put it more specifically, What is the exact nature of the link between physical and chemical activities in the brain and each person’s internal sense of consciousness?

This question has several answers of varying degrees of difficulty, as noted by David Chalmers, a philosopher at the University of Arizona. Some aspects of consciousness, such as the ability to choose among and react to external stimuli, are unquestionably susceptible to scientific explanation, though it will take years of effort to understand them. But the aspect that Chalmers calls the “really hard problem” is this: Why do we have a varied internal life at all? Every function of consciousness that supports the physical operations of the body would serve us equally well without these subjective experiences, and so, as Chalmers says, “it seems objectively unreasonable” that we should have them, and yet we do. No one knows why, and this is why people speak of the “mystery” of consciousness.

Although these are profound questions about our own nature, they are closely linked to AI and artificial beings because modern cognitive science is partly inspired by computational science. The exploration of machine thinking has provided significant and useful metaphors for human thought since the 1960s—not long after Alan Turing’s seminal 1950 paper—when psychologists and cognitive scientists began using computers to model human mental processes. Conversely, those who want to build machines that think are inspired by the science of natural thought, so the problem of consciousness is deeply important for both groups.

The basic issue is that although we know a great deal about the brain, we know far less about its intangible correlate, the mind. The brain, after all, is a working physical part of the body, like the liver or heart, whose physiology and functions can be studied. In a typical adult, it is a 1.3 kg (3 lb) mass of tissue that contains about 100 billion neurons and supporting structures. Like any other part of the body, it uses energy and requires nutrients. Through dissection and other techniques, we know its complex anatomy, from the cerebrum with its two walnut-like halves, to the brain stem that exits through the lower skull to become the spinal cord. We know the general functions of its parts, and we can identify areas that control bodily movements, process visual information, deal with language, and so on. We know the structure of neurons, and something about how they communicate among themselves and their interconnections in the brain, which can change as a result of experience.

Certainly, further insight is needed. That should include, for instance, fuller knowledge of neurotransmitters—the chemicals like serotonin that carry signals among the brain’s neurons by electrochemical means—and more extensive mapping of the brain’s functions, especially those like memory that seem to integrate information from different areas. However, scientists firmly believe that their understanding of the brain will steadily grow through the use of electroencephalography (EEG) and the study of the effects of brain damage, and especially through the new techniques of functional magnetic resonance imaging (fMRI) and positron emission tomography (PET) scanning. Both make it possible to observe something never seen before—the operations of a living, working brain.

But that’s the brain. The human mind, or human consciousness, is different. We know it is correlated with the brain, because if you cut off certain brain functions, consciousness flees, but we do not understand its nature and origins, largely because it is a subjective experience that is difficult to explore by the objective means that illuminate the brain. (In animals less developed than humans, “consciousness” might be limited to the ability to sense stimuli and respond directly to them. My use of the term goes beyond that baseline level to include human thinking, feeling, and self-awareness.)

The subjectivity of internal experience leads to what philosophers call the “problem of other minds”: in principle, we can never truly grasp the nature of anyone else’s inner life. In this view, though it is a chilling thought, we cannot be sure that other people have inner lives at all. They might be zombies—behaving like humans, but lacking internal experiences, including emotions and feelings. Regardless of this philosophical point, of course, we all go through life assuming that other people are much like us inside, but the idea of zombies is less far-fetched than we might think. Brain injuries can cause the loss of certain emotional reactions, and psychiatric practice recognizes zombie-like characteristics in some people, who are known as sociopaths. Their actions seem to be appropriate expressions of normal feelings, but they are only playacting, because inside they are devoid of compassion or empathy for others.

The problem of other minds illustrates the difficulty of unraveling consciousness by scientific means. As the neurologist Antonio Damasio puts it: “How can science approach interior phenomena that can be made available only to a single observer and hence are hopelessly subjective?” But now, it seems that brain activity can be made widely observable and linked to interior states through such means as PET and fMRI. We can begin to deeply explore what has been called the last frontier of neuroscience, and what the philosopher John Searle, of the University of California, Berkeley, declares to be “the most important problem in the biological sciences”; namely, “How exactly do neurobiological processes in the brain cause consciousness?” This question is equally important for the cognitive science of humans, and of artificial beings.

DUELING THEORIES

Despite much intense thinking about how we think, there is still no single theory to explain how the actions of an intricate neural array turn into the deeply felt sense of self we each carry, or that could form a blueprint for an artificial mind. Cognitive theorists, neuroscientists, psychologists, philosophers, and experts in AI all have their approaches, showing that the question has yet to be answered to everyone’s or even anyone’s full satisfaction. What most theories have in common is the attempt to show how neurons work together to give unified perceptions and thought processes, leading to a coherent sense of consciousness. In visual cognition, the mind’s need to bring together different aspects of a seen object into an integrated perception is called the “binding problem.” Some theories hold that consciousness arises from a greater, more inclusive binding process. Others consider it an emergent property, meaning that although it can be traced to neuronal behavior, no single neuron is conscious, nor can a simple sum of all the neuronal properties account for consciousness.

These theories cover a wide range—from the view that the mere operation of the parts of the brain constitutes consciousness, to the belief that consciousness arises from as-yet-unknown natural phenomena, to the extreme view that the human mind can never fully understand itself. The unsettled nature of the field, and the lack of more than the beginnings of hard data, is shown by the disputes among proponents of different theories, disagreements that often rest on assertions involving key words like “consciousness,” “intentionality,” and “meaning.” Because these words are hardly rigorously defined, the quarrels often represent no more than differences of opinion or interpretation, producing much waste heat and little useful scientific light. Nevertheless, there are nuggets of truth among these conflicting ideas.

The most startling view is that consciousness is illusory, or at least behaves very differently from our internal sense of it. Many people, whether philosophers and scientists or not, find this approach not only counterintuitive but repellent, because it violates cherished beliefs about personality and free will. In his 2002 book The Illusion of Conscious Will, the psychologist Daniel Wegner gives experimental evidence about the relation between a person’s sense of volition—which leads to a bodily action like reaching for a wine glass—and the neural impulse that actually moves the hand. The unexpected result is that the decision to move does not necessarily precede the motion. As Wegner puts it, “It usually seems that we consciously will our voluntary actions, but this is an illusion…. Our sense of being a conscious agent who does things comes at a cost of being technically wrong all the time.” He goes on to argue that our experience of conscious will nevertheless makes us feel that we are beings who can make moral choices, but his results tend to undermine bedrock assumptions about human choice and responsibility for our actions.

The cognitive theorist Daniel Dennett of Tufts University takes an even stronger view of consciousness as illusion, as articulated in his 1991 book Consciousness Explained, and other writings. According to Dennett, what goes on in the brain is distributed cognition, a complex pattern of events occurring at different times and at different physical sites in the neural array. Thought dispersed temporally and spatially is a far cry from Descartes’s idea that the center of the self resides in a single location, and eliminates the idea of a physical core for consciousness. Taking the argument further, Dennett believes that there is no central core of any kind for personhood. Self-consciousness, he says,

is that special inner light, that private way that it is with you that nobody else can share, something that is forever outside the bounds of computer science…. That belief, that very gripping, powerful intuition, is in the end, I think, simply an illusion of common sense … as gripping as the commonsense illusion that the earth stands still and the sun goes around the earth.

Instead, he says, “you can imagine how all that complicated slew of activity in the brain amounts to conscious experience … the way to imagine this is to think of the brain as a computer of sorts.” (Italics in the original.)

If Dennett downplays the strong internal sense of our own consciousness, the philosopher John Searle gives great weight to those same internal feelings. In his 1997 book The Mystery of Consciousness, Searle takes the experience of consciousness as a core reality precisely because it is an unmistakable interior event. His rebuttal of Dennett’s ideas is curiously reminiscent of Descartes’s “I think, therefore I am.” Searle writes,

But where the existence of conscious states is concerned, you can’t make the distinction between appearance and reality, because the existence of the appearance is the reality in question. If it consciously seems to me that I am conscious, I am conscious … it is just a plain fact about me—and every other normal human being…. (Italics in original.)

Searle does not use this perspective to build a theory of consciousness, but Francis Crick explores such a theory in detail. In The Astonishing Hypothesis and elsewhere, and with his colleague Christof Koch, he approaches the phenomenon through the binding problem in visual cognition. This particular function of mind draws on a large fraction of the brain, where certain groups of neurons deal with specific parts of what we see, such as color, movement, and the edges of objects. The mind brings these elements together to produce an integrated visual understanding that is an important part of our mode of thought. Using a variety of evidence, Crick concludes that binding of this sort is produced by neurons located in different and specific parts of the brain that fire in a synchronized way, on average 40 times a second. He does not claim that this conclusion solves the problem of consciousness, but believes that the full answer must begin with just this kind of consideration of enormous numbers of neurons operating together.

The Nobel Laureate neuroscientist Gerald Edelman, of Rockefeller University in New York City, and his colleague Giulio Tononi also consider the unified action of groups of neurons, most recently in their 2000 book A Universe of Consciousness: How Matter Becomes Imagination. Their theory draws on evolutionary development, which, they say, has formed our brains to process information more powerfully than human-made computers can. A kind of Darwinian survival of the fittest affects individual brains as well, through neuronal group selection: As a brain develops, the groups of neurons that survive are those that respond well to stimuli. They represent conceptual categories, and through the process of “reentry,” constantly trade information back and forth as if the brain were talking to itself.

Edelman and Tononi conclude that interactions between two particular structures in the brain are mostly responsible for consciousness: the cortex or gray matter—the outer layer of neurons that deals with sensory impulses and higher mental functions—and the thalamus—a part of the brain associated with emotion. Moreover, there are two levels of consciousness. Primary consciousness is perceptual awareness of the world in the present, but it is not consciousness of self. That level comes with higher-order consciousness, which depends on language and on social interactions and which has knowledge about the past and future as well as the present; it is what humans add to their primary consciousness.

The physician and historian of ideas Israel Rosenfield also believes in the importance of coherence over time, the sense of self we maintain as a continuous internal presence throughout our lives, or at least our adult lives. As William James saw more than a century ago, this long-term coherence is a function of memory, and Rosenfield emphasizes that “consciousness and memory are in a certain sense inseparable, and understanding one requires understanding the other.” But how does this continuous memory develop? According to Rosenfield, memory is created because the brain resides in a body:

My memory emerges from the relation between my body … and my brain’s “image” of my body (an unconscious activity in which the brain creates a constantly changing generalized idea of the body …). It is this relation that creates a sense of self.

None of these approaches is a definitive explanation of consciousness that is supported by complete scientific evidence. It can be argued also that none truly confronts the hard problem of subjective experience and why we have it, at least not within the framework of what we know about the brain. Edelman and Tononi touch on this issue when they write,

while we can construct a sensible scientific theory of consciousness … that theory cannot replace experience: Being is not describing. A scientific explanation can have predictive and explanatory power, but it cannot directly convey the phenomenal experience that depends on having an individual brain and body.

Some thinkers feel that explanations are beside the point anyway, believing that our mental functions—such as using categories to make sense of the world—are innate and cannot be approached by the tools of cognitive science. At least one thinker, however, believes that an explanation is possible, but only by drawing on new phenomena. That tack is taken by the Oxford University mathematical physicist Roger Penrose, as expressed in the subtitle of his 1996 book Shadows of the Mind: A Search for the Missing Science of Consciousness, and in his earlier writings.

Penrose does not deal much with neurons and neurobiology. He begins with a famous mathematical proof called Gödel’s theorem. This result, derived by the Austrian-born mathematician and logician Kurt Gödel in 1931, is of prime importance in modern mathematics. It proves that any formal system—such as the set of axioms that defines classical geometry, or a computer program—can logically generate statements that are true, but that cannot be proven within the system. Gödel’s proof implies that there are true mathematical results that cannot be derived by computers, which operate by strict logical rules, but can be derived by humans.

Thus, concludes Penrose, the human mind supplies something extra, something “noncomputable” that lies beyond what computers can do. This quality, Penrose asserts, arises from phenomena at the microscopic quantum level, where everyday laws of cause and effect are replaced by laws of probability. He suggests that a new kind of quantum behavior in the brain, perhaps “quantum gravity,” provides this essential element of noncomputability—although the details of this novel quantum physics are as yet unknown. But neurons are too big to follow the quantum laws, and so Penrose speculates that consciousness arises in smaller structures in the brain called microtubules. Because Penrose hypothesizes that consciousness comes from new natural phenomena without any evidence that these exist, his ideas have been much criticized.

PEOPLE THINK, BUT DO DIGITAL CREATURES?

Apart from the merits or deficiencies of Penrose’s approach, it illustrates one of two main corollaries that accompany theories of consciousness; namely, that machines can never think or be conscious in the way that people are—accompanied, of course, by the conflicting belief that machine consciousness is possible. In the early days of AI, the answer seemed simple. The pioneering AI researchers considered thinking to be the processing of information, which is, in turn, the manipulation of symbols; hence, minds are simply systems for processing symbols. As it happens, our own minds are based on brains made of neurons, but the physical nature of the processor is unimportant. Thus, whether the “brain” consists of billions of living nerve cells, a stack of silicon chips, or for that matter, one of Isaac Asimov’s positronic units, the important thing is that symbols are meaningfully manipulated. When that happens, thinking, and perhaps even consciousness, occurs.

This view is often called, semijocularly, GOFAI—“good old-fashioned artificial intelligence”—and is now recognized as falling short of a complete approach to machine intelligence. Decades ago, as computer programs began to manipulate symbols in meaningful ways such as carrying out mathematical proofs, proponents of GOFAI felt we were well on our way toward full AI. But as understanding grew, we came to realize that GOFAI omits some aspects of cognition—for instance, the sensory experience of smell—which might not be represented by words or other symbols inside our minds.

Today, with AI and cognitive science far more advanced, and theories of consciousness abounding, there is ammunition for those who believe that machines can think and for those who don’t. In Daniel Dennett’s view, a human mind that is thinking is running what amounts to a computer program that processes information. To Dennett, this scenario opens the door to machine thought. He claims that “a computer that actually passed the Turing test would be a thinker in every theoretically interesting sense,” and adds “I do think it’s possible to program self-consciousness into a computer.” But Roger Penrose would insist that computers can never do all that human minds can, nor even simulate those activities. John Searle also believes that it takes more than mechanical computation to constitute thinking. He calls the belief that computation is the same as thinking “Strong AI,” and rejects this in favor of “Weak AI”—while computers can simulate human thought, the simulation of thinking is not necessarily thinking.

In 1980, Searle gave what is probably still the best-known rebuttal to Strong AI, the “Chinese Room” scenario, which emulates how a computer works. Imagine that you are asked to answer questions presented to you in Chinese, although you speak only English. You sequester yourself in a room containing many tiles marked with Chinese symbols (the database) and a book of rules written in English (the program). Questions, written in Chinese, are presented through a small slot (input). You (the CPU, central processing unit) match the incoming Chinese characters to entries in the book and then manipulate the Chinese character tiles as the book directs. That leads to new Chinese characters, the correct answers to the questions, which you present to the world through another slot (output).

The heart of Searle’s contention is that although this process enables you to obtain correct answers, in no way do you understand Chinese as you obtain those answers. The distinction is between what a computer does, which is to manipulate formal symbols like Chinese characters, and what our minds do, which is to add meaning to the symbols. Hence, concludes Searle, although his hypothetical computer passes the classic Turing test administered in Chinese, “programs are not minds,” and a computer or robot can never be conscious.
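The mechanics of the room can be caricatured in a few lines of code: a rule table maps input symbol strings to output symbol strings, and the “operator” applies the rules without any grasp of what either set of symbols means. This is only an illustrative sketch; the phrases and the rule table are invented stand-ins, and a real rule book would have to cover vastly more cases.

```python
# Toy Chinese Room: the rule book is a lookup table (the "program"),
# and the operator mechanically matches input symbols to output symbols.
# The example phrases are hypothetical stand-ins for Searle's scenario.
RULE_BOOK = {
    "你好吗": "我很好",          # roughly: "How are you?" -> "I am fine"
    "天空是什么颜色": "蓝色",    # roughly: "What color is the sky?" -> "Blue"
}

def operator(question: str) -> str:
    """Apply the rule book to the incoming symbols and emit the prescribed
    output symbols -- correct answers, but with no understanding involved."""
    return RULE_BOOK.get(question, "不知道")  # fallback: "don't know"

print(operator("你好吗"))  # emits the symbols 我很好
```

The operator function produces answers that would satisfy an outside questioner, which is exactly Searle’s point: fluent symbol manipulation, by itself, carries no comprehension of Chinese.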

Many serious objections have been raised to the Chinese Room. One counterargument holds that, whether the person in the room understands Chinese or not, the system as a whole—database, CPU, and so on—does. Other reactions pit the scientific stance against the philosophical one, a common theme in consciousness studies (in my opinion, the answers will come from science, but the philosophical questions are invaluable in pinpointing the issues). Dennett, for instance, warns that the Chinese Room acts to dissuade people from imagining in detail how proper software design could engender machine consciousness, in that it makes a flawed analogy that manipulates our intuitions. Disputes like these indicate that expectations for machine consciousness still rely more on opinion than on fact. But why should this be surprising? Apart from some conscious functions we share with animals, our best and sole model for an artificial mind operating at a high level is the human mind itself—and we do not know that very well.

To be realistic, the question “Can machines think?” is of limited pragmatic interest at this point. We are only in the earliest stages of creating machine intelligence; meanwhile, useful creatures are being created without their builders taking a stand or caring whether they are conscious or “really” think. Looking to the future, however, many researchers believe that machine consciousness will be realized. Some, such as the roboticist Hans Moravec of Carnegie Mellon University, adopt the visionary view that artificial minds will surpass ours. In his 1999 book Robot: Mere Machine to Transcendent Mind, Moravec predicts that “Fourth-Generation Universal Robots” will be available around the year 2040, with “human perceptual and motor abilities and superior reasoning powers,” and suggests that we humans are “parents [who] can gracefully retire as our mind children grow beyond our imagining.”

Whether or not this particular prediction is correct, it is true that as artificial brains and creatures become more capable and enter human society, the question of their consciousness becomes more pressing. The practical reason to be concerned is that unless the being is truly conscious rather than only seeming so, it might make faulty decisions—perhaps dangerous ones—in dealing with humans. To know that misapplication of its strength could harm a human, the artificial being might need to develop empathy, through the sharing of such human feelings as the sensation of pain; otherwise, it might become a high-tech sociopath.

From the human perspective, there is a moral issue as well, because once an entity crosses a certain threshold of sentience, we enter into a different relationship with it. No one hesitates to kick a rock,
but some of us balk at uprooting a plant; most people who gladly swat a fly would never hurt a cat or dog. Similarly, we would feel differently toward a machine without a shred of consciousness than toward an artificial being we know to have inner feelings.

And finally, there is a reason to pursue the possibility of artificial minds that carries broad scientific value: By contemplating what artificial consciousness means, and from attempts—however ill-defined and halting—to build creatures with minds, we learn about our own minds. In the eighteenth century, Jacques de Vaucanson hoped to build a synthetic human body so detailed that it would teach us about our own bodies. Now we have a similar possibility for our minds.

I AM, THEREFORE I THINK

In contemplating the possibility of an artificial being with an artificial mind, we must recognize that the mind is contained in a real, physical body. Many ideas and debates about machine thinking assume that it arises as a disembodied intelligence within a computer. Artificial creatures, however, are different. They need to think, yes, but that ability must be coupled to interaction with the world: sensing it in various tangible forms rather than symbolically, assessing that flow of data, and deciding how to respond with physical action. The decision can be direct and immediate, though not necessarily simple, as in a robot choosing where to put its feet so as to walk in a given direction. At higher levels, the sensory input, processing, and decision making might reach the sophistication of navigating through a complex environment, or conversing with a person in a human way—that is, passing the Turing test, not as a presence hidden behind a screen, but by actually being there, to listen and speak.
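The cycle just described (sense the world, assess the flow of data, respond with physical action) can be sketched schematically. This is a hypothetical illustration, not any real robot's control code; the sensor reading and the stride rule are invented:

```python
# Hedged sketch of the sense-assess-act cycle; every name and the terrain
# rule below are invented for illustration, not drawn from a real robot API.
def control_step(read_sensors, choose_action, actuate):
    percept = read_sensors()          # sense the world in tangible form
    action = choose_action(percept)   # assess the incoming flow of data
    actuate(action)                   # respond with physical action
    return action

# A walker picking its stride from a terrain reading:
chosen = control_step(
    lambda: {"slope": 0.12},
    lambda p: "shorten_stride" if p["slope"] > 0.05 else "normal_stride",
    lambda a: None,                   # stand-in for motor commands
)
```

Even this direct, immediate kind of decision is a closed loop through the body and the world, which is what distinguishes it from disembodied symbol processing.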

In short, artificial beings are embodied intelligences. To some researchers, that mind–body association is the key to making fully successful creatures. The difference between this approach and approaches based on disembodied intelligence remains controversial. It is why Rodney Brooks’s construction of Genghis, his legged robot that learned to walk by using a distributed, reactive intelligence rather
than a central symbol-oriented intellect, was revolutionary. Genghis’s success challenged approaches such as that used for Shakey, the prototype for a proposed battlefield unit, which proved an unworkable example of GOFAI. Now the idea of embodiment is at the core of one approach to the design of intelligent mobile beings.

Rodney Brooks’s experience with Genghis, Cog, and other robots has made him a leading proponent of the significant interaction between synthetic body and artificial mind. His beings are built with two central principles in mind. One is situatedness, meaning (as Brooks defines it),

the creature or robot is … embedded in the world … [it] does not deal with abstract descriptions, but through its sensors with the here and now of the world, which directly influences the behavior of the creature.

The other is embodiment, meaning that,

the creature or robot … has a physical body and experiences the world, at least in part, directly through the influence of the world on that body.

As examples, Brooks points out that a computerized airline reservation system is situated but not embodied: It deals with the outside world, but only by means of messages. An assembly-line robot that spray-paints cars, however, is embodied but not situated: It has a physical presence that accomplishes a real task, but makes no judgments about the cars it paints, and is unaffected by them, simply repeating the same actions over and over.
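Brooks's two properties are independent axes, which his own examples make clear. As a minimal sketch (my framing, with invented names, not anything drawn from Brooks's work):

```python
# Minimal sketch of Brooks's two independent properties; the Agent class
# and its instances are my invention for illustration only.
class Agent:
    def __init__(self, name: str, situated: bool, embodied: bool):
        self.name = name
        self.situated = situated   # deals with the here and now of the world
        self.embodied = embodied   # has a physical body the world acts upon

reservation_system = Agent("airline reservations", situated=True,  embodied=False)
paint_robot        = Agent("assembly-line painter", situated=False, embodied=True)
genghis            = Agent("Genghis",               situated=True,  embodied=True)
```

Only a creature in the last category, with both properties, fits Brooks's notion of an intelligence grounded in its body's encounter with the world.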

Brooks foresees a situated robot with a well-equipped body that could develop a conceptual understanding of the world in the same way we do. In 1994, he proposed that a humanoid robot with capabilities including vision, hearing, and speech, and the ability to physically manipulate objects, would “build on its bodily experience to accomplish progressively more abstract tasks.” This possibility is supported by ideas from cognitive science, such as Israel Rosenfield’s approach, which gives great weight to the physical body in determining memory and consciousness.

The cognitive scientists George Lakoff and Mark Johnson are even more specific. In their 1999 book Philosophy in the Flesh: The Embodied Mind and Its Challenge to Western Thought, they postulate
that the high-level functions of mind, such as language, begin as metaphors for how our bodies interact with the world. “The mind is inherently embodied,” they write, adding, “Thought is mostly unconscious. Abstract concepts are largely metaphorical.” Reason itself, they believe, is intimately connected with our physical nature:

Reason … arises from the nature of our brains, bodies, and bodily experience … the very structure of reason itself comes from the details of our embodiment. The same neural and cognitive mechanisms that allow us to perceive and move around also create our conceptual systems….

But although Genghis learned to walk, and Brooks’s robot, Cog, seemed alive when it turned toward a visitor, embodied intelligences have yet to demonstrate that they have developed higher functions of mind operating at abstract levels. Numerous questions remain about this approach. The pioneering AI researcher Marvin Minsky, for instance, has called emphasis on robots “unproductive” and “bad taste on the part of my fellow scientists,” adding,

in the 1950’s and ’60’s … we found, OK, you can build a robot and have it walk around the room and bump into things and learn not to, but we never got any profound knowledge out of those experiments.

Despite such sharp differences of opinion, researchers continue to attack the mind–body problems for artificial beings on many fronts. Some research efforts pursue the pragmatic goal of developing operational creatures; others aim deeper, hoping to build fully conscious beings. Mind–body considerations apply to bionic humans or cyborgs as well: consider the differing subjective reactions reported by recipients of cochlear and brain implants, some of whom are troubled by a sense of isolation or strangeness and some of whom are not. There is evidence, too, that neural implants cause actual changes in the brain and in the way it perceives the body. This is a function of the brain’s plasticity, the reshaping of its neural arrangement by external influences. The changes caused by a neural implant that controls an artificial limb or external device probably help incorporate that nonliving addition into a person’s body image. Still, if altering people from fully natural to partly artificial literally changes their
minds, that presents another mind–body problem, one with potentially serious ramifications.

The hard problems of consciousness remain hard. The debates over mind, thought, and consciousness might continue for a long time or might never be resolved, either for ourselves or for artificial beings. For our own constructed creatures, suggests Rodney Brooks, the only answer we might be able to trust is the one we trust for ourselves:

Perhaps we will be surprised one day when one of our robots earnestly informs us that it is conscious, and just like I take your word for your being conscious, we will have to accept its word for it. There will be no other option.

However, although the full mind–body recipe remains unknown for us and our artificial kin, a great deal of progress has been made on the bodily ingredient, as the next chapter shows.
