The development of advanced artificial beings and of bionic humans is well under way. The pioneering efforts of roboticists, neuroscientists, and other researchers are creating a powerful cross-disciplinary technology for the coming century, a rich medical–technical environment that might lead to autonomous artificial beings and to enhanced human bodies and minds. This technology is actively driven by a variety of motives: scientific curiosity and the technological imperative, benefits for human health and longevity, and applications in areas from industry to space exploration to warfare.
At the moment, industrial robots dominate: The latest comprehensive survey World Robotics 2002, issued by the United Nations Economic Commission for Europe, puts their world population at 760,000, projected to soon reach a million. That same report, however, also predicts increased use of robots in areas such as medicine and security, and explosive growth for household and entertainment robots, with a hundredfold increase in units sold between 1999 and 2005.
Despite this growing activity, no one has yet made a completely autonomous being, or one that seems consistently and convincingly
alive, or a bionic implant that improves human strength or wit, or a true cyborg, a living brain in a mobile artificial body. But there is no doubt that existing technology will carry us further along these paths. At the physical level, the creation of walking robots has taught us a great deal about mechatronics and body construction. Devices for implementing artificial senses, from light and sound detectors to wireless receivers, are also well developed and will only get better. Many issues about the physical capabilities of artificial beings—notably, how to extend their battery-powered lifetimes so that they don’t need frequent recharging—remain, but we do have clear directions for body improvement that apply known principles without having to invent new concepts.
Neither artificial bodies nor synthetic senses can work meaningfully without guidance from a brain, a mind, or a developed cognition. Here, too, progress will come through the refinement and evolution of the existing approach, which is to program digital computer chips to simulate what the brain does. Every increase in hardware speed and capacity, and in the cleverness of the software, makes artificial beings more effective, just as the addition of a third chip to Sony’s QRIO robot enormously enhanced its speech. But a deeper understanding of our own brains, leading to the construction of better synthetic ones, might be needed to bring those silicon brains to a new plane—truly high intelligence, and possibly silicon-based consciousness.
Human–machine connections have bright prospects as well. The potential medical benefits are clear. We will see rapid progress in this area, from improved cochlear implants for the deaf to more effective visual replacements for the blind and better BMI technology for the paralyzed, perhaps leading soon to direct neural control of an artificial limb. These achievements form a basis for the next level, which would go beyond healing to extend human mental and physical abilities—for instance, by connecting a human brain directly to the Internet or to a powerful computer, or permitting the brain to directly control a complex device such as a weapon or an artificial body. Because much of the current research in these areas is funded by the Department of
Defense, it is possible that scientists have already made dramatic progress that is being kept secret, but as far as the open literature shows, we are not close to achieving these science-fictional possibilities. However, serious research along these lines is just beginning.
Neither the building of artificial beings nor the creation of hybrid humans is just a matter of getting the technology right. Even if supernatural fear of synthetic life is long gone from our psyches, we are still concerned about what this technology means for people, and we need to answer some questions that have profound implications: What is our purpose in making artificial or hybrid beings? What are our ethical responsibilities toward them, and theirs toward us? Do we have anything to fear from intelligent and powerful nonhuman beings—if not the violent overthrow of humanity portrayed in Capek’s R.U.R., then more subtle damage such as debasing human worth or causing economic harm? Is a hybrid being, part human but perhaps mostly machine, still a person, or something else, and can a fully artificial construction be a person? If we learn to enhance human health or mental ability by implantation, who should receive these benefits?
These questions have different answers in different societies: in Japan, for instance, robots are developed primarily for civilian use, whereas in the United States military applications of robotics play a large role. The answers, like the beings themselves, thus reflect light back onto our own nature. Many of these issues will not arise, however, until artificial beings become more capable than they are now, and that means becoming more intelligent.
Some researchers are confident that digital chips will eventually attain the full power of the human brain, at least as judged by a quantitative measure—making the chips operate so fast that they match the speed of the brain’s extraordinary parallel processing. As we have seen, Hans Moravec estimates that a microprocessor running at 100 million MIPS would be as capable as the brain. In The Age of Spiritual Machines, the inventor and computer visionary Ray Kurzweil considers the same
question. He differs from Moravec in his estimate of just how much processing the human brain does, but both men predict that the present rate of chip development will take us there in 20 years or less. Kurzweil puts it dramatically, predicting that by the year 2019, a mere $1,000 will be enough to buy the computing power of the human brain.
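Both forecasts rest on simple exponential extrapolation. A back-of-the-envelope sketch makes the reasoning concrete; the starting speed and the doubling period below are illustrative assumptions, not Moravec's or Kurzweil's published figures:

```python
# Back-of-the-envelope Moore's-law extrapolation, in the spirit of
# Moravec's and Kurzweil's forecasts. The starting speed and the
# 18-month doubling period are illustrative assumptions.
BRAIN_MIPS = 100e6        # Moravec's estimate: 10^8 MIPS for the brain
start_year = 2002
start_mips = 10_000       # assumed speed of a ~$1,000 processor in 2002
doubling_years = 1.5      # assumed doubling period for chip performance

year, mips = start_year, start_mips
while mips < BRAIN_MIPS:
    year += doubling_years
    mips *= 2

print(f"~$1,000 of hardware reaches {BRAIN_MIPS:.0e} MIPS around {year:.0f}")
```

With these assumed numbers, fourteen doublings close the ten-thousandfold gap in about two decades, which is why modest changes in the assumed brain capacity or doubling rate shift the predicted date by only a few years.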
Even if this quantitative success is achieved, will it produce a brain that can sustain the full equivalent of a human mind and consciousness? To many builders of artificial beings, this is not a key question. The immediate goal of those researchers is the construction of beings that behave in ways that are or appear to be intelligent, emotional, social, and whatever else is useful, without insisting that the beings think like humans or worrying about “real” emotions within their silicon brains. Faster computation will accomplish that much, if not through evolutionary improvement, then through advances such as molecular-level or quantum computing.
No matter how rapid the computation, beings based on computer-style processing might end up thinking like…well, computers. This is not to say they won’t be effective; in fact, they might well surpass humans in many ways. If there is to be a next stage, however, where an artificial being acts with full autonomy, shows intrapersonal intelligence, or looks you in the eye and announces “I’m conscious,” we might need to consider qualitatively different methods of constructing artificial brains.
Still, the first step toward more capable beings is to extend the artificial brains we already have, which are based on programmed digital chips. These brains are showing signs of Howard Gardner’s multiple intelligences, but except for logical-mathematical intelligence, the artificial beings controlled by these brains operate mostly at the level of a young child and have yet to achieve a meaningful degree of intrapersonal intelligence. Nevertheless, there are steps we can take to make these beings more capable and more complete.
One step is the improvement of their bodily-kinesthetic intelligence. It took considerable time and work for that kind of intelligence to reach its first major success with the construction of the
walking ASIMO robot and others. The next kinesthetic goal should be further development of autonomous grasping and manipulation. Some robots perform these tasks at a simple level, but only with pincerlike hands. And although some units, such as NASA’s Robonaut, have dexterous fingers-and-thumb hands, they require a human operator. Complete hand intelligence would be an important step toward more useful beings; also, manual dexterity would give these embodied intelligences a way to explore and shape the world, developing their brains in the process. Such an extension of bodily-kinesthetic ability requires better spatial intelligence and has to incorporate object recognition, another ability that falls under spatial intelligence and is now under development.
Artificial musical intelligence is already here, and not only in the singing and dancing that the Sony QRIO robot performs—some computers are already composing music. Why not add this ability to QRIO? Of the remaining three categories of intelligence, the interpersonal type will also develop as artificial systems become better at distinguishing human emotions as expressed in the face and voice, and responding in humanlike ways. But linguistic intelligence and intrapersonal intelligence—or self-awareness—raise special issues.
Linguistic intelligence is exceptionally significant in evaluating artificial beings because of its role in the Turing test. The ability to use language might lie at the pinnacle of human intellectual functions; in fact, some theorists hold it to be essential for our very thoughts. Thinking and self-awareness can be seen as a process of narration and response that we carry on inside our minds, a dialogue in an internal voice that is the core of the “I” within each of us.
The Turing test recognizes the importance of language, and so did those pioneers of AI who in the 1960s and 1970s tried to emulate important parts of human intelligence on computers. Linguistic intelligence entered into their attempts to make computers communicate in natural human language rather than programming language, and to translate from one human language to another. It quickly became apparent that these are extremely knotty problems, largely because the meaning of words in human language is often ambiguous and depends on context. The literature is full of amusing misreadings by machines. In his book Mind Matters: Exploring the World of Artificial Intelligence, James Hogan tells how in one project in the 1960s, the metaphorical phrase "Time flies like an arrow," perfectly clear to you and me, was sadly misunderstood by a computer; one of its interpretations, for instance, was "Time flies—a kind of fly—are fond of an arrow."
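The combinatorics behind such misreadings are easy to demonstrate. In this toy sketch, each word of the sentence is assigned the grammatical roles it can plausibly play (the mini-lexicon is made up for illustration), and the machine's problem is that every combination of roles is a candidate reading:

```python
# Toy illustration of why "Time flies like an arrow" is hard for machines:
# with a small, made-up lexicon, many part-of-speech readings compete.
from itertools import product

# Hypothetical mini-lexicon; each word may play several grammatical roles.
LEXICON = {
    "time":  ["noun", "verb", "adjective"],   # time of day / to time a race / time flies (a kind of fly)
    "flies": ["verb", "noun"],
    "like":  ["preposition", "verb"],          # like an arrow / to be fond of
    "an":    ["determiner"],
    "arrow": ["noun"],
}

sentence = "time flies like an arrow".split()
readings = list(product(*(LEXICON[w] for w in sentence)))
print(len(readings), "possible tag sequences")   # 3 * 2 * 2 * 1 * 1 = 12
```

Hogan's computer chose one of these combinations: "time flies" as a noun phrase and "like" as a verb. Picking the human reading requires context and world knowledge, not just a dictionary.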
Efforts to enable computers to be programmed in natural language and to translate human languages continue, although neither capability is yet perfected. With large speech databases and fast processors, machine conversation using word recognition and synthesis is becoming routine in such applications as travel booking. What works over a telephone also works in a mobile unit, and so the Sony QRIO robot has language capability. But these systems can hold only limited conversations, a far cry from the generalized and diverse humanlike response the Turing test is meant to uncover.
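The gap between a scripted dialogue system and open conversation is easy to feel in code. The classic approach, sketched here with a made-up two-rule script, matches patterns the author anticipated and has nothing to say about anything else:

```python
# Minimal ELIZA-style responder: a handful of hand-written patterns.
# Anything outside the script falls through to a stock reply --
# exactly the brittleness that separates such systems from the
# open-ended conversation the Turing test demands.
import re

# Hypothetical rule set for a travel-booking application.
RULES = [
    (r"\bbook (?:a )?flight to (\w+)", "One flight to {0}. Which date?"),
    (r"\bhello\b",                     "Hello! How can I help you?"),
]

def respond(utterance: str) -> str:
    for pattern, template in RULES:
        m = re.search(pattern, utterance.lower())
        if m:
            return template.format(*m.groups())
    return "I'm sorry, I didn't understand that."   # the script runs out

print(respond("Please book a flight to Tokyo"))   # One flight to tokyo. Which date?
print(respond("What do you think about art?"))    # I'm sorry, I didn't understand that.
```

A travel-booking system needs only the first kind of exchange; a Turing-test contestant is judged precisely on the second, where the script runs out.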
In 1950, Alan Turing predicted that a computer would pass his test by the end of the twentieth century, but we are still far from developing a synthetic intelligence that can persuade us of its own personhood. The best-known attempt to determine how close we are to this goal is the yearly competition sponsored by Hugh Loebner, a hardware manufacturer who developed an interest in the problem and offers a substantial prize for the computer program that best meets the Turing criterion. Rather than using voice communication, these Loebner Prize events test linguistic intelligence by using keyboards for the human-computer interaction, as Turing envisioned.
The Loebner competitions began in 1991 and initially attracted luminaries of the AI world. But the conversational ability of the artificial minds was disappointingly poor in those first years and has not improved much since; Loebner calls the level of performance "gruesome." Some of the AI community has repudiated the competition, protesting that it is conducted in a way that renders
it scientifically useless. Some also reject the validity of the Turing test for judging intelligence at all. But the test clearly has meaning, and enormous historical, intuitive, and emotional appeal. It is hard to avoid the conclusion that if the experts could have created a worthy conversational partner, they would have done so, and happily announced it—if not through the Loebner event, then in some other venue.
So the linguistic intelligence required to pass the Turing test remains elusive. Ray Kurzweil thinks the test will be passed by the year 2029, perhaps by one of his $1,000 equivalents to the human brain, but gives few specifics. Two avenues, however, are natural to pursue. One exploits the fact that spoken communication carries more information than written text: reducing the ambiguity of human speech through prosodic analysis, which, as we have seen, is already under development.
The second possibility, which has implications for artificial intelligence in general, not only its linguistic component, is to enormously expand the databases an artificial being needs for intelligent conversation. One necessary database is the speech corpus, which determines how many words the being recognizes and can say; the other is a database of general knowledge, essential to converse with humanlike diversity. Both can now be established at huge sizes, terabyte upon terabyte, without storing them within every artificial being, because they could be accessed from the Internet by any being with a high-speed wireless connection.
According to some researchers, a database of general knowledge is an absolute prerequisite for artificial intelligence in its broadest sense. As Roger Schank and Lawrence Birnbaum of Northwestern University have put it,
The truth is that size is at the core of human intelligence…. In order to get machines to be intelligent they must be able to access and modify a tremendously large knowledge base. There is no intelligence without real, and changeable, knowledge.
Establishing sufficiently large databases, however, is still only the beginning: We do not yet know how to make a synthetic being hear any human comment and find among its databases a response that is relevant and perhaps even passionate or humorous; or, more challenging, make the being capable of initiating and leading a conversation as well as responding to what a human says.
The closer digital beings come to passing the Turing test, the better they will communicate with us, and if language is truly central to thinking, the linguistic ability that satisfies the Turing test might also be necessary for their own self-awareness. But whether or not that inner voice is essential, the human brain remains our only model for the seat of self-awareness, and its most striking feature is its complex interconnectivity. That is shown at the physical level by the convoluted structure of the brain, which reflects stages in its evolutionary history; at the neuronal level by the multitude of connections between a given nerve cell and others; and at the operational level by the elaborate network of connections and shared functions among subsystems such as the cortex and the limbic system.
This intricate arrangement is distinctly different from the linear pipeline by which computers manipulate data, suggesting that in addition to simulating the brain by programming digital chips, we might need to emulate it by using appropriate hardware, but we cannot emulate what we do not fully understand. What Marvin Minsky wrote nearly two decades ago in Society of Mind still applies:
Most people still believe that no machine could ever be conscious, or feel ambition, jealousy, humor, or have any other mental life-experience. To be sure, we are still far from being able to create machines that do all the things people do. But this only means that we need better theories about how thinking works.
Because of new techniques such as brain scanning, we know more about the mind than we did then; even so, the unexplored territory is enormous. We strongly suspect, however, that the intricacy of the brain’s internal interactions defines the very fabric of thought and self-perception. As Minsky puts it: “a human intellect depends upon the connections in a tangled web—which simply wouldn’t work at all if it were neatly straightened out.”
Gerald Edelman’s theory and several others ascribe consciousness and the power of thought to those complex interactions among the brain’s substructures. This points to the true importance of giving an artificial brain internal interactions such as arise between rational and emotional subsystems; namely, to copy the “tangled web” that seems to make human thought what it is. If this can be done, the next generation of artificial minds might surpass computer-bound thinking by using different types of electronic brains—be they systems that carry out parallel processing or neuromorphic chips, adaptive chips, or other architectures that follow the brain’s peculiar nature.
In addition to new designs for electronic brains, we might need something more—namely, a new philosophy—to create fully capable synthetic beings. At present, an artificial brain consists of processor and memory chips whose capabilities are firmly defined and fixed, and software—also fixed—that guides the hardware. But this is not how the human mind works. Although a newborn baby has limited abilities, it has one crucial set of capacities: It can observe, interact, remember, and learn about the world. These efforts change the baby's brain through the plasticity of its neurons, and over time, the child matures into full intelligence and personhood. The child's interaction with adults plays a large role in this because it encourages adults to react to the child and teach it, and the social contact itself is necessary for selfhood to develop. Physical interaction is equally important. Just watch a tot testing reality as it learns to walk or throws food onto the floor to see what happens.
Brain plasticity could be emulated by appropriate hardware, such as adaptive chips. But knowledge of the world must be fed into those chips, and so artificial beings might actually need to grow into full consciousness and personhood—or to put it another way, to develop their varied intelligences—by engaging reality and socializing with people. From the psychological viewpoint, the social part of the interaction is essential. Howard Gardner writes that “highly intelligent
computer programs” already exist, but considering the question of whether computers can develop personal intelligences, he comments:
I feel that this is a category error: One cannot have conceptions of persons in the absence of membership in a community with certain values, and it seems an undue stretch to attribute such a status to computers. However, in the future, both humans and computers may chuckle at my shortsightedness.
James Hogan makes a similar point in Mind Matters. The difference between a human telling him “I feel the same things you do,” and a machine making the same statement, is that,
When I’m talking to a human, who I know is made like me, grew up like me, and has the same kind of accumulated cultural experience as me, I have little hesitation in accepting that the person probably feels things very much they [sic] way I do. I’m less easily persuaded when none of these things apply.
Gardner’s and Hogan’s remarks suggest that the best hope for the realization of truly intelligent, self-aware beings is to design them not to operate at full mental capacity the instant the power is turned on, but rather to learn as they interact with the world. Cynthia Breazeal’s Kismet is an early example of a robot that deliberately follows the model of a child growing with the aid of encouraging adults. Physical interaction is equally important, to explore the world and learn from it. This is why Rodney Brooks thinks that an embodied artificial intelligence—that is, a synthetic brain controlling a body that deals with physical reality—can develop higher mental functions, an idea that he continues to investigate with the Cog robot.
The idea of an artificial being growing fully into itself is no recent invention. Alan Turing espoused this approach in his seminal 1950 paper "Computing Machinery and Intelligence," where he wrote, "Instead of trying to produce a programme to simulate the adult mind, why not rather try to produce one which simulates the child's? If this were then subjected to an appropriate course of education one would obtain the adult brain," and went on to propose how that education should proceed.
There are older antecedents as well. Frankenstein’s Being, you might recall, keenly felt his lack of nurturing and tells Victor: “No
father had watched my infant days, no mother had blessed me with smiles and caresses." The Being displays a lack of social education, whereas the android Yod in Marge Piercy's He, She and It shows the value of this kind of interaction; Yod's connections with its maker and others give it cultural knowledge and heightened intelligence, and diminish its violent tendencies. In the film 2001, the computer Hal alludes in its "dying" speech to having been taught like a young child, although the idea is not otherwise developed. (The film A. I.: Artificial Intelligence features a child android, but the android does not change in the course of the story, except perhaps for developing the desire to become a real boy.)
Mathematician Alan Turing, fantasy writers, and modern robotics engineers all come to a fascinating convergence here, illustrating the power of imaginative interdisciplinary thinking in the science of artificial beings. But important questions, not addressed in fantasy, remain: If a digital being can be made fully conscious only by having humans guide it as it grows up, what is the incentive to make such a creature? Could there be any value in investing time and effort for what might be a long, drawn-out process, which Turing estimated could take as much work as raising a real child?
If the artificial being starts with a newborn human child’s ability to learn, but can do so at a far faster pace, bringing up the digital baby might be a matter of weeks, not years. Marvin Minsky has put it this way: “Once we get a machine that has some of the abilities that a baby has, it may not take long to fill it up with superhuman amounts of knowledge and skill.” Still, individual mentoring seems unfeasible and uneconomic for workaday robots meant only to help around the house. The main justification for the effort would be to do everything possible to develop a truly intelligent, self-aware being, including designing a brain that knows how to learn, and committing the time and resources to giving that brain a good education.
Our future might then see two types of beings. Type I will be what we are already making, only better, with greater intelligence and broader abilities, meant to assist humanity and lacking any trace of volitional behavior or consciousness. These will be
true robots in the sense Karel Capek gave the word in his play R.U.R.: beings manufactured in order to work. Type II beings will exist at a higher level, designed to grow into creatures with full autonomous consciousness, using special brain hardware and human nurturance. We might ask, "Isn't this just a hard way to raise a human being?" The answer is no, and Yod the android illustrates why. Although it became more human, elements of its initial design remained. The result was a mixture of programming and free will, a blend of machine and human. This hybrid points to an exciting possibility, appreciated by creative researchers and writers alike: that silicon nature can combine with human nurture to create a unique but companionate species—intelligent, self-aware, humanlike in some respects and able to communicate with us, but with new thoughts and attitudes to share with humanity.
Imagine now a world in which we have the two types of artificial beings: those that only act as if they are conscious, and those that are conscious. We accept the fact of consciousness for the latter group, because if they have been brought up in human society, when one of them says “I’m conscious,” we believe it. This is different from the Type I’s, which, even if humanoid, are machines, no different from an automobile or screwdriver, and with just as little need for us to have moral concerns toward them.
Type II’s, however, represent something else: a conscious spark within a synthetic body, to which we might respond by treating them like people. If this seems doubtful, consider a scenario where artificial parts can be routinely integrated into a person—let’s say, to replace a gangrenous leg with a plastic one that operates under direct neural control. After the operation, the resulting bionic human is, of course, still a person in every sense. That would be true even for people who have had major physical changes such as the replacement of three or four limbs, or the kidneys, or heart, or all those. But what if a person’s injured brain is repaired with a silicon prosthetic, or his entire brain is transferred into an artificial body? Is that being still a person, although perhaps a different one from who he was before?
From the physician’s viewpoint, the answer is utterly clear. Philip
Kennedy, the inventor of the neurotrophic electrode, says his experience with patients like Johnny Ray has made him ask “What does it mean to be human? What does it really mean?” His answer is “As long as you’ve got your brain and your personality and can think … it doesn’t matter what machinery it takes to keep you alive.”
It would be no different for an internal life based in a silicon brain and existing in a body of metal and plastic—or would it? Is there a distinction between a human who has become more artificial, and an artificial being that has become more alive as consciousness is instilled? How shall we integrate beings with varying degrees of artificiality into our world, and what is our moral obligation toward them? And even for Type I robots that lack volition and free will, there remains an issue with moral overtones: For what purposes are we making them?
Among the requirements for free will, which most of us think we have, is the ability to make moral choices. If an artificial being were to show moral judgment, that would be a strong indicator of a consciousness that humans could recognize. So far, this ability has been shown only by imagined artificial beings. When Yod the android in Marge Piercy’s He, She and It faced the predicament of being a “conscious weapon [that] doesn’t want to be a tool of destruction,” it decided to destroy its maker Avram along with itself to prevent future androids being tormented by the same conundrum—just as its human lover Shira made a moral choice when she later destroyed Yod’s plans. In Star Trek: The Next Generation, Commander Data was also capable of serious moral choices, including the decision to kill a human.
Until we have made equally sophisticated beings, however, it will remain the case that morality is not something that digital creatures bring with them, but something we give to them—through their software or hardware, as in the Three Laws imprinted in the brains of Isaac Asimov’s robots or, more subtly, through our perceptions of them as good or bad, and the uses we make of them. These perceptions are
not universal; they differ across cultures. For instance, anyone who attended, as I did, the huge ROBODEX 2003 trade show and public exposition in Yokohama would have seen no reason why artificial creatures could ever be considered evil, or represent attempts to usurp God's place. In display after display from corporations, governments, and research institutions, the beings were uniformly presented as helpful to people, providing services from nursing care to home protection, or were shown as amusing and entertaining, as in a soccer game played by Sony's AIBO dogs and a quiz show featuring Honda's ASIMO. The ASIMO quiz show was played on stage with children, and though those children were actors, it was easy to see on the faces of many families visiting ROBODEX that their children thought they had entered Disneyland, only better.
Along with culture heroes such as the good robot Astro Boy, this event showcased the particularly benign Japanese attitude toward artificial beings, combined with Japan's leading position in robotics, which began when manufacturing robots took hold there in the late 1960s. According to Frederik Schodt's book Inside the Robot Kingdom, a variety of economic and business factors sparked the initial interest: a need for traditional Japanese assembly lines to become more flexible, a labor shortage, and a corporate attitude that encouraged long-term development of this new technology. By 1988, reports the U.S. National Research Council, Japan had 176,000 industrial robots, five times as many as the United States (where the industrial robot was invented!) and exceeding the entire robot population of the rest of the world.
Japan still dominates, with just less than half the world's population of robots. And while other nations are catching up, the dynamic Japanese style of robotics research has continued, with incentives from, for example, the government-funded Humanoid Robotics Project. Running from 1998 to 2002, with a budget of $38 million, the project combined government and corporate resources to develop a humanoid robot for tasks such as industrial plant maintenance, patient care, and operating construction machinery. The result, as I saw at ROBODEX 2003, is HRP-2, a 1.5 m (5 ft) tall, blue and silver, walking robot that can recover from a fall by standing up again, on its own—a feat matched only by the Sony QRIO unit.
Things are done differently in the United States. Rather than assemble massive focused programs, the government funds research on robots (and every other kind of science and technology) from a variety of sources. Some supporting agencies, such as NASA and the National Science Foundation, have civilian orientations. The research they fund is part of a climate where science and technology are meant to enhance society in general. However, a large fraction of U.S. robotics research has a different goal. That is the work in robotics supported by the Department of Defense (DoD), mostly through DARPA, which “pursues research and technology where risk and payoff are both very high and where success may provide dramatic advances for traditional military roles and missions.”
Developing robots and related technologies for warfare is a worthy goal if it makes the battlefield less dangerous for humans: If we must fight wars, let us fight them with machines, not people (although the morality of this stance could be compromised if the machines are self-aware, or if the use of robot soldiers encourages some nations to believe they can go to war without any human risk or cost). And apart from its military orientation, DARPA funding has led to important results that have had some direct and positive effects on society: The Internet, for example, began under DARPA auspices. Nevertheless, there is an essential difference between targeting research for military use that might have beneficial spinoffs but might also be kept secret, and specifically aiming for civilian applications, as the Japanese do, and openly disseminating the results. Rodney Brooks, inventor of Cog, notes that when he became director of the MIT Artificial Intelligence Laboratory in the late 1990s, 95 percent of the Laboratory’s research was funded by DoD. He thought that was “too much, from any perspective,” and with additional corporate sponsorship, reduced the figure to 65 percent.
The differences between U.S. and Japanese research reflect national priorities and necessities. Japan has no equivalent to the enormous U.S. defense establishment, and its government funds research
for civilian goals. Especially after the terror attacks of September 11, 2001, the United States is actively seeking methods to improve its security and military effectiveness, some of which fall within the science of artificial beings. One direction for military research is the development of autonomous or semiautonomous weapons. Other research areas involve combating terrorism with biometric technology such as face recognition. Much of the latter research had been carried out under DARPA’s Information Awareness Office (IAO), directed by Admiral John M. Poindexter. Several IAO programs, including the Total Information Awareness project proposed in 2002, raised widespread alarm over issues of civil liberties and privacy. As a result, Congress eliminated IAO funding in late 2003, although part of these operations may be shifted elsewhere—a reminder that while we develop methods to combat terrorism, we must remain alert to possible misuse of these technologies.
The possible invasion of privacy, or worse, enabled by the same technology that gives us wondrous robots is one dark shadow that accompanies the introduction of artificial beings into our society. Another is their potential to replace human workers, first hinted at by Aristotle when he wrote of automated machinery, and now becoming a definite possibility. According to World Robotics 2002, the cost of robots is falling while the cost of labor is rising. This combination presents an economic imperative that rightfully concerns the workforce, especially older workers.
Bionic technology raises a different set of concerns. There is no question about the rightness of artificial implants for the ill and injured, but what if the technology becomes so good that perfectly healthy people can augment their abilities or their lifespans at their whim? While this possibility is far distant, we have learned something from the issues swirling around other forms of human alteration such as genetic manipulation; namely, technology that modifies people in unnatural ways or overturns old definitions of birth, life, and death raises moral and legal questions, and the earlier we consider these, the better.
The alteration of people by artificial implants shares some issues
with the biological modification of people, but also resolves some. One question is the familiar one of access: If a $100,000 implant can make one healthier or smarter, does that mean that only the rich will benefit from it? A brand-new concern comes from the mixed nature of a bionic individual. Imagine a person with so many implants that he or she is largely artificial. Especially if neural function has been modified, is this entity the same person who held, let us say, the right to vote and own property? This potential legal issue points to the need for new definitions of personhood and of being human. Yet the technology of artificiality can also resolve some troublesome situations. With workable artificial parts, the ill would no longer need to await donors of living tissue, dissolving the moral and medical issues surrounding the harvesting of human body parts. Another advantage of bionic modification is the fact that these alterations do not enter the gene pool—unlike genetic changes, the effects of which could include unforeseen long-term harm ongoing through the generations.
Important as all these factors are, they are not the only ones we project onto artificial beings. Religious or spiritual beliefs can also color our views toward synthetic beings. Some writers ascribe the positive attitude of the Japanese to Shinto, their native religion, and to Buddhism, imported to Japan from India in the sixth century. In his book The Japanese Mind, Robert Christopher comments that Buddhists take a different view of robots than do Christians because Buddhism “does not place man at the center of the universe and, in fact, makes no particular distinction between the animate and the inanimate.” Along similar lines, Schodt’s Inside the Robot Kingdom notes that Buddhism, and more especially Shinto, encompasses the belief that even inanimate things can be conscious. “Mountains, trees, even rocks are worshipped for their kami, or indwelling ‘spirit,’” he writes, and adds,
samurai swords and carpenter’s tools have “souls”…[For a] videotape on children’s robot shows, a producer writes that “people not only make friends with each other, but with animals and plants, the wind, rain, mountains, rivers, the sun and the moon. A doll [robot] in the shape of a human is therefore even more of a friend.”
Within that Japanese tradition, even a Type I robot might mean more than a piece of machinery does to non-Japanese people. Some observers take this further and say that Western religion is hostile to artificial beings, the creation of which is seen as impious or worse. When science-fiction writer Stanislaw Lem comments in Chapter 1 that an effort to make an artificial human is an attempt to “become equal to God,” he is referring to Judeo-Christian conceptions of God. Isaac Asimov made the same point, asserting that what he called the “Frankenstein complex” arises in societies where God is taken as the sole creator.
But according to Anne Foerst, a theologian who has studied the religious and ethical preconceptions we bring to artificial beings, Western religious attitudes are more varied than that. Jewish belief, she writes, is “ambiguous about humanoids.” On the one hand, to construct a being like the golem, as Rabbi Löw did in sixteenth-century Prague, is to praise God by exercising creativity and artisanship, which are part of God’s image. On the other hand, we face the danger that humans will turn from adoring God to adoring the golem makers. The Christian tradition in the West, adds Foerst, is less ambiguous because it is more concerned with hubris, the overstepping of human bounds that angered the ancient Greek gods and remains for Christians “sin ingrained in the social consciousness.”
One action that could be considered hubristic within the Western tradition, the attempt to create beings in God’s image—that is, as perfect androids—might never happen, and not necessarily because it violates Christian sensibilities. The Japanese roboticist Masahiro Mori, author of The Buddha in the Robot, points out another reason not to attempt the construction of perfect androids. As robots like ASIMO and QRIO become more lifelike and human, the strength of our perceived connection to them rises, and feelings of threat or strangeness diminish. However, as robots become nearly identical to humans, but in some subtle way not quite so, we feel a sense of wrongness that Mori calls the “Uncanny Valley,” which he advises roboticists to avoid as they design their beings.
Consider, too, the practical question: What is the value of artificial beings that are indistinguishable from humans? A generally humanoid shape is needed to operate in a world designed for the human form, and an expressive face that people can read facilitates communication, but there are not many applications where absolute fidelity to human actions and appearance is essential—except possibly in the entertainment industry, which might turn out to be a surprisingly important application, and perhaps for illegitimate uses such as those of the murderous androids in the Terminator films. For both psychological and pragmatic reasons, we may well find ourselves dealing, and comfortably so, with beings that look human enough rather than completely human.
In considering deadly androids and other such creatures, we might think the virtual history of artificial beings has shown us the greatest evils they could be imagined to do, but as we get closer to being able to produce highly capable beings, new and fearful possibilities arise. The poor unguided Being in Mary Shelley’s Frankenstein suddenly seems even less monstrous; he is infinitely less threatening than a semiautonomous military tank, say, that can recognize targets and fire on them—a possibility that Larry Matthies at NASA’s Jet Propulsion Laboratory, who has worked on military robotics applications as well as planetary rovers, thinks may become a reality within 20 years. But the approaching reality can also draw on the best that creative writers have given us: the beautiful dancing cyborg Deirdre, the androids Yod and Roy Batty struggling with existential truths, the lovable robot Robbie and intelligent machine minds of I, Robot, and the naively charming Commander Data with his sterling qualities of honesty and loyalty.
Like any parents, we can only hope to influence our children so that they grow up both to fulfill themselves and to contribute to the world, by giving them the best start we can. Our digital children will make valuable contributions only if individual researchers, corporations, governments, and entire cultures make wise and moral choices about their purposes and uses.
If we are not sure how our synthetic children will turn out, why should we embrace the difficulties of creating and nurturing them at all? One answer is that regardless of the outcome, the very act of making digital people helps us form a clearer image of what we really are as humans. Better scientific understanding of our bodies and minds is necessary if we are ever to bring artificial beings to their ultimate possibilities, but it cuts both ways because the methods used to make and study them also illuminate us. As we contemplate, and perhaps cross, the border between inert and unconscious on the one hand and living and conscious on the other—whether approached from the human or the artificial end of the spectrum—perhaps we can also throw light on the human spirit, which some call the soul. And as the theologian Anne Foerst comments, thinking about artificial personhood also makes us consider why we allow certain people into our communities and reject others—perhaps engendering a more inclusive acceptance across boundaries of race, religion, gender, and functionality as well as artificiality.
The most important benefit, however, might be a spiritual realization about our place in the universe. The specter of excessive human pride has reared its head more than once in the history of artificial beings, both virtual and real. It is an arrogance that is easy to come by in our scientific age, but not for the very greatest scientists, those whose wisdom encompasses a sense of wonder and humility as they strive to understand nature.
The great Spanish neuroanatomist Santiago Ramón y Cajal, whose work a hundred years ago laid the foundation for understanding the very brain we now struggle to emulate, felt that sense of awe. In 1906, Ramón y Cajal shared the Nobel Prize in Physiology or Medicine with Camillo Golgi for their work on the structure of the nervous system; among his celebrated studies was his research on the retina. Working with a staining technique developed by Golgi, he showed for the first time separate neurons within the retina and their delicate interconnecting filaments. The retina is an outgrowth of the brain, and so
this research gave us our modern picture of the nervous system and the brain as made up of separate but intricately interlinked units.
As his work and personal writings show, Ramón y Cajal was a true laboratory scientist whose first priority was the reality of facts established through painstaking effort: He might have been perfectly at home as a tough-minded member of a contemporary research team seeking to understand organic brains or make artificial ones. But despite his no-nonsense approach, what he saw in the retina lifted him to another plane and filled him with wonder. As he writes in his autobiography, he was
amazed and confounded by the supreme constructive ingenuity revealed not only in the retina … but even in the meanest insect eye. There, in fine, I felt more profoundly than in any other subject of study the shuddering sensation of the unfathomable mystery of life.
Today, a century later, any person who works to artificially match or surpass what humanity is, or merely observes the effort, as I have, can only feel hubris fall away, to be replaced with awe at the complexity of what nature has wrought, humility at the difficulty of emulating it, and wonderment that we humans can yet hope to complete this astonishing journey.