Digital People: From Bionic Humans to Androids (2004)

3
The Real History of Artificial Beings

Humans are an ingenious species, and our ingenuity has done more than produce a rich array of imaginary artificial beings. It has also worked to realize such creatures, to actually make them. The history of true constructed beings is shorter than the virtual version, because successful real-life engineering is often slower than the creation of fantasies. But the real history is also more complicated because it is not always easy to separate myth from reality, especially in accounts from the remote past; for instance, can there be any shred of truth, no matter how minuscule, in the rumors of talking heads made by Albertus Magnus and others?

Nevertheless, there is a chronological record of what has truly been made. It shows that the fascination with synthetic beings appeared early and in each historical era, and that in each era, artificers, engineers, and inventors simulated life with the best available technology. From the vantage point of today’s high-technology world, these early efforts might seem limited, but many were astonishingly clever. And, as is true for every kind of technology, what we can do today to create artificial beings is utterly dependent on what has gone before.

To make robots, androids, and human implants requires prowess in every kind of technology, from mechanical engineering and electronics (the combination is so essential to the creation of artificial beings that it has its own name: “mechatronics”) to artificial intelligence. Even a sophisticated modern device, such as the Honda Corporation’s ASIMO walking robot, relies on the same mechanical principles the ancient Greeks applied. Other historical layers of technology that have contributed include the development of clocks and clockwork, the beginnings of electronic science in the 1920s and of practical computation in the 1930s and 1940s, and today’s development of nanotechnology and of interfaces between living nervous systems and external devices.

It would be too much to say that every inventor in each era set out to create artificial living things or to copy human beings. Some did, but others wanted only to emulate certain aspects of human behavior, or invented things that only later were seen as relevant to artificial beings. Different possibilities were emphasized at different times, the focus defined partly by the technology available and partly by culture. Through all these attempts, four main threads emerged as steps toward making a facsimile of a living being: constructing a moving body, adding a thinking mind, adding artificial senses, and simulating a natural appearance.

A fifth thread is the most recent and perhaps the most unsettling, coming closest to evoking the sense of “eeriness” that Freud discussed: the direct interfacing of machines with living beings, including humans. That, along with dramatic advances in the other four threads, is an ongoing effort, and the second half of this book deals with this fruitful contemporary period. This chapter presents the real history from classical times until the early 1990s.

BODIES IN MOTION

The ancient Greeks were among the earliest pioneers to simulate living beings through movement. Their reasons were connected to theatrical presentations because many of their plays involved the appearance of gods with divine powers. To show these and other striking theatrical moments, clever artisans created remarkable machinery to animate stage performances. They developed what we would now call special effects, to give the illusion of life through motion—not that large-scale movement is an absolute prerequisite. Many living things, from a rooted plant to a barnacle fixed on a rock, never budge (although there is always some internal motion, such as the movement of nutrient-filled seawater through the barnacle). But a synthetic barnacle interests nobody. For us, life is motion, and animal vitality its most obvious and fascinating indicator. So it was for the Greeks, among them Plato, who once wrote “The soul is that which can move itself.”

To generate motion, the Greek artisans needed power; movement requires energy. The energy to flex the muscles that move our human bodies comes from what we eat. But what power source could animate synthetic beings? Engines and energy sources have not been easy to come by in history (portable energy sources remain a problem; witness the current unsatisfactory state of battery power for laptop computers and electric automobiles). The first sources were domesticated beasts: oxen, mules, donkeys, and horses greatly extended human muscle power. A horse, however, is not conveniently employed on stage. Instead, the Greeks used the natural processes of moving fluids and falling objects, along with simple machines, to create controlled motion.

Two Greek artificers in particular, Philon of Byzantium (the ancient Greek city on the site of present-day Istanbul) and Heron of Alexandria, were especially prolific. Not much is known about Philon, born circa 280 BCE, but his treatise Mechanics includes a section called “On Automatic Theaters.” Heron (or Hero), born about 10 CE, perhaps in Alexandria, is better documented. He, too, understood mechanical principles and taught the subject at the Library in Alexandria. Three hundred years later, the mathematician Pappus of Alexandria described how Heron, “thinking to imitate the movements of living things,” used pressure from air, steam, or water, or strings and ropes.

Heron’s work The Automaton Theater describes theatrical constructions that move by means of weights on strings wrapped around rotating drums. With this power source, Heron constructed an automatic theater that presented Nauplius, a tragic tale set in the period after the Trojan War. As (presumably) amazed playgoers watched, the doors to a miniature theater swung open, and animated figures acted out a series of dramatic events, including the repair of Ajax’s ship by nymphs wielding hammers, the Greek fleet sailing the seas accompanied by leaping dolphins, and the final destruction of Ajax by a lightning bolt hurled at him by the goddess Athena. Perhaps inspired by Hephaestus’s obedient moving tables, Heron also made wheeled stands and used an ingenious trick to move them, apparently self-animated, around the theater. A weight rested on a hopper-full of grain, which leaked out through a small hole in the bottom. As the weight gradually sank, it pulled a rope wound around an axle of the stand to turn its wheels and make it move.

Along with the power of falling weights, these figures used the basic mechanical resources of wheels, pulleys, and levers to create a variety of motion, but there were drawbacks. While a weight resting on slowly leaking grain delivers power over a relatively long period, it is not very compact, or usable on demand. And beyond repetitive actions like hammering, a system based on simple machines gives little scope for flexible and responsive motion. But better techniques to provide and control power came along, although only long after Greek times. The new power source was the coiled metal spring, and the new means of control was clockwork.

We do not know who first noted that a flexible piece of metal could store energy, but we use the method daily; for example, in the common safety pin. Early Greek artisans such as Philon and Heron understood that a “springy” material could act as a power source. Philon even designed a crossbow that used bronze springs to fling missiles. But these early springs were too weak to be useful, and it was not until the fifteenth century that good-quality coiled springs came into use.

In their time, springs played the role that electrical batteries now do in powering devices. They animated the next wave of artificial beings, once ways were found to control their stored power through their use in clocks.

The very earliest clocks told time in terms of how long it took flowing water to fill or empty a vessel. But water clocks were inaccurate and were replaced by mechanical versions. In Europe, these first appeared around the thirteenth century, driven by falling weights built into tall towers; for instance, at Westminster Abbey.

Portable timepieces needed a different power source. The German locksmith Peter Henlein made the first recorded spring-driven clock in 1502. This wasn’t yet a complete solution, because as a spring uncoils, its force decreases, the clock hands move slower, and the clock loses time. It required further effort to develop clockwork, the gears and other components that slowly draw off the power of a coiled spring and regulate a clock’s steady tick-tock.

By the eighteenth century, clockmakers and watchmakers were using a well-developed spring-power technology to make elaborate timepieces. These artisans began creating animated toys and other machines, and from there it was only a step to build the most intricate mechanical devices yet made, humanlike automatons.

Although these automata were made to entertain, and to display the skill of the clockmaker, they also represented a philosophical position that had been in the air since it was expressed by the great seventeenth-century thinker René Descartes. After stating his famous dictum “I think, therefore I am,” Descartes went on to conclude that animals and humans are nothing more than machines that operate by mechanical principles. Humans, however, have a dual nature because they also have “rational souls” that make them unique among living things; it is why humans alone can say, “I think, therefore I am.”

Descartes’s dualism leads to the conclusion that except for the act of reason, everything about a human being is mechanical. Indeed, in his Discourse on Method, he wrote, “For we can certainly conceive of a machine so constructed that it utters words, and even utters words which correspond to bodily actions … (e.g., if you touch it in one spot it asks what you want of it …),” although he did not believe such a machine could be made to carry on a meaningful conversation; that is, it would fail the Turing test.

Descartes might even have acted on the idea that a biological body is a machine; there is some evidence that he had plans to make automata. There is also a persistent story that he took a clockwork “daughter” with him on a sea voyage to Sweden. She was supposedly made to replace his real daughter who, in the great tragedy of his life, had died at age five, much as the fictional Rotwang made his female robot to replace the lost Hel in Metropolis.

Descartes’s philosophical views were not universally accepted, of course. One contrary position held that animals are superior to humans because they are more natural. But the idea of “man as machine” was taken up by others during the Enlightenment, most spectacularly by the French physician Julien Offroy de La Mettrie. His uncompromisingly atheistic and materialistic position was so poorly received in France that he fled to Holland. There, his book L’Homme Machine (translated as Man a Machine but literally, “The Man-Machine”), published in 1747, was seized by the Church to be burned. (He fled again, this time to Prussia, where he became court physician to Frederick the Great.) Nevertheless, with the support of other Enlightenment figures, the mechanical view flourished and the power of scientific materialism grew. By the mid-nineteenth century, the Dutch physiologist and philosopher Jakob Moleschott could express a materialistic approach to living phenomena by insisting on “scientific answers to scientific questions.”

Whatever the philosophers’ opinion, so remarkable were the achievements of eighteenth-century makers of clockwork automata that they might be excused if they believed that “man is a machine.” Two of the most famous automata makers were contemporaries: the Frenchman Jacques de Vaucanson, born 1709, and the Swiss Pierre Jaquet-Droz, born 12 years later. Along with his son, Jaquet-Droz created automata that even today seem marvelous. In 1774, he made a “life-sized and lifelike figure of a boy seated at a desk, capable of writing up to [any] forty letters [of the alphabet],” which can still be seen in operation in the History Museum in Neuchâtel, Switzerland. Another artificial boy he created could draw four different pictures.

De Vaucanson was known for his automaton musicians, completed when he was 18. As related in Bruce Mazlish’s article “The Man-Machine and Artificial Intelligence,” these included

[a] flute player who played twelve different tunes, moving his fingers, lips and tongue, depending on the music; [a] girl who played the tambourine, [and a] mandolin player that moved his head and pretended to breathe.

The flute player was the most remarkable of these: it actually played the flute by expelling air into the instrument, behavior that struck observers as especially compelling and lifelike.

However, it was de Vaucanson’s synthetic duck, made in 1738, that was the talk of Europe. The duck was constructed of gold-plated copper, and contained more than 1,000 parts including a digestive tract that used tubing made of a newly discovered material—natural rubber. The copper duck could do practically everything a real duck could do except fly. It quacked, flapped its wings, drank, took in grain with a characteristic head-shake, and voided it again. (Although some of de Vaucanson’s automata were lost in the French Revolution, the duck survived, passing into the hands of a German collector in whose collection Johann Wolfgang von Goethe saw it. Apparently it had fallen on hard times, because Goethe reported that “The duck was like a skeleton and had digestive problems.”)

De Vaucanson’s work foresaw present thinking about artificial beings. Perhaps guided by his training in anatomy and medicine, he had a sweeping aim in mind. According to a report of an address de Vaucanson gave in 1741, his hope was to construct

an automaton figure which will imitate in its movements animal functions, the circulation of blood, respiration, digestion, the combination of muscles, tendons, nerves … [de Vaucanson] claims that by means of this automaton we will be able to … understand the different states of health of human beings and to heal their ills.

While de Vaucanson did not achieve this lofty goal, his duck was a beginning; it was equipped with openings for observing the digestive process. In his commitment to giving a complete accounting of all bodily functions including excretion, de Vaucanson also caught hold of our ambivalent fascination with life’s earthier elements. Little girls’ dolls wet their diapers and sophisticated pet robot dogs inevitably come with modes to make them lift a leg and tinkle cutely on the rug. For those of us who fear that technology is inhumanly sterile, there is something hopeful in this intersection of our inescapable animal-like needs with technological cleanliness.

De Vaucanson’s efforts influenced the science of artificial beings in another way by contributing to modern computation. In appreciation of his mechanical genius, Louis XV named him director of the royal silk enterprise, in which position he invented an automated loom that used a cylindrical arrangement of punched holes to set the woven pattern. It was later refined in the Jacquard loom of 1801 that used punched cards—direct forerunners of punched computer cards. (De Vaucanson was recognized in his time by Voltaire and de La Mettrie, both of whom called him a “new Prometheus.” He also appears in the painting Une Soirée chez Madame Geoffrin, en 1755 by Anicet-Charles-Gabriel Lemonnier, which has been called the “Smile of the Enlightenment.” It shows de Vaucanson as one of 50 luminaries in an imaginary gathering including Voltaire, Rousseau, and Diderot.)

Both the cleverness and the limitations of mechanical people are apparent in the automaton said to have the largest capacity of any such device, presented in 1928 to the Franklin Institute in Philadelphia. This “Draughtsman-Writer” is a figure seated at a desk. When its springs are wound up, it moves its head down as if looking at a sheet of paper. Then its right arm, grasping a pen, inscribes two poems in French, one in English, and four elaborate drawings, including a sailing ship and a pagoda-like Chinese structure, while at the same time its eyes and left arm move.

The figure was damaged in the 1850s in a fire at a Philadelphia museum operated by the showman P.T. Barnum. Once restored to operating condition at the Franklin Institute, it revealed the name of its maker in the margin of its last drawing, where it wrote “Ecrit par L’Automate de Maillardet,” that is, “Written by the Automaton of Maillardet.” Henri Maillardet had worked with Jaquet-Droz and built this automaton around 1800. He made another one for George III of England that wrote in Chinese, as a gift for the Emperor of China.

This device celebrates the ingenuity of the eighteenth-century clockmakers, and also shows that clockwork could not provide the capacity and flexibility that are essential components of intelligence. The “Draughtsman-Writer” requires 250 pounds of brass, metal, and wood to store and display its poems and drawings. Its memory, analogous to a modern read-only computer memory, is a set of 96 brass cam mechanisms.

A cam is a disk mounted on a rotating shaft. Resting on the rim of the disk is the cam follower, a finger free to move up and down as the disk rotates. If the cam is perfectly round, the finger does not move. But if the cam is an oval, a heart, or some other shape, the follower moves up and down as the cam spins. This old idea is still used in automobile engines where cams on the camshaft open and close valves to control the flow of air and fuel into the cylinders. In Maillardet’s figure, cam followers attached to the writing arm determine its motion in three dimensions. The corresponding cams are intricately shaped so that the motions trace out the letters and lines of the poems and figures, while other cams move the left hand, head, and eyes.
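
A cam’s action is simple enough to model in a few lines. The sketch below is purely illustrative (the profile function is invented, not taken from Maillardet’s machine), but it shows the essential idea: the follower’s position at any moment is just the cam’s radius at the current shaft angle, so the shape of the disk is, literally, the program.

```python
import math

def cam_radius(theta):
    """Radius of a hypothetical one-lobed cam at shaft angle theta (radians).
    A perfectly round cam would return a constant, so the follower would not move."""
    return 1.0 + 0.3 * math.cos(theta)

def follower_height(theta):
    """The follower rests on the cam rim, so its height simply tracks the radius."""
    return cam_radius(theta)

# One full revolution of the shaft, sampled at 12 positions.
for step in range(12):
    theta = 2 * math.pi * step / 12
    print(f"angle {math.degrees(theta):5.1f} deg -> follower at {follower_height(theta):.3f}")
```

Reading three such profiles at once, as the text describes for the writing arm, fixes a position in three dimensions for every angle of the shaft.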

It is fascinating to watch the delicate movements that this arrangement imparts, as I found when I was permitted to see the Draughtsman-Writer in action at the Franklin Institute—a far more elegant, if slower, method of printing than a computer’s laser printer. Winding up the springs was no easy matter, because it takes massive coils to turn the heavy cams. When a sheet of blank paper was inserted under the hand, and it began to write with its pen, I felt a sense of anticipation as the image or lines of poetry began slowly to take shape, stroke by stroke. The results were worth waiting for: delicately drawn images with a good deal of detail and finesse, and for the words, the finest eighteenth-century copperplate script. Best of all was when the hand wrote “Ecrit par L’Automate de Maillardet,” a message sent directly from the figure’s maker two centuries ago.

We can only admire the effort and dedication it must have taken to cut brass into precisely the right shapes to form intricate lines on paper, but it is exactly the difficulty of carrying out, and later changing, mechanical programming that prevents cams and clockwork from giving truly lifelike responses. There can be no surprises as automata like the “Draughtsman-Writer” go through their paces, because a given set of cams always runs through the identical program and produces the identical motions and marks on paper. Short of bringing in a brand-new set of cams, there is no way to affect the behavior of the automaton.

One other eighteenth-century mechanism worthy of special attention is the famous chess-playing automaton known as “The Turk.” Constructed by the Hungarian nobleman Wolfgang von Kempelen in 1769, it was in the form of a man dressed in Turkish costume complete with turban, and seated behind a cabinet atop which sat a chessboard. A human opponent sat opposite the Turk and the two played, with the Turk reaching out a hand to move pieces as the game progressed. For 85 years, this mechanism passed from owner to owner, eventually ending up in the possession of the German-born showman Johann Maelzel, who toured it through the United States. It was destroyed in the same fire that damaged Maillardet’s “Draughtsman-Writer.”

The Turk played excellent chess. It defeated most comers, including players of high caliber, and eminent personages of the time, such as Napoleon (according to legend, the Turk knocked the chess pieces off the board after Napoleon repeatedly attempted illegal moves); the computer pioneer Charles Babbage (who later enters this story in his own right); and Edgar Allan Poe. Supposedly the Turk’s amazing performance was due to intricate clockwork visible within the cabinet. From today’s vantage point, we should be surprised at this perception; after all, it was a major event when in 1997 the IBM computer “Deep Blue” managed to defeat world chess champion Garry Kasparov (and that only after losing a six-game match to him the previous year).

We would be right to doubt that eighteenth-century technology mimicked the human brain, because the Turk was a hoax. A human hidden inside the cabinet manipulated the figure’s hand to move the chess pieces, as Poe and others surmised. Nevertheless, the Turk teaches us a lesson in how artificial beings affect people, because over its long history, many believed it could play a meaningful game of chess. Apparently we are willing to meet artificial beings halfway, mentally filling in the blanks between what they present and what we want to believe. Perhaps if the chess player had been displayed only as a collection of gears without a human form, viewers would have found it less believable, although the machinery might have impressed them.

But why should a clockwork automaton not play winning chess? Although these devices far surpassed the efforts of the early Greeks, and although de Vaucanson dreamt of simulating a human body with his superb mechanical systems, they lacked the crucial capacity to change their operations on the fly—which meant they could not react to external stimuli. Any definition of intelligence includes the essential ability to adapt to the environment and new situations within it. This is the critical difference between the mechanical programming of the eighteenth and nineteenth centuries and present-day computer operations, although adaptability alone is not enough to guarantee intelligence or consciousness.

As long as the preeminent technology remained mechanical, even with the steam engine to generate power (James Watt patented the device in 1769), it was difficult or impossible to engineer that indispensable flexibility. Only the advent of electrical science in the eighteenth century brought a versatile power source that could lead to machine intelligence and perceptual abilities. Electricity brought another virtue, a semimystical connection between this physical phenomenon and the workings of living beings, giving electricity special meaning as an energy source for human-made life.

We have known about electricity at rest, called “static electricity,” since the time of the ancient Greeks. They observed that a piece of vigorously rubbed amber attracted a small object, and indeed, the word electricity comes from elektron, the Greek word for amber. By the 1740s, scientists had accumulated enough knowledge to begin building a theory of electricity. Benjamin Franklin’s idea of an electrical fluid that produced positive and negative charge was a great contribution; so were new instruments such as spark generators, and the Leyden jar, which stored electricity for use in experiments.

THE LIFE ELECTRIC

Although scientists and laypeople alike understood more and more about electricity as the eighteenth and nineteenth centuries progressed, they continued to regard it as a marvel. Demonstrations of its power excited great public interest. As late as 1893, the beauty of incandescent electric light bulbs left viewers awestruck at the World’s Columbian Exposition in Chicago. The glow of a light bulb does not come from static electricity, which arises from electrical charge that is at rest and thus incapable of performing useful work, but from electric current, which is the flow of electrical charge in the form of electrons. Nearly every important application of electricity, from illumination to computation, depends on current.

Electric current is not a human invention. It flows in a lightning flash and in the animal world. Plato, Aristotle, and the Roman naturalist Pliny the Elder all wrote about the Mediterranean creature called the torpedo fish, which moves normally but makes other fish sluggish. Now we know that the torpedo fish is a natural electrical source that sends current through its victims to narcotize them.

The first observations that led to the human use of current were made in an animal; they were part of the research carried out in the 1780s by Luigi Galvani, the anatomy professor at the University of Bologna who studied how electricity made a dissected frog’s legs twitch. Galvani’s conclusion, that a form of electricity arose in the frog, inspired Alessandro Volta, a physics professor at the University of Pavia, to carry out further experiments.

Volta’s researches showed that Galvani’s belief in “animal electricity” had no basis, an important outcome in itself, and had another far-reaching effect. This was a fundamental breakthrough that Volta announced in 1800—the Voltaic pile, a stack of alternating zinc and copper disks, separated by cloth or cardboard soaked in salt water. That was the first electrical battery, a device to produce a steady flow of current. Its importance was immediately recognized. Napoleon observed Volta’s invention at a command performance in 1801, and went on to name Volta a senator and a count of the kingdom of Lombardy. Scientists quickly applied this new resource. Within a year, Humphry Davy of the Royal Institution in London attached two carbon electrodes to a massive battery and obtained an intense white glow, thus discovering the carbon arc, the earliest form of artificial electrical lighting.

However, it took time and a degree of controversy before it was generally accepted that electricity was not a kind of “life force” as Galvani had supposed. Volta himself described his battery as an electrical organ, because its stack of disks resembled the columnlike stack of biological cells that gave the torpedo fish its electrical powers. The supposed connection between animal vitality and electricity lingered for a time, and although the connection was scientifically disproved, the symbolic meaning of electricity as a vitalizing force remains. Electricity is the right choice to give artificial beings their motive power, the power to act, and conceptual power, the power to think.

Electricity has another special value. We now know that the neural signals that control the body, carry sensory information, and are related to thought itself, consist of electrical impulses sent from nerve cell to nerve cell. This is not a purely electrical phenomenon because the impulses are produced and passed on by chemical means, but neural activity has a strong electrical component, which is why it is possible to create physical interfaces between a living nervous system and electronic devices.

It would be a long time, however, before electricity could animate artificial beings and their brains, or electronic devices could be connected to human neurons. A whole civilization could not run on batteries alone. The broad use of electricity required the discovery of a new principle, the law of electromagnetic induction, which the English physicist Michael Faraday found in 1831. This discovery led to the construction of electrical generators that could make vast amounts of power, electric motors, and every other kind of electric device.

With widespread use, electricity drove the next wave of technology to animate artificial beings and gave the best hope to replicate human intelligence and even consciousness. Remarkably, the simplest possible electrical device, the humble on-off switch (one of which Frankenstein threw to animate his creature in the 1931 film), is the key to intelligent creatures, because such switches—banked in enormous quantities and operating at unimaginable speeds—are the heart of a digital computer. The path to that realization began thousands of years ago with the first machines that dealt with counting and numbers. Quantitative reasoning is the component of human thought that is easiest to simulate with a machine, and so thinking machines began with mathematics machines.

THINKING HUMAN

One of the earliest of these devices automatically paired objects with events in order to count them. In Roman times, military chariots had a mechanism mounted on the axle that dropped a stone into a cup each time a certain distance was covered, to keep track of total distance traveled (in Latin, a small stone or pebble is a “calculus,” and the word remains in the names of two important mathematical techniques, differential and integral calculus).

Later the more sophisticated abacus helped people do arithmetic. With forerunners dating back to 500 BCE, its present form—a wire frame on which are strung sliding beads—appeared in China around 1200 CE. The beads do not automatically perform calculations as they are moved (they do not accomplish “carries” from the “units” column to the “tens” column, and so on, as numbers are added), but only keep track of the operator’s arithmetic. Still, the device represented a conceptual advance over counting pebbles because it introduced symbolic or positional notation; some beads carry a value of “one,” whereas others are valued at “five”—an innovation that speeds up calculations and is echoed in modern computers.
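
The positional idea is easy to make concrete. In the sketch below (a hypothetical column layout in the spirit of the Chinese instrument, with upper beads worth five and lower beads worth one), the number represented is simply a weighted sum over the columns; the carrying, as noted above, remains the operator’s job.

```python
def column_digit(fives, ones):
    """Digit shown by one abacus column: each upper bead counts 5, each lower bead counts 1."""
    return 5 * fives + ones

def abacus_value(columns):
    """columns runs from the most significant column to the least significant one.
    Each entry is a (fives, ones) pair of beads pushed toward the beam."""
    value = 0
    for fives, ones in columns:
        value = value * 10 + column_digit(fives, ones)   # shift left one decimal place, then add
    return value

# Columns showing 1, 9, 0, 7 represent 1907; the instrument records, the operator calculates.
print(abacus_value([(0, 1), (1, 4), (0, 0), (1, 2)]))
```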

The next step came much later, when seventeenth-century inventors (including two eminent mathematicians, the Frenchman Blaise Pascal and the German Gottfried Leibniz) developed automatic or semiautomatic mechanical calculators. One adding machine worked like a modern automobile odometer. Six interlocking rotating wheels, each numbered 0 to 9, represented the “units,” “tens,” and other columns of a six-digit number. Numbers were entered by turning the wheels. As values accumulated, for instance in the “units” column, and that wheel rotated through its whole range, it moved the adjoining “tens” wheel from 0 to 1, and so on. This took proper account of carries from one column to the next. Mechanical calculators continued to be improved through the nineteenth century and into the twentieth. Eventually they were operated by electric motors, and in 1892, William S. Burroughs developed a machine in which numbers were conveniently entered by keystrokes. Others invented calculators that printed out their numerical results. Such machines quickly became staples of business offices and scientific laboratories.
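
The odometer-style carry is equally easy to model. In this sketch (an invented illustration; only the six-wheel, 0-to-9 register comes from the description above), each wheel holds one decimal digit, and a wheel that rolls past 9 advances its neighbor by one step.

```python
def add_on_wheels(wheels, column, amount):
    """Add `amount` to one column of a wheel register, propagating carries to the left.
    `wheels` is a list of digits, least significant wheel first, like an odometer."""
    wheels[column] += amount
    while column < len(wheels) and wheels[column] > 9:
        wheels[column] -= 10              # the wheel rolls over from 9 back to 0...
        column += 1
        if column < len(wheels):
            wheels[column] += 1           # ...and advances the adjoining wheel by one step
    return wheels

register = [0, 0, 0, 0, 0, 0]             # a six-digit machine, all wheels at 0
add_on_wheels(register, 0, 7)              # enter 7 in the units column
add_on_wheels(register, 0, 5)              # adding 5 forces a carry into the tens wheel
print(list(reversed(register)))            # -> [0, 0, 0, 0, 1, 2], that is, 12
```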

But a better alternative had been available in principle for many decades, the machine conceived by the Englishman Charles Babbage, who from 1828 to 1839 served as Lucasian Professor of Mathematics at Cambridge University, the position once held by Isaac Newton and now occupied by the physicist Stephen Hawking. Babbage was inspired to think about calculating machines because of his connection with the Royal Astronomical Society, which brought him face to face with the many errors appearing in hand-calculated tables used for astronomical observations. He is said to have blurted out “I wish to God these calculations had been performed by steam!” and, in 1834, began designing the Analytical Engine.

Babbage had earlier designed what he called Difference Engines for specialized calculations. The Analytical Engine was meant to be far more: a general-purpose computer that could deal with a wide range of mathematical problems. The power of the machine came from its capability to be programmed; that is, it could follow a predetermined set of instructions. The program steps were to be encoded and entered into the machine on punched paper cards like those pioneered in the Jacquard loom. The machine could operate on numbers 40 digits long, each represented by a column containing that many wheels. It would take three seconds to execute an addition, and two to four minutes for a multiplication or division, with final results to be printed out or set in type by the device.

The conception of a calculating device that followed a program, which—properly formulated—could solve any conceivable mathematical problem that had a solution, was a great breakthrough. (The first programmer was Augusta Ada King, Countess of Lovelace, and amusingly enough, daughter of that very same Lord Byron who had inspired Mary Shelley to write Frankenstein. She developed programming methods for Babbage’s computer, and the contemporary computer language Ada is named in her honor.) In almost all respects, Babbage’s design is remarkable in how well it foretold the methods and organization of modern electronic computers. The storage of information on cards corresponds to what we now call ROM, read-only memory, and punched cards themselves were used as a primary input medium for electronic computing well into the 1970s. What Babbage called his “store” corresponds to RAM, random-access memory, and his “mill” to the CPU, central processing unit, of modern computers.

Most remarkably, Babbage’s machine included a significant step toward the flexibility needed for machine intelligence, the seed of something extremely powerful: His computer could examine its own work and decide on its next action by means of the “conditional jump.” In the course of calculation, the machine could compare a given intermediate result to another value; for instance, to determine whether a particular outcome is a positive or a negative number. Then, depending on the answer, the machine could choose among different program paths. This capability greatly enhances computational power. A conditional jump can be used to determine when a calculation has reached a desired accuracy and can be terminated; in statistical analysis, to find the largest or smallest of a set of numbers; or, to give a modern example, to decide when to sell a stock, among a multitude of other applications.
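
Any modern programming language makes the conditional jump concrete. The sketch below uses the “stop when the calculation is accurate enough” case mentioned above; the square-root routine and tolerance are invented for illustration, but the test-and-branch is the same kind of self-inspection Babbage had in mind.

```python
def square_root(x, tolerance=1e-10):
    """Newton's method: refine a guess until it is accurate enough.
    The if/break plays the role of the conditional jump: the machine inspects
    its own intermediate result and chooses what to do next."""
    guess = x if x > 1 else 1.0
    while True:
        next_guess = 0.5 * (guess + x / guess)
        if abs(next_guess - guess) < tolerance:   # compare an intermediate result...
            break                                 # ...and jump out when the goal is reached
        guess = next_guess
    return next_guess

print(square_root(2.0))   # ~1.41421356...
```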

The deeper significance of the jump is that it introduces an element of machine choice. This is not yet free will, because the programmer must foresee every possible outcome and provide an alternative for each (if not, the computer might find itself paralyzed). Natural intelligence can always surprise us by a completely unforeseen choice, whereas a conditional jump offers only a menu of known options, one of which must be followed. Still, we do not know in advance which path the computer will select, especially for complex problems, and so the machine can surprise us as well. This kind of choice by an artificial being has a special significance because such a being, acting in response to external data, is interacting with its environment, a first step toward intelligence. And in using its own internal operations to influence future actions, it is taking a tiny step toward self-awareness.

If Babbage’s machine had been used in industry, science, and government, the Victorian age and our own might be different. But his concept pushed Victorian technology to its limits. The Analytical Engine would have been an overwhelming piece of machinery, with some 50,000 components occupying a space of 500 cubic feet. Between technical difficulties and Babbage’s failure to raise funds, this mechanical computer was never built (although in 1906, Babbage’s son built its mill portion and showed that it worked). So although Babbage and Lady Lovelace defined much of what a computer is and can do, it took future breakthroughs to produce practical devices.

ON AND OFF

The person responsible for the next conceptual link on the way toward machine intelligence appeared on the scene very nearly as Mary Shelley was writing Frankenstein—George Boole, born in Lincolnshire, England, in 1815. This self-taught mathematician laid a theoretical basis for the modern computer by quantifying logical thought. In 1854, his book An Investigation of the Laws of Thought presented a new kind of algebra, in which mathematical equations were represented by logical statements that could take only one of the two values “true” or “false.” This Boolean algebra had no immediate application, but its binary nature proved compelling when it became apparent that electricity was the preferred medium for computers: The simplest conceivable electrical device, the on-off switch, controls exactly two states—current flowing or not flowing—which can just as easily be labeled “true” or “false.”

As irony would have it, difficulties in mechanical computation like those that stymied Babbage inspired the U.S. mathematician Claude Shannon to apply Boole’s ideas. In 1936, Shannon, a graduate student at MIT, was analyzing the behavior of a mechanical computer called the Differential Analyzer—a useful machine, but one that was awkward to program and maintain. Shannon concluded that its operations could be better accomplished with electricity. He had taken a course in Boolean algebra, and in a landmark 1938 paper adapted from his master’s thesis, pointed out that a collection of on-off switches arranged according to Boolean principles could carry out logical and mathematical operations. (Shannon was later to say, “It just happened that no one else was familiar with both fields at the same time.” He went on to Bell Telephone Laboratories, where in 1948 he wrote another seminal paper, “A Mathematical Theory of Communication,” which laid the basis for information theory.)
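
Shannon’s observation can be illustrated directly: treat each switch as a Boolean value, combine switches with AND, OR, and NOT, and arithmetic falls out. The one-bit adder below is a standard textbook decomposition, not anything drawn from Shannon’s paper, but it shows how purely logical operations on open-and-closed switches add numbers.

```python
def AND(a, b): return a and b
def OR(a, b):  return a or b
def NOT(a):    return not a
def XOR(a, b): return OR(AND(a, NOT(b)), AND(NOT(a), b))

def full_adder(a, b, carry_in):
    """Add three one-bit inputs (True = switch closed, False = open).
    Returns (sum_bit, carry_out); a chain of these adds whole binary numbers."""
    sum_bit = XOR(XOR(a, b), carry_in)
    carry_out = OR(AND(a, b), AND(carry_in, XOR(a, b)))
    return sum_bit, carry_out

print(full_adder(True, True, False))   # 1 + 1 = binary 10 -> (False, True)
```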

The only drawback to this scheme was that it forced the computer to operate with a binary number system rather than the familiar decimal one. That was the birth of the binary digit or bit, which takes on only a value of 0 or 1. Numbers are very long in this system: for instance, the decimal number 31 becomes the binary number 11111. But the advantages of working with a two-state electrical system far outweighed this slight complication, and the computer could always be programmed to deal with input and output in the decimal form favored by humans.
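
The translation between the two number systems is mechanical in both directions, as a short sketch shows (the helper names are only for illustration):

```python
def to_binary(n):
    """Repeatedly split off the lowest bit; 31 -> '11111'."""
    bits = ""
    while n > 0:
        bits = str(n % 2) + bits    # remainder on division by 2 is the next bit
        n //= 2
    return bits or "0"

def from_binary(bits):
    """Each position is worth twice the one to its right; '11111' -> 31."""
    value = 0
    for bit in bits:
        value = value * 2 + int(bit)
    return value

print(to_binary(31), from_binary("11111"))   # 11111 31
```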

Then it became a matter of engineering to implement Shannon’s ideas. The first programmable binary calculator was built in 1938 by Konrad Zuse in Berlin, as a mechanical device to illustrate the principle. This was followed in the 1940s by electric Boolean computers, some of which used electromechanical relays, on-off switches that operate by remote control. An electric current is sent through a coil of wire, producing a magnetic field that pulls a metal finger so that it makes or breaks an electrical circuit.

Relays had been highly developed for telephone networks, which require myriads of choices to route calls, and an early relay-based computer was built at Bell Labs. In 1941, Zuse built an electrical version that worked much faster than a mechanical unit, but in one way, the machine was inferior to Babbage’s ideal machine—it could not perform conditional jumps. The ultimate relay-based computer was the Harvard-IBM Automatic Sequence Controlled Calculator (“Mark I”), built at Harvard in 1943. This enormous machine, which calculated gunnery data for the U.S. Navy, weighed 5 tons and was more than 50 feet long. Its miles of wiring linked together more than 500,000 components, including more than 3,000 relays, but for all its massiveness, the Mark I did not support conditional jumps either.

A relay is not the only or best kind of controllable on-off switch. The same function can be performed with a vacuum tube, a device, patented in 1904, that was an outgrowth of Thomas Edison’s work with incandescent lighting. A vacuum tube is an evacuated glass envelope that contains electrodes. Without air molecules to interfere, electrons stream through space from electrode to electrode, carrying information and electrical power. This device initiated the electronic age because it could control and amplify electrical signals, making it indispensable for radio and television as well as for video and audio reproduction.

Like a relay, a vacuum tube can switch current on and off, but without mechanical parts, the tube is faster and more reliable than any relay. The first Boolean circuit with tubes was made in 1939, and in 1943, British engineers built the “Colossus.” With several thousand vacuum tubes, this special-purpose computer analyzed German military ciphers as part of the famous code-breaking effort at Bletchley Park in England, the establishment best known for breaking the “Enigma” codes. The first full-featured electronic digital computer followed in 1946: the Electronic Numerical Integrator and Computer (ENIAC), built by J. Presper Eckert and John W. Mauchly at the University of Pennsylvania. Its 18,000 vacuum tubes used many kilowatts of electrical power to determine artillery trajectories at a rate of thousands of calculations a second.

ENIAC served its military purpose and was also used for scientific calculations, but its hardware connections had to be tediously set by hand. Other machines of the era entered programs on punched paper tape and were no great advance over the punched cards that Babbage had envisioned. The idea that made computers infinitely more flexible is usually ascribed to the brilliant Hungarian-born mathematician John von Neumann, although there is evidence that Eckert, Mauchly, and others entertained a similar approach. In 1945 von Neumann wrote a report describing the idea of the stored program, where the instructions are held in the computer’s memory just as data are. The instructions themselves can be manipulated, making possible, for instance, compilers—programs that convert human-language–like commands into binary-based machine language for the computer. With other features, including a central processing unit and the use of binary numbers and Boolean algebra, this von Neumann architecture is still the standard in computer design. In 1949, Maurice Wilkes at Cambridge University built one of the first stored-program computers, the EDSAC. Not long after, in 1951, computers came of practical age when Eckert and Mauchly delivered a UNIVAC (Universal Automatic Computer), the first successful commercial electronic computer, to the U.S. Bureau of the Census.
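
The essence of the stored-program idea is that instructions and data sit in the same memory, so a program can be loaded, examined, or even rewritten like any other data. The toy machine below is an invented illustration with a four-instruction repertoire; real von Neumann machines are, of course, vastly richer, but the accumulator, program counter, and shared memory are the same ingredients.

```python
def run(memory):
    """Memory holds both instructions and data as plain integers.
    Each instruction is an (opcode, address) pair stored in consecutive cells."""
    acc, pc = 0, 0                      # accumulator and program counter
    while True:
        op, addr = memory[pc], memory[pc + 1]
        pc += 2
        if op == 0:                     # LOAD: accumulator <- memory[addr]
            acc = memory[addr]
        elif op == 1:                   # ADD: accumulator += memory[addr]
            acc += memory[addr]
        elif op == 2:                   # STORE: memory[addr] <- accumulator
            memory[addr] = acc
        elif op == 3:                   # HALT
            return acc

# Program: load cell 8, add cell 9, store in cell 10, halt. The data live in cells 8-10.
memory = [0, 8, 1, 9, 2, 10, 3, 0, 40, 2, 0]
print(run(memory), memory[10])          # 42 42
```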

With their size and huge power consumption, these machines were hopeless as candidates for the artificial minds of mobile robots, but the idea of simulating human thinking appeared early in their history. The most significant insights came in 1950 from the British mathematician Alan Turing, whose earlier work had dealt with allied subjects. In 1937, in a paper concerned with the nature of mathematical proof, he proposed a method to break any mathematical problem into a series of steps. This is exactly how a computer program works, and so although Turing was not writing about computers per se, his process amounted to a theoretical description of a modern computer before a single one had been built.

During World War II, Turing, as one of the team of analysts working on Enigma code breaking, had an opportunity to come into contact with real computers. Although much secrecy surrounded the project, it seems likely, as Andrew Hodges notes in his book Alan Turing: The Enigma, that Turing was exposed to the capabilities of the Colossus computer. In any event, in 1950, Turing wrote the seminal paper “Computing Machinery and Intelligence,” with the opening sentence “I propose to consider the question ‘Can machines think?’ ”

Turing believed that if a computer could do any and all mathematical operations, “We may hope that machines will eventually compete with men in all purely intellectual fields,” and proposed the now-famous Turing test as a meaningful measure of machine intelligence. Writing in 1950, Turing stated his belief that

in about fifty years’ time it will be possible to programme computers … so well that an average interrogator will not have more than 70 per cent chance of making the right identification after five minutes of questioning…. I believe that at the end of the century … general educated opinion will have altered so much that one will be able to speak of machines thinking without expecting to be contradicted.

Turing estimated that a computer with a storage capacity of about 1 billion bits could pass his test. In a way, he was a twentieth-century Babbage, because that requirement was far beyond the technology of the time, as were other ideas of his, for instance, that an important part of machine intelligence would arise by enabling the computer to learn.

Turing was not alone in believing that machine intelligence could be realized, or at least was worth investigating. Six years after his paper, the first study group on the subject was convened at Dartmouth College by the mathematician John McCarthy, who coined the term “artificial intelligence.” Other attendees included Claude Shannon and Marvin Minsky, who was to become a highly influential pioneer in the field at MIT. The conference manifesto read

The study is to proceed on the basis of the conjecture that every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it.

The strategy that emerged was the development of programmed simulations of important chunks of intelligent human behavior. Language skills are one extremely significant part of our thought processes, and early AI researchers worked on machine translation of language, as well as natural language processing; that is, communicating with computers in ordinary language, not special programming languages. Another chunk is the mix of logical thought and strategic planning exemplified in game playing, chess being a prime example. A third is the deductive thinking used in mathematical and geometric proofs. And finally there is visual cognition, the ability to see and give meaning to a scene—among the most challenging of higher brain functions.

While early AI researchers were programming machines to think intelligently in these areas—or at least, trying to—the science of artificial beings was developing in ways that became increasingly entwined with AI and computers. The first more-or-less humanoid creations appeared in the 1920s and 1930s (by then, following Karel Capek’s R.U.R., such creations were called “robots”). One early model was displayed in London in 1928. It did not walk but could move its arms, hands, and head, rise from a seat and take a bow, and speak by way of a voice box, although what it said is no longer known. It was animated by an electric motor driving an array of cables and pulleys that the early Greeks would have recognized, with electromagnets providing additional flexibility.

A decade later, a more sophisticated example of a robot was extremely popular at the 1939 New York World’s Fair, a showplace for the technology that would supposedly improve the world. Elektro the robot was constructed by the Westinghouse Corporation. This 8-foot-tall metal construction could move forward and backward, count to 10, and say 77 words. Although Elektro was a large, threatening-looking clanker, Westinghouse went out of its way to humanize the robot. It could dance and smoke a cigarette, which at the time also seemed endearingly human. A contemporary photograph shows a woman offering Elektro’s robot dog Sparko a tidbit as the creature sits up and begs. The woman is tiny compared to Elektro, but the robot stands benevolently by and the whole scene radiates friendly technology.

More than 60 years after that World’s Fair, Elektro’s engineering details are difficult to come by, but most likely it carried out fixed routines controlled by the relays and vacuum tubes then being introduced into computers. This was the technology Isaac Asimov alluded to in I, Robot as inadequate for versatile behavior without a “positronic brain”; relays and tubes alone were not enough to support complex robotic thoughts and actions.

But as computers and AI developed, the “positronic brain” came closer to realization. First, electronic brains had to become smaller and less power-hungry if they were to be installed in robots. The march toward solid-state electronics took care of much of that. Bulky vacuum tubes gave way to tiny transistors (invented in 1947 by William Shockley, John Bardeen, and Walter Brattain at Bell Labs). These new devices immediately enabled the construction of improved computers, which were soon employed to control so-called industrial robots. George Devol, an engineer, patented the first such device in 1954, and with his partner Joseph Engelberger founded a company to make and sell the UNIMATE—a programmable, one-armed manipulator for use in assembly lines and industrial processes. Engelberger saw such robots as “help[ing] the factory operator in a way that can be compared to business machines as an aid to the office worker.”

General Motors bought its first UNIMATE in 1961, but despite Engelberger’s optimism, these robots did not become widespread in the U.S. automobile industry until their economic advantages became apparent—especially in competition with Japanese industry, which began enthusiastically adopting industrial robots in the late 1960s. In 1978, GM finally installed a highly automated assembly system that used a programmable arm called PUMA (Programmable Universal Machine for Assembly), and now this type of robot is integral to automobile manufacture and other industries.

Industrial robots are not mobile autonomous mechanisms; they do not move from their bases, and they only follow a preprogrammed series of steps. They are closer to computer-controlled machine tools than to self-determining beings. Nevertheless, they have taught us a great deal about how to make artificial bodies move and how to use computers to control physical actions. The next step was to make smarter artificial minds.

That step was assisted by the advance that followed the transistor: the integrated circuit, invented in 1958, which put many transistors and other circuit elements on a single tiny piece of silicon. Integrated circuits steadily grew in capacity and shrank in size, going through successive waves—LSI (large-scale integration), VLSI (very large-scale integration), and ULSI (ultralarge-scale integration)—until today a single Pentium-type computer chip contains millions of transistors and other circuit components. These changes reduced the size of computers and powerfully enhanced their speed and storage capacity.

Hopes for successful AI grew along with computer capabilities, but progress was uneven. Programs that produced deductive proofs of logical or mathematical truths, such as those a human mathematician might derive, worked well, perhaps because they were more or less natural extensions of computer processing. But results in machine translation of language were discouraging; available methods could not cope with the subtleties of multiple and contextual meanings of words. And although an AI program could play a good game of checkers, chess was too much. (Even long after these early efforts, in 1997, the chess-playing Deep Blue computer depended on brute force rather than subtle AI-based strategic analysis, using its great speed and memory to examine all possible outcomes of a given move and then selecting the best one.)
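
The “brute force” style of play can be sketched as a plain recursive search: score every line of play to a fixed depth and choose the move whose worst case is best. The minimax outline below is generic, applied to an invented counting game; it is not Deep Blue’s actual algorithm, which added specialized hardware and elaborate hand-tuned evaluation, but the exhaustive look-ahead is the same in spirit.

```python
def minimax(state, depth, maximizing):
    """Score `state` by exhaustively searching `depth` plies ahead. Toy game:
    the state is a running total, each player adds 1 or 2 on a turn, and the
    maximizing player wants the final total to be as large as possible."""
    if depth == 0:
        return state                          # crude evaluation: the total itself
    scores = [minimax(state + move, depth - 1, not maximizing) for move in (1, 2)]
    return max(scores) if maximizing else min(scores)

# Choose the maximizer's first move by scoring each option with the opponent to reply.
best_move = max((1, 2), key=lambda move: minimax(0 + move, 2, False))
print(best_move)   # 2: the bigger increment leads to a higher total in every line of play
```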

In these early approaches, the idea was to put into the computer a complete model of a system in symbolic terms that the computer could incorporate and apply. But it became clear that this “top-down” or “symbolic AI” method was not necessarily the best technique for robots operating in the real world. One famous example of the symbolic approach comes from work on robotic vision carried out from 1968 to 1972 at the Stanford Research Institute (now SRI International). Funded by the Department of Defense, the plan was to make a robot that could autonomously traverse a battlefield to deliver supplies and gather fire-control information. The test unit (dubbed “Shakey” because it wobbled as it moved) consisted of a motorized wheeled platform with a computer, a TV camera for vision, a rangefinder to measure distance to an object, and a radio link to a second, remote, computer for more processing capacity. The robot was developed in an idealized environment, a set of rooms containing simple, brightly colored shapes such as cubes.

Shakey would receive a typed command such as “Find the cube-shaped block in that room and push it to the other room.” The robot would examine the room and the objects in it, identify the target, plan a route that avoided obstacles, and carry out the planned moves. Within this laborious process, Shakey displayed flashes of intelligence that combined perception, problem-solving capability, and the ability to move to the right place. In one trial, Shakey shifted a ramp so that the robot could roll up it to reach a target on a raised platform. But the calculations required to accomplish such tasks took hours; even worse, the robot could not cope with changes such as a rearrangement of the objects in its special environment, let alone deal with the infinitely more complex conditions found on a battlefield. Shakey’s top-down approach could not factor in every possibility in advance, and it produced an entity far less adaptable than a human or, for that matter, a dog or cat, which knows how to avoid obstacles even in a strange environment.
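
The flavor of that plan-everything-first approach can be suggested with a small sketch: the robot holds a complete map of its little world and searches that map for a collision-free route before moving at all. The grid and the breadth-first search below are invented stand-ins; Shakey's actual software, built around the STRIPS planner, was far more elaborate.

```python
from collections import deque

# A toy "plan first, then act" route finder: the robot keeps a complete model
# of its world (a grid with known obstacles) and searches it for a path before
# moving at all.  Illustrative only, not Shakey's real planning software.

def plan_route(grid, start, goal):
    """Breadth-first search over a grid of 0 (free) and 1 (obstacle) cells."""
    rows, cols = len(grid), len(grid[0])
    frontier = deque([start])
    came_from = {start: None}
    while frontier:
        cell = frontier.popleft()
        if cell == goal:
            path = []
            while cell is not None:          # walk back to the start
                path.append(cell)
                cell = came_from[cell]
            return list(reversed(path))
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if (0 <= nr < rows and 0 <= nc < cols
                    and grid[nr][nc] == 0 and (nr, nc) not in came_from):
                came_from[(nr, nc)] = cell
                frontier.append((nr, nc))
    return None    # no collision-free route exists in the model

# Example: a small room with one wall segment in the middle.
room = [[0, 0, 0, 0],
        [0, 1, 1, 0],
        [0, 0, 0, 0]]
print(plan_route(room, (0, 0), (2, 3)))
```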

In the mid-1980s, robotics researcher Rodney Brooks (then at Stanford University, now at MIT) found himself dissatisfied with this kind of limited performance and began questioning the value of the symbolic approach. Speaking of “intelligence without representation,” he proposed that robots could act intelligently without using internal symbols. The mobile units he built could be called stupid, in that their programming and computing power were less rich than Shakey’s, and instead of the “brain” being localized the processors were distributed throughout the robots to control their individual parts. Further, the sensors that detected how the robot interacted with the real world were closely tied to the motors that controlled its actions, so that the unit could respond rapidly to the data flowing in.
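
Something of that sensor-to-motor coupling, with no central world model, can be conveyed by a toy control loop in which a few simple behaviors each read the sensors directly and the highest-priority one that speaks up drives the motors. The behavior names and threshold numbers are invented for illustration, in the spirit of what Brooks called the subsumption architecture rather than a copy of his software.

```python
import random

# Toy behavior-based control: each behavior looks at raw sensor readings and
# either proposes a motor command or stays silent; the highest-priority
# proposal wins.  There is no internal map or plan of the world.
# (Invented names and numbers; a sketch, not Brooks's actual code.)

def avoid(sensors):
    if sensors["obstacle_distance"] < 0.3:          # too close: turn away
        return {"left_wheel": -1.0, "right_wheel": 1.0}
    return None

def seek_heat(sensors):
    if sensors["heat_level"] > 0.5:                 # something warm ahead
        return {"left_wheel": 1.0, "right_wheel": 1.0}
    return None

def wander(sensors):
    turn = random.uniform(-0.3, 0.3)                # default: amble about
    return {"left_wheel": 0.5 + turn, "right_wheel": 0.5 - turn}

BEHAVIORS = [avoid, seek_heat, wander]              # highest priority first

def control_step(sensors):
    for behavior in BEHAVIORS:
        command = behavior(sensors)
        if command is not None:
            return command    # first (highest-priority) bidder drives the motors

print(control_step({"obstacle_distance": 0.8, "heat_level": 0.7}))
```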

The result was that Brooks’s robots evolved their own behavior as they explored the world. For example, one of his early efforts, called Genghis, learned to walk. Although wheeled robots have their uses, a robot with legs manages better in rough terrain, which might be encountered when NASA sends robotic explorers to distant planets. Insectlike, Genghis had six legs, each with its own motors, processor, and sensors that registered what the leg was doing. Additional sensors detected obstacles in the robot’s path. Others reacted to heat, enabling Genghis to sense the presence of warm-blooded mammals such as people.

Initially Genghis’s six legs were uncoordinated and the robot could not walk. But as each leg tried different movements, Genghis learned from its mistakes through a form of behavior modification by positive and negative reinforcement. In 1990, Brooks described how the unit was programmed:

[E]ach of the [robot’s] behaviors independently tries to find out (i) whether it is relevant (i.e. whether it is at all correlated to positive feedback) and (ii) what the conditions are under which it becomes reliable (i.e. the conditions under which it maximizes the probability of receiving positive feedback and minimizes the probability of receiving negative feedback).

“Positive” and “negative” feedback mean that the signals to the leg motors are modified to enhance or diminish the occurrence of specific motions, depending on whether they contribute to the goal of walking—a process similar in spirit to biological evolution, which by trial and error weeds out whatever does not contribute to an organism’s survival. Little by little, the six legs coordinated themselves and Genghis became a sophisticated walker. The result is a robot that behaves in a lifelike manner as it crawls on the floor and over obstacles, and follows a human around the room when its heat sensors detect one.
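
The bookkeeping Brooks describes can be sketched as a simple tally: each behavior keeps a running score of how often firing it was followed by positive rather than negative feedback, and it is selected more readily as that balance improves. The behavior names, weights, and update rule below are invented for illustration and are far cruder than the actual Genghis learning algorithm.

```python
import random

# Toy feedback learning: each behavior keeps a running score of how often it
# has been followed by positive feedback ("the robot moved forward") versus
# negative feedback ("the robot wobbled or stalled"), and is chosen more often
# as that balance improves.  Invented numbers; much simpler than the real thing.

scores = {"swing_leg_forward": 0, "swing_leg_back": 0, "lift_leg": 0}

def choose_behavior():
    # Behaviors with better feedback histories are proposed more often.
    weights = [max(1, 5 + s) for s in scores.values()]
    return random.choices(list(scores), weights=weights)[0]

def give_feedback(behavior, moved_forward):
    # Positive feedback reinforces the behavior; negative feedback suppresses it.
    scores[behavior] += 1 if moved_forward else -1

for _ in range(100):                       # pretend trial-and-error walking
    b = choose_behavior()
    give_feedback(b, moved_forward=(b == "swing_leg_forward"))

print(scores)   # "swing_leg_forward" ends up strongly reinforced
```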

In a top-down approach, a robot’s actions are motivated by the expectations that are part of the symbolic model of the world that is built into it; but Genghis gained the intelligence to walk by responding directly to stimuli, an approach often called bottom-up. Both the top-down and bottom-up methods are valuable in constructing artificial minds even if they lack bodies, but it is easy to see that the latter approach is especially meaningful for a robot that physically interacts with the real world. An autonomous robot is not useful unless it deals intelligently with its physical environment, where it has to move without collisions, manipulate objects without breaking them, and so on. If the right learning mechanisms could be found, that interaction would constantly help the robotic brain develop on the basis of experience, just as we humans learn to function in the world by doing things in it.

Brooks’s approach was one new thread in AI that began in the mid-1980s; it was not the only one. In 1986, Marvin Minsky presented an alternate approach in his book The Society of Mind. Rather than consider the human mind as a single entity responsible for all thought and behavior, which could in principle be described once and for all, he proposed that different components of the brain all “speak” at the same time. From this babble, in which some voices are loud and others soft, some agree and some oppose each other, a consensus emerges that defines behavior.
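
Purely as a toy, the idea can be pictured as a handful of independent “voices” that each back or oppose candidate actions with different strengths, with the action drawing the loudest combined support winning out. The agents, actions, and weights below are invented; this is a hint of the flavor of Minsky's proposal, not his model.

```python
# Toy "society of mind" vote: several independent agents each support or
# oppose candidate actions with different strengths, and the action with the
# strongest combined support is taken.  Agent names and weights are invented.

votes = [
    ("hunger",    {"eat": +3, "keep_working": -1}),
    ("deadline",  {"eat": -1, "keep_working": +2}),
    ("tiredness", {"eat": +1, "keep_working": -2, "rest": +3}),
]

totals = {}
for _agent, preferences in votes:
    for action, strength in preferences.items():
        totals[action] = totals.get(action, 0) + strength

consensus = max(totals, key=totals.get)
print(totals, "->", consensus)       # the loudest combined "voice" wins
```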

The idea of distributed and even contending “voices” within the human mind might seem strange. Minsky’s model is only one among many proposed since the time of Descartes, as we consider how the physical brain and conscious mind are linked to each other, and is far from being accepted as a definitive explanation of how the mind works. But used as an AI technique, the concept of multiple voices has imparted convincingly lifelike behavior to various robotic toys.

Along with such changes in the design of artificial brains, enhancements in computer speed and capacity offered new possibilities for AI. One advance was the technique of parallel processing, which some observers considered a fifth generation in computing (the fourth generation consisting of computers using VLSI and ULSI technology). In a parallel processor, many computer chips are interconnected so that each one handles a different part of a problem at the same time, which can give impressive results. In 1987, for instance, a parallel processor called the Connection Machine operated 65,536 simple processors simultaneously to perform two billion computer operations per second—an impressively high speed that could hardly be matched by conventional computers at the time. But programming a parallel machine so that the parts of the problem are properly parceled out is difficult, and it is unclear whether parallel processing can offer enormous advantages.
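
The basic idea, parceling one problem out to several processors at once and then combining the partial results, can be sketched with a modern language's standard library; the toy task below (summing squares over separate ranges) is chosen only for illustration, and, as noted above, dividing a real problem cleanly among processors is the hard part.

```python
from concurrent.futures import ProcessPoolExecutor

# Toy parallel processing: split one big computation into chunks, hand each
# chunk to a different worker process, then combine the partial results.
# (Illustration only; real problems rarely divide this neatly.)

def sum_of_squares(bounds):
    lo, hi = bounds
    return sum(n * n for n in range(lo, hi))

if __name__ == "__main__":
    chunks = [(0, 250_000), (250_000, 500_000),
              (500_000, 750_000), (750_000, 1_000_000)]
    with ProcessPoolExecutor() as pool:
        partial_sums = list(pool.map(sum_of_squares, chunks))  # chunks run concurrently
    print(sum(partial_sums))    # same answer as the serial computation
```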

However, the idea of carrying out many “thought” operations at the same time is promising for AI because that’s how the human brain works. Each of its many billions of neurons is intricately connected with others through upward of a thousand connecting points, called synapses. The neural signals that define the brain’s operations travel through the network. Many neural events are going on at the same time, a huge benefit for processing speed. The multiply connected neural architecture also protects a brain that is partly damaged from necessarily losing an entire function such as memory, and allows replacement neural connections to be forged so that new areas can take over from damaged ones. In fact, the process of learning seems to consist of recording the new knowledge by means of new connections that form among neurons. This property of the brain is known as plasticity.

The highly connected architecture of the brain is a model for another approach to AI, the neural net. Unlike a conventional computer, such a net more or less simulates real biological brains. Analogous to the web of neurons that makes up a natural brain, a neural net consists of many simple processing units interconnected so that they can trade data, with each unit operating on the data it receives. Depending on how the net is structured, the system can acquire and store knowledge through a learning process that might teach it, for instance, how to identify particular images or sounds.
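
A minimal sketch of such a net is a single processing unit whose weighted connections are nudged, example by example, until its outputs match a set of training data. The toy task below (learning the logical OR of two inputs) and the learning rate are assumptions for illustration; practical neural nets have many units in several layers and far more sophisticated training methods.

```python
# Toy neural net: one processing unit whose weighted connections are adjusted,
# example by example, until its outputs match the training data.  The task
# (logical OR) and the learning rate are chosen only for illustration.

training_data = [            # (inputs, desired output)
    ((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1),
]

weights = [0.0, 0.0]
bias = 0.0
learning_rate = 0.1

def output(inputs):
    total = bias + sum(w * x for w, x in zip(weights, inputs))
    return 1 if total > 0 else 0

for _ in range(20):                            # repeat the training set
    for inputs, desired in training_data:
        error = desired - output(inputs)       # learn from each mistake
        bias += learning_rate * error
        for i, x in enumerate(inputs):
            weights[i] += learning_rate * error * x

print([output(x) for x, _ in training_data])   # -> [0, 1, 1, 1]
```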

In 1943, Warren McCulloch and Walter Pitts, at the University of Illinois, laid the groundwork for neural nets through pioneering research that depended on viewing the brain as a complex network of neural elements. In order to make a reading machine for the blind that turned printed material into sounded-out words, they interconnected light detectors in a way that mimicked neural connections in the brain. In 1951, Marvin Minsky and a collaborator constructed another neural net called the Stochastic Neural-Analog Reinforcement Computer (SNARC), which was trained to negotiate a maze as a rat would. Further work in neural nets focused on recognition of visual and aural patterns, but the approach languished after 1969, when Minsky and his colleague Seymour Papert, in their book Perceptrons, pointed out limitations in the method as then understood. New insights, however, revived the technique in the mid-1980s, along with other approaches that approximate biological styles of thinking.

SENSING

Initially, AI researchers aimed to produce intelligence within a computer, not a robot. A computer interacts with other machines or humans through the abstract medium of data flow but has no direct connection to its physical environment. An operational robot is different; it must take in information from its surroundings and respond in real time. It is meant to emulate what happens in a human, in whom the five senses gather information about the exterior world that is sent to the brain for analysis and response. An artificial brain in a blind and deaf robot body would be useless; it needs sensors that simulate the human ones, or go beyond them.

As electronic science and technology developed, they led to the construction of artificial sensors analogous to the human senses. Machine vision had roots in late-nineteenth-century discoveries that light could change the electrical properties of certain materials or cause them to emit a flow of electrons; that is, an electrical current (one of these phenomena, the photoelectric effect, so baffled physicists that Albert Einstein earned a Nobel Prize in 1921 for explaining it). These effects were incorporated into devices that detected light by turning it into electricity. By the 1920s, the conversion of light into electrical signals was advanced enough that television images were being broadcast on an experimental basis, and in the late 1930s, the BBC in England and the RCA Corporation in the United States began regular television broadcasting. Improved television cameras were developed in 1939. Following World War II, commercial television broadcasting became widespread, and in 1953, color television was introduced.

Within the digital revolution brought on by the growth of computation, the development of video cameras whose electrical data came as a stream of binary digits was inevitable. Like a human eye that gathers data and sends it to the brain for analysis, a computer-based camera scans a scene and presents it to the computer for further processing—but this is only the beginning of meaningful machine vision. The quantity of data involved in a pictorial representation of the world is staggering and requires extremely high levels of computational power to process. And as the Shakey robot demonstrated, gathering visual data and transmitting them to the computer is the easy part; it is extraordinarily difficult to decide how to assign meaning to an image so that the robot can act on the information.
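
A rough back-of-the-envelope calculation, using assumed figures for a modest camera, shows why: even a small digitized image stream delivers tens of millions of bytes every second before any “seeing” has been done.

```python
# Rough arithmetic on the data rate of a modest video camera (assumed figures).
width, height = 640, 480          # pixels per frame
bytes_per_pixel = 3               # one byte each for red, green, blue
frames_per_second = 30

bytes_per_frame = width * height * bytes_per_pixel
bytes_per_second = bytes_per_frame * frames_per_second

print(f"{bytes_per_frame:,} bytes per frame")      # 921,600
print(f"{bytes_per_second:,} bytes per second")    # 27,648,000
```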

Similarly, artificial sensing of sound and means to generate it began in the late nineteenth century. Between 1876 and 1878, Alexander Graham Bell patented the telephone, Thomas Edison patented the phonograph, and the British musician David Edward Hughes invented a microphone in which the pressure from sound waves altered the electrical properties of grains of carbon. Later innovations included magnetic recording, first proposed in 1920, and high-quality sound reproduction along with stereophonic sound, which began in the late 1950s and early 1960s.

Even early in the history of computers, it was music that drove the further development of digital sound techniques. The first music synthesizer program was written at Bell Labs in 1960, and by 1984, a set of standards had been created to transmit musical information in digital form between electronic synthesizers and computers. Now digital recording and playback of music, and word recognition and synthesis, are routine functions even on small desktop computers. But as with vision, the mere ability to register or produce sounds or words under computer control does not give them meaning. In humans, the linguistic analysis that the brain performs as we speak and listen is one of our most demanding intellectual functions.

Whereas the mechanics of artificial vision and hearing have been refined by decades of development, synthetic touch, taste, and smell are less highly evolved, partly because their nature in humans and animals is not so well understood. Touch and taste have the complication of operating over large areas with many sensors, taste and smell involve varied chemical interactions, and taste also seems to depend on texture. Still, artificial versions of all these senses exist and are being steadily improved. Touch has been implemented by sensors that produce an electrical effect when they are deflected or change position in space; these serve as collision-avoiding devices and enable an artificial being to judge its bodily orientation. Finer tactile sensing, like that of the human fingertips, is also under development, as are analogues for smell and taste.

In practice, these last two senses are probably the least essential for an artificial being, which could be highly functional with only vision, hearing, and a limited sense of touch. But the sense of smell carries a special meaning for us and illustrates the complexities of simulating human behavior. Although smell does not play the central role for humans that it does for many other animals, it is in a way our most fundamental sense. It connects directly to an ancient part of the brain, the limbic system, without the high-level processing that vision entails. That is why an odor can evoke mood or feeling instantly. As we’ll see later, emotional factors like these might be surprisingly relevant to the creation of intelligent artificial beings.

If artificial senses have their shortcomings, there are also compensations, because the natural senses have limitations. The wavelengths of light we see are a small fraction of the range of electromagnetic wavelengths in the universe, which includes infrared, ultraviolet, and more. But there are sensors that detect radiation at these wavelengths and give artificial vision extrahuman capabilities; infrared vision, for instance, can penetrate darkness. Other possibilities abound, such as using sonar the way submarines do to probe the environment, as well as the functional equivalent of telepathy—direct mind-to-mind communication among artificial beings by radio.

The possibilities for touch, smell, and taste, and for extrahuman senses, are new enough that their further discussion belongs in the second half of this book. But another aspect of artificial creatures, their appearance, has roots that go back to mechanical automata.

LOOKING HUMAN

For all their developing mental, sensory, and physical capacities, modern digital artificial beings are inferior in one way to the eighteenth-century automata of Jaquet-Droz and de Vaucanson: they are not androids; they do not look human. The artisans who built clockwork automata took great pains to make their creations resemble people, modeling the faces and hair, dressing them well, and aiding the illusion with subtle but telling cues to humanness, such as having Henri Maillardet’s “Draughtsman-Writer” look down at the paper before starting to write.

Most modern artificial beings, however, do not look like real people. The 1939 World’s Fair clanker robot Elektro had a humanoid outline, with limbs, torso, and a head, but its size, metal body, and cartoonish facial features unquestionably connoted a machine. More recent robots like Shakey, Genghis, and Cog are even less prepossessing as human stand-ins, and indeed were not created with that aim in mind. Since they were special-purpose or test units, there was no advantage in making them humanlike. They are instead bare assemblages of wheels or legs, motors, girders, sensors, and computer processors arranged for engineering convenience or to provide specific physical abilities.

In the 80 years that electrical and electronic robots have existed, we have yet to create an autonomous, human-seeming android like those that appear in the virtual history of artificial beings. But the introduction of robotics technology to the entertainment industry has brought us some way toward natural-looking artificial creatures, in the development of so-called animatronic figures. The Walt Disney Corporation introduced these three-dimensional entertainment robots in human and animal form at Disneyland in 1963, and showed them at the New York World’s Fair in 1964. The first was a tap-dancing simulacrum of the dancer and actor Buddy Ebsen. Others included Abraham Lincoln standing, speaking, and gesturing; a dinosaur diorama; and the exhibit The World of Tomorrow.

The original animatronic robots, and current versions that appear in films, are not autonomous. They operate under remote control from human operators or, like an industrial robot, perform an unvarying computer-controlled sequence of actions. But they show how far we have come in producing artificial beings that look convincingly natural. Their development has required new styles of engineering to avoid making awkward clankers. Nick Maley, who has worked on varied animatronic “creature effects,” including the character Yoda in the 1980 film The Empire Strikes Back, notes subtle differences between standard methods and what is effective in replicating living beings. Engineers constructing mechanical beings, he says, tend to use

strong materials to build robust mechanisms based upon the same tried and tested mechanical principles that create cars and trains…. However, nature’s creations don’t use the same mechanical principles…. Their joints are less precise, their connections less rigid…. Their existence is usually a delicate balance of strength and weight developed to suit specific circumstances.

Maintaining that delicate balance is not easy. One recent animatronic model of a human head is packed to the very skin with a dense collection of components and wires, including 22 small motors that control its movements and facial features.

Along with flexible mechatronic design, these robots use materials that replace the substances once incorporated into eighteenth-century automata. Glass eyes were first made in Venice around 1579, but have been replaced by plastic versions that look more natural. Likewise, although prosthetic limbs, from wooden legs to iron hands, have a long history, they began to look convincing only with the arrival of silicone rubber, a compound with natural-feeling resilient properties. It can be used to form an artificial skin with layers that approximate the internal structure, and therefore the feel, of real skin; it can be colored as desired; and pores and hair can be added as final persuasive details.

These and other advances are leading simultaneously to improved prosthetic devices and to the possibility of androids whose internal structure is overlaid with an artificial humanlike outer layer—modern versions of eighteenth-century automata. Like the rubber in de Vaucanson’s duck that simulated the digestive tract, some of the internal machinery functionally replicates what goes on in living creatures; for instance, by using “smart materials,” which change their properties under external control. One type can be made to extend and contract depending on electrical voltage, simulating how human muscles act. The result is a synthetic muscle that can give artificial limbs a smooth, natural action, rather than the jerkier motion produced by machinery.

Such humanlike flexibility opens up possibilities for convincing bodily motion and even facial expression. For instance, the Saya robot at the Tokyo University of Science has a humanlike face with silicone-rubber skin. Underlying this is a set of artificial muscles, worked by compressed air and arranged to follow human facial anatomy, that can be manipulated to display joy, anger, astonishment, and other emotions. Although the question whether artificial beings can or should experience emotions is complex, there is no doubt that the ability to simulate emotion through facial expression and body language greatly affects interactions with people.

Artificial skin and muscles are examples of how the technology of artificial creatures and research in human implants interact with each other to create replacements for body parts and organs and, potentially, improvements in human function. The medical market for implanted devices is enormous, with millions of implants performed every year. This biomedical enterprise provides a technological base for efforts to enable artificial beings to mimic human capabilities, while research in artificial beings leads to better implants.

Despite our best efforts to construct artificial beings, at the moment living organs, developed through millennia of evolutionary progress, are generally superior to their artificial counterparts. Even a cat or mouse brain, let alone a human one, functions more intelligently in the real world than the best AI-driven robot yet built. The sensitive nose of a dog detects odors beyond the capabilities of mechanical sniffers. The answer to some of the pressing problems of creating artificial creatures might be the combining of nonliving with living parts, just as the god Hephaestus used the flow of ichor—blood—to add something essential to his bronze robot, Talos. As the next chapter shows, humanity already has a surprising history of combining the living and the artificial.

Next Chapter: 4 We Have Always Been Bionic