June 5, 1996


Machine Intelligence, Part II:
From Bumper Cars to Electronic Minds


Related Article: Machine Intelligence, Part I: The Turing Test and Loebner Prize

SURF TURF / By ASHLEY DUNN

Imagine yourself in a dimly lit box with only a slit opening to the outside world. All that you have before you is paper, pencil and an enormous book filled with millions of Chinese sentences.

You have been assigned a simple task. Whenever a piece of paper with symbols is passed through the slit, you must look up the symbols in the book, copy down the appropriate response and pass the paper back through the slit.

You have absolutely no idea of what you are writing, but to the outside world, the responses are amazing. To them, you appear to be fluent in Chinese.
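
In computing terms, the occupant's job amounts to a lookup table. Here is a minimal sketch of that procedure, using a toy two-entry "book" invented purely for illustration; the point is that executing it flawlessly conveys no understanding of Chinese.

    # A minimal sketch of the room's procedure, with a toy "book" invented
    # purely for illustration: pure symbol lookup, no understanding of what
    # the symbols mean.

    RULE_BOOK = {
        "你好吗？": "我很好，谢谢。",            # "How are you?" -> "Fine, thanks."
        "今天天气怎么样？": "今天天气很好。",    # "How is the weather?" -> "Very nice."
    }

    def person_in_room(slip_of_paper: str) -> str:
        """Match the incoming symbols against the book and copy out the reply."""
        return RULE_BOOK.get(slip_of_paper, "请再说一遍。")  # "Please repeat that."

    # To the observer outside the slit, the answers look fluent. Inside,
    # it is only pattern matching on shapes the occupant cannot read.
    print(person_in_room("你好吗？"))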

This scenario, which I have elaborated slightly, is known as the Chinese Room. And while it may seem a bit silly on the surface, it is part of one of the most enduring debates in the field of artificial intelligence.


John R. Searle
The room was created 16 years ago by John R. Searle, a professor of philosophy at the University of California, Berkeley, in response to the question of whether machines could possess intelligence or understanding.

His Chinese Room was meant to illustrate what is apparent to anyone who has not been ensnared by the hall-of-mirrors arguments about consciousness and thought, namely that there are vast differences between syntax and semantics, between mere grammar and meaning.

For Searle, intelligence is intrinsically linked to the physical world and to the medium through which we perceive that world, our own bodies. Because of this, Searle argues that computers can never achieve human intelligence. He goes further, asserting that computers will never be intelligent by any measure as long as they rely on knowing the world in only abstract ways through their programming.

This may seem to doom computers to an eternal state of stupidity. Certainly, machines will never understand human biological phenomena, like itching and hunger, in the same way we do.

But a group of researchers in robotics and artificial intelligence holds out some hope of expanding electronic brains into minds. They have begun experimenting with mechanized bodies and senses to help computers connect with reality in a crude way.

There is a curt reply to this strategy of creating bodies for brains -- that reality is still being manipulated in an abstract form by the robot's central processing unit. Instead of having just a brain-in-a-box, we now have a walking brain-in-a-box. Who cares?

Even if machines had greater sensory abilities, would they be intelligent? Well, livers have nerves and no one says a liver is intelligent.

But the researchers believe that there is still great power in the approach of building bodies for brains because of its potential to foster computer learning, adaptability and independence -- all fundamental aspects of biological intelligence.

There are numerous robot projects under way, but most are bogged down in their reliance on digital brains to process the complexities of the world.


Mark W. Tilden, "robobiologist"
Few of the projects, if any, have taken the approach of Mark W. Tilden, a researcher at the Los Alamos National Laboratory. He has adopted one of the strangest and most radical strategies in robotics, rejecting the creation of internal world models and starting with the simplest examples of biological life.

His creations are so focused on the body and physical world that they have no brains whatsoever. They are entirely reactive creatures built out of broken Sony Walkmans, solar-powered calculators and odd radio parts.

Tilden is creating the mechanical equivalent of jellyfish or tapeworms. They have no processors or memory, only a power source (from the solar-powered calculators), legs or wheels, motors (from the Walkmans), light sensors (also from the Walkmans) and a few transistors to control the motors. Most are about the size of a bar of soap.

His creatures are, in many ways, only sophisticated bumper cars, complete with analog circuitry. But his concept of starting at the very roots of intelligence is compelling.

His theory is that by building capable bodies, he will create a platform that will ultimately lead to rudimentary minds. His robots have displayed surprisingly lifelike activity. They know how to move independently toward the light, avoid obstacles, claim territory and congregate with other similar creatures. All with only a crude nervous system.

Compared with his past experiments with robots, these "biomorphs," as Tilden calls them, are a breakthrough.

Tilden began making robots when he was three years old, although it wasn't until he got to a university in Canada that he had the money and skills to build a more capable model. His first complex robot, designed in 1982 to vacuum his apartment, used a processor scavenged from an old Atari ST.


Credit: Mark W. Tilden

A Tilden "biomorph" made from a Sony Walkman
What he soon discovered was that trying to program a robot to deal with the world is a losing proposition. In a blank room with a linoleum floor, you can accurately model the world in the robot's memory. But place one obstacle in the room and now you have to program that into memory. Throw socks on the floor, add glass tables and pets and you have a programmer's nightmare.

After dumping $5,000 and 18 months of his life into his robot, he realized that trying to model the world in the robot's memory was impossible. Small changes in the environment ultimately led to an exponential growth in code.

He had written tens of thousands of lines of code for his robot. The result was a paranoiac mess that was so overburdened by programmed rules and exceptions that eventually all it did was spin in place. He powered down his creation and sank into despair over the future of robotics.
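
For illustration only (this is a sketch of the world-model approach in general, not Tilden's actual code), such a robot amounts to a planner searching an explicit map of the room, and the map is the problem:

    from collections import deque

    GRID = 10  # a bare linoleum floor is easy: a 10-by-10 grid of free cells

    def plan_path(start, goal, obstacles):
        """Breadth-first search over an explicit model of the room."""
        frontier = deque([(start, [start])])
        seen = {start}
        while frontier:
            (x, y), path = frontier.popleft()
            if (x, y) == goal:
                return path
            for step in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
                nx, ny = step
                if (0 <= nx < GRID and 0 <= ny < GRID
                        and step not in obstacles and step not in seen):
                    seen.add(step)
                    frontier.append((step, path + [step]))
        return None  # the model says the goal is unreachable

    # The plan is only as good as the model. Every sock, pet and glass table
    # needs its own detection, representation and update code, and anything
    # the sensors miss silently breaks the plan.
    print(plan_path((0, 0), (9, 9), obstacles={(3, 3), (3, 4), (4, 3)}))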

But in 1989, Tilden attended a lecture by Professor Rodney Brooks of the Massachusetts Institute of Technology, who described the possibility of building robots without powerful processors as brains, relying instead on reactive sensors attached to motors. The question was: How simple could you make a device that did not need an internal model of the world to function?
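
Brooks's idea can be sketched in a few lines of code, though the sketch below is only a software analogy -- Tilden's biomorphs are analog transistor circuits that run no code at all. Cross-couple two light sensors to two motors and the machine steers toward light; let a bump switch briefly override them and it veers away from obstacles. No map, no memory, no plan.

    def reactive_step(left_light, right_light):
        """Map raw sensor readings straight to motor speeds; no model, no memory."""
        # Cross-coupling steers the machine toward the brighter side: more
        # light on the left drives the right motor harder, turning it left.
        left_motor = 0.2 + 0.8 * right_light
        right_motor = 0.2 + 0.8 * left_light
        return left_motor, right_motor

    def with_bumpers(left_light, right_light, left_bump, right_bump):
        """A bump reflex briefly overrides the light response."""
        if left_bump:
            return 1.0, -1.0   # pivot away to the right
        if right_bump:
            return -1.0, 1.0   # pivot away to the left
        return reactive_step(left_light, right_light)

    # Sensor readings between 0.0 (dark) and 1.0 (bright); no bumps pressed.
    print(with_bumpers(0.9, 0.2, False, False))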

It took Tilden two weeks to build his first biomorph, a small insect-like creature about twice the size of a cigarette package. After finishing it, he flipped it over and, to his surprise, it started moving on its own toward the light.

His first generation of biomorphs could independently navigate his apartment and easily avoid obstacles that had doomed his vacuum-cleaner robot a decade earlier.

He realized that the key to the robots -- and to biological life as well -- was designing for survival. He rejected Isaac Asimov's rules of robot life:

  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  2. A robot must obey orders given it by human beings except where such orders would conflict with the First Law.
  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

Tilden, who likes to call himself a robobiologist, came up with his own rules based on a Darwinian model instead of Asimov's principle of obedience to humans:


  1. Protect yourself.
  2. Feed yourself.
  3. Find your property.

Since building that first biomorph in 1989, Tilden has constructed more than 140 such creatures. They have advanced through three levels of development that roughly parallel the evolution of some biological life. First came the simple act of utilizing energy, that is, just getting up and moving. Next, the addition of limbs and the mastery of locomotion. Then, the integration of senses -- the ability to detect light, sound and heat -- to guide the robot's movements.


Credit: Mark W. Tilden

Tilden's "biomorphs" can sense their surroundings. Will senses enable them to learn?
Tilden argues that the vast majority of biological life on Earth has achieved only these three levels and still manages to survive.

Today, his tiny robots autonomously roam his apartment and a torture testing ground he calls Jurassic Park. They seek light for their solar cells, navigate mazes and perform a variety of small tasks, like cleaning windows and collecting dust off the floor.

He said that the next levels of development -- complex social behavior to foster cooperation and the ability to plan future action -- require the ability to learn. In other words, they will need a memory and a brain. He believes that the addition of a capable processor and memory could push his creations to the level of simple animals.

There are weaknesses in his approach. For example, unlike artificial-life programs, which can reproduce, mutate and improve through a process of natural selection in a virtual environment, there is no way for robots to achieve evolutionary improvement on their own. Human beings must play the stork and pull the plug on bad designs and tinker with new ones to create the next generation.
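
What those artificial-life programs do can be caricatured in a short, generic loop -- the one below is an assumed example, not any particular system: copy the fittest designs, mutate the copies, discard the rest, thousands of generations an hour. A physical robot gets none of that for free.

    import random

    def fitness(genome):
        # Stand-in objective; in a real artificial-life system this would be
        # survival time, energy gathered, distance covered and so on.
        return -sum((g - 0.7) ** 2 for g in genome)

    population = [[random.random() for _ in range(4)] for _ in range(20)]
    for generation in range(100):
        population.sort(key=fitness, reverse=True)
        survivors = population[:10]                           # selection
        children = [[g + random.gauss(0, 0.05) for g in p]    # mutation
                    for p in survivors]
        population = survivors + children

    print(max(fitness(p) for p in population))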

And when all is said and done, the question still remains: Will these machines and their progeny ever be intelligent? One could argue that survival is the ultimate act of intelligence and that these creatures have already shown their abilities. On the other hand, you could say that following sunlight is not survival; it's just good mechanics.

There is no question that the power of computers is advancing and has rendered some basic forms of intelligence. Computers can add like the devil, play chess better than 99.9 percent of the world and dust apartments with no supervision. Yet, at the same time, they cannot handle some of the simplest human tasks -- distinguishing a cat from a dog, navigating a New York sidewalk or conducting a decent conversation. The arguments and examples can go round and round for an eternity.

It is this circular and abysmal quality of the debate that perhaps points to a fundamental weakness in the question itself.

Can machines be intelligent? Do they have understanding? Can they possess consciousness? Will they have a soul?

These questions, in the end, will always be difficult -- and the tests always lacking -- because of the vast differences between men and machines. In many ways, the questions speak more about the anxiety of humans than about the accomplishments of their creations.

It is the Frankenstein syndrome: Create in our own image and then degrade the result.

Tilden's machines and other machines may never pass the Turing Test or the standard of understanding set by the Chinese Room. But then again, maybe they will eventually be smart enough to not care.


Related Sites
Following are links to the external Web sites mentioned in this article.

Newsgroups:

  • comp.robotics.misc
  • comp.robotics.research

Other Sites:

  • The Chinese Room
  • John R. Searle Home Page
  • Mark Tilden's Robot Olympics
  • Los Alamos National Laboratory


    Copyright 1996 The New York Times Company
