of course they'll seem mysterious! (As Arthur C. Clarke has said, any sufficiently advanced technology seems like magic.) So first we'd better understand how people and computers might do the ordinary things that we all do. (Besides, those skeptics should be made to realize that their arguments imply that ordinary people can't think, either.) So let's ask if we can make computers that can use ordinary common sense; until we get a grip on that, we hardly can expect to ask good questions about works of genius.
In a practical sense, computers already do much more than their programmers tell them to. I'll grant that the earliest and simplest programs were little more than simple lists and loops of commands like "Do this. Do that. Do this and that and this again until that happens." That made it hard to imagine how more could emerge from such programs than their programmers envisioned. But there's a big difference between impossible and "hard to imagine." The first is about it; the second is about you!
Most people still write programs in languages like BASIC and FORTRAN, which make you write in that style; let's call it "do now" programming. This forces you to imagine all the details of how your program will move from one state to another, from one moment to the next. And once you're used to thinking that way, it is hard to see how a program could do anything its programmer didn't think of, because it is so hard to make that kind of program do anything very interesting. Hard, not impossible.
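To make the contrast concrete, here is a tiny sketch of "do now" programming, written in Python for readability (the example itself is mine, not from BASIC or FORTRAN): every step and every state transition is spelled out in advance by the programmer.

```python
# "Do this. Do that. Do this and that and this again until that happens."
# The programmer dictates the exact sequence of states, moment to moment.

total = 0
count = 0
while count < 10:           # "...again until that happens"
    total = total + count   # "do this"
    count = count + 1       # "do that"
print(total)                # prints 45: the sum 0 + 1 + ... + 9
```

Nothing can happen here that the programmer did not explicitly put in the sequence.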
Then, AI researchers developed new kinds of programming. For example, the "General Problem Solver" system of Newell, Shaw, and Simon lets you describe processes in terms of statements like "if you're on the wrong side of a door, go through it" or, more technically, "if the difference between what you have and what you want is of kind D, then try to change that difference by using method M."(1) Let's call this "do whenever" programming. Such programs automatically apply each rule whenever it's applicable, so the programmer doesn't have to anticipate when that might happen. When you write in this style, you still have to say what should happen in each "state" the process gets into, but you don't have to know in advance when each state will occur.
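A minimal "do whenever" interpreter can be sketched in a few lines of Python. This is an illustration of the idea, not GPS's actual machinery; the rule format and the door example are my own invention, echoing the rule quoted above.

```python
# A toy production-rule interpreter: each rule pairs a condition with an
# action, and the loop fires whichever rule applies to the current state,
# until no rule applies. The programmer never says *when* a rule runs.

def run_production_system(state, rules):
    """Repeatedly fire the first applicable rule until none applies."""
    while True:
        for condition, action in rules:
            if condition(state):
                state = action(state)
                break
        else:
            return state  # no rule matched: nothing left to do

# Toy problem: "if you're on the wrong side of a door, go through it,"
# then sit down once you are through.
rules = [
    (lambda s: s["side"] == "wrong",
     lambda s: {**s, "side": "right"}),
    (lambda s: s["side"] == "right" and not s["seated"],
     lambda s: {**s, "seated": True}),
]

print(run_production_system({"side": "wrong", "seated": False}, rules))
# {'side': 'right', 'seated': True}
```

Note that the rules themselves say nothing about the order of events; the order emerges from whichever conditions happen to hold.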
You also could do such things with the early programming language COMIT, developed by Yngve at MIT, and the SNOBOL language that followed it. Today, that programming style is called "production systems."(2) The mathematical theory of such languages is explained in my book.(3)
That General Problem Solver program of Newell and Simon was also a landmark in research on Artificial Intelligence, because it showed how to write a program to solve a problem that the programmer doesn't know how to solve. The trick is to tell the program what kinds of things to try; you need not know which one actually will work. Even earlier, in 1956, Newell, Shaw, and Simon developed a computer program that was good at finding proofs of theorems in mathematical logic, problems that college students found quite hard, and it even found some proofs that were rather novel. (It also showed that computers could do "logical reasoning," but this was no surprise, and since then we've found even more powerful ways to make machines do such things.) Later, I'll discuss how this relates to the problem of making programs that can do "common-sense reasoning."
Now, you might reply, "Well, everyone knows that if you try enough different things at random, of course, eventually, you can do anything. But if it takes a million billion trillion years, like those monkeys hitting random typewriter keys, that's not intelligence at all. That's just Evolution or something."
That's quite correct, except that the "GPS" system had a real difference: it didn't do things randomly. To use it, you also had to add another kind of knowledge: "advice" about when one problem-state is likely to be better than another. Then, instead of wandering around at random, the program can seek the better states; it sort of feels around, the way you'd climb a hill in the dark by always moving up the slope. This makes its "search" seem not random at all, but rather purposeful. The trouble, and it's very serious, is that it can get stuck on a little peak, and never make it to the real summit of the mountain.
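That hill-climbing idea, and the way it gets stuck, can be sketched in a few lines of Python. The landscape and the numbers are invented purely for illustration: a small bump near x = 2 and the true summit near x = 8.

```python
# Hill climbing: always move to the highest neighboring spot, and stop
# when no neighbor is higher. A climber starting near the small bump
# reaches its top and stops there, never finding the real summit.

def height(x):
    # An invented landscape: a low peak at x = 2 and a higher one at x = 8.
    return max(0.0, 3 - abs(x - 2)) + max(0.0, 6 - 1.5 * abs(x - 8))

def hill_climb(x, step=0.5):
    """Step uphill until no neighboring position is higher."""
    while True:
        best = max((x - step, x, x + step), key=height)
        if best == x:
            return x  # stuck: every neighbor is the same height or lower
        x = best

print(hill_climb(0.0))  # stops at 2.0, the little peak
print(hill_climb(6.0))  # stops at 8.0, the real summit, but only because
                        # it happened to start on that mountain's slope
```

Which peak you end on depends entirely on where you start; the climber has no way to see, from the little peak, that a higher one exists across the valley.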
Since then, much AI research has been aimed at finding more "global" ways to solve problems, to get around that problem of getting stuck on little peaks which are better than all the nearby spots, but worse than places that can't be reached without descending in between. We've discovered a variety of ways to do this, by making programs take larger views, plan further ahead, reformulate problems, use analogies, and so forth. No one has discovered a "completely general" way to always find the very highest peak. Well, that's too bad, but it doesn't mean there's any difference here between men and machines, since people, too, are almost always stuck on local peaks of every kind. That's life.
Today most AI researchers use languages like LISP, which let a programmer use "general recursion." Such languages are even more expressive than "do whenever" languages, because their programmers don't have to foresee clearly either the kinds of states that might occur or when they will occur; the program just constrains how states and structures will relate to one another. We could call these "constraint languages."(4)
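A classic illustration of what general recursion buys you is the Towers of Hanoi puzzle; here it is sketched in Python rather than LISP, since the recursive idea is the same in either language. Nothing in the definition enumerates the states the process will pass through; it only says how a problem of size n relates to two problems of size n - 1.

```python
# Towers of Hanoi by general recursion: the programmer states a relation
# between a problem and its subproblems, and the sequence of moves
# emerges on its own.

def hanoi(n, src, dst, spare):
    """Return the list of (from, to) moves transferring n disks src -> dst."""
    if n == 0:
        return []
    return (hanoi(n - 1, src, spare, dst)   # park n-1 disks on the spare peg
            + [(src, dst)]                  # move the largest disk
            + hanoi(n - 1, spare, dst, src))  # bring the n-1 disks back on top

print(hanoi(3, "A", "C", "B"))  # seven moves, none written out by the programmer
```

The programmer never wrote down any particular move; the seven moves for three disks, or the 1023 moves for ten, all follow from the single stated relationship.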
Even with such powerful tools, we're still just beginning to make programs that can learn and can reason by analogy. We're just starting to make systems that will learn to recognize which old experiences in memory are most analogous to present problems. I like to think of this as "do something
(1) Of course, I am greatly simplifying that history.
(2) Allen Newell and Herbert Simon, Human Problem Solving.
(3) Marvin Minsky, Computation: Finite and Infinite Machines. Prentice Hall, 1967.
(4) This isn't quite true. LISP doesn't really have those "do whenevers" built into it, but programmers can learn to make such extensions, and most AI workers feel that the extra flexibility outweighs the inconvenience.
|THE AI MAGAZINE Fall 1982 4|