think about, and especially in how we think when we reason carefully.

Could a Computer Really Understand Anything?

"I see you've programmed that computer to obey verbal commands. You've probably inserted into its memory how it should respond to each command. But I don't believe the program really understands the words. in any human sense.'

This criticism is deserved by most computer systems around these days. But how does it apply to the 1965 program, written by Daniel Bobrow, that solves high-school algebra "word problems"? It could solve some problems like these:

The distance from New York to Los Angeles is 3000 miles. If the average speed of a jet plane is 600 miles per hour, find the time it takes to travel from New York to Los Angeles by jet.

Bill's father's uncle is twice as old as Bill's father. Two years from now Bill's father will be three times as old as Bill. The sum of their ages is 92. Find Bill's age.
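For readers who want to check the second problem, here is a small sketch (plain Python, and emphatically not Bobrow's program) that solves the equations the words seem to imply, under one assumed reading: "their ages" means Bill, his father, and the uncle together.

```python
# Equations implied by the words (an assumed reading, not the program's):
#   uncle == 2 * father                "uncle is twice as old as father"
#   father + 2 == 3 * (bill + 2)      "two years from now ..."
#   bill + father + uncle == 92       "the sum of their ages is 92"
# Brute-force search over whole-number ages.
for bill in range(1, 93):
    father = 3 * (bill + 2) - 2       # rearranged second equation
    uncle = 2 * father                # first equation
    if bill + father + uncle == 92:   # check the third equation
        print(bill, father, uncle)    # -> 8 28 56
```

Under that reading, Bill is 8, his father 28, and the uncle 56; two years hence the father is 30, three times Bill's 10.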

Most human students find problems like these quite hard. They find it easier to learn to solve the kinds of equations they encounter in high-school algebra: that's just cook-book stuff. But to solve the word problems, you have to figure out what equations to solve. Doesn't this mean you have to understand at least something of what the words and sentences mean?

Well, to begin with, Bobrow's program used a lot of tricks. It guesses that the word "is" usually means "equals." It doesn't even try to figure out what "Bill's father's uncle" is, except to notice that this phrase resembles "Bill's father."(8) It doesn't know that "age" and "old" have anything to do with time, only that they're numbers to be put into, or found from, equations. Given these and a couple of hundred other facts about the words, it sometimes (and by no means always) manages to get the answers right.
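The flavor of such tricks can be shown in a few lines. This is my own toy sketch, far cruder than Bobrow's program, solving the jet problem above: "is" is treated as "equals," phrases are matched by crude keyword tests, and one canned schema (time = distance / speed) supplies the answer.

```python
# Toy version of the "is means equals" trick (a sketch, not Bobrow's code).
facts = {}
sentences = [
    "the distance from new york to los angeles is 3000",
    "the average speed of a jet plane is 600",
]
for s in sentences:
    left, right = s.split(" is ")             # guess: "is" means "equals"
    key = "speed" if "speed" in left else "distance"  # crude phrase match
    facts[key] = float(right)

# One canned schema, known to the "program" in advance:
time = facts["distance"] / facts["speed"]
print(time)  # -> 5.0 (hours)
```

The point of the sketch is how little it knows: nothing about New York, jets, or travel, only that two numbers fit a memorized template.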

But dare one say that Bobrow's program really "understands" those sentences? If meaning isn't caught in several hundred different tricks, might we not still imprison it in several hundred thousand tricks? Is "understand" even an idea we can ask Science to deal with?

Here's how I like to deal with such questions. I feel no obligation to define such words as "mean" and "understand," just because others have tried for five thousand years! Our words are only social things; it's great when they combine to give us good ideas. But here, I think, they only point to a maze of unproductive superstitions that handicapped our predecessors when they tried to figure out what "meanings" are and how they get connected to our words. It is a wrong-headed enterprise, like asking people to agree on what is "good" without considering each person's different hopes and fears.

Fortunately, as I will show, there isn't any need to try to capture "meanings" in such rigid, public ways. In fact, that would defeat our real purposes. This is because any psychologically realistic theory of meanings needs built-in ways to deal with individual differences between the people who are to do the "knowing."

Could A Computer Know What Something Means?

We can't think very well about meaning without thinking about the meaning of something. So let's discuss what numbers mean. And we can't think very well about what numbers mean without thinking about what some particular number means. Take Five. Now, no one would claim that Bobrow's algebra program could be said to understand what numbers "really" are, or even what Five really is. It obviously knows something of arithmetic, in the sense that it can find sums like "5 plus 7 is 12." The question is: does it understand numbers in any other sense -- say, what 5 or 7 or 12 are -- or, for that matter, what "plus" or "is" are? Well, what would you say if I asked, "What is Five?" I'll argue that the secret lies in that little word "other."

Early this century, the philosophers Russell and Whitehead suggested a new way to define a number. "Five," they said, "is the set of all possible sets with five members." This set includes every set of Five ball-point pens, and every litter of Five kittens. The trouble was, this definition threatened also to include sets like "these Five words" and even "the Five things that you'd least expect." Sets like those led to so many curious inconsistencies and paradoxes that the theory had to be doctored so that these could not be expressed -- and that made the theory, in its final form, too complicated for any practical use (except for formalizing mathematics, where it worked very well indeed). But, in my view, it offers little promise for capturing the meanings of everyday common sense. The trouble is with its basic goal: finding for each word some single rigid definition. That's fine for formalizing Mathematics. But for real life, it ignores a basic fact of mind: what something means to me depends to some extent on everything else I know -- and no one else knows just those things in just those ways.

But, you might complain, when you give up the idea of having rigid definitions, don't you get into hot water? Isn't ambiguity bad enough? What about the problems of "circular definitions," paradoxes, and inconsistencies? Relax! We shouldn't be that terrified of contradictions; let's face it, most of the things we people think we "know" are crocks already overflowing with contradictions; a little more won't kill us. The best we can do is just be reasonably careful -- and make our machines careful, too -- but still there are always chances of mistakes. That's life.

Another kind of thing we scientists tend to hate is circular dependencies. If every meaning depends on the mind it's in -- that is, on all the other meanings in that mind -- then

(8) In fact, if there were one less equation, it would assume that they mean the same, because they're so similar.