When you stop at dumbbell theories then, most likely, you have only one idea instead of two:
The trouble with one-part theories is that they don't lead anywhere, because they can't support enough detail. Most of our culture's mental-pair distinctions are stuck just so, which handicaps our efforts to make theories of the mind. I'm especially annoyed with recent fads that see minds as divided evenly into two halves that live within the left- and right-hand sides of the brain:
This is really neat. It not only supports beliefs that minds do things computers can't, but even provides a handy physical brain-location in which to put the differences!
Each half of the brain has dozens, and probably hundreds, of different sections of machinery. There definitely are some differences between right and left. But these structural differences between corresponding parts of the right and left halves appear very much smaller than the differences within each half. Despite that flood of half-baked brain-half stories in newsstand magazines and books, I've heard of little evidence for systematic differences in how those left-right portions really function, and it would seem that even brain-scientists' theories about minds are just as naive as yours and mine. They're just as prone to "observe" whatever distinctions they imagine. Just for fun, I'll contribute two of my own speculations on what our brain-halves do:
Either image seems to suit those popular but vague descriptions of the two "dissected personalities," right and left, that emerge when surgeons split a patient's brain in halves. The right side, say, would be better at realistic, concrete things, things as they are. The left half would have specialized in long-range plans, in things that aren't yet; in short, at things we like to call "abstract."
Those age-old distinctions between Logic and Intuition, or Reason and Emotion, have been the source of many unsound arguments about machine intelligence. It was clear in AI's earliest days that logical deduction would be easy to program. Accordingly, people who imagined thinking to be mostly logical expected computers soon to do the things that people used their logic for. In that view, it ought to be much harder, perhaps impossible, to program more qualitative traits like intuition, metaphor, aesthetics, or reasoning by analogy. I never liked such arguments.
In 1964, my student T.G. Evans finished a program to show that computers could actually use analogies. It did some interesting kinds of reasoning about perception of geometric structures. This made some humanistic skeptics so angry that they wrote papers about it. Some threw out the baby with the bath by seeming to argue that if machines could indeed do that kind of analogical reasoning, then maybe that kind of reasoning can't be so important. One of them complained that Evans' program was too complicated to be the basis of an interesting psychological theory, because it used about 60,000 computer instruction-words. (That seemed like saying there wasn't any baby in the first place.)
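Evans' program solved geometric-analogy problems of the form "A is to B as C is to which of these?". A minimal modern sketch of that style of reasoning might look like the following; the set-of-parts encoding and the rule format here are hypothetical simplifications for illustration, far cruder than Evans' actual 1964 representation:

```python
# A tiny analogy solver in the spirit of "A is to B as C is to ?".
# Figures are modeled as sets of named parts -- a hypothetical
# encoding, not the representation Evans' program actually used.

def infer_rule(a, b):
    """Describe the A->B change as (parts removed, parts added)."""
    return a - b, b - a

def apply_rule(c, rule):
    """Apply the inferred change to figure C."""
    removed, added = rule
    return (c - removed) | added

def solve(a, b, c, candidates):
    """Pick the candidate figure that matches the A->B rule applied to C."""
    predicted = apply_rule(c, infer_rule(a, b))
    for label, figure in candidates.items():
        if figure == predicted:
            return label
    return None

# "A square containing a dot" is to "a square" as
# "a triangle containing a dot" is to ... ?
A = {"square", "dot"}
B = {"square"}
C = {"triangle", "dot"}
answers = {
    "1": {"triangle", "dot"},
    "2": {"triangle"},
    "3": {"square"},
}
print(solve(A, B, C, answers))  # -> 2 (the dot is removed)
```

The point is not the ten lines of set arithmetic but that a well-defined mechanism is doing something routinely labeled "intuitive": noticing what changed and carrying that change over to a new case.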
In any case, Evans' program certainly showed it was wrong to assume computers could do only logical or quantitative reasoning. Why did so many people make that mistake? I see it as a funny irony: those critics had mistaken their own personal limitations for limitations of computers! They had projected onto the outer world their own inability to explain how either person or machine could reason by analogy, and supposed that no well-defined mechanism could do such a thing. In effect, they were saying that since they could see no explanation then, surely, there could be no explanation!
Another misconception stems from confusing different senses of logic. Too many computer specialists talk as though computers are perfectly logical, and that's all. What they really mean is that they can understand, using logic, how all those tiny little computer circuits work. But just because the little circuits can be understood by logic doesn't mean at all that those circuits can only do logic! That's like thinking you could figure out what houses are for from knowing how bricks work.
Many AI workers have continued to pursue the use of logic to solve problems. This hasn't worked very well, in my opinion: logical reasoning is more appropriate for displaying or confirming the results of thinking than for the thinking itself. That is, I suspect we use it less for solving problems than for explaining the solutions to other people and, much more important, to ourselves. When working with the actual details of problems, it is usually too hard to package the knowledge we need into suitably logical form. So then we have to use other methods anyway, methods more suitable for the networks of "meanings" that I'll discuss shortly. Still, I consider such ideas to be of great importance in making theories of how we represent the things we
|THE AI MAGAZINE Fall 1982 7|