Mental Olympics

     Mental performance depends on physical processes...things that happen in the brain. This appears to be a truism that few would now dispute. Almost everyone who studies human reasoning believes that the mental depends on the physical. However, there are various ways in which to interpret 'depends on' and over this there is considerable disagreement.

     We often make a distinction between a thing's structure and its function. For example, a stringed instrument such as an acoustic guitar has a structure that includes a resonating chamber located beneath the strings, a neck along which the strings can be pressed down by the fingers to change the length of the vibrating string, and attachment points which allow the strings to be anchored and stretched under considerable tension. This structure is largely determined by the guitar's intended function, namely to produce musical sounds. In general, in trying to determine the function that something performs, a good starting point or heuristic is to assume that there is a relation between its structure and its function. But, of course, this doesn't always pay off. Sometimes the connection between structure and function is quite difficult to determine. Many times there are structural characteristics of an object for which one can find no functional explanation. And often one can find a structural similarity which is quite unrelated to functional similarity. For example, if you look inside your car's engine compartment you will find reservoirs and tubing which structurally are quite similar. However, the reservoirs may hold brake fluid, water, battery acid, or oil, and the lines may carry oil, gasoline, water, air, or electric current. In this case, it is quite important to the integrity of your engine that you not assume that similarity of structure implies similarity of function.

     Now in the case of thinking about the mind/brain dependence, one of the simplest interpretations of the dependence between mind and brain is to assume that there are basic physical components of the brain that are composed or 'wired together' in a simple manner to yield cognitive performance. For example, think again of the acoustic guitar mentioned above. The number of physical components is really very small. Yet in the hands of a skilled musician, this guitar can be used to create the sounds of a literally infinite set of musical compositions.

     But what are the basic physical components of the brain? Certainly back in 1868, when Donders conducted his experiments on reaction time tasks, we had no idea how to answer this question.(*) But maybe that didn't matter. Rather than focusing on the physical stuff of the brain, why not simply try to identify the simplest things that you could ask the mind/brain to do? For example, why not present a stimulus and ask the person to indicate when it appears? Not only is this a simple task, it is also one that must be involved in a great many mental tasks. Let's just assume that this task involves a "noticing function" and a "responding function." That seems innocent enough! But now comes the reification trick...assume that the noticing function is carried out by a noticing component! Notice that in this case we know absolutely nothing about the actual structure; we have nothing more than an intuitive analysis of function and an assertion of simplicity, and from this we have concluded that there is a component of the brain that carries out the noticing function. If you buy that, then one can go ahead and develop the whole logic of the subtraction method and carry out experiments which purportedly tell you exactly how fast certain functions can be carried out by the brain/mind.
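
     To see how the subtraction logic is supposed to work, here is a minimal sketch in Python. The reaction times are invented for illustration; they are not Donders' data.

    # A minimal sketch of the subtractive logic, using hypothetical
    # reaction times in milliseconds (not real data).

    rt_simple = 220.0   # Task A: notice the stimulus, then respond
    rt_choice = 390.0   # Task B: notice, discriminate, choose a response, respond

    # If, and only if, the shared processes take the same amount of time in
    # both tasks, the difference isolates the time for the added processes.
    added_processes = rt_choice - rt_simple
    print(f"Estimated time for discrimination + choice: {added_processes:.0f} ms")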

     Now, you may have surmised from the way I have presented this argument that I don't think that the conclusions follow logically. But one can, and often does, stumble onto the "truth" using reasoning strategies that are only heuristics for finding "truth", not guarantees. In other words, this story could be correct.

     However, one aspect of the logic of this approach to the study of the mind/brain was challenged rather effectively. The use of the subtractive method to identify the time associated with a process assumes that the process is independent of the "context" in which it takes place. But, of course, to use the subtractive method you had to devise tasks that differed precisely in the processes that were presumably required to perform them. Thus, this assumption of independence could never be tested, only accepted or rejected. This criticism of the subtractive method lessened the appeal of this experimental methodology.
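
     A toy example, again with invented numbers, shows what goes wrong when that assumption fails: if inserting the extra process also slows the processes it is inserted among, the subtraction misattributes the slowdown.

    # Invented numbers illustrating a failure of the independence assumption.
    # In the choice task, suppose the responding process itself slows down.
    notice, respond = 100, 120                         # simple task components (ms)
    notice2, discriminate, respond2 = 100, 150, 170    # choice task; responding slowed

    rt_simple = notice + respond                       # 220 ms
    rt_choice = notice2 + discriminate + respond2      # 420 ms

    estimate = rt_choice - rt_simple                   # 200 ms
    print(f"Subtractive estimate: {estimate} ms; true discrimination time: {discriminate} ms")
    # The extra 50 ms of slowed responding gets credited to discrimination.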

     Then, a couple of things happened in the middle part of this century that revived the importance of reaction time measurements in cognitive research. One had to do with developments in statistics which eventually resulted in a somewhat more sophisticated understanding of the potential uses of reaction-time data. In cognitive psychology, Saul Sternberg was the important figure whose work on the scanning of short-term memory reinvigorated the use of reaction times in the study of cognitive processes. To read more about this memory-scanning research and the statistical rationale behind it click here.
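
     As a taste of that rationale, here is a small sketch (with made-up data) of the standard analysis in a memory-scanning experiment: mean reaction time is regressed on the size of the memorized set, and the slope of the fitted line is read as the time to compare the probe against one item in short-term memory.

    import numpy as np

    # Made-up mean reaction times (ms) for memory sets of 1 to 6 items.
    set_sizes = np.array([1, 2, 3, 4, 5, 6])
    mean_rts  = np.array([430, 468, 506, 543, 581, 620])

    # Fit RT = intercept + slope * set_size.
    slope, intercept = np.polyfit(set_sizes, mean_rts, deg=1)

    # The slope estimates the time per comparison; the intercept lumps
    # together stimulus encoding and response processes.
    print(f"Scan rate:  {slope:.1f} ms per item")
    print(f"Intercept: {intercept:.1f} ms")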

     The other important development was more diffuse. By the 1950s electronics was a familiar world, and it was clear to most people that you could design electronic circuitry to carry out functions that we intuitively thought of as "mental" functions. Computers were being designed and used. The relation between the physical functions that electronic components could be designed to carry out on electrical signals and the "information processing functions" required to manipulate information was becoming well understood. For example, circuits could be built that would "carry out" logical functions, e.g., 'and', 'or', 'not', and so on. And you could build devices that would serve as buffers to hold onto signals until some other component was ready for them; devices that served as a memory for information; and so on.
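
     In software, rather than circuitry, the same ideas look something like the toy sketch below: the logical functions are one-liners, and a queue stands in for a buffer that holds signals until another component is ready for them.

    from collections import deque

    # Toy stand-ins for logical functions a circuit might carry out.
    def AND(a, b): return a and b
    def OR(a, b):  return a or b
    def NOT(a):    return not a

    # A buffer: holds onto signals until some other component is ready.
    buffer = deque()
    buffer.append(1)                  # signals arrive and are held...
    buffer.append(0)
    first_signal = buffer.popleft()   # ...and are read out later, in order

    print(AND(True, NOT(False)), OR(False, False), first_signal)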

    This relation between hardware (the organization of electrical components) and information processing was not lost on cognitive psychologists. Perhaps the brain was itself a particular organization of "hardware" that realized certain information processing functions. If the primitive components could be identified and experimentally isolated, then they could be studied and we could determine their basic properties. For example, we might be able to determine: the speed at which a component could operate; the storage capacity of a component; the length of time items could be retained; and the type of items that could be stored by a component.

     But, again there is the question of exactly what are the components. We can distinguish two ways in which this question has been approached. One approach identifies components with the basic functions, predicates, control regimes, and data types of a programming language. Thus, the analogy is to software, and the relation to the brain is much more abstract. It is simply that there is a "language," in the sense of a programming language, of the mind which is realized using some of the physical properties of the brain. We will turn to this approach in more detail in the next section on computational approaches to cognition.

     The other approach is to identify components of the brain by analogy to hardware. The hardware analogy assumes that it is reasonable to think of the brain as the hardware of the mind. Consequently, we should expect to find components that are organized in ways analogous to computers: hence the idea of buffers, a short-term memory, a long-term memory, a central processing unit, and so on. Of course, computers can be designed in many ways. Most of the computers that have been designed follow an architecture referred to as the von Neumann machine. In fact, we don't really have any deep understanding of the space of possible computer architectures. Consequently, we perhaps shouldn't become too enamored with the analogy drawn between current computer architectures and the "architecture" of the brain.

     The figure below provides a rough picture of the "architecture" that has been suggested by this approach. It is rough in the sense that probably no one who espouses this approach agrees with it in every detail. The picture was developed by Card, Moran, and Newell as a first-order approximation to the architecture of the human information processing system. Their idea was that such an approximation could be used to make educated guesses about human performance. Consequently, they went through the research literature that has focused on identifying components of the architecture and studying their properties. Several properties are of interest. For a process, the main question is how long it takes to carry out a single step...we refer to this below as the cycle time, and it is analogous to the speed of the CPU in a computer. Three processes are distinguished below and depicted as oval figures: input processes (vision and audition), a cognitive process, and a motor process. The time given in each equation is the authors' best guess at the value for that property. The figures in brackets indicate the range of times that had been observed in the various experiments that studied that property.
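
     To give a flavor of how such cycle times get used, here is a small sketch. The numbers are the middle-of-the-range values commonly quoted for this model (roughly 100 ms for a perceptual cycle and 70 ms each for a cognitive and a motor cycle); read the authors' values off the figure itself, since they are stated here only as illustrative assumptions.

    # Illustrative cycle times (ms); treat these as assumptions, and take the
    # authors' values and ranges from the figure itself.
    PERCEPTUAL_CYCLE_MS = 100   # one cycle of a perceptual (input) process
    COGNITIVE_CYCLE_MS  = 70    # one cycle of the cognitive process
    MOTOR_CYCLE_MS      = 70    # one cycle of the motor process

    # Simplest possible task: see a light, register it, press a key.
    # One cycle of each process gives a lower-bound estimate.
    simple_reaction = PERCEPTUAL_CYCLE_MS + COGNITIVE_CYCLE_MS + MOTOR_CYCLE_MS
    print(f"Estimated simple reaction time: {simple_reaction} ms")   # 240 ms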

     In addition to the processes, the storage components are identified: a Visual and an Auditory Buffer, a Short-Term Memory, and a Long-Term Memory. Three parameters or properties are associated with each of these storage components. The first, symbolized as d for decay time, is an estimate of the amount of time that an item can be held in that storage device. Notice that the decay times are quite short for the buffers, indicating that items are lost after a short period of time, but that items are assumed never to be lost once they have been stored in Long-Term Memory.

[Figure: a first-order approximation of the human information-processing architecture, showing the perceptual (visual and auditory), cognitive, and motor processes with their cycle times, and the Visual Buffer, Auditory Buffer, Short-Term Memory, and Long-Term Memory with their d, m, and v parameters.]
     The second parameter, m, refers to the capacity of the storage unit. In the case of time, we had a physical basis for the unit of measurement; for storage, this is somewhat problematic. Since most of the experiments used letters of the alphabet as the stimuli, the results for the buffers are given in letters. For the Short-Term Memory component things are more complicated. The capacity is given in chunks. It isn't entirely clear exactly what a chunk is, but the intuition is quite clear. For example, if I give you 13 letters of the alphabet in some random arrangement, you will remember some subset of them--around 7, plus or minus 2, is the classic conjecture of George Miller. If I give you 13 words to remember, you will also remember some subset of them, but the number of letters in that subset of words will be much larger than 7. From these types of observations, it appears that we must allow our short-term memory the cleverness to structure or chunk things in a way that affects its storage capacity.
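
     The arithmetic behind this intuition can be made explicit in a short sketch. Assume, with Miller, a capacity of about 7 chunks; the letters and words below are arbitrary examples, and the "recall" is just the assumption that whatever fits in 7 chunks is retained.

    # Rough illustration of why capacity is counted in chunks, not letters.
    CHUNK_CAPACITY = 7   # Miller's "seven, plus or minus two" (assumed here)

    random_letters = list("QZWXKJPLMNTRB")   # 13 unrelated letters = 13 chunks
    words = ["table", "river", "candle", "horse", "window", "garden", "pencil",
             "mirror", "button", "ladder", "basket", "saddle", "anchor"]  # 13 words

    letters_recalled = min(len(random_letters), CHUNK_CAPACITY)
    words_recalled   = min(len(words), CHUNK_CAPACITY)
    letters_in_recalled_words = sum(len(w) for w in words[:CHUNK_CAPACITY])

    print(letters_recalled)            # about 7 letters from the letter list
    print(words_recalled)              # about 7 words from the word list...
    print(letters_in_recalled_words)   # ...which is far more than 7 letters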


     If you want to see an example that illustrates this chunking phenomenon, then do Experiment 1 and Experiment 2. In both of these experiments:

  • Be prepared to try to remember the set of letters you will be presented.
  • After the letters are presented, you will be shown some numbers. Mentally subtract 3 from each of these numbers.
  • Then you will see a ?
  • When you see the ?, try to recall the letters. Simply write them down.

     Click on the image to the left labeled Experiment 1 to start the experiment. When you finish that experiment, return to this page and then click on the image to the right labeled Experiment 2. Determine how many letters you correctly remembered in each experiment.
     Now do Experiments 3 and 4. The procedure is the same as before except that in this case you will be asked to remember a list of words.

     Now click on the image to the left labeled Experiment 3 to start the next experiment. When you finish that experiment, return to this page and then click on the image to the right labeled Experiment 4. Determine how many words you correctly remembered in each of these experiments.

 
     Now you know why psychologists talk about chunks, even though we aren't all that sure what chunks really are or how "chunking" is to be explained.



  
   The final parameter in the picture is v, the "data type" that the component is able to store. The red arrows indicate the presumed communication pathways between components, pathways that the different processes serve to realize. Note that, according to this view, everything must go through Short-Term Memory.

     This first-order approximation, aside from illustrating the general nature of this approach to the study of the mind/brain, can also serve a practical purpose. If we believe the picture, and take the times as a lower bound on how fast we can carry out some basic operations, then we can use this model to guesstimate how quickly the human mind could accomplish some task. For example, using this type of approximation, Newell and Simon came up with the estimate that it takes roughly 10 years to become an expert at something. We will speak more of this later in the semester when we look at expert reasoning.


 

 

*     The image to the left is called a Hipp Chronoscope. It was the development of this instrument that enabled Donders, back in 1868, to measure reaction time in milliseconds. A description and more detailed pictures of this instrument can be accessed on the Web by clicking the figure.


 

Experimental Decomposition of Mental Processes

 
 © Charles F. Schmidt