The "birth date"
of AI is generally traced to the Dartmouth Conference which was
held in the summer of 1956. Earlier attempts to build
"intelligent" systems with computers had been made
within the framework of "neural-like" networks. AI
followed a more abstract road in that it sought to understand
"intelligence" and design "intelligent" systems
within the framework of symbol systems. A discussion and explicit
statement of this approach is recounted in the Tenth
Turing Award Lecture, given in 1976 by Allen Newell and Herbert
Simon.

In the formal work on computation
that was previously covered, we saw that symbols, and the ways in
which they could be "remembered" and "accessed",
yielded a hierarchy of machines. The machines at the top of the
hierarchy, the Turing Machines, are, it is conjectured, capable
of computing any function that is in fact computable. But should
we call a machine intelligent because it can be programmed to
compute some function?
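The point that a machine's power rests on how its symbols are stored and accessed can be made concrete with a small sketch. The following is an illustrative simulator, not taken from the text: a one-tape Turing machine whose unrestricted read/write access to the tape lets it compute the successor function on unary numerals (a string of n 1s becomes a string of n + 1 1s). The function and transition-table names are invented for this example.

```python
# Minimal one-tape Turing-machine sketch (illustrative names):
# states and transitions are a dictionary mapping (state, symbol)
# to (next_state, symbol_to_write, head_move).

def run_tm(tape, transitions, state="start", blank="_", accept="halt"):
    """Simulate a Turing machine on a list of tape symbols.

    This toy version only grows the tape to the right, which is
    enough for the right-moving machine below.
    """
    head = 0
    while state != accept:
        symbol = tape[head] if head < len(tape) else blank
        state, write, move = transitions[(state, symbol)]
        if head < len(tape):
            tape[head] = write
        else:
            tape.append(write)
        head += 1 if move == "R" else -1
    return "".join(tape).strip(blank)

# Successor on unary numerals: scan right past the 1s, then
# replace the first blank with one more 1 and halt.
succ = {
    ("start", "1"): ("start", "1", "R"),
    ("start", "_"): ("halt", "1", "R"),
}

print(run_tm(list("111"), succ))  # prints "1111": three 1s in, four out
```

A finite automaton, by contrast, has no writable store at all, which is why it sits lower in the hierarchy: it could not mark the end of the input and extend it the way this machine does.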
The move to speaking about machines
and intelligence in the same breath involves presuming that the
ability to achieve goals in the face of variations, difficulties,
and complexities posed by the task environment is an essential
characteristic of intelligence. The Physical
Symbol System Hypothesis together with the idea of Heuristic
Search constitute Newell and Simon's proposal for how to
computationally realize intelligence. Click on these topics to
examine each of these ideas in turn.
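Before looking at the details, here is a hedged sketch of the search idea: a greedy best-first search that always expands the state its heuristic rates as closest to the goal, rather than exhaustively enumerating the state space. The state space (integers reachable by adding 1 or 3) and the heuristic (distance to the goal) are invented for illustration; they are not from the text.

```python
import heapq

def best_first_search(start, goal, neighbors, h):
    """Greedy best-first search: repeatedly expand the frontier
    state with the smallest heuristic estimate h(state)."""
    # Frontier entries: (heuristic value, state, path taken so far).
    frontier = [(h(start), start, [start])]
    seen = {start}
    while frontier:
        _, state, path = heapq.heappop(frontier)
        if state == goal:
            return path
        for nxt in neighbors(state):
            if nxt not in seen:
                seen.add(nxt)
                heapq.heappush(frontier, (h(nxt), nxt, path + [nxt]))
    return None  # goal unreachable from start

# Toy problem: reach 10 from 1, where each move adds 1 or 3,
# guided by the heuristic "distance remaining to the goal".
path = best_first_search(1, 10, lambda n: [n + 1, n + 3],
                         lambda n: abs(10 - n))
print(path)  # prints [1, 4, 7, 10]
```

The heuristic here is what makes the search selective: without it, the search would have to try moves blindly, which is exactly the combinatorial explosion that Newell and Simon's proposal is meant to tame.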