Exhaustive Search Methods
We will use the hypothetical search tree exhibited below to illustrate
some of the ideas involved in the various search methods.
The root node of this tree is labeled s0 to indicate
that this is the starting state for our search. The node in the
center that is three levels down and which has a g to
its right is our goal node. For purposes of illustration we will
pretend that this is the total search tree. This is, of course,
pretty unrealistic. Most search trees will have hundreds, if not
millions, of nodes. Our little example has only 23 nodes, or 22
states that can be reached from the start node; the branching
factor from any node varies between one and three, and, the goal
node is only three moves away from the start node.
The figure below uses this tree to illustrate one kind of exhaustive,
systematic search, which is referred to as breadth-first search. In this type
of search, the search proceeds by generating and testing each
node that is reachable from a parent node before it expands any
of these children. The test is indicated in this animation by
the appearance of a ? over the node generated. The node
and branches remain yellow in color as long as they must be retained
in memory. Search is terminated when a solution is found; that
is, the test of a state returns true. In this animation, when
the goal node is discovered, it is changed from yellow to green
to indicate this fact.
Then, the solution is traced
back to the starting node. This is also indicated by changing
this path to a green color. Finally, the paths that were searched
but were not part of the solution are changed from yellow to
gray to indicate that they no longer need to be held in memory.
In this example, 18 nodes were opened before the solution was
found. The search is said to be exhaustive
because the search is guaranteed to generate all reachable states
before it terminates with failure. By generating all of the
nodes at a particular level before proceeding to the next level
of the tree (the Breadth First Strategy) this control regime
guarantees that the space of possible moves will be systematically
explored. We usually assume that the space
that will be searched is finite. Of course, it could be quite
large and the solution may lie a thousand steps away from the
start node. Consequently, in practice one would usually specify
a depth or move distance from the starting node beyond which
you would not search. In our case the maximum depth was only
three. You will notice that this method
of search requires considerable memory resources. It does, however,
guarantee that if we find a solution it will be the shortest
one possible.
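This control regime can be sketched in a few lines of Python. The tree below is a hypothetical stand-in for the figure's tree (its node names and shape are assumptions made for illustration), with the goal g three moves from the start s0:

```python
from collections import deque

def breadth_first_search(tree, start, is_goal):
    """Generate and test every child of a node before expanding any
    of those children; the whole frontier of paths stays in memory."""
    frontier = deque([[start]])
    while frontier:
        path = frontier.popleft()        # oldest (shallowest) path first
        node = path[-1]
        if is_goal(node):
            return path                  # guaranteed a shortest path
        for child in tree.get(node, []):
            frontier.append(path + [child])
    return None                          # space exhausted: search fails

# Hypothetical tree (names assumed): goal 'g' is three moves from 's0'.
TREE = {'s0': ['a', 'b'], 'a': ['c', 'd'], 'b': ['e'],
        'c': [], 'd': ['g'], 'e': ['f'], 'f': [], 'g': []}

print(breadth_first_search(TREE, 's0', lambda n: n == 'g'))
# → ['s0', 'a', 'd', 'g']
```

The frontier here corresponds to the yellow nodes in the animation: everything generated but not yet expanded must be retained.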
Another type
of exhaustive search is referred to as depth-first search,
because the tree is examined to some depth d, before another
path is considered. The next animation illustrates this type
of exhaustive search. In this example, the maximum depth of our
tree is only three. When this limit is reached and if the solution
has not been found, then the search backtracks to the
previous level and explores any remaining alternatives at this
level, and so on. It is this systematic backtracking procedure
that guarantees that it will systematically and exhaustively
examine all of the possibilities. In this animation, you will
notice that nodes become gray after backtracking has occurred.
Again, this indicates that these nodes need no longer be retained
in memory. In this example, only
12 nodes were explored before the solution was found and we never
had to hold more than three nodes in memory. However, the solution
might have been the first node generated on the far right of
the tree. In this case, the breadth first procedure would have
found the solution much more quickly than the depth first procedure.
Again, if the tree is very deep
and the maximum depth searched is less than the maximum depth
of the tree, then this procedure is "exhaustive" modulo
the depth that has been set.
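The depth-limited, backtracking control regime can be sketched the same way (reusing the same hypothetical tree, with the cutoff playing the role of d):

```python
def depth_first_search(tree, node, is_goal, limit, path=()):
    """Follow one path down to the depth limit, then backtrack and try
    the remaining alternatives; only the current path is stored."""
    path = list(path) + [node]
    if is_goal(node):
        return path
    if limit == 0:
        return None                      # cutoff reached: backtrack
    for child in tree.get(node, []):
        found = depth_first_search(tree, child, is_goal, limit - 1, path)
        if found:
            return found                 # propagate the solution path up
    return None                          # all alternatives tried: backtrack

# Hypothetical tree (names assumed), maximum depth three.
TREE = {'s0': ['a', 'b'], 'a': ['c', 'd'], 'b': ['e'],
        'c': [], 'd': ['g'], 'e': ['f'], 'f': [], 'g': []}

print(depth_first_search(TREE, 's0', lambda n: n == 'g', limit=3))
# → ['s0', 'a', 'd', 'g']
```

At any moment, only the nodes on the current path are held in memory, which is why the gray (backtracked) nodes can be discarded.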
There is a third exhaustive search
procedure that is a blend of depth first and breadth first search
that may strike you as a bit odd. It is called Depth-First
Iterative-Deepening (DFID). Here is how it works:
First, perform a depth-first
search to depth one. Then, discarding the nodes generated in
the first search, start over and do a depth-first search to level
two, then to level three, and so on, until the goal state is reached.
Now what seems strange is that
this procedure is constantly generating nodes that it generated
before but threw away. But it is doing this to minimize the amount
of memory that is required at any point in the search. To say
this in a fancy way, we must recall a tiny bit of complexity
theory and look at the space complexity of Breadth-First Search,
and Depth-First Search. In Breadth-First Search, since all of
the nodes at a given depth are stored in order to generate the
nodes at the next depth, the minimum number of nodes that must
be stored to search to depth d is b**(d-1),
which is O(b**d). [** is my way to indicate exponentiation;
e.g. b**d indicates that b is being raised
to the dth power. The O is simply the notation used in
complexity theory; if you don't recognize it, don't worry. The
gist is that the number of nodes that must be retained increases
exponentially with the depth that we need to search.] The
average-case space complexity is roughly one-half
of this, which is also O(b**d). But for Depth-First
Search, if the depth cutoff is d, then the space required
by depth-first search is only O(d). That is, it increases
only linearly; not exponentially.
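To put some numbers on the comparison (b = 3 and d = 10 are assumed values, chosen only to show the scale):

```python
b, d = 3, 10                  # assumed branching factor and search depth
print(b ** (d - 1))           # breadth-first must store 19683 nodes: O(b**d)
print(d)                      # depth-first stores only the path: O(d) = 10
```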
So, since DFID at any given time
is performing a depth-first search, and never searches deeper
than depth d, the space it uses is O(d). Also,
since DFID expands all nodes at a given depth before expanding
any nodes at a greater depth, it is guaranteed to find a shortest
length solution. This seems inefficient because work keeps being
redone, but if you have more time than memory, it isn't a bad
idea.
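As a sketch (again reusing the hypothetical tree from above), DFID is just a depth-limited depth-first search wrapped in a loop over increasing limits:

```python
def depth_limited(tree, node, is_goal, limit, path=()):
    """Plain depth-first search that gives up below the given limit."""
    path = list(path) + [node]
    if is_goal(node):
        return path
    if limit == 0:
        return None
    for child in tree.get(node, []):
        found = depth_limited(tree, child, is_goal, limit - 1, path)
        if found:
            return found
    return None

def dfid(tree, start, is_goal, max_depth):
    """Search to depth 1, throw everything away, search to depth 2, ...
    Space stays O(d), yet a shortest solution is still found first."""
    for limit in range(max_depth + 1):
        found = depth_limited(tree, start, is_goal, limit)
        if found:
            return found
    return None

# Hypothetical tree (names assumed): goal 'g' three moves from 's0'.
TREE = {'s0': ['a', 'b'], 'a': ['c', 'd'], 'b': ['e'],
        'c': [], 'd': ['g'], 'e': ['f'], 'f': [], 'g': []}

print(dfid(TREE, 's0', lambda n: n == 'g', max_depth=3))
# → ['s0', 'a', 'd', 'g']
```

Each pass through the loop regenerates the shallow nodes from scratch, which is exactly the "wasteful" behavior described above; nothing from an earlier pass is kept.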
Next, we illustrate a search that is not necessarily exhaustive. This
is one kind of heuristic search, namely, hill-climbing.
Hill-Climbing is provided with an evaluation function
that can be used to estimate the distance from the current
state to the goal state. If we compute these estimates at each
choice point, then the estimates can be used as a basis for choice.
If the estimates are reasonably related to the actual distance
then this heuristic will typically allow us to find a solution
while selectively (rather than exhaustively) searching the space
of possible moves.
In hill-climbing, the control
regime is basically that of depth first search. However, at each
choice point in the depth first search all of the candidates
are generated and then evaluated using the evaluation function.
The result of this evaluation informs the decision about which
of these alternatives to pursue. In the animation, the best candidate
according to the evaluation function is depicted in blue. It
is this candidate that is chosen.
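Here is a minimal sketch of that regime, using the same hypothetical tree and a made-up table of distance-to-goal estimates (the EST values are pure assumptions). For simplicity, this bare version follows only the greedy path and gives up at a dead end; a fuller version would backtrack like depth-first search when the chosen path fails:

```python
def hill_climb(tree, start, is_goal, estimate):
    """At each choice point, generate every child, evaluate each one,
    and follow the candidate rated closest to the goal."""
    path, node = [start], start
    while not is_goal(node):
        children = tree.get(node, [])
        if not children:
            return None                      # dead end: this sketch gives up
        node = min(children, key=estimate)   # the best (blue) candidate
        path.append(node)
    return path

# Hypothetical tree and made-up distance-to-goal estimates.
TREE = {'s0': ['a', 'b'], 'a': ['c', 'd'], 'b': ['e'],
        'c': [], 'd': ['g'], 'e': ['f'], 'f': [], 'g': []}
EST = {'a': 2, 'b': 3, 'c': 9, 'd': 1, 'e': 4, 'f': 5, 'g': 0}

print(hill_climb(TREE, 's0', lambda n: n == 'g', EST.get))
# → ['s0', 'a', 'd', 'g']
```

Because the estimates happen to point the right way here, only the nodes on the winning path are ever expanded, which is the selective (rather than exhaustive) behavior described above.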
Now, after all of that, click
here if you want a diversion where you just sit and watch the
Breadth-First Search, Depth-First
Search, and Hill Climbing animations run in close proximity
to each other.