
Last Updated 6-29-2010


Iterative Deepening Depth-First Search

Iterative deepening depth-first search (or simply iterative deepening search) is a general strategy, often used in combination with depth-first tree search, that finds the best depth limit. It does this by gradually increasing the limit - first 0, then 1, then 2, and so on - until a goal is found. This will occur when the depth limit reaches d, the depth of the shallowest goal node. The algorithm is shown in Figure DFS-15:

function Iterative-Deepening-Search(problem) returns a solution, or failure
  for depth = 0 to infinity do
    result <- Depth-Limited-Search(problem, depth)
    if result != cutoff then return result

Figure DFS-15: Iterative Deepening Search Algorithm


Iterative deepening combines the benefits of depth-first search and breadth-first search. Like depth-first search, its memory requirements are modest: O(bd) to be precise. Like breadth-first search, it is complete when the branching factor is finite and optimal when the path cost is a nondecreasing function of the depth of the node.


Iterative deepening search may seem wasteful because states are generated multiple times. It turns out this is not too costly. The reason is that in a search tree with the same (or nearly the same) branching factor at each level, most of the nodes are in the bottom level, so it does not matter much that the upper levels are generated multiple times. In an iterative deepening search, the nodes on the bottom level (depth d) are generated once, those on the next-to-bottom level are generated twice, and so on, up to the children of the root, which are generated d times. So the total number of nodes generated in the worst case is:

N(Iterative Deepening Search) = (d)b + (d-1)b^2 + ... + (1)b^d

This gives a time complexity of O(b^d) - asymptotically the same as breadth-first search. There is some extra cost for generating the upper levels multiple times, but it is not large. For example, if b = 10 and d = 5, the numbers are:

N(Iterative Deepening Search) = 50 + 400 + 3,000 + 20,000 + 100,000 = 123,450

N(Breadth-First Search) = 10 + 100 + 1,000 + 10,000 + 100,000 = 111,110


If you are really concerned about the repetition of states, you can use a hybrid approach that runs breadth-first search until almost all of the available memory is consumed, and then runs iterative deepening from all nodes in the frontier. In general, iterative deepening is the preferred uninformed search method when the search space is large and the depth of the solution is not known.

Iterative deepening is analogous to breadth-first search in that it explores a complete layer of new nodes at each iteration before going on to the next layer. It would seem worthwhile to develop an iterative analog to uniform-cost search, inheriting the latter algorithm's optimality guarantees while avoiding its memory requirements. The idea is to use increasing path-cost limits instead of increasing depth limits. The resulting algorithm is called iterative lengthening search. Unfortunately, it turns out that iterative lengthening incurs substantial overhead compared to uniform-cost search.


Bidirectional Search

The idea behind bidirectional search is to run two simultaneous searches - one forward from the initial state and the other backward from the goal - hoping the two searches meet in the middle. The motivation is that b^(d/2) + b^(d/2) is much less than b^d.

Bidirectional search is implemented by replacing the goal test with a check to see whether the frontiers of the two searches intersect; if they do, a solution has been found. It is important to realize that the first such solution found may not be optimal, even if the two searches are both breadth-first; some additional search is required to make sure there isn't another short-cut across the gap. The check can be done when each node is generated or selected for expansion and, with a hash table, will take constant time. For example, if a problem has solution depth d = 6, and each direction runs breadth-first search one node at a time, then in the worst case the two searches meet when each has generated all the nodes at depth 3. For b = 10, this means a total of 2,220 node generations, compared with 1,111,110 for a standard breadth-first search. Thus, the time complexity of bidirectional search using breadth-first searches in both directions is O(b^(d/2)). The space complexity is also O(b^(d/2)). We can reduce this by roughly one half if one of the two searches is done by iterative deepening, but at least one of the frontiers must be kept in memory so that the intersection check can be done. This space requirement is the most significant weakness of bidirectional search.


The reduction in time complexity makes bidirectional search attractive, but how do we search backward? This is not as easy as it sounds. Let the predecessors of a state x be all those states that have x as a successor. Bidirectional search requires a method for computing predecessors. When all the actions in the state space are reversible, the predecessors of x are just its successors. Other cases may require substantial ingenuity.

Consider the question of what we mean by "the goal" in searching "backward from the goal." For the 8-puzzle and for finding a route in Romania, there is just one goal state, so the backward search is very much like the forward search. If there are several explicitly listed goal states - for example, the two dirt-free states in Figure Agent-3 - then we can construct a new dummy state whose immediate predecessors are all the actual goal states. But if the goal is an abstract description, such as the goal that "no queen attacks another queen" in the n-queens problem, then bidirectional search is difficult to use.


Comparing Uninformed Search Strategies

Criterion   Breadth-First   Uniform-Cost       Depth-First   Depth-Limited   Iterative Deepening   Bidirectional
Complete?   Yes             Yes                No            No              Yes                   Yes
Time        O(b^d)          O(b^(1+[C*/ε]))    O(b^m)        O(b^l)          O(b^d)                O(b^(d/2))
Space       O(b^d)          O(b^(1+[C*/ε]))    O(bm)         O(bl)           O(bd)                 O(b^(d/2))
Optimal?    Yes             Yes                No            No              Yes                   Yes

Here b is the branching factor, d the depth of the shallowest solution, m the maximum depth of the search tree, l the depth limit, C* the cost of the optimal solution, and ε the minimum step cost.

Figure DFS-16: Evaluation of Tree-Search Strategies

