
Last Updated 6-29-2010


Conditions for Optimality: Admissibility and Consistency

The first condition that we require for optimality is that h(n) be an admissible heuristic. An admissible heuristic is one that never overestimates the cost to reach the goal. Because g(n) is the actual cost to reach n along the current path, and f(n) = g(n) + h(n), we have as an immediate consequence that f(n) never overestimates the true cost of a solution along the current path through n.
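
As an illustration only, the sketch below shows how the evaluation function f(n) = g(n) + h(n) might be computed for a search node. The Node structure, the EvaluationCost name, and the callable heuristic are assumptions made for this example; they are not taken from the C++ A* Search Code pages later in this guide.

    #include <functional>

    // Hypothetical node record for this sketch: g holds the actual cost
    // of the path from the start state to this node, and the heuristic
    // is passed in as a callable so that any admissible estimate (such
    // as straight-line distance) can be plugged in.
    struct Node {
        int city;      // index of the city this node represents
        double g;      // g(n): cost of the path from the start to n
    };

    // f(n) = g(n) + h(n): the estimated cost of the cheapest solution
    // through n. If h never overestimates, then f never overestimates
    // along the current path either.
    double EvaluationCost(const Node& n, const std::function<double(int)>& h) {
        return n.g + h(n.city);
    }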

Admissible heuristics are by nature optimistic because they think the cost of solving the problem is less than it actually is. An obvious example of an admissible heuristic is the straight-line distance, hSLD, that we used in getting to Bucharest. Straight-line distance is admissible because the shortest path between any two points is a straight line, so the straight line cannot be an overestimate. Figure ISS-4 shows the progress of an A* tree search for Bucharest:

A* Expansion

Figure ISS-4: A* Search Expansion


The cost values for each node are given in parentheses below each node. The cost is determined by following the path from the start as it progresses through each expansion. Notice in particular that at level 4, Bucharest, reached by way of Fagaras, has a cost of 450, while at the same time the node at Pitesti has a cost of only 417. A* is not ready to settle just yet, so one more expansion takes place, uncovering the best possible route to the Bucharest goal at a cost of 417. Thus the path found by A* is Arad-Sibiu-Rimnicu Vilcea-Pitesti-Bucharest.
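
As a quick arithmetic check of the two competing frontier nodes, the sketch below recomputes their f-values, assuming the standard road distances for the Romania map (Arad-Sibiu 140, Sibiu-Fagaras 99, Fagaras-Bucharest 211, Sibiu-Rimnicu Vilcea 80, Rimnicu Vilcea-Pitesti 97) and a straight-line-distance estimate of 100 from Pitesti to Bucharest; these distance values are assumptions taken from the usual map rather than read off Figure ISS-4 itself.

    #include <cstdio>

    int main() {
        // Bucharest reached via Fagaras: g = 140 + 99 + 211, and h = 0
        // because the node is already at the goal.
        double f_bucharest_via_fagaras = (140 + 99 + 211) + 0;   // 450

        // Pitesti: g = 140 + 80 + 97 along Arad-Sibiu-Rimnicu Vilcea-Pitesti,
        // plus the assumed straight-line estimate h = 100 to Bucharest.
        double f_pitesti = (140 + 80 + 97) + 100;                // 417

        std::printf("f(Bucharest via Fagaras) = %.0f\n", f_bucharest_via_fagaras);
        std::printf("f(Pitesti)               = %.0f\n", f_pitesti);
        return 0;
    }

Because 417 < 450, A* expands Pitesti before committing to the Fagaras route.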

A second, slightly stronger condition called consistency, or sometimes monotonicity, is required only for applications of A* to graph search. A heuristic h(n) is consistent if, for every node n and every successor n' of n generated by any action a, the estimated cost of reaching the goal from n is no greater than the step cost of getting to n' plus the estimated cost of reaching the goal from n':

h(n) ≤ c(n,a,n') + h(n').

This is a form of the general triangle inequality, which stipulates that each side of a triangle cannot be longer than the sum of the other two sides. Here, the triangle is formed by n, n', and the goal Gn closest to n. For an admissible heuristic, the inequality makes perfect sense: if there were a route from n to Gn via n' that was cheaper than h(n), that would violate the property that h(n) is a lower bound on the cost to reach Gn.

It is fairly easy to show that every consistent heuristic is also admissible. Consistency is therefore a stricter requirement than admissibility, but one has to work quite hard to concoct heuristics that are admissible but not consistent. All the admissible heuristics we discuss in this chapter are also consistent. Consider, for example, hSLD. We know that the general triangle inequality is satisfied when each side is measured by the straight-line distance and that the straight-line distance between n and n' is no greater than c(n,a,n'). Hence, hSLD is a consistent heuristic.
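
To make the definition concrete, here is a small consistency check written against assumed types: the Edge structure, the heuristic stored as a plain vector indexed by city, and the CheckConsistency name are all illustrative choices for this sketch rather than part of the code presented later in this guide.

    #include <cstddef>
    #include <vector>

    // Hypothetical edge record for this sketch: an action that moves
    // from city 'from' to city 'to' at step cost 'cost'.
    struct Edge {
        std::size_t from;
        std::size_t to;
        double cost;
    };

    // A heuristic h is consistent if h(n) <= c(n, a, n') + h(n') for
    // every edge (n, a, n'). This function simply tests that inequality
    // on every edge of the graph.
    bool CheckConsistency(const std::vector<Edge>& edges,
                          const std::vector<double>& h) {
        for (const Edge& e : edges) {
            if (h[e.from] > e.cost + h[e.to]) {
                return false;   // triangle inequality violated at this edge
            }
        }
        return true;
    }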

Optimality of A*

As we mentioned earlier, A* has the following properties: the tree-search version of A* is optimal if h(n) is admissible, while the graph-search version is optimal if h(n) is consistent.

The first step is to establish the following: if h(n) is consistent, then the values of f(n) along any path are nondecreasing. The proof follows directly from the definition of consistency:

Suppose n' is a successor of n; then g(n') = g(n) + c(n,a,n') for some action a and we have:

f(n') = g(n') + h(n') = g(n) + c(n,a,n') + h(n') ≥ g(n) + h(n) = f(n).

The next step is to prove that whenever A* selects a node n for expansion, the optimal path to that node has been found. Were this not the case, there would have to be another frontier node n' on the optimal path from the start node to n, by the graph separation property illustrated in the following diagram:

Graph Separation Property

Figure ISS-5: Expanded Node Graph Separation Property


where the frontier (white nodes) always separates the explored region (black nodes) from the unexplored region (gray nodes). Because f is nondecreasing along any path, n' would have a lower f-cost than n and would have been selected first.

From the two preceding observations, it follows that the sequence of nodes expanded by A* using Graph-Search is in nondecreasing order of f(n). Hence, the first goal node selected for expansion must be an optimal solution because f is the true cost for goal nodes (which have h = 0) and all later goals will be at least as expensive.
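
The following is a minimal graph-search sketch of that idea: a min-priority queue ordered by f = g + h, with the goal test applied when a node is selected for expansion rather than when it is generated. The State, Succ, and AStar names, and the lazy-deletion treatment of stale queue entries, are assumptions made for this illustration; the full program on the C++ A* Search Code pages is organized differently.

    #include <functional>
    #include <limits>
    #include <queue>
    #include <unordered_map>
    #include <utility>
    #include <vector>

    using State = int;
    struct Succ { State to; double cost; };      // successor state and step cost

    // Returns the cost of an optimal path from start to goal, or infinity
    // if the goal is unreachable. With a consistent heuristic h, the first
    // time the goal is selected for expansion its g-value is optimal.
    double AStar(const std::unordered_map<State, std::vector<Succ>>& graph,
                 State start, State goal,
                 const std::function<double(State)>& h) {
        using Entry = std::pair<double, State>;              // (f-cost, state)
        std::priority_queue<Entry, std::vector<Entry>,
                            std::greater<Entry>> frontier;   // min-queue on f
        std::unordered_map<State, double> g;                 // best g found so far

        g[start] = 0.0;
        frontier.push({h(start), start});

        while (!frontier.empty()) {
            auto [f, s] = frontier.top();
            frontier.pop();
            if (f > g[s] + h(s)) continue;   // stale entry: a cheaper path to s exists
            if (s == goal) return g[s];      // first goal expanded is optimal

            auto it = graph.find(s);
            if (it == graph.end()) continue;
            for (const Succ& nb : it->second) {
                double g2 = g[s] + nb.cost;
                auto best = g.find(nb.to);
                if (best == g.end() || g2 < best->second) {
                    g[nb.to] = g2;                           // better path found
                    frontier.push({g2 + h(nb.to), nb.to});
                }
            }
        }
        return std::numeric_limits<double>::infinity();
    }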

The fact that f-costs are nondecreasing along any path also means that we can draw contours in the state space, just like the contours in a topographic map:

Romania Contours
Figure ISS-6: Map of Romania showing contours at f = 380, f=400, f=420, with Arad as the start state. Nodes inside a given contour have f-costs less than or equal to the contour value.

Inside the contour labeled 400, all nodes have f(n) less than or equal to 400, and so on. Then, because A* expands the frontier node of lowest f-cost, we can see that an A* search fans out from the start node, adding nodes in concentric bands of increasing f-cost.

With uniform-cost search (A* search using h(n) = 0), the bands will be circular around the start state. With more accurate heuristics, the bands will stretch toward the goal state and become more narrowly focused around the optimal path. If C* is the cost of the optimal solution path, then we can say the following:

  • A* expands all nodes with f(n) < C*.
  • A* might then expand some of the nodes right on the "goal contour" (where f(n) = C*) before selecting the goal node.

Completeness requires that there be only finitely many nodes with cost less than or equal to C*, a condition that is true if all step costs exceed some finite ε and if b is finite.

Notice that A* expands no nodes with f(n) > C* - for example, Timisoara is not expanded in Figure ISS-4, even though it is a child of the root. We say that the subtree below Timisoara is pruned; because hSLD is admissible, the algorithm can safely ignore this subtree while still guaranteeing optimality. The concept of pruning - eliminating possibilities from consideration without having to examine them - is important for many areas of AI.

One final observation is that among optimal algorithms of this type - algorithms that extend search paths from the root and use the same heuristic information - A* is optimally efficient for any given consistent heuristic. That is, no other optimal algorithm is guaranteed to expand fewer nodes than A* (except possibly through tie-breaking among nodes with f(n) = C*). This is because any algorithm that does not expand all nodes with f(n) < C* runs the risk of missing the optimal solution.

That A* search is complete, optimal, and optimally efficient among all such algorithms is rather satisfying. Unfortunately, it does not mean that A* is the answer to all our searching needs. The catch is that, for most problems, the number of states within the goal contour search space is still exponential in the length of the solution. The basic results of the analysis are as follows: For problems with constant step costs, the growth in run time as a function of the optimal solution depth d is analyzed in terms of absolute error or the relative error of the heuristic. The absolute error is defined as Δ ≡ h* - h, where h* is the actual cost of getting from the root to the goal, and the relative error is defined as ε ≡ (h* - h) / h*.

The complexity results depend very strongly on the assumptions made about the state space. The simplest model studied is a state space that has a single goal and is essentially a tree with reversible actions. In this case, the time complexity of A* is exponential in the maximum absolute error, that is, O(b^Δ). For constant step costs, we can write this as O(b^(εd)), where d is the solution depth. For almost all heuristics in practical use, the absolute error is at least proportional to the path cost h*, so ε is constant or growing and the time complexity is exponential in d. We can also see the effect of a more accurate heuristic: O(b^(εd)) = O((b^ε)^d), so the effective branching factor is b^ε.
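
As a small numerical illustration of that last point, the sketch below tabulates the effective branching factor b^ε for a few sample values of ε; the branching factor of 3 and the error values are arbitrary numbers chosen for the example, not measurements from any particular search problem.

    #include <cmath>
    #include <cstdio>

    // A smaller relative error epsilon (a more accurate heuristic) gives
    // a smaller effective branching factor b^epsilon, and hence a smaller
    // O((b^epsilon)^d) search tree for the same solution depth d.
    int main() {
        const double b = 3.0;                         // assumed branching factor
        const double epsilons[] = {1.0, 0.5, 0.25, 0.1};
        for (double eps : epsilons) {
            std::printf("epsilon = %.2f  ->  b^epsilon = %.3f\n",
                        eps, std::pow(b, eps));
        }
        return 0;
    }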

When the state space has many goal states - particularly near-optimal goal states - the search process can be led astray from the optimal path and there is an extra cost proportional to the number of goals whose cost is within a factor ε of the optimal cost. Finally, in the general case of a graph, the situation is even worse. There can be exponentially many states with f(n) < C* even if the absolute error is bounded by a constant. For example, consider a version of the vacuum world where the agent can clean up any square for unit cost without even having to visit it: in that case, squares can be cleaned in any order. With N initially dirty squares, there are 2^N states where some subset has been cleaned and all of them are on an optimal solution path - and hence satisfy f(n) < C* - even if the heuristic has an error of 1.

The complexity of A* often makes it impractical to insist on finding an optimal solution. One can use variants of A* that find suboptimal solutions quickly, or one can sometimes design heuristics that are more accurate but not strictly admissible. In any case, the use of a good heuristic still provides enormous savings compared to the use of an uninformed search.

Computation time is not, however, A*'s main drawback. Because it keeps all generated nodes in memory (as do all Graph-Search algorithms), A* usually runs out of space long before it runs out of time. For this reason, A* is not practical for many large-scale problems. There are, however, algorithms that overcome the space problem without sacrificing optimality or completeness, but with a small cost in execution time.
