
Solving Problems by Searching

Problem Solving Agents

The simplest agents previously discussed were the reflex agents, which base their actions on a direct mapping from states to actions. Such agents cannot operate well in environments for which this mapping would be too large to store and would take too long to learn. Goal-based agents, on the other hand, can succeed by considering future actions and the desirability of their outcomes.

One kind of goal-based agent is called a problem-solving agent. Problem-solving agents use atomic representations. In an atomic representation each state of the world is indivisible and has no internal structure. In other words, states of the world are considered as wholes, with no internal structure visible to the problem-solving algorithms.

Our discussion of problem-solving agents begins with precise definitions of problems and their solutions, along with several examples that illustrate these definitions. We then describe several general-purpose algorithms that can be used to solve these problems. We will see several uninformed search algorithms - algorithms that are given no information about the problem other than its definition. Although some of these algorithms can solve any solvable problem, none of them can do so efficiently. Informed search algorithms, on the other hand, can often do quite well given some idea of where to look for solutions.

Initially, we will limit ourselves to the simplest kind of task environment, for which the solution to a problem is always a fixed sequence of actions. We will use concepts from the analysis of algorithms, specifically asymptotic complexity (Big-O notation) and NP-completeness.

Big O Notation

The analysis of algorithms and the O() notation allow us to talk about the efficiency of a particular algorithm. However, they have nothing to say about whether there could be a better algorithm for the problem at hand. The field of complexity analysis analyzes problems rather than algorithms. The first gross division is between problems that can be solved in polynomial time and problems that cannot be solved in polynomial time, no matter what algorithm is used. The class of polynomial problems - those which can be solved in time O(n^k) for some k - is called P. These are sometimes called "easy" problems, because the class contains those problems with running times like O(log n) and O(n). But it also contains those with time O(n^1000), so the name "easy" should not be taken too literally.
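
For reference, the standard textbook definition of the O() notation (not specific to these notes): f(n) = O(g(n)) if and only if there exist constants c > 0 and n0 >= 0 such that f(n) <= c * g(n) for all n >= n0. For example, 3n^2 + 10n = O(n^2), with c = 4 and n0 = 10 being one valid choice of constants.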

NP Completeness

Another important class of problems is NP, the class of Nondeterministic Polynomial problems. A problem is in this class if there is some algorithm that can guess a solution and then verify whether the guess is correct in polynomial time. The idea is that if you have an arbitrarily large number of processors, so that you can try all the guesses at once, or you are very lucky and always guess right the first time, then the NP problems become P (see above) problems. One of the biggest open questions in computer science is whether the class NP is equivalent to the class P when one does not have the luxury of an infinite number of processors or omniscient guessing. Most computer scientists are convinced that P ≠ NP; that NP problems are inherently hard and have no polynomial time algorithms. But this has never been proven.

Those who are interested in deciding whether P = NP look at the subclass of NP called the NP-complete problems. The word complete is used here in the sense of "most extreme," and thus refers to the hardest problems in the class NP. It has been proven that either all the NP-complete problems are in P or none of them are. This makes the class theoretically interesting, but the class is also of practical interest because many important problems have been shown to be NP-complete. A canonical example is the satisfiability problem for propositional logic: is there any assignment of truth values to the proposition symbols of a sentence that makes it true? Unless a miracle occurs and P = NP, there can be no algorithm that solves all satisfiability problems in polynomial time. However, AI is more interested in whether there are algorithms that perform efficiently on typical problems drawn from a predetermined distribution.

Intelligent agents are supposed to maximize their performance measure. Achieving this is sometimes simplified if the agent can adopt a goal and aim at satisfying it. Let us first look at why and how an agent might do this.

Imagine an agent in the city of Arad, Romania, enjoying a touring holiday. The agent's performance measure contains many factors: it wants to improve its suntan, improve its Romanian language, take in the sights, enjoy the Romanian nightlife, avoid hangovers, and so on. The decision problem is a complex one involving many tradeoffs and careful reading of guidebooks. Now, suppose the agent has a nonrefundable ticket to fly out of Bucharest the following day. In that case, it makes sense for the agent to adopt the goal of getting to Bucharest. Courses of action that don't reach Bucharest on time can be rejected without further consideration and the agent's decision problem is greatly simplified. Goals help organize behavior by limiting the objectives that the agent is trying to achieve and hence the actions it needs to consider. Goal formulation, based on the current situation and the agent's performance measure, is the first step in problem solving.

We will consider a goal to be a set of world states - exactly those states in which the goal is satisfied. The agent's task is to find out how to act, now and in the future, so that it reaches a goal state. Before it can do this, it needs to decide (or we need to decide on its behalf) what sorts of actions and states it should consider. If it were to consider actions at the level of "move left foot forward an inch" or "turn the steering wheel one degree left," the agent would probably never find its way out of the parking lot, let alone to Bucharest, because at that level of detail there is too much uncertainty in the world and there would be too many steps in a solution. Problem formulation is the process of deciding what actions and states to consider, given a goal. Let us assume that the agent will consider actions at the level of driving from one major town to another. Each state therefore corresponds to being in a particular town.

Our agent has now adopted the goal of driving to Bucharest, and is considering where to go from Arad. There are three roads out of Arad, one toward Sibiu, one to Timisoara, and one to Zerind. None of these achieves the goal, so unless the agent is very familiar with the geography of Romania, it will not know which road to follow. In other words, the agent will not know which of its possible actions is best, because it does not yet know enough about the state that results from taking each action. If the agent has no additional information - i.e., if the environment is unknown in the sense defined in Section 2.3 - then it has no choice but to try one of the actions at random.

But suppose the agent has a map of Romania. The point of a map is to provide the agent with information about the states it might get itself into, and the actions it can take. The agent can use this information to consider subsequent stages of a hypothetical journey via each of the three towns, trying to find a journey that eventually gets to Bucharest. Once it has found a path on the map from Arad to Bucharest, it can achieve its goal by carrying out the driving actions that correspond to the legs of the journey. In general, an agent with several immediate options of unknown value can decide what to do first by examining future actions that eventually lead to states of known value.

To be even more specific about what we mean by "examining future actions," we have to be more specific about properties of the environment. For now, we will assume that the environment is observable, so that the agent always knows the current state. For the agent driving in Romania, it's reasonable to suppose that each city on the map has a sign indicating its presence to arriving drivers. We will also assume the environment is discrete, so that at any given state there are only finitely many actions to choose from. This is true for navigating in Romania because each city is connected to a small number of other cities. We will assume the environment is known, so that the agent knows which states are reached by each action. Next, we assume our map is accurate. And finally, we assume that the environment is deterministic, so that each action has exactly one outcome. Under ideal conditions, this is true for the agent in Romania - it means that if it chooses to drive from Arad to Sibiu, it does end up in Sibiu.

Under these assumptions, the solution to any problem is a fixed sequence of actions. The process of looking for a sequence of actions that reaches the goal is called a search. A search algorithm takes a problem as input and returns a solution in the form of an action sequence. Once a solution is found, the actions it recommends can be carried out; this is called the execution phase. Figure Agent-1 gives us a simple "formulate, search, execute" design for the agent:

function Simple-Problem-Solving-Agent(percept) returns an action
  persistent:
    seq, an action sequence, initially empty
    state, some description of the current world state
    goal, a goal, initially null
    problem, a problem formulation

  state ← Update-State(state, percept)
  if seq is empty then
    goal ← Formulate-Goal(state)
    problem ← Formulate-Problem(state, goal)
    seq ← Search(problem)
    if seq = failure then return a null action
  action ← First(seq)
  seq ← Rest(seq)
  return action
Figure Agent-1. A simple problem-solving agent. It first formulates a goal and a problem, searches for a sequence of actions that would solve the problem, and then executes the actions one at a time. When this is complete, it formulates another goal and starts over.

After formulating a goal and a problem to solve, the agent calls a search procedure to solve it. It then uses the solution to guide its actions, doing whatever the solution recommends as the next thing to do - typically the first action of the sequence - and then removing that step from the sequence. Once the solution has been executed, the agent will formulate a new goal.
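
As a concrete illustration, here is a minimal C++ sketch of the formulate-search-execute loop from Figure Agent-1. The string-based State/Action types and the stub helper functions are placeholders of our own, not part of these notes; a real agent would supply domain-specific versions of each.

#include <deque>
#include <optional>
#include <string>

// Placeholder types; a real agent would define these per domain.
using State   = std::string;
using Percept = std::string;
using Goal    = std::string;
using Action  = std::string;
struct Problem { State initial; Goal goal; };

// Stub helpers standing in for Update-State, Formulate-Goal,
// Formulate-Problem, and Search from Figure Agent-1.
State update_state(const State&, const Percept& p) { return p; }
Goal formulate_goal(const State&) { return "goal"; }
Problem formulate_problem(const State& s, const Goal& g) { return {s, g}; }
std::optional<std::deque<Action>> search(const Problem&) {
    return std::deque<Action>{"a1", "a2"};
}

struct SimpleProblemSolvingAgent {
    std::deque<Action> seq;  // action sequence, initially empty
    State state;             // description of the current world state

    std::optional<Action> act(const Percept& percept) {
        state = update_state(state, percept);
        if (seq.empty()) {
            Goal goal = formulate_goal(state);
            Problem problem = formulate_problem(state, goal);
            auto solution = search(problem);
            if (!solution) return std::nullopt;  // null action on failure
            seq = *solution;
        }
        Action action = seq.front();  // First(seq)
        seq.pop_front();              // Rest(seq)
        return action;                // percepts are ignored mid-plan
    }
};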

Notice that while the agent is executing the solution sequence it ignores its percepts when choosing an action because it knows in advance what they will be. An agent that carries out its plans with its eyes closed, so to speak, must be quite certain of what is going on. Control theorists call this an open-loop system, because ignoring the percepts breaks the loop between agent and environment.

We first describe the process of problem formulation, and then present various algorithms for the Search function.


Well-defined problems and solutions

A problem can be defined formally by five components:

  • The initial state that the agent starts in. For example, the initial state for our agent in Romania might be described as In(Arad).

  • A description of the possible actions available to the agent. Given a particular state s, ACTIONS(s) returns the set of actions that can be executed in s. For example, from the state In(Arad), the possible actions are {Go(Sibiu), Go(Timisoara), Go(Zerind)}.

  • A description of what each action does; the formal name for this is the transition model, specified by a function Result(s,a) that returns the state that results from doing action a in state s. We will also use the term successor to refer to any state reachable from a given state by a single action. For example, we have:

    Result(In(Arad), Go(Zerind)) = In(Zerind).

    Together, the initial state, actions, and transition model implicitly define the state space of the problem - the set of all states reachable from the initial state by any sequence of actions. The state space forms a directed network or graph in which the nodes are states and the links between nodes are actions. (The map of Romania shown in Figure Agent-2 can be interpreted as a state space graph if we view each road as standing for two driving actions, one in each direction.) A path in the state space is a sequence of states connected by a sequence of actions.


    Figure Agent-2: A simplified road map of Romania


  • The goal test, which determines whether a given state is a goal state. Sometimes there is an explicit set of possible goal states, and the test simply checks whether the given state is one of them. The agent's goal in Romania is the singleton set {In(Bucharest)}. Sometimes the goal is specified by an abstract property rather than an explicitly enumerated set of states. For example, in chess, the goal is to reach a state called checkmate, where the opponent's king is under attack and can't escape.

  • A path cost function that assigns a numeric cost to each path. The problem-solving agent chooses a cost function that reflects its own performance measure. For the agent trying to get to Bucharest, time is of the essence, so the cost of a path might be its length in kilometers. For now, we assume that the cost of a path can be described as the sum of the costs of the individual actions along the path. The step cost of taking action a in state s to reach state s' is denoted by c(s,a,s'). The step costs for Romania are shown in Figure Agent-2 as route distances. We will assume that step costs are nonnegative.

The preceding elements define a problem and can be gathered together into a single data structure that is given as input to a problem-solving algorithm. A solution to a problem is an action sequence that leads from the initial state to a goal state. Solution quality is measured by the path cost function, and an optimal solution has the lowest path cost among all solutions.
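
The five components map naturally onto a small C++ interface. This is a sketch under our own naming conventions, not code from these notes; concrete problems (route finding, the 8-puzzle, and so on) would derive from it:

#include <string>
#include <vector>

// The five problem components as an abstract interface.
struct SearchProblem {
    using State  = std::string;
    using Action = std::string;

    virtual State initial_state() const = 0;                          // component 1
    virtual std::vector<Action> actions(const State& s) const = 0;    // component 2: ACTIONS(s)
    virtual State result(const State& s, const Action& a) const = 0;  // component 3: transition model
    virtual bool goal_test(const State& s) const = 0;                 // component 4
    virtual double step_cost(const State& s, const Action& a,
                             const State& next) const = 0;            // component 5: c(s,a,s')
    virtual ~SearchProblem() = default;
};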


Formulating Problems

In the preceding section we proposed a formulation of the problem of getting to Bucharest in terms of the initial state, actions, transition model, goal test, and path cost. This formulation seems reasonable, but is still a model - an abstract mathematical description - and not the real thing. Compare the simple state description we have chosen, In(Arad), to an actual cross-country trip, where the state of the world includes so many things: the traveling companions, what is on the radio, the scenery out the window, whether there are any law enforcement officers nearby, how far it is to the next rest stop, the condition of the road, the weather, and so on. All these considerations are left out of our state descriptions because they are irrelevant to the problem of finding a route to Bucharest. The process of removing detail from a representation is called abstraction.

In addition to abstracting the state description, we must abstract the actions themselves. A driving action has many effects. Besides changing the location of the vehicle and its occupants, it takes up time, consumes fuel, generates pollution, and changes the agent. Our formulation takes into account only the change in location. Also, there are many actions that we will omit altogether: turning on the radio, stopping at McDonald's, getting speeding tickets, being thrown in jail for drunk driving, and so on. And of course, we don't specify turning the steering wheel 97 degrees, or screaming at the kids.

Can we be more precise about defining the appropriate level of abstraction? Probably not, but let's give it a shot anyway. Think of the abstract states and actions we have chosen as corresponding to large sets of detailed world states and detailed action sequences. Now consider a solution to the abstract problem: the path from Arad to Sibiu to Rimnicu Vilcea to Pitesti to Bucharest. This abstract solution corresponds to a large number of more detailed paths. For example, we could drink beer between Sibiu and Rimnicu Vilcea, then smoke pot for the rest of the trip, although this may add to the mileage, since we forgot where we were going but didn't care anyway. The abstraction is valid if we can expand any abstract solution into a solution in the more detailed world; a sufficient condition is that for every detailed state that is In(Arad), there is a detailed path to some state that is In(Sibiu), and so on. The abstraction is useful if carrying out each of the actions in the solution is easier than the original problem; in this case they are easy enough that they can be carried out without further search or planning by an average driving agent. But, if we are planning on drinking beer on this trip, and we don't want to get arrested, we had better plan on having a better-than-average driving agent. The choice of a good abstraction thus involves removing as much detail as possible while retaining validity and ensuring that the abstract actions are easy to carry out. Were it not for the ability to construct useful abstractions, intelligent agents would not be considered very intelligent, especially when being swamped by activities in the real world.



Example Problems

The problem-solving approach has been applied to a vast array of task environments. We list some of the best known here, distinguishing between toy and real-world problems. A toy problem is intended to illustrate or exercise various problem-solving methods. It can be given a concise, exact description and hence is usable by different researchers to compare the performance of algorithms. A real-world problem is one whose solutions people actually care about. They tend not to have a single agreed-upon description, so we will do our best to describe the formulations used.


Toy Problems

Vacuum World

The first example we will examine is the vacuum world, as diagrammed in Figure Agent-3:

Figure Agent-3 The state space for the vacuum world. Links denote actions: L=Left, R=Right, S=Suck.

The problem is formulated as follows:

  • States: The state is determined by both the agent location and the dirt locations. The agent is in one of two locations, each of which might or might not contain dirt. Thus there are 2 * 2^2 = 8 possible world states. A larger environment with n locations has n * 2^n states.

  • Initial State: Any state can be designated as the initial state.

  • Actions: In this simple environment, each state has just three actions: Left, Right, and Suck. Larger environments might also include Up and Down.

  • Transition Model: The actions have their expected effects, except that moving Left in the leftmost square, moving Right in the rightmost square, and Sucking in a clean square have no effect.

  • Goal Test: This checks whether all the squares are clean.

  • Path Cost: Each step costs 1, so the path cost is the number of steps in the path.

Compared with the real world, this toy problem has discrete locations, discrete dirt, reliable cleaning, and it never gets messed up once cleaned.
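
To make the formulation concrete, here is a minimal C++ sketch of the two-location vacuum world. The three-bit state encoding is our own choice for illustration: bit 0 is the agent's location (0 = left, 1 = right), bit 1 is dirt in the left square, bit 2 is dirt in the right square, giving the 2 * 2^2 = 8 states counted above.

#include <cstdint>
#include <initializer_list>
#include <iostream>

enum Action { Left, Right, Suck };

// Transition model: movement is a no-op at the walls, and Suck
// clears the dirt bit for whichever square the agent occupies.
uint8_t result(uint8_t s, Action a) {
    switch (a) {
        case Left:  return s & ~1;                         // clear location bit
        case Right: return s | 1;                          // set location bit
        case Suck:  return (s & 1) ? (s & ~4) : (s & ~2);  // clean current square
    }
    return s;
}

bool goal_test(uint8_t s) { return (s & 6) == 0; }  // both squares clean

int main() {
    uint8_t s = 0b110;  // agent at left, both squares dirty
    for (Action a : {Suck, Right, Suck}) s = result(s, a);
    std::cout << "goal reached: " << std::boolalpha << goal_test(s) << '\n';  // true
}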


8-puzzle

The 8-puzzle, an instance of which is shown in Figure Agent-4, consists of a 3 x 3 board with eight numbered tiles and a blank space. A tile adjacent to the blank space can slide into the space. The object is to reach a specified goal state, such as the one shown on the right of the figure:


Figure Agent-4: A typical instance of the 8-puzzle.


The standard formulation is as follows:

  • States: A state description specifies the location of each of the eight tiles and the blank in one of the nine squares.

  • Initial State: Any state can be designated as the initial state. Note that any given goal can be reached from exactly half of the possible initial states.

  • Actions: The simplest formulation defines the actions as movements of the blank space Left, Right, Up, or Down. Different subsets of these are possible depending on where the blank is.

  • Transition Model: Given a state and action, this returns the resulting state; for example, if we apply Left to the start state in Figure Agent-4, the resulting state has the 5 and the blank switched, since the actions refer to movements of the blank space.

  • Goal Test: This checks whether the state matches the goal configuration shown in Figure Agent-4.

  • Path Cost: Each step costs 1, so the path cost is the number of steps in the path.

What abstractions have we included here? The actions are abstracted to their beginning and final states, ignoring the intermediate locations where the block is sliding. We have abstracted away actions such as shaking the board when pieces get stuck, or extracting the pieces with a knife and putting them back again. We are left with a description of the rules of the puzzle, avoiding all the details of physical manipulations.

The 8-puzzle belongs to the family of sliding-block puzzles, which are often used as test problems for new search algorithms in AI. This family is known to be NP-complete, so one does not expect to find methods significantly better in the worst case than the search algorithms described in this analysis. The 8-puzzle has 9!/2 = 181,440 reachable states and is easily solved. The 15-puzzle (on a 4 x 4 board) has around 1.3 trillion states, and random instances can be solved optimally in a few milliseconds by the best search algorithms. The 24-puzzle (on a 5 x 5 board) has around 10^25 states, and random instances take several hours to solve optimally.
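
The "9!/2 reachable states" figure (and the earlier note that any goal is reachable from exactly half of the initial states) follows from a parity argument: on an odd-width board, every legal slide preserves the parity of the number of tile inversions. A short C++ sketch of the resulting solvability test:

#include <array>
#include <iostream>

// Count inversions among the tiles (blank = 0 is ignored). On a 3 x 3
// board, a start state can reach a goal state only if their inversion
// counts have the same parity.
int inversions(const std::array<int, 9>& s) {
    int inv = 0;
    for (int i = 0; i < 9; ++i)
        for (int j = i + 1; j < 9; ++j)
            if (s[i] && s[j] && s[i] > s[j]) ++inv;
    return inv;
}

int main() {
    std::array<int, 9> goal  = {1, 2, 3, 4, 5, 6, 7, 8, 0};
    std::array<int, 9> start = {1, 2, 3, 4, 5, 6, 8, 7, 0};  // 7 and 8 swapped
    bool solvable = inversions(start) % 2 == inversions(goal) % 2;
    std::cout << "solvable: " << std::boolalpha << solvable << '\n';  // false
}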


8-Queens Problem

The goal of the 8-queens problem is to place eight queens on a chessboard such that no queen attacks any other. (A queen attacks any piece in the same row, column or diagonal.) Figure Agent-5 shows an attempted solution that fails: the queen in the rightmost column is attacked by the queen at the top left.


Figure Agent-5: Almost a solution to the 8-Queens problem.

Although efficient special-purpose algorithms exist for this problem and for the whole N-Queens family, it remains a useful test problem for search algorithms. There are two main kinds of formulation. An incremental formulation involves operators that augment the state description, starting with an empty state; for the 8-Queens problem, this means that each action adds a queen to the state. A complete-state formulation starts with all 8 queens on the board and moves them around. In either case, the path cost is of no interest because only the final state counts. The first incremental formulation one might try is the following:

  • States: Any arrangement of 0 to 8 queens on the board is a state.
  • Initial State: No queens on the board.
  • Actions: Add a queen to any empty square.
  • Transition Model: Returns the board with a queen added to the specified square.
  • Goal Test: 8 queens are on the board, none attacked.

In this formulation we have 64 * 63 * 62 * ... * 57, or about 1.8 x 10^14, possible sequences to investigate. A better formulation would prohibit placing a queen in any square that is already attacked:

  • States: All possible arrangements of n queens (0 ≤ n ≤ 8), one per column in the leftmost n columns, with no queen attacking another.

  • Actions: Add a queen to any square in the leftmost empty column such that it is not attacked by any other queen.

This formulation reduces the 8-Queens state space from 1.8 x 10^14 to just 2,057, and solutions are easy to find. On the other hand, for 100 queens the reduction is from roughly 10^400 states to about 10^52 states: a big improvement, but not enough to make the problem tractable.
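
A small C++ sketch of this improved formulation; counting every non-attacking partial arrangement it generates (including the empty board) reproduces the 2,057 figure for n = 8:

#include <cstdlib>
#include <iostream>
#include <vector>

// rows[c] is the row of the queen in column c; columns fill left to right.
bool attacked(const std::vector<int>& rows, int row) {
    int col = static_cast<int>(rows.size());
    for (int c = 0; c < col; ++c)
        if (rows[c] == row || std::abs(rows[c] - row) == col - c)
            return true;  // same row or same diagonal
    return false;
}

// Count every state reachable under the improved formulation.
long count_states(std::vector<int>& rows, int n) {
    long states = 1;  // this non-attacking arrangement is itself a state
    if (static_cast<int>(rows.size()) == n) return states;
    for (int row = 0; row < n; ++row)
        if (!attacked(rows, row)) {
            rows.push_back(row);
            states += count_states(rows, n);
            rows.pop_back();
        }
    return states;
}

int main() {
    std::vector<int> rows;
    std::cout << count_states(rows, 8) << '\n';  // prints 2057
}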


The Knuth Sequence

Our final toy problem was devised by Donald Knuth in 1964 and illustrates how infinite spaces can arise. Knuth conjectured that one can start with the number 4, apply a sequence of factorial, square root, and floor operations, and arrive at any desired positive integer. For example:

    floor(sqrt(sqrt(sqrt(sqrt(sqrt((4!)!)))))) = 5
    The problem definition is very simple:

  • States: Positive Numbers.
  • Initial State: 4.
  • Actions: Apply factorial, square root, or floor operation. Factorial can be applied only to integers.

  • Transition Model: As given by mathematical definitions of the operations.

  • Goal Test: State is the desired positive integer.

To our knowledge there is no bound on how large a number might be constructed in the process of reaching a given target - for example, 620,448,401,733,239,439,360,000 is generated in the expression for 5 - so the state space for this problem is infinite. Such state spaces arise very frequently in tasks involving the generation of mathematical expressions, circuits, proofs, programs, and other recursively defined objects.
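
As a quick sanity check of that example in C++ (double precision is accurate enough here): (4!)! = 24! = 620,448,401,733,239,439,360,000, and five square roots followed by a floor give 5.

#include <cmath>
#include <iostream>

int main() {
    double x = 620448401733239439360000.0;  // (4!)! = 24!
    for (int i = 0; i < 5; ++i) x = std::sqrt(x);
    std::cout << std::floor(x) << '\n';     // prints 5
}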


Real World Problems

Route Finding Problems

We have already seen how the route-finding problem is defined in terms of specified locations and transitions along links between them. Route-finding algorithms are used in a variety of applications. Some, such as Web sites and in-car systems that provide driving directions, are relatively straightforward extensions of the Romania example. Others, such as routing video streams in computer networks, military operations planning, and airline travel planning systems, involve much more complex specifications. Consider the airline travel problems that must be solved by a travel planning Web site:

  • States: Each state obviously includes a location (for the heck of it, let's call the location an 'airport') and the current time. Furthermore, because the cost of an action (let's be original and call this a 'flight') may depend on previous flight segments, their fare basis, and perhaps whether they were domestic or international (more regulations apply to the latter), the state must also record this information about the earlier segments.

  • Initial State: This is specified by the user's query.

  • Actions: Take any flight from the current location, in any seat class, leaving after the current time, leaving enough time for within-airport transfer if there was a preceding flight segment.

  • Transition Model: The state resulting from taking a flight will have the flight's destination as the current location and the flight's arrival time as the current time.

  • Goal Test: Are we at the final destination specified by the user?

  • Path Cost: This depends on monetary cost, waiting time, flight time, customs and immigration procedures, seat quality, time of day, type of aeroplane, frequent-flyer mileage awards, and so on, and on, and on.

Commercial travel advice systems use a problem formulation of this kind, with many additional complications to handle the byzantine fare structures that airlines impose, e.g., they want to charge me for a whole row when I forget to put on my deodorant. If it is such a big deal, then why don't they just have a community anti-perspirant that we all could use? As any seasoned traveler knows, however, not all air travel goes according to plan. A really good system should include contingency plans - such as backup reservations on alternate flights - to the extent that these are justified by the cost and likelihood of failure of the original plan.


Touring Problems

Touring problems are closely related to route-finding problems, but with an important, very important, difference. Consider, just consider, for a moment, if you will, the problem "Visit every city in Figure Agent-2 at least once, starting and ending in Bucharest." As with route finding, the actions correspond to trips between adjacent cities. The state space, however, is quite different. Each state must include not just the current location but also the set of cities the agent has visited. So, the initial state would be In(Bucharest), Visited({Bucharest}), a typical intermediate state would be In(Vaslui), Visited({Bucharest, Urziceni, Vaslui}), and the goal test would check whether the agent is in Bucharest and all 20 cities have been visited.
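
In C++, such a state is naturally a pair of current city and visited set; with the 20 cities of the Romania map, a bitmask suffices. The city numbering here is hypothetical:

#include <bitset>
#include <iostream>

constexpr int kNumCities = 20;  // cities on the Romania map
constexpr int kBucharest = 0;   // hypothetical numbering

struct TourState {
    int city;                         // current location
    std::bitset<kNumCities> visited;  // set of cities visited so far
};

// Goal test: back in Bucharest with every city visited.
bool goal_test(const TourState& s) {
    return s.city == kBucharest && s.visited.all();
}

int main() {
    TourState s{kBucharest, {}};
    s.visited.set(kBucharest);  // In(Bucharest), Visited({Bucharest})
    std::cout << "at goal yet: " << std::boolalpha << goal_test(s) << '\n';  // false
}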


Traveling Salesperson Problem

The Traveling Salesperson Problem (TSP) is a touring problem in which each city must be visited exactly once. The aim is to find the shortest tour. The problem is known to be NP-hard, but an enormous amount of effort has been expended to improve the capabilities of TSP algorithms. In addition to planning trips for traveling salespersons, these algorithms have been used for tasks such as planning movements of automatic circuit-board drills and of stocking machines on shop floors.


VLSI Layout

A VLSI layout problem requires positioning millions of components and connections on a chip to minimize area, minimize circuit delays, minimize stray capacitances, and maximize manufacturing yield. The layout problem comes after the logical design phase, and is usually split into two parts: cell layout and channel routing. In cell layout, the primitive components of the circuit are grouped into cells, each of which performs some recognized function. Each cell has a fixed footprint (size and shape) and requires a certain number of connections to each of the other cells. The aim is to place the cells on the chip so that they do not overlap and so that there is room for the connecting wires to be placed between the cells. Channel routing finds a specific route for each wire through the gaps between the cells. These search problems are extremely complex, but definitely worth solving.


Robot Navigation

Robot Navigation is a generalization of the route-finding problem described earlier. Rather than a discrete set of routes, a robot can move in a continuous space with (in principle) an infinite set of possible actions and states. For a circular robot moving on a flat surface, the space is essentially two-dimensional. When the robot has arms and legs or wheels that must also be controlled, the search space becomes many-dimensional. Advanced techniques are required just to make the search space finite. In addition to the complexity of the problem, real robots must also deal with errors in their sensor readings and motor controls.


Automatic Assembly Sequencing

Automatic assembly sequencing of complex objects by a robot was first demonstrated by Freddy. Progress since then has been slow but sure, to the point where the assembly of intricate objects such as electric motors is economically feasible. In assembly problems, the aim is to find an order in which to assemble the parts of some object. If the wrong order is chosen, there will be no way to add some part later in the sequence without undoing some of the work already done. Checking a step in the sequence for feasibility is a difficult geometrical search problem closely related to robot navigation. Thus, the generation of legal actions is the expensive part of assembly sequencing. Any practical algorithm must avoid exploring all but a tiny fraction of the state space. Another important assembly problem is protein design, in which the goal is to find a sequence of amino acids that will fold into a three-dimensional protein with the right properties to cure some disease.

Searching For Solutions

Having formulated some problems, we now need to solve them. A solution is an action sequence, so search algorithms work by considering various possible action sequences. The possible action sequences starting at the initial state form a search tree with the initial state at the root; the branches are actions and the nodes correspond to states in the state space of the problem. Figure Agent-6 shows the first few steps in growing the search tree for finding a route from Arad to Bucharest:


Figure Agent-6: Partial search trees for finding a route from Arad to Bucharest. Nodes that have been expanded are shaded; nodes that have been generated but not yet expanded are outlined in bold; nodes that have not yet been generated are shown in faint dashed lines.


The root node of the tree corresponds to the initial state, In(Arad). The first step is to test whether this is a goal state. (Clearly it is not, but it is important to check so that we can solve trick problems like "Starting in Arad, get to Arad"). Then we need to consider taking various actions. This is done by expanding the current state; that is, applying each legal action to the current state, thereby generating a new set of states. In this case, we add three branches from the parent node: In(Arad) leading to the three new child nodes: In(Sibiu), In(Timisoara), and In(Zerind). Now we must choose which of these three possibilities to consider further.

This is the essence of the search - following up one option now and putting the others aside for later, in case the first choice does not lead to a solution. Suppose we choose Sibiu first. We check to see whether it is a goal state (it is not) and then expand it to get In(Arad), In(Fagaras), In(Oradea), and In(RimnicuVilcea). We can then choose any of these four, or go back and choose Timisoara or Zerind. Each of these six nodes is a leaf node, that is, a node with no children in the tree. The set of all leaf nodes available for expansion at any given point is called the frontier. (Many authors call it the open list, which is both geographically less evocative and inaccurate, as it need not be stored as a list at all.) In Figure Agent-6, the frontier of each tree consists of those nodes with bold outlines.

The process of choosing and expanding nodes in the frontier continues until either a solution is found or there are no more states to be expanded. The general Tree-Search algorithm is shown in Figure Agent-7. Search algorithms all share this basic structure; they vary primarily according to how they choose which state to expand next - the so-called search strategy.

Function Tree-Search(problem) returns a solution, or failure
  initialize the frontier using the initial state of problem
  loop do
    if the frontier is empty then return failure
    choose a leaf node and remove it from the frontier
    if the node contains a goal state then return the corresponding solution
    expand the chosen node, adding the resulting nodes to the frontier

Function Graph-Search(problem) returns a solution, or failure
  initialize the frontier using the initial state of problem
  initialize the explored set to be empty
  loop do
    if the frontier is empty then return failure
    choose a leaf node and remove it from the frontier
    if the node contains a goal state then return the corresponding solution
    add the node to the explored set
    expand the chosen node, adding the resulting nodes to the frontier
      only if not in the frontier or explored set
Figure Agent-7: An informal description of the general tree-search and graph-search algorithms. The parts of Graph-Search not present in Tree-Search (the explored set and the membership check when expanding) are the additions needed to handle repeated states.
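
Here is a compact C++ sketch of Graph-Search with a FIFO frontier (making it a breadth-first graph search) over an explicit adjacency map. The reached set stands in for "frontier or explored set," and the three-city graph in main is only a fragment of the Romania map:

#include <iostream>
#include <map>
#include <optional>
#include <queue>
#include <set>
#include <string>
#include <vector>

using State = std::string;
using Graph = std::map<State, std::vector<State>>;

std::optional<std::vector<State>> graph_search(const Graph& g,
                                               const State& start,
                                               const State& goal) {
    std::queue<State> frontier;
    std::set<State> reached;        // states in the frontier or explored set
    std::map<State, State> parent;  // for extracting the solution path

    frontier.push(start);
    reached.insert(start);
    while (!frontier.empty()) {
        State s = frontier.front();  // choose a node and remove it
        frontier.pop();
        if (s == goal) {             // goal test
            std::vector<State> path{s};
            while (path.back() != start) path.push_back(parent[path.back()]);
            return std::vector<State>(path.rbegin(), path.rend());
        }
        auto it = g.find(s);
        if (it == g.end()) continue;
        for (const State& child : it->second)
            if (reached.insert(child).second) {  // discard repeated states
                parent[child] = s;
                frontier.push(child);
            }
    }
    return std::nullopt;  // frontier empty: failure
}

int main() {
    Graph g{{"Arad",    {"Sibiu", "Timisoara", "Zerind"}},
            {"Sibiu",   {"Arad", "Fagaras", "Oradea", "RimnicuVilcea"}},
            {"Fagaras", {"Sibiu", "Bucharest"}}};
    if (auto path = graph_search(g, "Arad", "Bucharest"))
        for (const State& city : *path) std::cout << city << ' ';  // Arad Sibiu Fagaras Bucharest
}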

Turning back to Figure Agent-6, we can see quite a dilemma in that it is possible to return to Arad, thus potentially making this a very long journey. The official term for this is that In(Arad) is a repeated state in the search tree, generated by a loopy path, or unofficially: our case of Budweiser is going to run out long before our trip is over. Considering such loopy paths means that the complete search tree for Romania is infinite, because you can get stuck running around in circles if you're not careful. On the other hand, the state space, as they call it (or, as I would just say, the number of towns on the map), has only 20 states. So, since this initial algorithm allows us to repeat our path, we obviously need to make some minor modifications. Fortunately, since we are trying to cut down our trip distance, we need not consider ever returning to a place we have already visited.

Loopy paths are a special case of the more general concept of redundant paths, which exist whenever there is more than one way to get from one state to another. Consider the paths Arad-Sibiu, about 140 km, and Arad-Zerind-Oradea-Sibiu, about 297 km.


Figure Agent-8: Two possible paths - One good - One not so good.

Officially we would say that the second path is redundant; unofficially, we might say: we better have our navigator cut back on the Bud a little bit. If you are concerned about reaching the goal, there's never any reason to keep around more than one path to any given state, because any goal state that is reachable by extending one path is also reachable by extending the other.

In some cases, it is possible to define the problem itself so as to eliminate redundant paths. For example, if we formulate the 8-Queens problem so that a Queen can be placed in any column, then each state with n queens can be reached by n! different paths; but if we reformulate the problem so that each new queen is placed in the leftmost empty column, then each state can be reached only through one path.

In other cases, redundant paths are unavoidable. This includes all problems where the actions are reversible, such as route-finding problems and sliding-block puzzles. Route finding on a rectangular grid is particularly important in computer games. Initially, our unexplored grid appears as follows:


Figure Agent-9: Unexplored Rectangular Grid - Frontier Only


In such a grid, each state has four successors, so when we initialize our first state it may look like:



Figure Agent-10: Only Root explored - Explored Node In Black - 4 Frontier Nodes in White


Expanding the frontier node above the root node gives us:



Figure Agent-11: One Frontier Node Expanded (White==>Black)


A search tree of depth 5 has 4^5 = 1024 leaves, but there are only 2 * 5^2 = 50 distinct states within 5 steps of any given state. At depth 20, the tree has roughly a trillion leaves, yet there are only 2 * 20^2 = 800 distinct states. Thus, following redundant paths can cause a tractable problem to become intractable, even if the algorithm knows how to avoid infinite loops.

As the saying goes, algorithms that forget their history are doomed to repeat it. The way to avoid exploring redundant paths is to remember where one has been. To do this, we augment the Tree-Search algorithm with a data structure called the explored set (sometimes called the closed list), which remembers every expanded node. Newly generated nodes that match previously generated nodes - ones in the explored set or the frontier - can be discarded instead of being added to the frontier. The new algorithm, called Graph-Search, is shown in the lower section of Figure Agent-7:

Because Graph-Search adds each node it expands to the explored set, the search tree it constructs contains at most one copy of each state (city), so we can think of it as growing a tree directly on the state-space graph, as shown in Figure Agent-12:


Figure Agent-12: Initial Search Tree starting at Arad

  1. Arad ==> Explored Set
  2. Expand Arad
    • Are Zerind, Sibiu or Timisoara in the explored set?
      • If no then:
      • Are Zerind, Sibiu or Timisoara in the Frontier?
        • If no then:
        • Zerind, Sibiu or Timisoara ==> Frontier
  3. If any of the nodes haven't been expanded yet, go back to step 1 with it.
Explored: Arad
Frontier: Zerind, Sibiu, Timisoara

In the next step we expand each of the frontier nodes starting with Zerind, then Sibiu, and finally Timisoara. Following the rules outlined above gives us the following expansion:


Figure Agent-13: Search Tree after Second Pass

Explored: Arad, Zerind, Sibiu, Timisoara
Frontier: Oradea, Fagaras, Rimnicu Vilcea, Lugoj

On the third pass, we first find that Oradea is a dead end, since Zerind and Sibiu are both now in the explored category. As we start to expand Fagaras, we first check to see whether Bucharest is the goal state - it is, and the search algorithm terminates, as we have found our goal.

Infrastructure for Search Algorithms

Search algorithms require a data structure to keep track of the search tree that is being constructed. For each node n of the tree, we have a structure that contains 4 components:

  1. n.State: The state in the state space to which the node corresponds.
  2. n.Parent: The node in the search tree that generated this node.
  3. n.Action: The action that was applied to the parent to generate the node.
  4. n.Path-Cost: The cost, traditionally denoted by g(n), of the path from the initial state to the node, as indicated by the parent pointers.

Given the components for a parent node, it is easy to see how to compute the necessary components for a child node. The function Child-Node takes a parent node and an action and returns the resulting child node:

Function Child-Node(problem, parent, action) returns a node
  return a node with
    State = problem.Result(parent.State, action),
    Parent = parent,
    Action = action,
    Path-Cost = parent.Path-Cost + problem.Step-Cost(parent.State, action)

Figure Agent-14: Pseudocode to Create a Child-Node


The node data structure is depicted in Figure Agent-15. Notice how the Parent pointers string the nodes together into a tree structure. These pointers also allow the solution path to be extracted when a goal node is found; we use the Solution function to return the sequence of actions obtained by following parent pointers back to the root.


Figure Agent-15: Node Example

Nodes are the data structures from which the search tree is constructed. Each has a parent, a state, and various bookkeeping fields. Arrows point from child to parent.
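
A C++ rendering of the node structure, the Child-Node function of Figure Agent-14, and the Solution extraction, sketched under the hypothetical problem interface used earlier (its result and step_cost members):

#include <memory>
#include <string>
#include <vector>

using State  = std::string;
using Action = std::string;

struct Node {
    State state;                   // n.State
    std::shared_ptr<Node> parent;  // n.Parent (null at the root)
    Action action;                 // n.Action: action applied to the parent
    double path_cost;              // n.Path-Cost: g(n)
};

template <typename Problem>
std::shared_ptr<Node> child_node(const Problem& problem,
                                 const std::shared_ptr<Node>& parent,
                                 const Action& action) {
    State s = problem.result(parent->state, action);
    return std::make_shared<Node>(Node{
        s, parent, action,
        parent->path_cost + problem.step_cost(parent->state, action, s)});
}

// Solution: follow parent pointers from a goal node back to the root,
// collecting the actions in reverse order.
std::vector<Action> solution(std::shared_ptr<const Node> node) {
    std::vector<Action> actions;
    for (; node->parent; node = node->parent) actions.push_back(node->action);
    return {actions.rbegin(), actions.rend()};
}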


Up to now, we haven't been very careful to distinguish between nodes and states, but in writing detailed algorithms it's important to make that distinction. A node is a bookkeeping data structure used to represent the search tree. A state corresponds to a configuration of the world. Thus, nodes are on particular paths, as defined by Parent pointers, whereas states are not. Furthermore, two different nodes can contain the same world state if that state is generated via two different search paths.

Now that we have nodes, we need somewhere to put them. The frontier needs to be stored in such a way that the search algorithm can easily choose the next node to expand according to its preferred strategy. The appropriate data structure for this is a queue. The operations on a queue are as follows:

  • Empty? (queue) returns true only if there are no more elements in the queue.
  • Pop (queue) removes the first element of the queue and returns it.
  • Insert (element, queue) inserts an element and returns the resulting queue.

Queues are characterized by the order in which they store the inserted nodes. Three common variants are the first-in, first-out or FIFO queue, which pops the oldest element of the queue; the last-in, first-out or LIFO queue (also known as a Stack), which pops the newest element of the queue; and the priority queue, which pops the element of the queue with the highest priority according to some ordering function.
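
In standard C++ the three disciplines correspond directly to std::queue, std::stack, and std::priority_queue (shown here as a min-heap, the ordering a uniform-cost search would want for path costs):

#include <functional>
#include <iostream>
#include <queue>
#include <stack>
#include <vector>

int main() {
    std::queue<int> fifo;  // FIFO: pops the oldest element
    std::stack<int> lifo;  // LIFO: pops the newest element
    std::priority_queue<int, std::vector<int>, std::greater<int>> pq;  // lowest value first

    for (int x : {3, 1, 2}) { fifo.push(x); lifo.push(x); pq.push(x); }

    std::cout << fifo.front() << ' '   // 3: first in, first out
              << lifo.top()   << ' '   // 2: last in, first out
              << pq.top()     << '\n'; // 1: highest priority (lowest cost)
}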

The explored set can be implemented with a hash table to allow efficient checking for repeated states. With a good implementation, insertion and lookup can be done in roughly constant time, independent of the number of states stored. One must take care to implement the hash table with the right notion of equality between states. For example, in the traveling salesperson problem, the hash table needs to know that the set of visited cities {Bucharest, Urziceni, Vaslui} is the same as {Urziceni, Vaslui, Bucharest}. Sometimes this can be achieved most easily by insisting that the data structures for states be in some canonical form; that is, logically equivalent states should map to the same data structure. In the case of states described by sets, for example, a bit-vector representation or a sorted list without repetition would be canonical, whereas an unsorted list would not.
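
A small C++ sketch of that canonicalization for set-valued states: sorting the visited-city list before building the hash key makes logically equal sets compare equal:

#include <algorithm>
#include <iostream>
#include <string>
#include <unordered_set>
#include <vector>

// Build a canonical key: sort the cities, then join them. Any two
// orderings of the same set produce the same key.
std::string canonical_key(std::vector<std::string> visited) {
    std::sort(visited.begin(), visited.end());
    std::string key;
    for (const auto& city : visited) key += city + '|';
    return key;
}

int main() {
    std::unordered_set<std::string> explored;
    explored.insert(canonical_key({"Bucharest", "Urziceni", "Vaslui"}));
    bool seen = explored.count(canonical_key({"Urziceni", "Vaslui", "Bucharest"})) > 0;
    std::cout << std::boolalpha << seen << '\n';  // true: same set, same key
}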

Measuring Problem Solving Performance

Before we get into the design of specific search algorithms, we need to consider the criteria that might be used to choose among them. We will evaluate an algorithm's performance in 4 ways:

  1. Completeness: Is the algorithm guaranteed to find a solution when there is one? Hopefully it doesn't get tired and give up.
  2. Optimality: Does the strategy find the optimal solution? i.e. lowest cost possible.
  3. Time Complexity: How long does it take to find a solution? Just a little short of never.
  4. Space Complexity: How much memory is needed to perform the search? Memory conservation is not as important as it once was.

Time and space complexity are always considered with respect to some measure of the problem difficulty. In theoretical computer science, the typical measure is the size of the state space graph, |V| + |E|, where V is the set of vertices (nodes) of the graph and E is the set of edges (links). This is appropriate when the graph is an explicit data structure that is input to the search program. (The map of Romania is an example of this.) In Artificial Intelligence, the graph is often represented implicitly by the initial state, actions and transition model and is frequently infinite. For these reasons, complexity is expressed in terms of three quantities:

  1. Branching factor: Maximum number of successors of any node.
  2. Depth: The depth of the shallowest goal node, i.e., the number of steps from the root to the nearest solution.
  3. Maximum Length: The maximum length of any path in the state space.

Time is often measured in terms of the number of nodes generated during the search, and space in terms of the maximum number of nodes stored in memory. For the most part, we will describe time and space complexity for search on a tree; for a graph, the answer will depend on how "redundant" the paths in state space are.

To assess the effectiveness of a search algorithm, we can consider just the search cost - which typically depends on the time complexity but can also include a term for memory usage - or we can use the total cost, which combines the search cost and the path cost of the solution found. For the problem of finding a route from Arad to Bucharest, the search cost is the amount of time taken by the search and the solution cost is the total length of the path in kilometers. Thus, to compute the total cost, we have to add milliseconds and kilometers. There is no "official exchange rate" between the two, but, in this case, it might be reasonable to convert kilometers into milliseconds by using an estimate of the car's average speed (because time is what the agent cares about). This enables the agent to find an optimal tradeoff point at which further computation to find a shorter path becomes counterproductive.