
Solving Problems by Searching

Problem Solving Agents

The simplest agents in AI research are known as reflex agents. A reflex agent bases its actions on a direct mapping from states to actions. Reflex agents perform poorly when the mapping from states to actions becomes too large to store, process, and learn. Goal-based agents, by contrast, choose actions based on the desirability of the different possible outcomes.

One kind of goal-based agent is called a problem-solving agent. Problem-solving agents use atomic representations: each state of the world is treated as an indivisible whole, with no internal structure visible to the problem-solving algorithms.

Our discussion of problem-solving agents begins with precise definitions of problems and their solutions. We follow this up with several examples that illustrate these problem definitions. Then we describe several general-purpose algorithms that can be used to solve these nagging, but well-defined, problems. First, we will apply some uninformed search algorithms to tackle these problems. An uninformed search algorithm is given no information about the problem other than its definition. Although some of these algorithms can solve any solvable problem, none of them is very efficient: before long, we simply don't have enough processing power or memory to solve these problems in a practical time frame. Informed search algorithms, on the other hand, can often do quite well once you give them a better idea of where to look, and where not to look, for the solution.



Big O Notation & NP Completeness

Initially, we will limit ourselves to the simplest kind of task environment, for which the solution to a problem is always a fixed sequence of actions. We will use two concepts from the analysis of algorithms: asymptotic complexity (Big O notation) and NP-completeness.

Big O Notation

The analysis of algorithms and the O() notation allow us to talk about the efficiency of a particular algorithm. However, they have nothing to say about whether there could be a better algorithm for the problem at hand. The field of complexity analysis analyzes problems rather than algorithms. The first gross division is between problems that can be solved in polynomial time and problems that cannot be solved in polynomial time, no matter what algorithm is used. The class of polynomial problems - those which can be solved in time O(n^k) for some constant k - is called P. These are sometimes called "easy" problems, because the class contains those problems with running times like O(log n) and O(n). But it also contains those with time O(n^1000), so the name "easy" is not synonymous with fast.
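To make the distinction concrete, here is a minimal C++ sketch (an illustration added for this guide, not part of the original discussion) contrasting two ways of testing whether a sorted array contains a key: a linear scan that runs in O(n) time, and a binary search that runs in O(log n) time. Both are comfortably inside P.

#include <cstddef>
#include <vector>

// Linear scan: may examine all n elements, so it runs in O(n) time.
bool containsLinear(const std::vector<int>& sorted, int key) {
  for (int value : sorted)
    if (value == key) return true;
  return false;
}

// Binary search: halves the remaining interval on every step,
// so it runs in O(log n) time on the same sorted input.
bool containsBinary(const std::vector<int>& sorted, int key) {
  std::size_t lo = 0, hi = sorted.size();
  while (lo < hi) {
    std::size_t mid = lo + (hi - lo) / 2;
    if (sorted[mid] == key) return true;
    if (sorted[mid] < key) lo = mid + 1;
    else                   hi = mid;
  }
  return false;
}

On a million-element array, the linear scan may probe a million elements while the binary search needs about twenty; that gap in growth rates is exactly what the O() notation is designed to capture.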

NP Completeness

Another important class of problems is NP, the class of Nondeterministic Polynomial problems. A problem is in this class if there is some algorithm that can guess a solution and then verify whether the guess is correct in polynomial time. The idea is that if you have an arbitrarily large number of processors, so that you can try all the guesses at once, or you are very lucky and always guess right the first time, then the NP problems become P (see above) problems. One of the biggest open questions in computer science is whether the class NP is equivalent to the class P when one does not have the luxury of an infinite number of processors or omniscient guessing. Most computer scientists are convinced that P ≠ NP; that NP problems are inherently hard and have no polynomial time algorithms. But this has never been proven.

Those who are interested in deciding whether P = NP look at the subclass of NP called the NP-complete problems. The word complete is used here in the sense of "most extreme," and thus refers to the hardest problems in the class NP. It has been proven that either all the NP-complete problems are in P or none of them are. This makes the class theoretically interesting, but the class is also of practical interest because many important problems are NP-complete. One example is the satisfiability problem of propositional logic: given a sentence, is there any assignment of truth values to its proposition symbols that makes it true? Unless a miracle occurs and P = NP, there can be no algorithm that solves all satisfiability problems in polynomial time. However, AI is more interested in whether there are algorithms that perform efficiently on typical problems drawn from a predetermined distribution.
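As a rough illustration of the "guess and verify" character of NP, here is a C++ sketch of satisfiability. The clause encoding and the function names are choices made for this example only. The point is in the comments: verifying one guessed assignment takes time polynomial (in fact linear) in the formula size, but a deterministic machine with no lucky guesses falls back on trying all 2^n assignments.

#include <vector>

// A CNF formula: each clause is a list of literals. Literal +k means
// variable k is true, -k means variable k is false (variables are 1-based).
using Clause  = std::vector<int>;
using Formula = std::vector<Clause>;

// Verification step: checking one guessed assignment against the formula
// takes time linear in the formula size - this is the polynomial part of NP.
bool satisfies(const Formula& f, const std::vector<bool>& assignment) {
  for (const Clause& c : f) {
    bool clauseTrue = false;
    for (int lit : c) {
      int var  = lit > 0 ? lit : -lit;
      bool val = assignment[var];
      if ((lit > 0 && val) || (lit < 0 && !val)) { clauseTrue = true; break; }
    }
    if (!clauseTrue) return false;
  }
  return true;
}

// Deterministic "guessing": without luck or unlimited processors we must
// try all 2^n assignments, which is why every known general algorithm
// for satisfiability takes exponential time in the worst case.
bool bruteForceSat(const Formula& f, int numVars) {
  for (unsigned long mask = 0; mask < (1UL << numVars); ++mask) {
    std::vector<bool> assignment(numVars + 1);
    for (int v = 1; v <= numVars; ++v)
      assignment[v] = (mask >> (v - 1)) & 1;
    if (satisfies(f, assignment)) return true;
  }
  return false;
}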




Intelligent Agents

Intelligent agents are supposed to maximize their performance measure. Why else would we call them intelligent? Achieving this is sometimes simplified if the agent can adopt a goal and aim at satisfying it. Let us first look at why and how an agent might do this.

Imagine an agent in the city of Arad, Romania, enjoying a touring holiday. The agent's performance measure contains many factors: it wants to improve its suntan, improve its Romanian, take in the sights, enjoy the Romanian nightlife, avoid hangovers, and so on. The decision problem is a complex one involving many tradeoffs and careful reading of guidebooks, as well as asking that cute waitress for her opinion. Now, suppose the agent has a nonrefundable ticket to fly out of Bucharest the following day. Just to give our agent a little personality, let's call him Agent Smart. He must rendezvous with Agent 99 in Bucharest and inform the Chief of KAOS's plans to take over the world. In that case, it makes sense for Agent Smart to adopt the goal of getting to Bucharest. Courses of action that don't reach Bucharest on time can be rejected without further consideration, and Agent Smart's decision problem is greatly simplified. Goals help organize behavior by limiting the objectives that the agent is trying to achieve and hence the actions it needs to consider. Goal formulation, based on the current situation and the agent's performance measure, is the first step in problem solving.

We will consider a goal to be a set of world states - exactly those states in which the goal is satisfied. Agent Smart's task is to figure out how to act, now and in the future, so that he can reach his goal state. Before Agent Smart can do this, however, he must decide (or we need to at least give him a little nudge in the right direction) on the actions and states he must consider. If Agent Smart had to consider actions at the level of "move left foot forward one inch" or "turn the steering wheel one degree left," he might become hopelessly stuck in the parking lot, never finding his way to Bucharest, let alone in time to brief the Chief on KAOS's latest plans to take over the world. The reason is that in the real world there is far too much uncertainty at that level of detail, and the number of steps required to reach the solution quickly becomes overwhelming. Problem formulation, then, is the process of deciding what actions and states must be considered to reach the goal. Just to make it a little easier for Agent Smart, we will define actions at the level of driving from one major town to another. Each state, therefore, corresponds to being in a particular town.

Agent Smart has taken on the assignment of driving to Bucharest, but first must consider where to go from Arad. There are three roads out of Arad: one toward Sibiu, one to Timisoara, and one to Zerind. None of these actions achieves the goal, so unless Agent Smart is very familiar with the geography of Romania, he will not know which road is best to follow. In other words, Agent Smart does not know which of the possible actions is best, because he does not yet know enough about the state that results from taking each action. If Agent Smart receives no additional information - i.e., if the environment is unknown - then he must try one of the actions at random.
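It may help to see how little structure an atomic representation actually requires. The following C++ fragment is a sketch under the assumptions above (the type and function names are illustrative, not from any particular implementation): a state is nothing but a city name, and the map is an adjacency list recording, for example, the three roads out of Arad just described.

#include <map>
#include <string>
#include <vector>

// States are atomic: a city is just a name, with no internal structure
// visible to the search algorithm. An action is "drive to a neighboring city."
using State = std::string;

// A fragment of the Romania road map as an adjacency list. Only the roads
// mentioned in the text are shown; a full map would list every city and
// its neighbors in the same way.
std::map<State, std::vector<State>> makeMapFragment() {
  std::map<State, std::vector<State>> roads;
  roads["Arad"] = { "Sibiu", "Timisoara", "Zerind" };
  return roads;
}

With this representation, the actions available in a state are exactly the entries of its adjacency list, which is all a search algorithm will need.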

Now suppose Agent Smart has a map of Romania. Will it really help him? I doubt it, but let's put our imaginations in high gear and pretend it does. The point of the map is to provide the agent with information about the states he might get into and the actions he can take in each. Agent Smart can use the map to consider subsequent stages of a hypothetical journey before ever leaving the parking lot of his hotel in Arad. In general, an agent with several immediate options of unknown value can decide what to do first by examining the future actions that ultimately lead to the goal state.

To be even more specific about what we mean by "examining future actions," we have to be more specific about properties of the environment. For now, we will assume that the environment is observable, so that the agent always knows the current state. For Agent Smart, this means he takes note of each town as he enters it, by reading the sign announcing it to arriving drivers. We will also assume the environment is discrete, so that at any given state there are only finitely many actions to choose from. This is true for navigating Romania, since each city is connected to only a small number of other cities. We will assume the environment is known, so that Agent Smart knows which state each action leads to. Next, we assume his map is still accurate, even though he accidentally used it as a placemat at breakfast and spilled a bit of coffee on it. And finally, we assume that the environment is deterministic, so that each action has exactly one outcome. Under ideal conditions, this is true for Agent Smart: if he chooses to drive from Arad to Sibiu, he does end up in Sibiu.

Under these assumptions, the solution to any problem is a fixed sequence of actions. The process of looking for a sequence of actions that reaches the goal is called a search. A search algorithm takes a problem as input and returns a solution in the form of an action sequence. Once a solution is found, the actions it recommends can be carried out. This is called the execution phase. Figure Agent-1 gives a simple (well, maybe not that simple) formulate, search, execute design for Agent Smart.

At this point, Agent Smart has two options: he can follow the algorithm in Figure Agent-1 and develop code that can be executed on a modern-day computer, or he can simply purchase a GPS system from one of several quality manufacturers, such as TomTom or Garmin. Unfortunately, Agent Smart has decided to take the coding option, which is what we will be dealing with in the next several sections.


function Simple-Problem-Solving-Agent(percept) returns an action
  persistent:
    seq, an action sequence, initially empty
    state, some description of the current world state
    goal, a goal, initially null
    problem, a problem formulation

  state ← Update-State(state, percept)
  if seq is empty then
    goal ← Formulate-Goal(state)
    problem ← Formulate-Problem(state, goal)
    seq ← Search(problem)
    if seq = failure then return a null action
  action ← First(seq)
  seq ← Rest(seq)
  return action
Figure Agent-1. A simple problem-solving agent. It first formulates a goal and a problem, searches for a sequence of actions that would solve the problem, and then executes the actions one at a time. When this is complete, it formulates another goal and starts over.
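Since the later sections of this guide develop C++ code, here is one possible C++ rendering of Figure Agent-1. It is a sketch, not a definitive implementation: the types are placeholders, and the four hooks (Update-State, Formulate-Goal, Formulate-Problem, Search) are left as stubs to be filled in once a concrete problem formulation is in hand.

#include <deque>
#include <optional>
#include <string>

// Illustrative stand-ins for the types in Figure Agent-1; the real
// definitions depend on the problem formulation chosen later.
using Percept = std::string;
using Action  = std::string;
using State   = std::string;
struct Goal    {};
struct Problem {};

class SimpleProblemSolvingAgent {
public:
  // One call per percept, mirroring the pseudocode: update the state,
  // (re)plan if the current action sequence is exhausted, then return
  // and remove the first action of the sequence.
  std::optional<Action> operator()(const Percept& percept) {
    state_ = updateState(state_, percept);
    if (seq_.empty()) {
      Goal goal       = formulateGoal(state_);
      Problem problem = formulateProblem(state_, goal);
      seq_            = search(problem);      // an empty result means failure
      if (seq_.empty()) return std::nullopt;  // the "null action"
    }
    Action action = seq_.front();
    seq_.pop_front();
    return action;
  }

private:
  // Hypothetical hooks; each would be filled in for a concrete problem.
  State   updateState(const State& s, const Percept&)  { return s; }
  Goal    formulateGoal(const State&)                  { return {}; }
  Problem formulateProblem(const State&, const Goal&)  { return {}; }
  std::deque<Action> search(const Problem&)            { return {}; }

  std::deque<Action> seq_;   // persistent action sequence, initially empty
  State              state_; // persistent description of the world state
};

Returning std::nullopt plays the role of the pseudocode's null action, and the std::deque gives First and Rest in O(1) via front() and pop_front().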


After formulating a goal and a problem to solve, the agent calls a search procedure to solve it. It then uses the solution to guide its actions, doing whatever the solution recommends as the next thing to do - typically the first action of the sequence - and then removing that step from the sequence. Once the solution has been executed, the agent will formulate a new goal.

Notice that while the agent is executing the solution sequence it ignores its percepts when choosing an action because it knows in advance what they will be. An agent that carries out its plans with its eyes closed, so to speak, must be quite certain of what is going on. Control theorists call this an open-loop system, because ignoring the percepts breaks the loop between agent and environment.

We first describe the process of problem formulation, and then present various algorithms for the Search function.




This guide continues with a discussion of well-defined problems and their solutions.

