Well-defined problems and solutions

A problem can be defined formally by five components:

  1. The initial state that Agent Smart starts in. For example, the initial state for Agent Smart in Romania might be described as In(Arad).
  2. A description of the possible actions available to Agent Smart. Given a particular state s, ACTIONS(s) returns the set of actions that can be executed in s. For example, from the state In(Arad), the possible actions are: {Go(Sibiu), Go(Timisoara), Go(Zerind)}.

  3. A description of what each action does; the formal name for this is the transition model, specified by a function RESULT(s, a) that returns the state that results from doing action a in state s. The term successor refers to any state reachable from a given state by a single action. For example:

     RESULT(In(Arad), Go(Zerind)) = In(Zerind).

     Together, the initial state, actions, and transition model implicitly define the state space of the problem. The state space is the set of all states reachable from the initial state by any sequence of actions. The state space forms a directed network or graph in which the nodes are states and the links between nodes are actions. As an example, turn your attention to the map of Romania depicted in Figure Agent-2. This map can be interpreted as a state space graph if each city is viewed as a node and the roads are considered bidirectional (as they are, of course, in the real world).

     Figure Agent-2: A simplified road map of Romania


  4. The goal test, which determines whether a given state is a goal state. Sometimes there is an explicit set of possible goal states, and the test simply checks whether the given state is one of them. Agent Smart's goal in Romania is the singleton set: {In(Bucharest)}. Sometimes the goal is specified by an abstract property rather than by an explicitly enumerated set of states. For example, in chess, the goal is to reach a state called checkmate, where the opponent's king is under attack and can't escape.

  5. A path cost function that assigns a numeric cost to each path. The problem-solving agent chooses a cost function that reflects its own performance measure. For Agent Smart trying to get to Bucharest, time is of the essence, and since Agent Smart is in Europe, the path cost might be measured in kilometers. The path cost of a sequence of actions is the sum of the costs of the individual steps:

     PathCost = Sum(step costs of the individual actions taken along the path to the goal).

The preceding elements define a problem and can be gathered together into a single data structure that is given as input to a problem-solving algorithm. A solution to a problem is an action sequence that leads from the initial state to a goal state. Solution quality is measured by the path cost function, and an optimal solution has the lowest path cost among all solutions.
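
To make this concrete, here is a minimal C++ sketch of such a data structure for the Romania route-finding problem. The struct name, the fragment of the road table, and the helper functions are illustrative choices made for this guide (the distances are the commonly quoted figures for this map and should be treated as examples), not a required implementation:

// A minimal sketch of the five problem components for the Romania
// route-finding example. Names and the road-table fragment are
// illustrative, not part of the formal definition.
#include <iostream>
#include <map>
#include <string>
#include <utility>
#include <vector>

using State  = std::string;   // a state is simply In(city), so the city name suffices
using Action = std::string;   // e.g. "Go(Sibiu)"

struct RomaniaProblem {
    State initialState = "Arad";

    // Roads leaving each city, with distances in kilometers (fragment only).
    std::map<State, std::vector<std::pair<State, double>>> roads = {
        {"Arad",           {{"Sibiu", 140}, {"Timisoara", 118}, {"Zerind", 75}}},
        {"Sibiu",          {{"Rimnicu Vilcea", 80}, {"Fagaras", 99}}},
        {"Rimnicu Vilcea", {{"Pitesti", 97}}},
        {"Pitesti",        {{"Bucharest", 101}}},
    };

    // ACTIONS(s): the set of actions executable in state s.
    std::vector<Action> actions(const State& s) const {
        std::vector<Action> acts;
        auto it = roads.find(s);
        if (it != roads.end())
            for (const auto& edge : it->second)
                acts.push_back("Go(" + edge.first + ")");
        return acts;
    }

    // RESULT(s, a): strip "Go(" and ")" to obtain the destination city.
    State result(const State& s, const Action& a) const {
        (void)s;                      // the destination is encoded in the action itself
        return a.substr(3, a.size() - 4);
    }

    // Goal test: the singleton goal set {In(Bucharest)}.
    bool goalTest(const State& s) const { return s == "Bucharest"; }

    // Step cost of doing action a in state s (kilometers driven).
    double stepCost(const State& s, const Action& a) const {
        for (const auto& edge : roads.at(s))
            if ("Go(" + edge.first + ")" == a) return edge.second;
        return 0.0;                   // unknown road; not expected on valid input
    }
};

// PathCost: sum of the step costs of the individual actions along the path.
double pathCost(const RomaniaProblem& p, State s, const std::vector<Action>& plan) {
    double total = 0.0;
    for (const Action& a : plan) {
        total += p.stepCost(s, a);
        s = p.result(s, a);
    }
    return total;
}

int main() {
    RomaniaProblem p;
    std::vector<Action> plan = {"Go(Sibiu)", "Go(Rimnicu Vilcea)",
                                "Go(Pitesti)", "Go(Bucharest)"};
    std::cout << "Path cost: " << pathCost(p, p.initialState, plan) << " km\n";
}

With the illustrative table above, the action sequence Go(Sibiu), Go(Rimnicu Vilcea), Go(Pitesti), Go(Bucharest) yields a path cost of 140 + 80 + 97 + 101 = 418 kilometers.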



Formulating Problems

In the preceding section we proposed a formulation of the problem of getting to Bucharest in terms of the initial state, actions, transition model, goal test, and path cost. This formulation seems reasonable, but is still a model - an abstract mathematical description - and not the real thing. Compare the simple state description we have chosen, In(Arad), to an actual cross-country trip, where the state of the world includes so many things: the traveling companions, what is on the radio, the scenery out the window, whether there are any law enforcement officers nearby, how far it is to the next rest stop, the condition of the road, the weather, and so on. All these considerations are left out of our state descriptions because they are irrelevant to the problem of finding a route to Bucharest. The process of removing detail from a representation is called abstraction.

In addition to abstracting the state description, we must abstract the actions themselves. A driving action has many effects. Besides changing the location of the vehicle and its occupants, it takes up time, consumes fuel, generates pollution, and changes the agent. Our formulation takes into account only the change in location. There are also many actions that we will omit altogether: turning on the radio, stopping at McDonald's, getting a speeding ticket, being thrown in jail for drunk driving, and so on. And of course, we don't specify turning the steering wheel 97 degrees or screaming at the kids.

Can we be more precise about defining the appropriate level of abstraction? Probably not, but let's give it a shot anyway. Think of the abstract states and actions we have chosen as corresponding to large sets of detailed world states and detailed action sequences. Now consider a solution to the abstract problem: the path from Arad to Sibiu to Rimnicu Vilcea to Pitesti to Bucharest. This abstract solution corresponds to a large number of more detailed paths. For example, we could drink beer between Sibiu and Rimnicu Vilcea, then smoke pot for the rest of the trip, although this may add to the mileage, since we forgot where we were going but didn't care anyway. The abstraction is valid if we can expand any abstract solution into a solution in the more detailed world; a sufficient condition is that for every detailed state that is In(Arad), there is a detailed path to some state that is In(Sibiu), and so on. The abstraction is useful if carrying out each of the actions in the solution is easier than solving the original problem; in this case they are easy enough that they can be carried out without further search or planning by an average driving agent. (If we are planning on drinking beer on this trip and we don't want to get arrested, we had better plan on having a better-than-average driving agent.) The choice of a good abstraction thus involves removing as much detail as possible while retaining validity and ensuring that the abstract actions are easy to carry out. Were it not for the ability to construct useful abstractions, intelligent agents would be completely swamped by the details of the real world and would not seem very intelligent at all.



Example Problems

The problem-solving approach has been applied to a vast array of task environments. We list some of the best known here, distinguishing between toy and real-world problems. A toy problem is intended to illustrate or exercise various problem-solving methods. It can be given a concise, exact description and hence is usable by different researchers to compare the performance of algorithms. A real-world problem is one whose solutions people actually care about. Such problems tend not to have a single agreed-upon description, so we will do our best to describe the formulations used.


Toy Problems

Vacuum World

The first example we will examine is the vacuum world, as diagrammed in Figure Agent-3:

Figure Agent-3: The state space for the vacuum world. Links denote actions: L=Left, R=Right, S=Suck.

The problem is formulated as follows:

  • States: The state is determined by both the agent location and the dirt locations. The agent is in one of two locations, each of which might or might not contain dirt. Thus there are 2 × 2^2 = 8 possible world states. A larger environment with n locations has n × 2^n states.

  • Initial State: Any state can be designated as the initial state.

  • Actions: In this simple environment, each state has just three actions: Left, Right, and Suck. Larger environments might also include Up and Down.

  • Transition Model: The actions have their expected effects, except that moving Left in the leftmost square, moving Right in the rightmost square, and Sucking in a clean square have no effect.

  • Goal Test: This checks whether all the squares are clean.

  • Path Cost: Each step costs 1, so the path cost is the number of steps in the path.

Compared with the real world, this toy problem has discrete locations, discrete dirt, reliable cleaning, and it never gets messed up once cleaned.
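
As a concrete illustration of this formulation, here is a minimal C++ sketch; the type and function names are our own, chosen for this guide rather than taken from any particular library:

// A minimal sketch of the two-square vacuum world formulated above.
// A state records the agent's location and which squares are dirty,
// giving 2 x 2^2 = 8 distinct world states.
#include <array>
#include <iostream>
#include <vector>

struct VacuumState {
    int agentLoc;               // 0 = left square, 1 = right square
    std::array<bool, 2> dirty;  // dirty[i] is true if square i contains dirt
};

enum class Action { Left, Right, Suck };

// ACTIONS(s): every state offers the same three actions.
std::vector<Action> actions(const VacuumState&) {
    return {Action::Left, Action::Right, Action::Suck};
}

// RESULT(s, a): moving off the edge or sucking in a clean square has no effect.
VacuumState result(VacuumState s, Action a) {
    switch (a) {
        case Action::Left:  if (s.agentLoc > 0) s.agentLoc -= 1; break;
        case Action::Right: if (s.agentLoc < 1) s.agentLoc += 1; break;
        case Action::Suck:  s.dirty[s.agentLoc] = false;         break;
    }
    return s;
}

// Goal test: all squares are clean.
bool goalTest(const VacuumState& s) { return !s.dirty[0] && !s.dirty[1]; }

// Path cost: each step costs 1, so a path's cost is its length.
int stepCost(const VacuumState&, Action) { return 1; }

int main() {
    VacuumState s{0, {true, true}};   // agent on the left, both squares dirty
    std::vector<Action> plan = {Action::Suck, Action::Right, Action::Suck};
    for (Action a : plan) s = result(s, a);
    std::cout << std::boolalpha << goalTest(s)
              << " (path cost " << static_cast<int>(plan.size()) << ")\n";
}

Running this prints "true (path cost 3)": sucking, moving right, and sucking again cleans both squares in three unit-cost steps.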
