Last Updated 6/29/2010


Breadth-First Print Goal Path

Of course, no self-respecting program would fail to report the great work it has just accomplished. For this reason, a little routine is needed to display the path taken to arrive at our intended location:
A description of the code is given below:
Building and running the executable gives us the following output for the breadth-first search:
So far, the news about breadth-first search has been good. The news about time and space is not so good. Imagine searching a uniform tree where every state has b successors. The root of the search tree generates b nodes at the first level, each of which generates b more nodes, for a total of b^{2} at the second level. Each of these generates b more nodes, yielding b^{3} nodes at the third level, and so on. Now suppose that the solution is at depth d. In the worst case, it is the last node generated at that level. Then the total number of nodes generated is b + b^{2} + b^{3} + ... + b^{d} = O(b^{d}). (If the algorithm applied the goal test to nodes when selected for expansion, rather than when generated, the whole layer of nodes at depth d would be expanded before the goal was detected, and the time complexity would be O(b^{d+1}).) As for space complexity: for any kind of graph search, which stores every expanded node in the explored set, the space complexity is always within a factor of b of the time complexity. For breadth-first graph search in particular, every node generated remains in memory. There will be O(b^{d-1}) nodes in the explored set and O(b^{d}) nodes in the frontier, so the space complexity is O(b^{d}); i.e., it is dominated by the size of the frontier. Switching to a tree search would not save much space, and in a state space with many redundant paths, switching could cost a great deal of time. An exponential complexity bound such as O(b^{d}) quickly becomes unwieldy. Figure BFS19 shows why. It lists, for various solution depths d, the time and memory required for a breadth-first search with branching factor b = 10. The table assumes that 1 million nodes can be generated per second and that each node requires 1000 bytes of storage. Many search problems fit roughly within these assumptions, give or take a few orders of magnitude:
Two lessons can be learned from Figure BFS20. First, the memory requirements are a bigger problem for breadth-first search than the execution time. Looking at a node depth of 12, one might wait 13 days for the solution to an important problem, but personal computers do not currently have the 1 petabyte of memory it would take. Fortunately, other strategies require less memory. The second lesson is that time is still a major factor. If your problem has a solution at depth 16, then it could take 350 years for breadth-first search, or most any other uninformed search strategy, to find a solution. In general, exponential-complexity search problems cannot be solved by uninformed methods for any but the smallest instances.

Uniform-Cost Search

When all step costs are equal, breadth-first search is optimal because it always expands the shallowest unexpanded node. By a simple extension, we can find an algorithm that is optimal with any step-cost function. Instead of expanding the shallowest node, uniform-cost search expands the node n with the lowest path cost g(n). This is done by storing the frontier as a priority queue ordered by g. The algorithm is shown in Figure UCS1:
The algorithm of Figure UCS1 is identical to the general search algorithm in BFS1, except for the use of a priority queue and the addition of an extra check in case a shorter path to a frontier state is discovered. The data structure for the frontier needs to support efficient membership testing, so it should combine the capabilities of a priority queue and a hash table. In addition to the ordering of the queue by path cost, there are two other significant differences from breadth-first search. The first is that the goal test is applied to a node when it is selected for expansion (as in the graph-search algorithm shown in Figure BFS1) rather than when it is first generated, because the first goal node that is generated may be on a suboptimal path. The second difference is that a test is added in case a better path is found to a node currently on the frontier. Both of these modifications come into play in Figure UCS2, where the problem is to get from Sibiu to Bucharest. The successors of Sibiu are Rimnicu Vilcea and Fagaras, with costs 80 and 99, respectively. The least-cost node, Rimnicu Vilcea, is expanded next, adding Pitesti with cost 80 + 97 = 177. The least-cost node is now Fagaras (99 < 177), so it is expanded, adding Bucharest with cost 99 + 211 = 310. Now a goal has been generated, but uniform-cost search keeps going, choosing Pitesti for expansion and adding a second path to Bucharest with cost 80 + 97 + 101 = 278. Now the algorithm checks whether this new path is better than the old one; it is, so the old one is discarded. Bucharest, now with g-cost 278, is selected for expansion, and the solution is returned:
It is easy to see that uniform-cost search is optimal in general. First, we observe that whenever uniform-cost search selects a node n for expansion, the optimal path to that node has been found. (Were this not the case, there would have to be another frontier node n' on the optimal path from the start node to n, by the graph separation property of Figure Agent11; by definition, n' would have lower g-cost than n and would have been selected first.) Then, because step costs are nonnegative, paths never get shorter as nodes are added. These two facts together imply that uniform-cost search expands nodes in order of their optimal path cost. Hence, the first goal node selected for expansion must be the optimal solution. Uniform-cost search does not care about the number of steps a path has, but only about their total cost. Therefore, it will get stuck in an infinite loop if there is a path with an infinite sequence of zero-cost actions, for example, a sequence of No-Op actions. Completeness is guaranteed provided the cost of every step exceeds some small positive constant ε. Uniform-cost search is guided by path costs rather than depths, so its complexity is not easily characterized in terms of the depth or the number of nodes. Instead, let C* be the cost of the optimal solution, and assume that every action costs at least ε. Then the algorithm's worst-case time and space complexity is O(b^{1+⌊C*/ε⌋}), which can be much greater than b^{d}. This is because uniform-cost search can explore large trees of small steps before exploring paths involving large and perhaps useful steps. When all step costs are equal, b^{1+⌊C*/ε⌋} is just b^{d+1}. In that case, uniform-cost search is similar to breadth-first search, except that breadth-first search stops as soon as it generates a goal, whereas uniform-cost search examines all the nodes at the goal's depth to see if one has a lower cost; thus uniform-cost search does strictly more work by expanding nodes at depth d unnecessarily.
To continue with this guide and begin the discussion of Depth-First Search and the associated C++ code, press the button below: