Iterative deepening A*
Class: Search algorithm
Data structure: Tree, Graph
Worst-case space complexity: O(d), linear in the depth of the solution

Iterative deepening A* (IDA*) is a graph traversal and path search algorithm that can find the shortest path between a designated start node and any member of a set of goal nodes in a weighted graph. It is a variant of iterative deepening depth-first search that borrows from the A* search algorithm the idea of using a heuristic function to conservatively estimate the remaining cost of reaching the goal. Since it is a depth-first search algorithm, its memory usage is lower than in A*, but unlike ordinary iterative deepening search, it concentrates on exploring the most promising nodes and thus does not go to the same depth everywhere in the search tree. Unlike A*, IDA* does not utilize dynamic programming and therefore often ends up exploring the same nodes many times.

While the standard iterative deepening depth-first search uses search depth as the cutoff for each iteration, IDA* uses the more informative f(n) = g(n) + h(n), where g(n) is the cost to travel from the root to node n and h(n) is a problem-specific heuristic estimate of the cost to travel from n to the goal.

The algorithm was first described by Richard Korf in 1985.[1]

Description

Iterative-deepening-A* works as follows: at each iteration, perform a depth-first search, cutting off a branch when its total cost exceeds a given threshold. This threshold starts at the estimate of the cost at the initial state, and increases for each iteration of the algorithm. At each iteration, the threshold used for the next iteration is the minimum cost of all values that exceeded the current threshold.[1]

As in A*, the heuristic has to have particular properties to guarantee optimality (shortest paths). See Properties below.

Pseudocode

path              current search path (acts like a stack)
node              current node (last node in current path)
g                 the cost to reach current node
f                 estimated cost of the cheapest path (root..node..goal)
h(node)           estimated cost of the cheapest path (node..goal)
cost(node, succ)  step cost function
is_goal(node)     goal test
successors(node)  node expanding function, expand nodes ordered by g + h(node)
ida_star(root)    return either NOT_FOUND or a pair with the best path and its cost
 
procedure ida_star(root)
    bound := h(root)
    path := [root]
    loop
        t := search(path, 0, bound)
        if t = FOUND then return (path, bound)
        if t = ∞ then return NOT_FOUND
        bound := t
    end loop
end procedure

function search(path, g, bound)
    node := path.last
    f := g + h(node)
    if f > bound then return f
    if is_goal(node) then return FOUND
    min := ∞
    for succ in successors(node) do
        if succ not in path then
            path.push(succ)
            t := search(path, g + cost(node, succ), bound)
            if t = FOUND then return FOUND
            if t < min then min := t
            path.pop()
        end if
    end for
    return min
end function
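
The pseudocode above translates almost line for line into a runnable program. The Python sketch below is one such translation; the toy graph, its edge costs, and the heuristic table are invented purely for illustration and are not taken from the cited sources.

import math

# Toy weighted graph (node -> {successor: edge cost}) and heuristic table,
# invented for illustration only.
GRAPH = {
    "S": {"A": 1, "B": 4},
    "A": {"B": 1, "C": 2},
    "B": {"G": 5},
    "C": {"G": 3},
    "G": {},
}
H = {"S": 4, "A": 3, "B": 3, "C": 2, "G": 0}  # admissible (and consistent) here

FOUND = object()  # sentinel playing the role of FOUND in the pseudocode

def ida_star(root, is_goal, successors, h):
    """Return (path, bound) for a cheapest path from root to a goal, or None."""
    bound = h(root)
    path = [root]
    while True:
        t = _search(path, 0, bound, is_goal, successors, h)
        if t is FOUND:
            return path, bound
        if t == math.inf:
            return None  # NOT_FOUND: no goal is reachable
        bound = t  # smallest f-value that exceeded the old threshold

def _search(path, g, bound, is_goal, successors, h):
    node = path[-1]
    f = g + h(node)
    if f > bound:
        return f
    if is_goal(node):
        return FOUND
    minimum = math.inf
    for succ, step_cost in successors(node):
        if succ not in path:  # do not revisit nodes on the current path
            path.append(succ)
            t = _search(path, g + step_cost, bound, is_goal, successors, h)
            if t is FOUND:
                return FOUND
            if t < minimum:
                minimum = t
            path.pop()
    return minimum

# Successors are generated in name order here for simplicity; ordering them by
# g + h, as the legend above suggests, only affects speed, not correctness.
print(ida_star("S",
               is_goal=lambda n: n == "G",
               successors=lambda n: sorted(GRAPH[n].items()),
               h=lambda n: H[n]))
# prints (['S', 'A', 'C', 'G'], 6) for the toy data above

For these made-up values the successive thresholds are 4, 5 and 6: each new threshold is the smallest f-value cut off in the previous iteration, exactly as described above.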

Properties

Like A*, IDA* is guaranteed to find the shortest path leading from the given start node to any goal node in the problem graph, if the heuristic function h is admissible,[1] that is

h(n) ≤ h*(n)

for all nodes n, where h* is the true cost of the shortest path from n to the nearest goal (the "perfect heuristic").[2]

IDA* is beneficial when the problem is memory constrained. A* search keeps a large queue of unexplored nodes that can quickly fill up memory. By contrast, because IDA* does not remember any node except the ones on the current path, it requires an amount of memory that is only linear in the length of the solution that it constructs. Its time complexity is analyzed by Korf et al. under the assumption that the heuristic cost estimate h is consistent, meaning that

h(n) ≤ cost(n, n') + h(n')

for all nodes n and all neighbors n' of n; they conclude that compared to a brute-force tree search over an exponential-sized problem, IDA* achieves a smaller search depth (by a constant factor), but not a smaller branching factor.[3]
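
When the state space is small enough to enumerate, both conditions can be checked directly. The Python sketch below, in the same illustrative spirit as the one above, verifies admissibility by computing h* with a brute-force search and verifies consistency edge by edge; the toy graph and heuristic values are again invented, and the brute-force helper assumes the graph is acyclic.

import math

# Invented acyclic toy graph (node -> {successor: edge cost}) and heuristic table.
EDGES = {"S": {"A": 1, "B": 4}, "A": {"G": 5}, "B": {"G": 2}, "G": {}}
H = {"S": 5, "A": 4, "B": 2, "G": 0}

def true_cost(node, goal="G"):
    """h*(node): cheapest cost from node to the goal, by brute-force enumeration."""
    if node == goal:
        return 0
    return min((c + true_cost(s, goal) for s, c in EDGES[node].items()),
               default=math.inf)

# Admissible: h(n) <= h*(n) for every node n.
admissible = all(H[n] <= true_cost(n) for n in EDGES)

# Consistent: h(n) <= cost(n, n') + h(n') for every edge (n, n').
consistent = all(H[n] <= c + H[s] for n in EDGES for s, c in EDGES[n].items())

print(admissible, consistent)  # True True for these made-up values

A consistent heuristic with h = 0 at every goal is automatically admissible, so in practice it is often enough to check the local consistency condition.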

Recursive best-first search is another memory-constrained version of A* search that can be faster in practice than IDA*, since it requires less regeneration of nodes.[2]: 282–289

Applications

Applications of IDA* are found in such problems as planning.[4] Solving the Rubik's Cube is an example of a planning problem that is amenable to solving with IDA*.[5]

References

  1. Korf, Richard E. (1985). "Depth-first Iterative-Deepening: An Optimal Admissible Tree Search" (PDF). Artificial Intelligence. 27: 97–109. doi:10.1016/0004-3702(85)90084-0. S2CID 10956233.
  2. Bratko, Ivan (2001). Prolog Programming for Artificial Intelligence. Pearson Education.
  3. Korf, Richard E.; Reid, Michael; Edelkamp, Stefan (2001). "Time complexity of iterative-deepening-A∗". Artificial Intelligence. 129 (1–2): 199–218. doi:10.1016/S0004-3702(01)00094-7.
  4. Bonet, Blai; Geffner, Héctor C. (2001). "Planning as heuristic search". Artificial Intelligence. 129 (1–2): 5–33. doi:10.1016/S0004-3702(01)00108-4. hdl:10230/36325.
  5. Korf, Richard E. (1997). "Finding Optimal Solutions to Rubik's Cube Using Pattern Databases" (PDF).