Backtracking

From Wikipedia, the free encyclopedia
Latest revision as of 14:56, 21 September 2024

Backtracking is a class of algorithms for finding solutions to some computational problems, notably constraint satisfaction problems, that incrementally builds candidates to the solutions, and abandons a candidate ("backtracks") as soon as it determines that the candidate cannot possibly be completed to a valid solution.[1]

The classic textbook example of the use of backtracking is the eight queens puzzle, which asks for all arrangements of eight chess queens on a standard chessboard so that no queen attacks any other. In the common backtracking approach, the partial candidates are arrangements of k queens in the first k rows of the board, all in different rows and columns. Any partial solution that contains two mutually attacking queens can be abandoned.
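
The eight queens strategy described above can be sketched in Python (an illustrative sketch, not part of the article; a partial candidate is a tuple of column indices, one per filled row):

```python
def queens(n, partial=()):
    """Yield all n-queens solutions as tuples of column indices, one per row."""
    if len(partial) == n:
        yield partial
        return
    row = len(partial)
    for col in range(n):
        # Backtrack test: abandon any placement that attacks an earlier queen
        # (same column or same diagonal); rows are distinct by construction.
        if any(col == c or abs(col - c) == row - r
               for r, c in enumerate(partial)):
            continue
        yield from queens(n, partial + (col,))

print(sum(1 for _ in queens(8)))  # prints 92, the number of solutions
```

Because attacking pairs are rejected as soon as they appear, whole subtrees of the 8^8 complete arrangements are never visited.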

Backtracking can be applied only for problems which admit the concept of a "partial candidate solution" and a relatively quick test of whether it can possibly be completed to a valid solution. It is useless, for example, for locating a given value in an unordered table. When it is applicable, however, backtracking is often much faster than brute-force enumeration of all complete candidates, since it can eliminate many candidates with a single test.

Backtracking is an important tool for solving constraint satisfaction problems,[2] such as crosswords, verbal arithmetic, Sudoku, and many other puzzles. It is often the most convenient technique for parsing,[3] for the knapsack problem and other combinatorial optimization problems. It is also the program execution strategy used in the programming languages Icon, Planner and Prolog.

Backtracking depends on user-given "black box procedures" that define the problem to be solved, the nature of the partial candidates, and how they are extended into complete candidates. It is therefore a metaheuristic rather than a specific algorithm – although, unlike many other meta-heuristics, it is guaranteed to find all solutions to a finite problem in a bounded amount of time.

The term "backtrack" was coined by American mathematician D. H. Lehmer in the 1950s.[4] The pioneer string-processing language SNOBOL (1962) may have been the first to provide a built-in general backtracking facility.

Description of the method


The backtracking algorithm enumerates a set of partial candidates that, in principle, could be completed in various ways to give all the possible solutions to the given problem. The completion is done incrementally, by a sequence of candidate extension steps.

Conceptually, the partial candidates are represented as the nodes of a tree structure, the potential search tree. Each partial candidate is the parent of the candidates that differ from it by a single extension step; the leaves of the tree are the partial candidates that cannot be extended any further.

The backtracking algorithm traverses this search tree recursively, from the root down, in depth-first order. At each node c, the algorithm checks whether c can be completed to a valid solution. If it cannot, the whole sub-tree rooted at c is skipped (pruned). Otherwise, the algorithm (1) checks whether c itself is a valid solution, and if so reports it to the user; and (2) recursively enumerates all sub-trees of c. The two tests and the children of each node are defined by user-given procedures.

Therefore, the actual search tree that is traversed by the algorithm is only a part of the potential tree. The total cost of the algorithm is the number of nodes of the actual tree times the cost of obtaining and processing each node. This fact should be considered when choosing the potential search tree and implementing the pruning test.

Pseudocode


In order to apply backtracking to a specific class of problems, one must provide the data P for the particular instance of the problem that is to be solved, and six procedural parameters, root, reject, accept, first, next, and output. These procedures should take the instance data P as a parameter and should do the following:

  1. root(P): return the partial candidate at the root of the search tree.
  2. reject(P,c): return true only if the partial candidate c is not worth completing.
  3. accept(P,c): return true if c is a solution of P, and false otherwise.
  4. first(P,c): generate the first extension of candidate c.
  5. next(P,s): generate the next alternative extension of a candidate, after the extension s.
  6. output(P,c): use the solution c of P, as appropriate to the application.

The backtracking algorithm reduces the problem to the call backtrack(P, root(P)), where backtrack is the following recursive procedure:

procedure backtrack(P, c) is
    if reject(P, c) then return
    if accept(P, c) then output(P, c)
    s ← first(P, c)
    while s ≠ NULL do
        backtrack(P, s)
        s ← next(P, s)
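
The pseudocode above carries over almost line for line to Python, with the six procedures passed in as ordinary functions (an illustrative sketch; `None` stands in for NULL, and `next_` avoids shadowing the built-in `next`):

```python
def solve(P, root, reject, accept, first, next_, output):
    """Reduce the problem to backtrack(P, root(P)), as in the pseudocode."""
    def backtrack(c):
        if reject(P, c):
            return           # prune: no extension of c can be a solution
        if accept(P, c):
            output(P, c)     # c is itself a valid solution; keep searching
        s = first(P, c)
        while s is not None:
            backtrack(s)
            s = next_(P, s)
    backtrack(root(P))
```

Note that the search continues after `output`, since the pseudocode does not assume that valid solutions are leaves of the search tree.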

Usage considerations


The reject procedure should be a Boolean-valued function that returns true only if it is certain that no possible extension of c is a valid solution for P. If the procedure cannot reach a definite conclusion, it should return false. An incorrect true result may cause the backtrack procedure to miss some valid solutions. The procedure may assume that reject(P,t) returned false for every ancestor t of c in the search tree.

On the other hand, the efficiency of the backtracking algorithm depends on reject returning true for candidates that are as close to the root as possible. If reject always returns false, the algorithm will still find all solutions, but it will be equivalent to a brute-force search.

The accept procedure should return true if c is a complete and valid solution for the problem instance P, and false otherwise. It may assume that the partial candidate c and all its ancestors in the tree have passed the reject test.

The general pseudo-code above does not assume that the valid solutions are always leaves of the potential search tree. In other words, it admits the possibility that a valid solution for P can be further extended to yield other valid solutions.

The first and next procedures are used by the backtracking algorithm to enumerate the children of a node c of the tree, that is, the candidates that differ from c by a single extension step. The call first(P,c) should yield the first child of c, in some order; and the call next(P,s) should return the next sibling of node s, in that order. Both functions should return a distinctive "NULL" candidate, if the requested child does not exist.

Together, the root, first, and next functions define the set of partial candidates and the potential search tree. They should be chosen so that every solution of P occurs somewhere in the tree, and no partial candidate occurs more than once. Moreover, they should admit an efficient and effective reject predicate.

Early stopping variants


The pseudo-code above will call output for all candidates that are a solution to the given instance P. The algorithm can be modified to stop after finding the first solution, or a specified number of solutions; or after testing a specified number of partial candidates, or after spending a given amount of CPU time.
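
One common way to realise the first variant is to have the recursion return the first accepted candidate and propagate it upward (an illustrative sketch using the article's procedural parameters, with `None` for NULL):

```python
def backtrack_first(P, c, reject, accept, first, next_):
    """Depth-first search that stops at the first accepted candidate."""
    if reject(P, c):
        return None
    if accept(P, c):
        return c
    s = first(P, c)
    while s is not None:
        found = backtrack_first(P, s, reject, accept, first, next_)
        if found is not None:
            return found     # stop: a solution was found in this subtree
        s = next_(P, s)
    return None              # no solution anywhere below c
```

Limits on tested candidates or CPU time can be added the same way, by threading a counter or deadline through the recursion and returning early when it is exceeded.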

Examples

[Figure: A Sudoku solved by backtracking]

Examples where backtracking can be used to solve puzzles or problems include:

  * Puzzles such as the eight queens puzzle, crosswords, verbal arithmetic, Sudoku, and peg solitaire.
  * Combinatorial optimization problems such as parsing and the knapsack problem.
  * Goal-directed programming languages such as Icon, Planner and Prolog, which use backtracking internally to generate answers.
  * The DPLL algorithm for solving the Boolean satisfiability problem.

The following is an example where backtracking is used for the constraint satisfaction problem:

Constraint satisfaction


The general constraint satisfaction problem consists in finding a list of integers x = (x[1], x[2], …, x[n]), each in some range {1, 2, …, m}, that satisfies some arbitrary constraint (Boolean function) F.

For this class of problems, the instance data P would be the integers m and n, and the predicate F. In a typical backtracking solution to this problem, one could define a partial candidate as a list of integers c = (c[1], c[2], …, c[k]), for any k between 0 and n, that are to be assigned to the first k variables x[1], x[2], …, x[k]. The root candidate would then be the empty list (). The first and next procedures would then be

function first(P, c) is
    k ← length(c)
    if k = n then
        return NULL
    else
        return (c[1], c[2], ..., c[k], 1)
function next(P, s) is
    k ← length(s)
    if s[k] = m then
        return NULL
    else
        return (s[1], s[2], ..., s[k − 1], 1 + s[k])
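
In Python, with candidates as tuples and `None` for NULL, the two procedures might read (an illustrative sketch; the instance is taken here to be `P = (m, n, F)`, and `next_` avoids shadowing the built-in `next`):

```python
def first(P, c):
    """First extension of c: append the value 1, or None if c is complete."""
    m, n, F = P
    return None if len(c) == n else c + (1,)

def next_(P, s):
    """Replace the last value of s by the next one, or None once it is m."""
    m, n, F = P
    return None if s[-1] == m else s[:-1] + (s[-1] + 1,)
```

Note that `next_` is only ever called on candidates produced by `first`, which are non-empty, so indexing the last element is safe.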

Here length(c) is the number of elements in the list c.

The call reject(P, c) should return true if the constraint F cannot be satisfied by any list of n integers that begins with the k elements of c. For backtracking to be effective, there must be a way to detect this situation, at least for some candidates c, without enumerating all those m^(n − k) n-tuples.

For example, if F is the conjunction of several Boolean predicates, F = F[1] ∧ F[2] ∧ … ∧ F[p], and each F[i] depends only on a small subset of the variables x[1], …, x[n], then the reject procedure could simply check the terms F[i] that depend only on variables x[1], …, x[k], and return true if any of those terms returns false. In fact, reject needs only check those terms that do depend on x[k], since the terms that depend only on x[1], …, x[k − 1] will have been tested further up in the search tree.
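
Assuming each term F[i] records which variables it mentions, this reject rule might be sketched as follows (illustrative; the `(vars, pred)` representation, with sorted 1-based variable indices, is an assumption made for this example):

```python
def make_reject(terms):
    """Build a reject procedure from terms = [(vars, pred), ...], where
    vars is a sorted tuple of 1-based variable indices and pred takes
    the values of exactly those variables."""
    def reject(P, c):
        k = len(c)
        if k == 0:
            return False
        for vars_, pred in terms:
            # Only terms whose highest-indexed variable is x[k] need checking;
            # terms over x[1..k-1] alone were tested further up the tree.
            if vars_ and vars_[-1] == k:
                if not pred(*(c[i - 1] for i in vars_)):
                    return True   # some term is already violated: prune
        return False
    return reject
```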

Assuming that reject is implemented as above, then accept(P, c) needs only check whether c is complete, that is, whether it has n elements.

It is generally better to order the list of variables so that it begins with the most critical ones (i.e. the ones with fewest value options, or which have a greater impact on subsequent choices).

One could also allow the next function to choose which variable should be assigned when extending a partial candidate, based on the values of the variables already assigned by it. Further improvements can be obtained by the technique of constraint propagation.

In addition to retaining minimal recovery values used in backing up, backtracking implementations commonly keep a variable trail, to record value change history. An efficient implementation will avoid creating a variable trail entry between two successive changes when there is no choice point, as the backtracking will erase all of the changes as a single operation.

An alternative to the variable trail is to keep a timestamp of when the last change was made to the variable. The timestamp is compared to the timestamp of a choice point. If the choice point has an associated time later than that of the variable, it is unnecessary to revert the variable when the choice point is backtracked, as it was changed before the choice point occurred.
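
A variable trail with choice-point marks might be sketched as follows (an illustrative sketch; real solvers integrate this with their assignment machinery, and the class and method names here are invented for the example):

```python
class Trail:
    """Record (slot, old value) pairs so assignments to a store can be
    undone back to the most recent choice point."""

    def __init__(self):
        self.entries = []     # trail of (index, previous value)
        self.marks = []       # trail length at each choice point
        self.changed = set()  # slots changed since the last choice point

    def choice_point(self):
        self.marks.append(len(self.entries))
        self.changed = set()

    def assign(self, store, i, value):
        # Skip the trail entry if this slot already changed since the last
        # choice point: restoring the earlier saved value is enough.
        if i not in self.changed:
            self.entries.append((i, store[i]))
            self.changed.add(i)
        store[i] = value

    def backtrack(self, store):
        mark = self.marks.pop()
        while len(self.entries) > mark:
            i, old = self.entries.pop()
            store[i] = old
        # Conservative reset: slots may be re-trailed redundantly in the
        # enclosing scope, which wastes an entry but stays correct.
        self.changed = set()
```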

See also

  * Ariadne's thread (logic)
  * Backjumping
  * Backward chaining
  * Enumeration algorithm
  * Sudoku solving algorithms

Notes

  1. ^ See Sudoku solving algorithms.

References

  1. ^ Gurari, Eitan (1999). "CIS 680: DATA STRUCTURES: Chapter 19: Backtracking Algorithms". Archived from the original on 17 March 2007.
  2. ^ Biere, A.; Heule, M.; van Maaren, H. (29 January 2009). Handbook of Satisfiability. IOS Press. ISBN 978-1-60750-376-7.
  3. ^ Watson, Des (22 March 2017). A Practical Approach to Compiler Construction. Springer. ISBN 978-3-319-52789-5.
  4. ^ Rossi, Francesca; van Beek, Peter; Walsh, Toby (August 2006). "Constraint Satisfaction: An Emerging Paradigm". Handbook of Constraint Programming. Amsterdam: Elsevier. p. 14. ISBN 978-0-444-52726-4. Retrieved 30 December 2008.
