MTD(f)

MTD(f) is an alpha-beta game tree search algorithm modified to use ‘zero-window’ initial search bounds, and memory (usually a transposition table) to reuse intermediate search results. MTD(f) is a shortened form of MTD(n,f), which stands for Memory-enhanced Test Driver with node ‘n’ and value ‘f’.[1] The efficacy of this paradigm depends on a good initial guess, and the supposition that the final minimax value lies in a narrow window around the guess (which becomes an upper/lower bound for the search from the root). Some memory structure is used to save an initial guess determined elsewhere.

MTD(f) was introduced in 1994 and largely supplanted NegaScout (PVS), the previously dominant search paradigm for chess, checkers, Othello, and other game-playing programs.

Origin

MTD(f) was first described in a University of Alberta Technical Report authored by Aske Plaat, Jonathan Schaeffer, Wim Pijls, and Arie de Bruin,[2] which would later receive the ICCA Novag Best Computer Chess Publication award for 1994/1995. The algorithm MTD(f) was created out of a research effort to understand the SSS* algorithm, a best-first search algorithm invented by George Stockman in 1979.[3] SSS* was found to be equivalent to a series of alpha-beta calls, provided that alpha-beta used storage, such as a well-functioning transposition table.

The name MTD(f) stands for Memory-enhanced Test Driver, referencing Judea Pearl's Test algorithm, which performs Zero-Window Searches. MTD(f) is described in depth in Aske Plaat's 1996 PhD thesis.

Zero-window searches

MTD(f) derives its efficiency by only performing zero-window alpha-beta searches with a "good" bound (variable beta). In NegaScout, the search is called with a wide search window, as in AlphaBeta(root, −INFINITY, +INFINITY, depth), so the return value lies between the values of alpha and beta in one call. In MTD(f), AlphaBeta fails high or low, returning a lower bound or an upper bound on the minimax value, respectively. Zero-window calls cause more cutoffs but return less information: only a bound on the minimax value. To find the minimax value, MTD(f) calls AlphaBeta a number of times, converging towards it and eventually finding the exact value. A transposition table stores and retrieves the previously searched portions of the tree in memory to reduce the overhead of re-exploring parts of the search tree.[4]
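As an illustration (a made-up example, not taken from the cited sources), suppose the true minimax value is 37 and the first guess is 50. A zero-window call at beta = 50 fails low and returns an upper bound; re-searching at a beta equal to that bound then fails high, and the two bounds meet at 37. The short Python trace below simulates this driver loop with an idealized search that always returns the exact value as its bound; a real AlphaBetaWithMemory may return looser bounds and therefore need more iterations.

# Idealized stand-in for a zero-window AlphaBetaWithMemory call: a real
# search returns only a bound, but the exact value is itself a valid bound.
def zero_window_search(true_value, beta):
    return true_value

def trace_mtdf(true_value, first_guess):
    g, lower, upper = first_guess, float("-inf"), float("inf")
    while lower < upper:
        beta = max(g, lower + 1)
        g = zero_window_search(true_value, beta)
        if g < beta:
            upper = g      # fail low: g is an upper bound
        else:
            lower = g      # fail high: g is a lower bound
        print(f"beta={beta}  returned g={g}  bounds=[{lower}, {upper}]")
    return g

trace_mtdf(true_value=37, first_guess=50)
# beta=50  returned g=37  bounds=[-inf, 37]   (failed low)
# beta=37  returned g=37  bounds=[37, 37]     (failed high; converged)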

Pseudocode

function MTDF(root, f, d) is
    g := f
    upperBound := +∞
    lowerBound := −∞

    while lowerBound < upperBound do
        β := max(g, lowerBound + 1)
        g := AlphaBetaWithMemory(root, β − 1, β, d)
        if g < β then
            upperBound := g 
        else
            lowerBound := g

    return g
f
First guess for the best value. The better the guess, the faster the algorithm converges. It can be 0 for the first call.
d
Search depth. An iterative deepening depth-first search can be performed by calling MTDF() multiple times with increasing d and supplying each result as the first guess f for the next call.[5]

AlphaBetaWithMemory is a variation of alpha-beta search that caches previously computed results, for example in a transposition table.
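The sketch below is one possible concrete rendering of AlphaBetaWithMemory and the MTDF driver in Python, not an implementation from the cited papers. It assumes a hypothetical game-state interface (key(), is_terminal(), evaluate(), children(), and a maximizing flag, with children() returning successors whose flag is flipped) and uses a plain dictionary keyed by position and depth as the transposition table.

import math

# The transposition table here is a plain dict keyed by (position, depth);
# real engines use bounded hash tables with replacement schemes.
transposition_table = {}

def alpha_beta_with_memory(state, alpha, beta, depth):
    # Fail-soft alpha-beta search that caches lower/upper bounds on the
    # minimax value of each searched position.
    key = (state.key(), depth)
    lower, upper = transposition_table.get(key, (-math.inf, math.inf))
    if lower == upper:
        return lower                      # exact value already known
    if lower >= beta:
        return lower                      # stored lower bound causes a cutoff
    if upper <= alpha:
        return upper                      # stored upper bound causes a cutoff
    alpha, beta = max(alpha, lower), min(beta, upper)

    if depth == 0 or state.is_terminal():
        g = state.evaluate()
    elif state.maximizing:
        g, a = -math.inf, alpha
        for child in state.children():
            g = max(g, alpha_beta_with_memory(child, a, beta, depth - 1))
            a = max(a, g)
            if g >= beta:                 # beta cutoff
                break
    else:
        g, b = math.inf, beta
        for child in state.children():
            g = min(g, alpha_beta_with_memory(child, alpha, b, depth - 1))
            b = min(b, g)
            if g <= alpha:                # alpha cutoff
                break

    # A fail-low result is an upper bound, a fail-high result is a lower
    # bound, and a value strictly inside the window is exact.
    if g <= alpha:
        upper = g
    elif g >= beta:
        lower = g
    else:
        lower = upper = g
    transposition_table[key] = (lower, upper)
    return g

def mtdf(root, f, d):
    # Direct transcription of the MTDF pseudocode above.
    g, lower_bound, upper_bound = f, -math.inf, math.inf
    while lower_bound < upper_bound:
        beta = max(g, lower_bound + 1)
        g = alpha_beta_with_memory(root, beta - 1, beta, d)
        if g < beta:
            upper_bound = g
        else:
            lower_bound = g
    return g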

Performance

MTD(f) calls the zero-window searches from the root of the tree. Implementations of the MTD(f) algorithm have been shown to be more efficient (search fewer nodes) in practice than other search algorithms (e.g. NegaScout) in games such as chess [1], checkers, and Othello. For MTD(f) to perform efficiently, the transposition table must work well; otherwise, for example when a hash collision occurs, a subtree will be re-expanded. When MTD(f) is used in programs suffering from a pronounced odd-even effect, where the score at the root is higher for even search depths and lower for odd search depths, it is advisable to use separate values of f for odd and even depths, so that each search starts as close as possible to the minimax value. Otherwise, the search would take more iterations to converge on the minimax value, especially with fine-grained evaluation functions.
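For example, one simple way to follow this advice (a sketch building on the mtdf function shown above, not taken from the cited papers) is to keep a separate first guess for each depth parity during iterative deepening:

def iterative_deepening_odd_even(root, max_depth):
    # Keep one first guess per depth parity so that each call to mtdf starts
    # near the value that searches of the same parity converge to.
    guesses = {0: 0, 1: 0}
    g = 0
    for d in range(1, max_depth + 1):
        g = mtdf(root, guesses[d % 2], d)
        guesses[d % 2] = g
    return g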

Zero-window searches hit a cut-off sooner than wide-window searches. They are therefore more efficient but, in some sense, also less forgiving than wide-window searches. Because MTD(f) uses only zero-window searches, while Alpha-Beta and NegaScout also use wide-window searches, MTD(f) is more efficient. However, wider search windows are more forgiving for engines with large odd/even swings and fine-grained evaluation functions. For this reason some chess engines have not switched to MTD(f). In tests with tournament-quality programs such as Chinook (checkers), Phoenix (chess), and Keyano (Othello), the MTD(f) algorithm outperformed all other search algorithms.[4][6]

More recent algorithms such as Best Node Search have been suggested to outperform MTD(f).

References

  1. ^ Johannes Fürnkranz; Miroslav Kubat (2001). Machines that Learn to Play Games. Nova Publishers. pp. 95–. ISBN 978-1-59033-021-0.
  2. ^ "Adaptive Strategies of MTD-f for Actual Games". Tokyo University of Agriculture and Technology. K SHIBAHARA et al
  3. ^ Teofilo Gonzalez; Jorge Diaz-Herrera; Allen Tucker (7 May 2014). Computing Handbook, Third Edition: Computer Science and Software Engineering. CRC Press. pp. 38–. ISBN 978-1-4398-9853-6.
  4. ^ a b Plaat, Aske; Jonathan Schaeffer; Wim Pijls; Arie de Bruin (November 1996). "Best-first Fixed-depth Minimax Algorithms". Artificial Intelligence. 87 (1–2): 255–293. doi:10.1016/0004-3702(95)00126-3.
  5. ^ https://people.csail.mit.edu/plaat/mtdf.html
  6. ^ Plaat, Aske; Jonathan Schaeffer; Wim Pijls; Arie de Bruin (November 1996). "Best-first Fixed-depth Minimax Algorithms". Artificial Intelligence. 87 (1–2): 255–293. doi:10.1016/0004-3702(95)00126-3.