Levenshtein distance
In information theory, linguistics, and computer science, the Levenshtein distance is a string metric for measuring the difference between two sequences. The Levenshtein distance between two words is the minimum number of single-character edits (insertions, deletions or substitutions) required to change one word into the other. It is named after Soviet mathematician Vladimir Levenshtein, who defined the metric in 1965.[1]
Levenshtein distance may also be referred to as edit distance, although that term may also denote a larger family of distance metrics known collectively as edit distance.[2]: 32 It is closely related to pairwise string alignments.
Definition
The Levenshtein distance between two strings a, b (of length |a| and |b| respectively) is given by lev(a, b), where

\[
\operatorname{lev}(a, b) =
\begin{cases}
  |a| & \text{ if } |b| = 0, \\
  |b| & \text{ if } |a| = 0, \\
  \operatorname{lev}\big(\operatorname{tail}(a),\operatorname{tail}(b)\big) & \text{ if } \operatorname{head}(a) = \operatorname{head}(b), \\
  1 + \min \begin{cases}
    \operatorname{lev}\big(\operatorname{tail}(a), b\big) \\
    \operatorname{lev}\big(a, \operatorname{tail}(b)\big) \\
    \operatorname{lev}\big(\operatorname{tail}(a), \operatorname{tail}(b)\big)
  \end{cases} & \text{ otherwise,}
\end{cases}
\]

where the tail of some string x is a string of all but the first character of x (i.e. tail(x_0 x_1 ... x_n) = x_1 x_2 ... x_n), and head(x) is the first character of x (i.e. head(x_0 x_1 ... x_n) = x_0). Either the notation x[n] or x_n is used to refer to the nth character of the string x, counting from 0, thus head(x) = x_0 = x[0].

The first element in the minimum corresponds to deletion (from a to b), the second to insertion and the third to replacement.
This definition corresponds directly to the naive recursive implementation.
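For instance, unwinding the recurrence on the short pair "ab" and "b": the first characters differ, so

  lev("ab", "b") = 1 + min { lev("b", "b"), lev("ab", ""), lev("b", "") }
                 = 1 + min { 0, 2, 1 }
                 = 1,

which corresponds to the single deletion of "a".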
Example
For example, the Levenshtein distance between "kitten" and "sitting" is 3, since the following 3 edits change one into the other, and there is no way to do it with fewer than 3 edits:
- kitten → sitten (substitution of "s" for "k"),
- sitten → sittin (substitution of "i" for "e"),
- sittin → sitting (insertion of "g" at the end).
A simple example of a deletion can be seen with "uninformed" and "uniformed" which have a distance of 1:
- uninformed → uniformed (deletion of "n").
Upper and lower bounds
The Levenshtein distance has several simple upper and lower bounds. These include:
- It is at least the absolute value of the difference of the sizes of the two strings.
- It is at most the length of the longer string.
- It is zero if and only if the strings are equal.
- If the strings have the same size, the Hamming distance is an upper bound on the Levenshtein distance. The Hamming distance is the number of positions at which the corresponding symbols in the two strings are different.
- The Levenshtein distance between two strings is no greater than the sum of their Levenshtein distances from a third string (triangle inequality).
An example where the Levenshtein distance between two strings of the same length is strictly less than the Hamming distance is given by the pair "flaw" and "lawn". Here the Levenshtein distance equals 2 (delete "f" from the front; insert "n" at the end). The Hamming distance is 4.
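These bounds can be checked mechanically. The following Haskell sketch (the names lev and boundsHold are illustrative, not from the article or any library; lev is a naive reference implementation mirroring the recursive definition above) verifies the first three bounds on the examples in this section:

lev :: String -> String -> Int
lev [] t = length t
lev s [] = length s
lev (a:s) (b:t)
  | a == b    = lev s t
  | otherwise = 1 + minimum [lev s (b:t), lev (a:s) t, lev s t]

-- Check the three simple bounds for one pair of strings.
boundsHold :: String -> String -> Bool
boundsHold a b =
  d >= abs (length a - length b)        -- at least the size difference
    && d <= max (length a) (length b)   -- at most the length of the longer string
    && ((d == 0) == (a == b))           -- zero exactly when the strings are equal
  where
    d = lev a b

main :: IO ()
main = do
  print (boundsHold "kitten" "sitting")  -- True
  print (boundsHold "flaw" "lawn")       -- True
  print (lev "flaw" "lawn")              -- 2, strictly below the Hamming distance of 4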
Applications
In approximate string matching, the objective is to find matches for short strings in many longer texts, in situations where a small number of differences is to be expected. The short strings could come from a dictionary, for instance. Here, one of the strings is typically short, while the other is arbitrarily long. This has a wide range of applications, for instance, spell checkers, correction systems for optical character recognition, and software to assist natural-language translation based on translation memory.
The Levenshtein distance can also be computed between two longer strings, but the cost to compute it, which is roughly proportional to the product of the two string lengths, makes this impractical. Thus, when used to aid in fuzzy string searching in applications such as record linkage, the compared strings are usually short to help improve speed of comparisons.[citation needed]
In linguistics, the Levenshtein distance is used as a metric to quantify the linguistic distance, or how different two languages are from one another.[3] It is related to mutual intelligibility: the higher the linguistic distance, the lower the mutual intelligibility, and the lower the linguistic distance, the higher the mutual intelligibility.
Relationship with other edit distance metrics
There are other popular measures of edit distance, which are calculated using a different set of allowable edit operations. For instance,
- the Damerau–Levenshtein distance allows the transposition of two adjacent characters alongside insertion, deletion, substitution;
- the longest common subsequence (LCS) distance allows only insertion and deletion, not substitution;
- the Hamming distance allows only substitution, hence it only applies to strings of the same length;
- the Jaro distance allows only transposition.
Edit distance is usually defined as a parameterizable metric calculated with a specific set of allowed edit operations, and each operation is assigned a cost (possibly infinite). This is further generalized by DNA sequence alignment algorithms such as the Smith–Waterman algorithm, which make an operation's cost depend on where it is applied.
Computation
Recursive
This is a straightforward, but inefficient, recursive Haskell implementation of a lDistance function that takes two strings, s and t, and returns the Levenshtein distance between them:
lDistance :: Eq a => [a] -> [a] -> Int
lDistance [] t = length t   -- If s is empty, the distance is the number of characters in t
lDistance s [] = length s   -- If t is empty, the distance is the number of characters in s
lDistance (a : s') (b : t') =
  if a == b
    then lDistance s' t'          -- If the first characters are the same, they can be ignored
    else
      1 + minimum                 -- Otherwise try all three possible actions and select the best one
        [ lDistance (a : s') t',  -- Character is inserted (b inserted)
          lDistance s' (b : t'),  -- Character is deleted (a deleted)
          lDistance s' t'         -- Character is replaced (a replaced with b)
        ]
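For instance, loading this definition into GHCi and evaluating it on the pairs used as examples above gives:

ghci> lDistance "kitten" "sitting"
3
ghci> lDistance "uninformed" "uniformed"
1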
This implementation is very inefficient because it recomputes the Levenshtein distance of the same substrings many times.
A more efficient method would never repeat the same distance calculation. For example, the Levenshtein distance of all possible suffixes might be stored in an array M, where M[i][j] is the distance between the last i characters of string s and the last j characters of string t. The table is easy to construct one row at a time starting with row 0. When the entire table has been built, the desired distance is in the table in the last row and column, representing the distance between all of the characters in s and all the characters in t.
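As an illustration of this idea, the following Haskell sketch (the name levSuffixTable is illustrative, not from the article) fills exactly such a table of suffix distances using a lazy array, so each entry is computed only once:

import Data.Array

-- m ! (i, j) holds the distance between the last i characters of s
-- and the last j characters of t, so m ! (length s, length t) is the answer.
levSuffixTable :: String -> String -> Int
levSuffixTable s t = m ! (ls, lt)
  where
    ls = length s
    lt = length t
    -- Reversed strings as 1-based arrays, so s' ! i is the i-th character from the end of s.
    s' = listArray (1, ls) (reverse s)
    t' = listArray (1, lt) (reverse t)
    m  = array ((0, 0), (ls, lt))
           [ ((i, j), cell i j) | i <- [0 .. ls], j <- [0 .. lt] ]
    cell i 0 = i                                -- delete the remaining i characters of s
    cell 0 j = j                                -- insert the remaining j characters of t
    cell i j
      | s' ! i == t' ! j = m ! (i - 1, j - 1)   -- first characters of the suffixes match
      | otherwise        = 1 + minimum
          [ m ! (i - 1, j)                      -- deletion
          , m ! (i, j - 1)                      -- insertion
          , m ! (i - 1, j - 1)                  -- substitution
          ]

-- For example: levSuffixTable "kitten" "sitting" == 3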
Iterative with full matrix
This section uses 1-based strings rather than 0-based strings. If m is a matrix, m[i, j] is the entry in the ith row and the jth column of the matrix, with the first row having index 0 and the first column having index 0.
Computing the Levenshtein distance is based on the observation that if we reserve a matrix to hold the Levenshtein distances between all prefixes of the first string and all prefixes of the second, then we can compute the values in the matrix in a dynamic programming fashion, and thus find the distance between the two full strings as the last value computed.
This algorithm, an example of bottom-up dynamic programming, is discussed, with variants, in the 1974 article The String-to-string correction problem by Robert A. Wagner and Michael J. Fischer.[4]
This is a straightforward pseudocode implementation for a function LevenshteinDistance that takes two strings, s of length m, and t of length n, and returns the Levenshtein distance between them:
function LevenshteinDistance(char s[1..m], char t[1..n]):
    // for all i and j, d[i,j] will hold the Levenshtein distance between
    // the first i characters of s and the first j characters of t
    declare int d[0..m, 0..n]

    set each element in d to zero

    // source prefixes can be transformed into empty string by
    // dropping all characters
    for i from 1 to m:
        d[i, 0] := i

    // target prefixes can be reached from empty source prefix
    // by inserting every character
    for j from 1 to n:
        d[0, j] := j

    for j from 1 to n:
        for i from 1 to m:
            if s[i] = t[j]:
                substitutionCost := 0
            else:
                substitutionCost := 1

            d[i, j] := minimum(d[i-1, j] + 1,                   // deletion
                               d[i, j-1] + 1,                   // insertion
                               d[i-1, j-1] + substitutionCost)  // substitution

    return d[m, n]
Two examples of the resulting matrix, the first for the pair "kitten" and "sitting", the second for "Sunday" and "Saturday" (the bottom-right entry of each matrix is the distance):

|   |   | k | i | t | t | e | n |
|---|---|---|---|---|---|---|---|
|   | 0 | 1 | 2 | 3 | 4 | 5 | 6 |
| s | 1 | 1 | 2 | 3 | 4 | 5 | 6 |
| i | 2 | 2 | 1 | 2 | 3 | 4 | 5 |
| t | 3 | 3 | 2 | 1 | 2 | 3 | 4 |
| t | 4 | 4 | 3 | 2 | 1 | 2 | 3 |
| i | 5 | 5 | 4 | 3 | 2 | 2 | 3 |
| n | 6 | 6 | 5 | 4 | 3 | 3 | 2 |
| g | 7 | 7 | 6 | 5 | 4 | 4 | 3 |

|   |   | S | a | t | u | r | d | a | y |
|---|---|---|---|---|---|---|---|---|---|
|   | 0 | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 |
| S | 1 | 0 | 1 | 2 | 3 | 4 | 5 | 6 | 7 |
| u | 2 | 1 | 1 | 2 | 2 | 3 | 4 | 5 | 6 |
| n | 3 | 2 | 2 | 2 | 3 | 3 | 4 | 5 | 6 |
| d | 4 | 3 | 3 | 3 | 3 | 4 | 3 | 4 | 5 |
| a | 5 | 4 | 3 | 4 | 4 | 4 | 4 | 3 | 4 |
| y | 6 | 5 | 4 | 4 | 5 | 5 | 5 | 4 | 3 |
The invariant maintained throughout the algorithm is that we can transform the initial segment s[1..i] into t[1..j] using a minimum of d[i, j] operations. At the end, the bottom-right element of the array contains the answer.
Iterative with two matrix rows
It turns out that only two rows of the table – the previous row and the current row being calculated – are needed for the construction, if one does not want to reconstruct the edited input strings.
The Levenshtein distance may be calculated iteratively using the following algorithm:[5]
function LevenshteinDistance(char s[0..m-1], char t[0..n-1]):
    // create two work vectors of integer distances
    declare int v0[n + 1]
    declare int v1[n + 1]

    // initialize v0 (the previous row of distances)
    // this row is A[0][i]: edit distance from an empty s to t;
    // that distance is the number of characters to append to s to make t.
    for i from 0 to n:
        v0[i] = i

    for i from 0 to m - 1:
        // calculate v1 (current row distances) from the previous row v0
        // first element of v1 is A[i + 1][0]
        // edit distance is delete (i + 1) chars from s to match empty t
        v1[0] = i + 1

        // use formula to fill in the rest of the row
        for j from 0 to n - 1:
            // calculating costs for A[i + 1][j + 1]
            deletionCost := v0[j + 1] + 1
            insertionCost := v1[j] + 1
            if s[i] = t[j]:
                substitutionCost := v0[j]
            else:
                substitutionCost := v0[j] + 1

            v1[j + 1] := minimum(deletionCost, insertionCost, substitutionCost)

        // copy v1 (current row) to v0 (previous row) for next iteration
        // since data in v1 is always invalidated, a swap without copy could be more efficient
        swap v0 with v1

    // after the last swap, the results of v1 are now in v0
    return v0[n]
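The same two-row idea translates naturally to Haskell, where each new row is produced from the previous one by a left scan, so only one row is live at a time (a sketch; the name levTwoRows is illustrative, not from the article):

levTwoRows :: Eq a => [a] -> [a] -> Int
levTwoRows s t = last (foldl nextRow row0 s)
  where
    -- Row for the empty prefix of s: reaching the first j characters of t takes j insertions.
    row0 = [0 .. length t]
    -- Given the row for the first i characters of s and the next character c of s,
    -- compute the row for the first i + 1 characters of s.
    nextRow prev c = scanl step (head prev + 1) (zip3 t prev (tail prev))
      where
        step left (tc, diag, up) =
          minimum [ up + 1                             -- deletion
                  , left + 1                           -- insertion
                  , diag + (if c == tc then 0 else 1)  -- substitution (or match)
                  ]

-- For example: levTwoRows "Sunday" "Saturday" == 3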
Hirschberg's algorithm combines this method with divide and conquer. It can compute the optimal edit sequence, and not just the edit distance, in the same asymptotic time and space bounds.[6]
Automata
Levenshtein automata efficiently determine whether a string has an edit distance lower than a given constant from a given string.[7]
Approximation
The Levenshtein distance between two strings of length n can be approximated to within a factor (log n)^{O(1/ε)}, where ε > 0 is a free parameter to be tuned, in time O(n^{1+ε}).[8]
Computational complexity
It has been shown that the Levenshtein distance of two strings of length n cannot be computed in time O(n^{2−ε}) for any ε greater than zero unless the strong exponential time hypothesis is false.[9]
See also
- agrep
- Damerau–Levenshtein distance
- diff
- Dynamic time warping
- Euclidean distance
- Homology of sequences in genetics
- Hamming distance
- Hunt–Szymanski algorithm
- Jaccard index
- Jaro–Winkler distance
- Locality-sensitive hashing
- Longest common subsequence problem
- Lucene (an open source search engine that implements edit distance)
- Manhattan distance
- Metric space
- MinHash
- Optimal matching algorithm
- Numerical taxonomy
- Sørensen similarity index
References
- ^ В. И. Левенштейн (1965). Двоичные коды с исправлением выпадений, вставок и замещений символов [Binary codes capable of correcting deletions, insertions, and reversals]. Доклады Академии Наук СССР (in Russian). 163 (4): 845–848. Appeared in English as: Levenshtein, Vladimir I. (February 1966). "Binary codes capable of correcting deletions, insertions, and reversals". Soviet Physics Doklady. 10 (8): 707–710. Bibcode:1966SPhD...10..707L.
- ^ Navarro, Gonzalo (2001). "A guided tour to approximate string matching" (PDF). ACM Computing Surveys. 33 (1): 31–88. CiteSeerX 10.1.1.452.6317. doi:10.1145/375360.375365. S2CID 207551224.
- ^ Jan D. ten Thije; Ludger Zeevaert (1 January 2007), Receptive multilingualism: linguistic analyses, language policies, and didactic concepts, John Benjamins Publishing Company, ISBN 978-90-272-1926-8, "Assuming that intelligibility is inversely related to linguistic distance ... the content words the percentage of cognates (related directly or via a synonym) ... lexical relatedness ... grammatical relatedness".
- ^ Wagner, Robert A.; Fischer, Michael J. (1974), "The String-to-String Correction Problem", Journal of the ACM, 21 (1): 168–173, doi:10.1145/321796.321811, S2CID 13381535
- ^ Hjelmqvist, Sten (26 March 2012), Fast, memory efficient Levenshtein algorithm.
- ^ Hirschberg, D. S. (1975). "A linear space algorithm for computing maximal common subsequences" (PDF). Communications of the ACM (Submitted manuscript). 18 (6): 341–343. CiteSeerX 10.1.1.348.4774. doi:10.1145/360825.360861. MR 0375829. S2CID 207694727.
- ^ Schulz, Klaus U.; Mihov, Stoyan (2002). "Fast String Correction with Levenshtein-Automata". International Journal of Document Analysis and Recognition. 5 (1): 67–85. CiteSeerX 10.1.1.16.652. doi:10.1007/s10032-002-0082-8. S2CID 207046453.
- ^ Andoni, Alexandr; Krauthgamer, Robert; Onak, Krzysztof (2010). Polylogarithmic approximation for edit distance and the asymmetric query complexity. IEEE Symp. Foundations of Computer Science (FOCS). arXiv:1005.4033. Bibcode:2010arXiv1005.4033A. CiteSeerX 10.1.1.208.2079.
- ^ Backurs, Arturs; Indyk, Piotr (2015). Edit Distance Cannot Be Computed in Strongly Subquadratic Time (unless SETH is false). Forty-Seventh Annual ACM on Symposium on Theory of Computing (STOC). arXiv:1412.0348. Bibcode:2014arXiv1412.0348B.
External links
- Black, Paul E., ed. (14 August 2008), "Levenshtein distance", Dictionary of Algorithms and Data Structures [online], U.S. National Institute of Standards and Technology, retrieved 2 November 2016
- Rosetta Code implementations of Levenshtein distance