Earley parser

From Wikipedia, the free encyclopedia

{{Short description|Algorithm for parsing context-free languages}}
{{Infobox algorithm
|name={{PAGENAMEBASE}}
|class=[[Parsing]] grammars that are [[Context-free grammar|context-free]]
|data=[[String (computer science)|String]]
|time=<math>O(n^3)</math>
|best-time={{plainlist|
* <math>\Omega(n)</math> for all [[deterministic context-free grammar]]s
* <math>\Omega(n^2)</math> for [[Ambiguous grammar|unambiguous grammars]]
}}
|average-time=<math>\Theta(n^3)</math>
|space=
}}

In [[computer science]], the '''Earley parser''' is an [[algorithm]] for [[parsing]] [[String (computer science)|strings]] that belong to a given [[context-free language]], though (depending on the variant) it may suffer problems with certain [[Nullable grammar|nullable grammars]].<ref>{{cite web|last=Kegler|first=Jeffrey|title=What is the Marpa algorithm?|url=http://blogs.perl.org/users/jeffrey_kegler/2011/11/what-is-the-marpa-algorithm.html|access-date=20 August 2013}}</ref> The algorithm, named after its inventor, [[Jay Earley]], is a [[chart parser]] that uses [[dynamic programming]]; it is mainly used for parsing in [[computational linguistics]]. It was first introduced in his dissertation<ref name=Earley1>{{cite book
| last=Earley
| first=Jay
| title=An Efficient Context-Free Parsing Algorithm
| year=1968
| publisher=Carnegie-Mellon Dissertation
| url=http://reports-archive.adm.cs.cmu.edu/anon/anon/usr/ftp/scan/CMU-CS-68-earley.pdf
| access-date=2012-09-12
| archive-date=2017-09-22
| archive-url=https://web.archive.org/web/20170922004954/http://reports-archive.adm.cs.cmu.edu/anon/anon/usr/ftp/scan/CMU-CS-68-earley.pdf
| url-status=dead
}}</ref> in 1968 (and later appeared in an abbreviated, more legible form in a journal<ref name="Earley2">{{citation
| last = Earley | first = Jay | author-link = Jay Earley
| doi = 10.1145/362007.362035 | url = http://www-2.cs.cmu.edu/afs/cs.cmu.edu/project/cmt-55/lti/Courses/711/Class-notes/p94-earley.pdf | archive-url = https://web.archive.org/web/20040708052627/http://www-2.cs.cmu.edu/afs/cs.cmu.edu/project/cmt-55/lti/Courses/711/Class-notes/p94-earley.pdf | url-status = dead | archive-date = 2004-07-08 | issue = 2
| journal = [[Communications of the ACM]]
| pages = 94–102
| title = An efficient context-free parsing algorithm
| volume = 13
| year = 1970 | s2cid = 47032707 }}</ref>).


Earley parsers are appealing because they can parse all context-free languages, unlike [[LR parser]]s and [[LL parser]]s, which are more typically used in [[compiler]]s but which can only handle restricted classes of languages. The Earley parser executes in cubic time in the general case <math>{O}(n^3)</math>, where ''n'' is the length of the parsed string, quadratic time for [[unambiguous grammar]]s <math>{O}(n^2)</math>,<ref>{{cite book | isbn=978-0-201-02988-8 | author=John E. Hopcroft and Jeffrey D. Ullman | title=Introduction to Automata Theory, Languages, and Computation | location=Reading/MA | publisher=Addison-Wesley | year=1979 | url-access=registration | url=https://archive.org/details/introductiontoau00hopc }} p.145</ref> and linear time for all [[deterministic context-free grammar]]s. It performs particularly well when the rules are written [[left recursion|left-recursively]].

== Earley recogniser ==
The following algorithm describes the Earley recogniser. The recogniser can be modified to create a parse tree as it recognises, and in that way can be turned into a parser.

== The algorithm ==
In the following descriptions, α, β, and γ represent any [[string (computer science)|string]] of [[Terminal and nonterminal symbols|terminals/nonterminals]] (including the [[empty string]]), X and Y represent single nonterminals, and ''a'' represents a terminal symbol.

Earley's algorithm is a top-down [[dynamic programming]] algorithm. In the following, we use Earley's dot notation: given a [[Formal grammar#The syntax of grammars|production]] X → αβ, the notation X → α • β represents a condition in which α has already been parsed and β is expected.

Input position 0 is the position prior to input. Input position ''n'' is the position after accepting the ''n''th token. (Informally, input positions can be thought of as locations at [[Lexical analysis|token]] boundaries.) For every input position, the parser generates a ''state set''. Each state is a [[tuple]] (X → α • β, ''i''), consisting of

* the production currently being matched (X → α β)
* the current position in that production (visually represented by the dot •)
* the position ''i'' in the input at which the matching of this production began: the ''origin position''

(Earley's original algorithm included a look-ahead in the state; later research showed this to have little practical effect on the parsing efficiency, and it has subsequently been dropped from most implementations.)


A state is finished when its current position is the last position of the right side of the production, that is, when there is no symbol to the right of the dot • in the visual representation of the state.

The state set at input position ''k'' is called S(''k''). The parser is seeded with S(0) consisting of only the top-level rule. The parser then repeatedly executes three operations: ''prediction'', ''scanning'', and ''completion''.

* ''Prediction'': For every state in S(''k'') of the form (X → α • Y β, ''j'') (where ''j'' is the origin position as above), add (Y → • γ, ''k'') to S(''k'') for every production in the grammar with Y on the left-hand side (Y → γ).
* ''Scanning'': If ''a'' is the next symbol in the input stream, for every state in S(''k'') of the form (X → α • ''a'' β, ''j''), add (X → α ''a'' • β, ''j'') to S(''k''+1).
* ''Completion'': For every state in S(''k'') of the form (Y → γ •, ''j''), find all states in S(''j'') of the form (X → α • Y β, ''i'') and add (X → α Y • β, ''i'') to S(''k'').

Duplicate states are not added to the state set, only new ones. These three operations are repeated until no new states can be added to the set. The set is generally implemented as a queue of states to process, with the operation to be performed depending on what kind of state it is.

The algorithm accepts if (X → γ •, 0) ends up in S(''n''), where (X → γ) is the top-level rule and ''n'' is the input length; otherwise it rejects.
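A state can be represented concretely as a small record; the following sketch (Python, with illustrative names not taken from the article) shows the state tuple and the two tests that decide which of the three operations applies:

```python
from typing import NamedTuple, Optional, Tuple

class State(NamedTuple):
    """One Earley state: a dotted production (X -> α • β) plus its origin."""
    head: str              # the nonterminal X on the left-hand side
    body: Tuple[str, ...]  # the right-hand side α β as a sequence of symbols
    dot: int               # number of symbols already matched (position of •)
    origin: int            # input position i where matching of this production began

def is_finished(state: State) -> bool:
    """Finished: no symbol remains to the right of the dot."""
    return state.dot == len(state.body)

def next_symbol(state: State) -> Optional[str]:
    """The symbol immediately after the dot, or None when the state is finished."""
    return state.body[state.dot] if not is_finished(state) else None

# (X -> a • Y b, 0): 'a' has been matched, Y is expected next
s = State(head="X", body=("a", "Y", "b"), dot=1, origin=0)
```

A state whose next symbol is a nonterminal triggers prediction, one whose next symbol is a terminal triggers scanning, and a finished state triggers completion.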


== Pseudocode ==
Adapted from ''Speech and Language Processing''<ref name=Jurafsky>{{cite book|last=Jurafsky|first=D.|title=Speech and Language Processing: An Introduction to Natural Language Processing, Computational Linguistics, and Speech Recognition|year=2009|publisher=Pearson Prentice Hall|isbn=9780131873216|url=https://books.google.com/books?id=fZmj5UNK8AQC}}</ref> by [[Daniel Jurafsky]] and James H. Martin:

<syntaxhighlight lang="pascal">
DECLARE ARRAY S;

function INIT(words)
    S ← CREATE_ARRAY(LENGTH(words) + 1)
    for k from 0 to LENGTH(words) do
        S[k] ← EMPTY_ORDERED_SET

function EARLEY_PARSE(words, grammar)
    INIT(words)
    ADD_TO_SET((γ → •S, 0), S[0])
    for k from 0 to LENGTH(words) do
        for each state in S[k] do  // S[k] can expand during this loop
            if not FINISHED(state) then
                if NEXT_ELEMENT_OF(state) is a nonterminal then
                    PREDICTOR(state, k, grammar)         // non_terminal
                else do
                    SCANNER(state, k, words)             // terminal
            else do
                COMPLETER(state, k)
        end
    end
    return S

procedure PREDICTOR((A → α•Bβ, j), k, grammar)
    for each (B → γ) in GRAMMAR_RULES_FOR(B, grammar) do
        ADD_TO_SET((B → •γ, k), S[k])
    end

procedure SCANNER((A → α•aβ, j), k, words)
    if k < LENGTH(words) and a ∈ PARTS_OF_SPEECH(words[k]) then
        ADD_TO_SET((A → αa•β, j), S[k+1])
    end

procedure COMPLETER((B → γ•, x), k)
    for each (A → α•Bβ, j) in S[x] do
        ADD_TO_SET((A → αB•β, j), S[k])
    end
</syntaxhighlight>
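As a cross-check on the pseudocode above, here is a minimal runnable recogniser (Python; the dictionary-based grammar encoding and all names are illustrative choices, not from the article, and like the pseudocode it does not handle the nullable-grammar corner case mentioned in the lead):

```python
from collections import namedtuple

# (head, body, dot, origin): production being matched, dot position, origin.
State = namedtuple("State", "head body dot origin")

def earley_recognise(words, grammar, start):
    """True if `words` derives from `start`.  `grammar` maps each
    nonterminal to a list of right-hand sides (tuples of symbols);
    any symbol not in `grammar` is treated as a terminal."""
    n = len(words)
    S = [[] for _ in range(n + 1)]          # S[k]: ordered state set

    def add(k, st):                         # duplicate states are not re-added
        if st not in S[k]:
            S[k].append(st)

    top = State("GAMMA", (start,), 0, 0)    # synthetic top-level rule
    add(0, top)
    for k in range(n + 1):
        for st in S[k]:                     # S[k] can expand during this loop
            if st.dot < len(st.body):
                nxt = st.body[st.dot]
                if nxt in grammar:          # prediction
                    for rhs in grammar[nxt]:
                        add(k, State(nxt, rhs, 0, k))
                elif k < n and words[k] == nxt:   # scanning
                    add(k + 1, st._replace(dot=st.dot + 1))
            else:                           # completion
                for p in S[st.origin]:
                    if p.dot < len(p.body) and p.body[p.dot] == st.head:
                        add(k, p._replace(dot=p.dot + 1))
    return top._replace(dot=1) in S[n]

# The example grammar from the next section: P -> S; S -> S "+" M | M; ...
grammar = {
    "P": [("S",)],
    "S": [("S", "+", "M"), ("M",)],
    "M": [("M", "*", "T"), ("T",)],
    "T": [("1",), ("2",), ("3",), ("4",)],
}
print(earley_recognise("2 + 3 * 4".split(), grammar, "P"))  # True
```

Iterating over S[k] while appending to it mirrors the "S[k] can expand during this loop" comment in the pseudocode: Python list iteration sees elements appended during the loop.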


== Example ==
Consider the following simple grammar for arithmetic expressions:
<syntaxhighlight lang="text">
<P> ::= <S>      # the start rule
<S> ::= <S> "+" <M> | <M>
<M> ::= <M> "*" <T> | <T>
<T> ::= "1" | "2" | "3" | "4"
</syntaxhighlight>

With the input:
 2 + 3 * 4

This is the sequence of state sets:
{| class="wikitable"
! (state no.) !! Production !! (Origin) !! Comment
|-----------------------------------------
! scope="row" colspan="4" style="text-align:left; background:#e9e9e9;font-family:monospace" | S(0): • 2 + 3 * 4
|-
| 1 || style="font-family:monospace" |P → • S || 0 || start rule
|-
| 2 || style="font-family:monospace" |S → • S + M || 0 || predict from (1)
|-
| 3 || style="font-family:monospace" |S → • M || 0 || predict from (1)
|-
| 4 || style="font-family:monospace" |M → • M * T || 0 || predict from (3)
|-
| 5 || style="font-family:monospace" |M → • T || 0 || predict from (3)
|-
| 6 || style="font-family:monospace" |T → • number || 0 || predict from (5)
|-
! scope="row" colspan="4" style="text-align:left; background:#e9e9e9;font-family:monospace" | S(1): 2 • + 3 * 4
|-
| 1 || style="font-family:monospace" |T → number • || 0 || scan from S(0)(6)
|-
| 2 || style="font-family:monospace" |M → T • || 0 || complete from (1) and S(0)(5)
|-
| 3 || style="font-family:monospace" |M → M • * T || 0 || complete from (2) and S(0)(4)
|-
| 4 || style="font-family:monospace" |S → M • || 0 || complete from (2) and S(0)(3)
|-
| 5 || style="font-family:monospace" |S → S • + M || 0 || complete from (4) and S(0)(2)
|-
| 6 || style="font-family:monospace" |P → S • || 0 || complete from (4) and S(0)(1)
|-
! scope="row" colspan="4" style="text-align:left; background:#e9e9e9;font-family:monospace" | S(2): 2 + • 3 * 4
|-
| 1 || style="font-family:monospace" |S → S + • M || 0 || scan from S(1)(5)
|-
| 2 || style="font-family:monospace" |M → • M * T || 2 || predict from (1)
|-
| 3 || style="font-family:monospace" |M → • T || 2 || predict from (1)
|-
| 4 || style="font-family:monospace" |T → • number || 2 || predict from (3)
|-
! scope="row" colspan="4" style="text-align:left; background:#e9e9e9;font-family:monospace" | S(3): 2 + 3 • * 4
|-
| 1 || style="font-family:monospace" |T → number • || 2 || scan from S(2)(4)
|-
| 2 || style="font-family:monospace" |M → T • || 2 || complete from (1) and S(2)(3)
|-
| 3 || style="font-family:monospace" |M → M • * T || 2 || complete from (2) and S(2)(2)
|-
| 4 || style="font-family:monospace" |S → S + M • || 0 || complete from (2) and S(2)(1)
|-
| 5 || style="font-family:monospace" |S → S • + M || 0 || complete from (4) and S(0)(2)
|-
| 6 || style="font-family:monospace" |P → S • || 0 || complete from (4) and S(0)(1)
|-
! scope="row" colspan="4" style="text-align:left; background:#e9e9e9;font-family:monospace" | S(4): 2 + 3 * • 4
|-
| 1 || style="font-family:monospace" |M → M * • T || 2 || scan from S(3)(3)
|-
| 2 || style="font-family:monospace" |T → • number || 4 || predict from (1)
|-
! scope="row" colspan="4" style="text-align:left; background:#e9e9e9;font-family:monospace" | S(5): 2 + 3 * 4 •
|-
| 1 || style="font-family:monospace" |T → number • || 4 || scan from S(4)(2)
|-
| 2 || style="font-family:monospace" |M → M * T • || 2 || complete from (1) and S(4)(1)
|-
| 3 || style="font-family:monospace" |M → M • * T || 2 || complete from (2) and S(2)(2)
|-
| 4 || style="font-family:monospace" |S → S + M • || 0 || complete from (2) and S(2)(1)
|-
| 5 || style="font-family:monospace" |S → S • + M || 0 || complete from (4) and S(0)(2)
|-
| 6 || style="font-family:monospace" |P → S • || 0 || complete from (4) and S(0)(1)
|-
|}
The state (P → S •, 0) represents a completed parse. This state also appears in S(3) and S(1), since the prefixes "2" and "2 + 3" are themselves complete sentences.
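This claim can be checked mechanically. The following self-contained sketch (Python; the grammar encoding and names are illustrative, not from the article) reports every position ''k'' at which a completed start rule with origin 0 appears in S(''k''):

```python
# Compact Earley recogniser used only to cross-check the table above.
# Nonterminals are the dict keys; everything else is a terminal.
GRAMMAR = {
    "P": [("S",)],
    "S": [("S", "+", "M"), ("M",)],
    "M": [("M", "*", "T"), ("T",)],
    "T": [("1",), ("2",), ("3",), ("4",)],
}

def complete_positions(words, grammar, start="P"):
    """Positions k at which (start -> ... •, 0) appears in S(k)."""
    n = len(words)
    S = [[] for _ in range(n + 1)]
    def add(k, st):
        if st not in S[k]:
            S[k].append(st)
    for rhs in grammar[start]:              # seed with the start rule
        add(0, (start, rhs, 0, 0))
    for k in range(n + 1):
        for (head, body, dot, origin) in S[k]:   # S[k] may grow as we go
            if dot < len(body):
                nxt = body[dot]
                if nxt in grammar:                   # prediction
                    for rhs in grammar[nxt]:
                        add(k, (nxt, rhs, 0, k))
                elif k < n and words[k] == nxt:      # scanning
                    add(k + 1, (head, body, dot + 1, origin))
            else:                                    # completion
                for (h2, b2, d2, o2) in S[origin]:
                    if d2 < len(b2) and b2[d2] == head:
                        add(k, (h2, b2, d2 + 1, o2))
    return [k for k in range(n + 1)
            if any(h == start and d == len(b) and o == 0
                   for (h, b, d, o) in S[k])]

print(complete_positions("2 + 3 * 4".split(), GRAMMAR))  # [1, 3, 5]
```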


== Constructing the parse forest ==
Earley's dissertation<ref name=Earley3>{{cite book
| last=Earley
| first=Jay
| title=An Efficient Context-Free Parsing Algorithm
| year=1968
| publisher=Carnegie-Mellon Dissertation
| page=106
| url=http://reports-archive.adm.cs.cmu.edu/anon/anon/usr/ftp/scan/CMU-CS-68-earley.pdf
| access-date=2012-09-12
| archive-date=2017-09-22
| archive-url=https://web.archive.org/web/20170922004954/http://reports-archive.adm.cs.cmu.edu/anon/anon/usr/ftp/scan/CMU-CS-68-earley.pdf
| url-status=dead
}}</ref> briefly describes an algorithm for constructing parse trees by adding a set of pointers from each non-terminal in an Earley item back to the items that caused it to be recognized. But [[Masaru Tomita|Tomita]] noticed<ref>{{cite book|last1=Tomita|first1=Masaru|title=Efficient Parsing for Natural Language: A Fast Algorithm for Practical Systems|date=April 17, 2013|publisher=Springer Science and Business Media|isbn=978-1475718850|page=74|url=https://books.google.com/books?id=DAjkBwAAQBAJ&q=Tomita%20Efficient%20Parsing%20for%20natural%20Language&pg=PA74|access-date=16 September 2015}}</ref> that this does not take into account the relations between symbols, so if we consider the grammar S → SS | b and the string bbb, it only notes that each S can match one or two b's, and thus produces spurious derivations for bb and bbbb as well as the two correct derivations for bbb.


Another method<ref>{{cite journal|last1=Scott|first1=Elizabeth|title=SPPF-Style Parsing From Earley Recognizers|journal=Electronic Notes in Theoretical Computer Science|date=April 1, 2008|volume=203|issue=2|pages=53–67|doi=10.1016/j.entcs.2008.03.044|doi-access=free}}</ref> is to build the parse forest as you go, augmenting each Earley item with a pointer to a shared packed parse forest (SPPF) node labelled with a triple (s, i, j) where s is a symbol or an LR(0) item (production rule with dot), and i and j give the section of the input string derived by this node. A node's contents are either a pair of child pointers giving a single derivation, or a list of "packed" nodes each containing a pair of pointers and representing one derivation. SPPF nodes are unique (there is only one with a given label), but may contain more than one derivation for [[syntactic ambiguity|ambiguous]] parses. So even if an operation does not add an Earley item (because it already exists), it may still add a derivation to the item's parse forest.
* Predicted items have a null SPPF pointer.
* The scanner creates an SPPF node representing the non-terminal it is scanning.
* Then, when the scanner or completer advances an item, it adds a derivation whose children are the node from the item whose dot was advanced, and the one for the new symbol that was advanced over (the non-terminal or completed item).

SPPF nodes are never labeled with a completed LR(0) item: instead they are labelled with the symbol that is produced so that all derivations are combined under one node regardless of which alternative production they come from.
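The sharing described above can be sketched with a small data structure. The following hypothetical Python fragment (the names and the dict used as a uniqueness table are illustrative, not from Scott's paper) shows one node labelled (s, i, j) accumulating two packed derivations, as in the S → SS | b example:

```python
from dataclasses import dataclass, field
from typing import Dict, List, Optional, Tuple

Label = Tuple[str, int, int]   # (symbol or dotted rule, start, end)

@dataclass
class SPPFNode:
    label: Label
    # Each packed entry is one derivation: a pair of child labels
    # (either may be None for unary or empty derivations).
    packed: List[Tuple[Optional[Label], Optional[Label]]] = field(default_factory=list)

    def add_derivation(self, left: Optional[Label], right: Optional[Label]) -> None:
        if (left, right) not in self.packed:   # re-adding a derivation is a no-op
            self.packed.append((left, right))

# Uniqueness: one node per label, shared by every item that references it.
nodes: Dict[Label, SPPFNode] = {}
def get_node(label: Label) -> SPPFNode:
    return nodes.setdefault(label, SPPFNode(label))

# With S -> SS | b and input "bbb", the node for S over the whole input
# ends up with two packed derivations: split after the first b or the second.
s13 = get_node(("S", 0, 3))
s13.add_derivation(("S", 0, 1), ("S", 1, 3))
s13.add_derivation(("S", 0, 2), ("S", 2, 3))
```

Because the node is looked up by label, a completion that re-derives the same item adds (at most) a new packed derivation rather than a new node, which is exactly how ambiguity is recorded without duplicating Earley items.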

== Optimizations ==
Philippe McLean and R. Nigel Horspool in their paper [https://link.springer.com/content/pdf/10.1007%2F3-540-61053-7_68.pdf "A Faster Earley Parser"] combine Earley parsing with LR parsing and achieve an improvement of an order of magnitude.

== See also ==
* [[CYK algorithm]]
* [[Context-free grammar]]
* [[List of algorithms#Parsing|Parsing algorithms]]


== Citations ==
{{Reflist}}

== Other reference materials ==
*{{cite journal
| last1 = Aycock | first1 = John
| last2 = Horspool | first2 = R. Nigel | author2-link = Nigel Horspool
| title = Practical Earley Parsing
| volume = 45
| year = 2002
| citeseerx = 10.1.1.12.4254
}}
*{{citation
| last = Leo | first = Joop M. I. M.
| title = A general context-free parsing algorithm running in linear time on every LR(''k'') grammar without using lookahead
| volume = 82
| year = 1991
| doi-access = free
}}


*{{cite conference |first= Masaru|last= Tomita|title= LR parsers for natural languages |conference= 10th International Conference on Computational Linguistics |book-title= COLING|pages= 354–357|year= 1984|url=https://aclanthology.info/pdf/P/P84/P84-1073.pdf}}


{{parsers}}
[[Category:Parsing algorithms]]
[[Category:Dynamic programming]]


Latest revision as of 12:59, 20 November 2024

Earley parser
ClassParsing grammars that are context-free
Data structureString
Worst-case performance
Best-case performance
Average performance

In computer science, the Earley parser is an algorithm for parsing strings that belong to a given context-free language, though (depending on the variant) it may suffer problems with certain nullable grammars.[1] The algorithm, named after its inventor, Jay Earley, is a chart parser that uses dynamic programming; it is mainly used for parsing in computational linguistics. It was first introduced in his dissertation[2] in 1968 (and later appeared in an abbreviated, more legible, form in a journal[3]).

Earley parsers are appealing because they can parse all context-free languages, unlike LR parsers and LL parsers, which are more typically used in compilers but which can only handle restricted classes of languages. The Earley parser executes in cubic time in the general case , where n is the length of the parsed string, quadratic time for unambiguous grammars ,[4] and linear time for all deterministic context-free grammars. It performs particularly well when the rules are written left-recursively.

Earley recogniser

[edit]

The following algorithm describes the Earley recogniser. The recogniser can be modified to create a parse tree as it recognises, and in that way can be turned into a parser.

The algorithm

[edit]

In the following descriptions, α, β, and γ represent any string of terminals/nonterminals (including the empty string), X and Y represent single nonterminals, and a represents a terminal symbol.

Earley's algorithm is a top-down dynamic programming algorithm. In the following, we use Earley's dot notation: given a production X → αβ, the notation X → α • β represents a condition in which α has already been parsed and β is expected.

Input position 0 is the position prior to input. Input position n is the position after accepting the nth token. (Informally, input positions can be thought of as locations at token boundaries.) For every input position, the parser generates a state set. Each state is a tuple (X → α • β, i), consisting of

  • the production currently being matched (X → α β)
  • the current position in that production (visually represented by the dot •)
  • the position i in the input at which the matching of this production began: the origin position

(Earley's original algorithm included a look-ahead in the state; later research showed this to have little practical effect on the parsing efficiency, and it has subsequently been dropped from most implementations.)

A state is finished when its current position is the last position of the right side of the production, that is, when there is no symbol to the right of the dot • in the visual representation of the state.

The state set at input position k is called S(k). The parser is seeded with S(0) consisting of only the top-level rule. The parser then repeatedly executes three operations: prediction, scanning, and completion.

  • Prediction: For every state in S(k) of the form (X → α • Y β, j) (where j is the origin position as above), add (Y → • γ, k) to S(k) for every production in the grammar with Y on the left-hand side (Y → γ).
  • Scanning: If a is the next symbol in the input stream, for every state in S(k) of the form (X → α • a β, j), add (X → α a • β, j) to S(k+1).
  • Completion: For every state in S(k) of the form (Y → γ •, j), find all states in S(j) of the form (X → α • Y β, i) and add (X → α Y • β, i) to S(k).

Duplicate states are not added to the state set, only new ones. These three operations are repeated until no new states can be added to the set. The set is generally implemented as a queue of states to process, with the operation to be performed depending on what kind of state it is.

The algorithm accepts if (X → γ •, 0) ends up in S(n), where (X → γ) is the top level-rule and n the input length, otherwise it rejects.

Pseudocode

[edit]

Adapted from Speech and Language Processing[5] by Daniel Jurafsky and James H. Martin,

DECLARE ARRAY S;

function INIT(words)
    S  CREATE_ARRAY(LENGTH(words) + 1)
    for k  from 0 to LENGTH(words) do
        S[k]  EMPTY_ORDERED_SET

function EARLEY_PARSE(words, grammar)
    INIT(words)
    ADD_TO_SET((γ  S, 0), S[0])
    for k  from 0 to LENGTH(words) do
        for each state in S[k] do  // S[k] can expand during this loop
            if not FINISHED(state) then
                if NEXT_ELEMENT_OF(state) is a nonterminal then
                    PREDICTOR(state, k, grammar)         // non_terminal
                else do
                    SCANNER(state, k, words)             // terminal
            else do
                COMPLETER(state, k)
        end
    end
    return chart

procedure PREDICTOR((A  α•Bβ, j), k, grammar)
    for each (B  γ) in GRAMMAR_RULES_FOR(B, grammar) do
        ADD_TO_SET((B  •γ, k), S[k])
    end

procedure SCANNER((A  α•aβ, j), k, words)
    if j < LENGTH(words) and a  PARTS_OF_SPEECH(words[k]) then
        ADD_TO_SET((A  αa•β, j), S[k+1])
    end

procedure COMPLETER((B  γ•, x), k)
    for each (A  α•Bβ, j) in S[x] do
        ADD_TO_SET((A  αB•β, j), S[k])
    end

Example

[edit]

Consider the following simple grammar for arithmetic expressions:

<P> ::= <S>      # the start rule
<S> ::= <S> "+" <M> | <M>
<M> ::= <M> "*" <T> | <T>
<T> ::= "1" | "2" | "3" | "4"

With the input:

2 + 3 * 4

This is the sequence of state sets:

(state no.)  Production     (Origin)  Comment

S(0): • 2 + 3 * 4
 1   P → • S          0   start rule
 2   S → • S + M      0   predict from (1)
 3   S → • M          0   predict from (1)
 4   M → • M * T      0   predict from (3)
 5   M → • T          0   predict from (3)
 6   T → • number     0   predict from (5)

S(1): 2 • + 3 * 4
 1   T → number •     0   scan from S(0)(6)
 2   M → T •          0   complete from (1) and S(0)(5)
 3   M → M • * T      0   complete from (2) and S(0)(4)
 4   S → M •          0   complete from (2) and S(0)(3)
 5   S → S • + M      0   complete from (4) and S(0)(2)
 6   P → S •          0   complete from (4) and S(0)(1)

S(2): 2 + • 3 * 4
 1   S → S + • M      0   scan from S(1)(5)
 2   M → • M * T      2   predict from (1)
 3   M → • T          2   predict from (1)
 4   T → • number     2   predict from (3)

S(3): 2 + 3 • * 4
 1   T → number •     2   scan from S(2)(4)
 2   M → T •          2   complete from (1) and S(2)(3)
 3   M → M • * T      2   complete from (2) and S(2)(2)
 4   S → S + M •      0   complete from (2) and S(2)(1)
 5   S → S • + M      0   complete from (4) and S(0)(2)
 6   P → S •          0   complete from (4) and S(0)(1)

S(4): 2 + 3 * • 4
 1   M → M * • T      2   scan from S(3)(3)
 2   T → • number     4   predict from (1)

S(5): 2 + 3 * 4 •
 1   T → number •     4   scan from S(4)(2)
 2   M → M * T •      2   complete from (1) and S(4)(1)
 3   M → M • * T      2   complete from (2) and S(2)(2)
 4   S → S + M •      0   complete from (2) and S(2)(1)
 5   S → S • + M      0   complete from (4) and S(0)(2)
 6   P → S •          0   complete from (4) and S(0)(1)

The state (P → S •, 0) in S(5) represents a completed parse of the full input. This state also appears in S(1) and S(3), because "2" and "2 + 3" are themselves complete sentences of the grammar.

Constructing the parse forest

Earley's dissertation[6] briefly describes an algorithm for constructing parse trees by adding a set of pointers from each non-terminal in an Earley item back to the items that caused it to be recognized. But Tomita noticed[7] that this does not take into account the relations between symbols, so if we consider the grammar S → SS | b and the string bbb, it only notes that each S can match one or two b's, and thus produces spurious derivations for bb and bbbb as well as the two correct derivations for bbb.

Another method[8] is to build the parse forest as you go, augmenting each Earley item with a pointer to a shared packed parse forest (SPPF) node labelled with a triple (s, i, j) where s is a symbol or an LR(0) item (production rule with dot), and i and j give the section of the input string derived by this node. A node's contents are either a pair of child pointers giving a single derivation, or a list of "packed" nodes each containing a pair of pointers and representing one derivation. SPPF nodes are unique (there is only one with a given label), but may contain more than one derivation for ambiguous parses. So even if an operation does not add an Earley item (because it already exists), it may still add a derivation to the item's parse forest.

  • Predicted items have a null SPPF pointer.
  • The scanner creates an SPPF node representing the non-terminal it is scanning.
  • Then when the scanner or completer advances an item, it adds a derivation whose children are the node from the item whose dot was advanced and the one for the new symbol that was advanced over (the non-terminal or completed item).

SPPF nodes are never labeled with a completed LR(0) item: instead they are labelled with the symbol that is produced so that all derivations are combined under one node regardless of which alternative production they come from.
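The SPPF structure described above can be sketched with a small Python class (the class and field names are illustrative choices for this sketch, not from the cited paper). Each node carries a (symbol, i, j) label and a list of derivation families; a node with more than one family packs an ambiguity:

```python
from dataclasses import dataclass, field

@dataclass
class SPPFNode:
    # label is (symbol, i, j): the symbol and the input span it derives
    label: tuple
    # Each family is one derivation (a tuple of child nodes); more than
    # one family means the node packs several derivations (ambiguity).
    families: list = field(default_factory=list)

    def add_family(self, children):
        if children not in self.families:   # adding a derivation is idempotent
            self.families.append(children)

    def is_ambiguous(self):
        return len(self.families) > 1

# For the grammar S -> SS | b and the input "bbb" discussed above, the
# node for S over the whole string packs the two correct derivations
# (S over b|bb and S over bb|b) under one label:
root = SPPFNode(("S", 0, 3))
root.add_family((SPPFNode(("S", 0, 1)), SPPFNode(("S", 1, 3))))
root.add_family((SPPFNode(("S", 0, 2)), SPPFNode(("S", 2, 3))))
```

Because nodes are shared by label, the two derivations of "bbb" coexist under one root without duplicating subtrees, and no spurious derivation for "bb" or "bbbb" is introduced.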

Optimizations

Philippe McLean and R. Nigel Horspool, in their paper "A Faster Earley Parser", combine Earley parsing with LR parsing and achieve a speed improvement of an order of magnitude.


Citations

  1. ^ Kegler, Jeffrey. "What is the Marpa algorithm?". Retrieved 20 August 2013.
  2. ^ Earley, Jay (1968). An Efficient Context-Free Parsing Algorithm (PDF). Carnegie-Mellon Dissertation. Archived from the original (PDF) on 2017-09-22. Retrieved 2012-09-12.
  3. ^ Earley, Jay (1970), "An efficient context-free parsing algorithm" (PDF), Communications of the ACM, 13 (2): 94–102, doi:10.1145/362007.362035, S2CID 47032707, archived from the original (PDF) on 2004-07-08
  4. ^ John E. Hopcroft and Jeffrey D. Ullman (1979). Introduction to Automata Theory, Languages, and Computation. Reading/MA: Addison-Wesley. ISBN 978-0-201-02988-8. p.145
  5. ^ Jurafsky, D. (2009). Speech and Language Processing: An Introduction to Natural Language Processing, Computational Linguistics, and Speech Recognition. Pearson Prentice Hall. ISBN 9780131873216.
  6. ^ Earley, Jay (1968). An Efficient Context-Free Parsing Algorithm (PDF). Carnegie-Mellon Dissertation. p. 106. Archived from the original (PDF) on 2017-09-22. Retrieved 2012-09-12.
  7. ^ Tomita, Masaru (April 17, 2013). Efficient Parsing for Natural Language: A Fast Algorithm for Practical Systems. Springer Science and Business Media. p. 74. ISBN 978-1475718850. Retrieved 16 September 2015.
  8. ^ Scott, Elizabeth (April 1, 2008). "SPPF-Style Parsing From Earley Recognizers". Electronic Notes in Theoretical Computer Science. 203 (2): 53–67. doi:10.1016/j.entcs.2008.03.044.
