Hidden Markov model

[Figure: MarkovModel.png — state transitions in a hidden Markov model (example). x — hidden states; y — observable outputs; a — transition probabilities; b — output probabilities.]

A hidden Markov model (HMM) is a statistical model in which the system being modeled is assumed to be a Markov process with unknown parameters, and the challenge is to determine the hidden parameters from the observable parameters. The extracted model parameters can then be used to perform further analysis, for example for pattern recognition applications. An HMM can be considered the simplest dynamic Bayesian network.

In a regular Markov model, the state is directly visible to the observer, and therefore the state transition probabilities are the only parameters. In a hidden Markov model, the state is not directly visible, but variables influenced by the state are visible. Each state has a probability distribution over the possible output tokens. Therefore the sequence of tokens generated by an HMM gives some information about the sequence of states.

Hidden Markov models are especially known for their application in temporal pattern recognition, such as speech, handwriting, and gesture recognition, and in bioinformatics.

Architecture of a Hidden Markov Model

The diagram below shows the general architecture of an HMM. Each oval shape represents a random variable that can adopt a number of values. The random variable $x(t)$ is the value of the hidden variable at time $t$. The random variable $y(t)$ is the value of the observed variable at time $t$. The arrows in the diagram denote conditional dependencies.

From the diagram, it is clear that the value of the hidden variable $x(t)$ (at time $t$) only depends on the value of the hidden variable $x(t-1)$ (at time $t-1$). This is called the Markov property. Similarly, the value of the observed variable $y(t)$ only depends on the value of the hidden variable $x(t)$ (both at time $t$).
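
As a rough illustration of this structure, the following Python sketch samples a hidden sequence and its observations; the state names, output names, and all probabilities here are hypothetical, chosen only to make the two dependencies concrete:

```python
import random

# Hypothetical model (illustrative values only).
states = ["s0", "s1"]                           # possible values of the hidden variable
outputs = ["o0", "o1", "o2"]                    # possible values of the observed variable

pi = {"s0": 0.6, "s1": 0.4}                     # initial distribution over hidden states
a = {"s0": {"s0": 0.7, "s1": 0.3},              # a[i][j] = P(x(t) = j | x(t-1) = i)
     "s1": {"s0": 0.4, "s1": 0.6}}
b = {"s0": {"o0": 0.5, "o1": 0.4, "o2": 0.1},   # b[i][k] = P(y(t) = k | x(t) = i)
     "s1": {"o0": 0.1, "o1": 0.3, "o2": 0.6}}

def draw(dist):
    """Sample one key from a {key: probability} dictionary."""
    return random.choices(list(dist), weights=list(dist.values()))[0]

def sample(length):
    """Generate a (hidden, observed) sequence pair of the given length."""
    xs, ys = [], []
    for t in range(length):
        # Markov property: x(t) depends only on x(t-1) (or on pi when t = 0)...
        xs.append(draw(pi) if t == 0 else draw(a[xs[-1]]))
        # ...and y(t) depends only on x(t).
        ys.append(draw(b[xs[-1]]))
    return xs, ys

print(sample(5))
```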


[Figure: Temporal evolution of a hidden Markov model]

Probability of an observed sequence

The probability of observing a sequence $Y = y(0), y(1), \dots, y(L-1)$ of length $L$ is given by

$$P(Y) = \sum_{X} P(Y \mid X)\,P(X),$$

where the sum runs over all possible hidden node sequences $X = x(0), x(1), \dots, x(L-1)$. A brute-force calculation of $P(Y)$ is intractable for realistic problems, as the number of possible hidden node sequences is typically extremely high. The calculation can, however, be sped up enormously using an algorithm called the forward-backward procedure.[1]
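
As a minimal sketch of where the speed-up comes from (reusing the hypothetical states, pi, a, and b from the sampler above), the forward recursion, one half of the forward-backward procedure, computes $P(Y)$ by dynamic programming over states rather than over whole paths:

```python
def forward_probability(ys):
    """Compute P(Y) for an observed sequence ys by the forward recursion.
    alpha[j] holds P(y(0)..y(t), x(t) = j) for the current t, so the total
    cost is O(L * N^2) for N states rather than the O(N^L) brute-force sum
    over all hidden sequences."""
    # Initialisation: alpha[j] = pi[j] * b[j][y(0)]
    alpha = {j: pi[j] * b[j][ys[0]] for j in states}
    for y in ys[1:]:
        # Recursion: alpha'[j] = (sum_i alpha[i] * a[i][j]) * b[j][y]
        alpha = {j: sum(alpha[i] * a[i][j] for i in states) * b[j][y]
                 for j in states}
    # Termination: P(Y) = sum_j alpha[L-1][j]
    return sum(alpha.values())

print(forward_probability(["o0", "o2", "o1"]))
```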

Using Hidden Markov Models

There are three canonical problems associated with HMMs:

  • Given the parameters of the model, compute the probability of a particular output sequence. This problem is solved by the forward-backward procedure.
  • Given the parameters of the model, find the most likely sequence of hidden states that could have generated a given output sequence. This problem is solved by the Viterbi algorithm (a short sketch follows this list).
  • Given an output sequence or a set of such sequences, find the most likely set of state transition and output probabilities. In other words, train the parameters of the HMM given a dataset of sequences. This problem is solved by the Baum-Welch algorithm.
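
For the second problem, here is a minimal sketch of the Viterbi algorithm, again reusing the hypothetical model from the sketches above. It replaces the sum of the forward recursion with a maximum and keeps back-pointers for recovering the best path:

```python
def viterbi(ys):
    """Return the most likely hidden-state sequence for observations ys.
    delta[j] is the probability of the best path ending in state j at the
    current step; back[t][j] records that path's predecessor state."""
    delta = {j: pi[j] * b[j][ys[0]] for j in states}
    back = []
    for y in ys[1:]:
        # For each state j, choose the predecessor i maximising delta[i] * a[i][j].
        psi = {j: max(states, key=lambda i, j=j: delta[i] * a[i][j]) for j in states}
        delta = {j: delta[psi[j]] * a[psi[j]][j] * b[j][y] for j in states}
        back.append(psi)
    # Backtrack from the most probable final state.
    path = [max(delta, key=delta.get)]
    for psi in reversed(back):
        path.append(psi[path[-1]])
    return list(reversed(path))

print(viterbi(["o0", "o2", "o1"]))
```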

A concrete example

[Template:HMM example]

This example is further elaborated in the Viterbi algorithm page.

Applications of hidden Markov models

History

Hidden Markov Models were first described in a series of statistical papers by Leonard E. Baum and other authors in the second half of the 1960s. One of the first applications of HMMs was speech recognition, starting in the mid-1970s.[2]

In the second half of the 1980s, HMMs began to be applied to the analysis of biological sequences, in particular DNA. Since then, they have become ubiquitous in the field of bioinformatics.[3]

Notes

  1. ^ Rabiner, p. 262
  2. ^ Rabiner, p. 258
  3. ^ Durbin et al.

References

  • Lawrence R. Rabiner (February 1989). "A Tutorial on Hidden Markov Models and Selected Applications in Speech Recognition". Proceedings of the IEEE 77 (2): 257–286.
  • Richard Durbin, Sean R. Eddy, Anders Krogh, Graeme Mitchison (1998). Biological Sequence Analysis: Probabilistic Models of Proteins and Nucleic Acids. Cambridge University Press.

See also