Markov property

In probability theory, the terms Markov property and Markov-type property refer to two closely related properties of a stochastic process. Their namesake is the Russian mathematician Andrey Markov.[1]

A stochastic process has the Markov property if the conditional probability distribution of future states of the process depends only upon the present state and a fixed number of past states; that is, future states are conditionally independent of past states older than that fixed number. A process with this property is called Markovian or a Markov process. The articles Markov chain and Continuous-time Markov process explore this property in greater detail.
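
The first-order case can be made concrete with a short simulation. The following Python fragment is a minimal sketch, with a hypothetical two-state weather chain and made-up transition probabilities; the point is that sampling the next state consults only the current state, never the earlier history.

    import random

    # Hypothetical two-state chain; states and probabilities are
    # illustrative, not taken from the article.
    TRANSITIONS = {
        "sunny": {"sunny": 0.8, "rainy": 0.2},
        "rainy": {"sunny": 0.4, "rainy": 0.6},
    }

    def step(state):
        """Sample the next state given only the current state."""
        r = random.random()
        cumulative = 0.0
        for nxt, p in TRANSITIONS[state].items():
            cumulative += p
            if r < cumulative:
                return nxt
        return nxt  # guard against floating-point round-off

    def simulate(start, n):
        path = [start]
        for _ in range(n):
            path.append(step(path[-1]))  # only the present state matters
        return path

    print(simulate("sunny", 10))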

A stochastic process has a Markov-type property if the probabilities determined by the process's random variables can be factored in a way that yields the Markov property. Such collections of random variables, classified by their mathematics or by their area of application, are referred to as Markov random fields; they are useful in applied research and occur in many situations. The Ising model is a prototypical example.
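
As a sketch of the Ising model viewed as a Markov random field, the following Python fragment performs Gibbs-sampling sweeps over a small spin lattice; the lattice size, coupling constant J, and inverse temperature BETA are arbitrary illustrative choices. Each spin is resampled from a conditional distribution that consults only its four lattice neighbours.

    import math
    import random

    N, J, BETA = 8, 1.0, 0.5  # illustrative lattice size, coupling, temperature
    spins = [[random.choice([-1, 1]) for _ in range(N)] for _ in range(N)]

    def neighbour_sum(i, j):
        # Periodic boundaries keep every site's neighbourhood uniform.
        return (spins[(i - 1) % N][j] + spins[(i + 1) % N][j]
                + spins[i][(j - 1) % N] + spins[i][(j + 1) % N])

    def gibbs_sweep():
        # Resample each spin given only its neighbours: the Markov
        # property of the field.
        for i in range(N):
            for j in range(N):
                field = J * neighbour_sum(i, j)
                p_up = 1.0 / (1.0 + math.exp(-2.0 * BETA * field))
                spins[i][j] = 1 if random.random() < p_up else -1

    for _ in range(100):
        gibbs_sweep()
    print("mean magnetisation:", sum(map(sum, spins)) / (N * N))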

Definition

If one has a system composed of a set of random variables $\{X_1, X_2, \ldots, X_N\}$, then, in general, the probability of a given random variable $X_i$ being in a state $x_i$ is written as

$$P\bigl(X_i = x_i \mid \{X_j = x_j\}_{j \neq i}\bigr).$$

That is, in general, the probability of $X_i$ being in state $x_i$ depends on the values of all of the other random variables $X_j$, $j \neq i$. If, instead, this probability depends on only some, but not all, of these, then one says that the collection has the Markov property.[2] Letting $N_i$ denote the subset of variables on which $X_i$ actually depends, one then writes this limited dependence as

$$P\bigl(X_i = x_i \mid \{X_j = x_j\}_{j \neq i}\bigr) = P\bigl(X_i = x_i \mid \{X_j = x_j\}_{j \in N_i}\bigr).$$

Any collection of random variables having this property is referred to as a Markov network. The set $N_i$ is sometimes referred to as the neighbors of $X_i$; alternately, it is the Markov blanket of $X_i$.
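
The limited dependence above can be verified numerically on a toy example. In the following Python sketch, three binary variables form a chain X1 → X2 → X3 with made-up conditional probabilities; computing P(X3 = 1 | X1, X2) from the full joint distribution shows that it does not vary with X1, i.e. the Markov blanket of X3 in this chain is just {X2}.

    from itertools import product

    # Made-up distributions for a chain X1 -> X2 -> X3.
    p1 = {0: 0.3, 1: 0.7}
    p2_given_1 = {0: {0: 0.9, 1: 0.1}, 1: {0: 0.2, 1: 0.8}}
    p3_given_2 = {0: {0: 0.6, 1: 0.4}, 1: {0: 0.3, 1: 0.7}}

    joint = {(a, b, c): p1[a] * p2_given_1[a][b] * p3_given_2[b][c]
             for a, b, c in product((0, 1), repeat=3)}

    def cond_p3(a, b):
        """P(X3 = 1 | X1 = a, X2 = b), computed from the joint."""
        return joint[(a, b, 1)] / (joint[(a, b, 0)] + joint[(a, b, 1)])

    for b in (0, 1):
        # Identical for both values of X1: X1 lies outside the blanket.
        print(f"X2={b}: {cond_p3(0, b):.4f} == {cond_p3(1, b):.4f}")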

The probability distribution of a Markov network can always be written as a Gibbs distribution, that is, as

$$P(X_1 = x_1, \ldots, X_N = x_N) = \frac{1}{Z} \exp\bigl(-E(x_1, \ldots, x_N)\bigr)$$

for an appropriate energy function $E$ defined on the subsets $N_i$. The normalizing constant $Z = \sum_{x_1, \ldots, x_N} \exp\bigl(-E(x_1, \ldots, x_N)\bigr)$ is known as the partition function.
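
For a small system the Gibbs distribution and its partition function can be computed by brute-force enumeration. The following Python sketch does this for a ring of four Ising spins; the coupling and inverse temperature are illustrative values, and enumeration is feasible only because the state space has 2^4 = 16 configurations.

    import math
    from itertools import product

    J, BETA, N = 1.0, 0.5, 4  # illustrative coupling, temperature, ring size

    def energy(s):
        # Nearest-neighbour energy on a ring of N spins.
        return -J * sum(s[i] * s[(i + 1) % N] for i in range(N))

    states = list(product((-1, 1), repeat=N))
    Z = sum(math.exp(-BETA * energy(s)) for s in states)  # partition function

    def gibbs_prob(s):
        return math.exp(-BETA * energy(s)) / Z

    print("Z =", Z)
    print("total probability:", sum(gibbs_prob(s) for s in states))  # ~1.0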

Markov networks are commonly seen in maximum entropy methods, since the Gibbs measure is also the unique probability measure that maximizes the entropy for a given expected value of the energy functional.
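
This maximum-entropy property can be illustrated numerically. The Python sketch below uses arbitrary energy levels and an arbitrary beta; it perturbs a Gibbs distribution along directions that preserve both total probability and expected energy, and confirms that no such perturbation increases the entropy.

    import math
    import random

    random.seed(0)
    energies = [0.0, 1.0, 2.0, 3.0]          # illustrative energy levels
    beta = 0.7                               # illustrative inverse temperature
    weights = [math.exp(-beta * e) for e in energies]
    Z = sum(weights)                         # partition function
    p = [w / Z for w in weights]             # Gibbs distribution

    def dot(a, b):
        return sum(x * y for x, y in zip(a, b))

    def entropy(q):
        return -sum(x * math.log(x) for x in q if x > 0)

    # Orthonormal basis for the two constraint directions (normalisation
    # and expected energy), built by Gram-Schmidt.
    u1 = [1.0 / math.sqrt(len(p))] * len(p)
    r = [e - dot(energies, u1) * a for e, a in zip(energies, u1)]
    norm = math.sqrt(dot(r, r))
    u2 = [x / norm for x in r]

    base = entropy(p)
    for _ in range(1000):
        v = [random.gauss(0.0, 1.0) for _ in p]
        c1, c2 = dot(v, u1), dot(v, u2)
        v = [x - c1 * a - c2 * b for x, a, b in zip(v, u1, u2)]
        q = [pi + 0.01 * vi for pi, vi in zip(p, v)]
        if min(q) <= 0.0:                    # skip invalid perturbations
            continue
        assert entropy(q) <= base            # Gibbs entropy is never beaten
    print("Gibbs entropy:", base)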

Notes

  1. ^ Markov, A. A. (1954). Theory of Algorithms. Translated by Jacques J. Schorr-Kon and PST staff. Moscow: Academy of Sciences of the USSR; Jerusalem: Israel Program for Scientific Translations, 1961. Works of the Mathematical Institute, Academy of Sciences of the USSR, v. 42. Original title: Teoriya algorifmov. Available from the Office of Technical Services, United States Department of Commerce, OTS 60-51085.
  2. ^ For a more advanced approach, cf. William Feller, An Introduction to Probability Theory and Its Applications, Vol. II (2nd ed.), Wiley, 1971, Ch. X, § 8, "Markov Processes and Semi-groups". LCCN 57-10805, ISBN 0-471-25709-5.