
Markov random field

From Wikipedia, the free encyclopedia

This is an old revision of this page, as edited by Witbrock (talk | contribs) at 20:09, 17 November 2005 (link to Markov blanket). The present address (URL) is a permanent link to this revision, which may differ significantly from the current revision.

A Markov network, also called a Markov random field, is represented by:

- an undirected graph G = (V, E), where each vertex v ∈ V represents a random variable and each edge {u, v} ∈ E represents a dependency between the random variables u and v,

- a set of potential functions φ_k : D_k → ℝ≥0, where each k represents a clique in G and D_k is the set of all possible joint assignments to the random variables in clique k. In other words, each clique has associated with it a function from possible assignments to all of its nodes to the nonnegative real numbers.
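The representation above can be sketched in code. The following is a minimal illustration, not a library API: a tiny chain-structured network A – B – C over binary variables, with an edge set for the undirected graph and one potential table per (edge) clique. All names and numbers are hypothetical.

```python
# Undirected graph of the illustrative network A - B - C.
edges = {("A", "B"), ("B", "C")}

# One potential function per clique, here represented as a table
# mapping joint assignments of the clique's variables to nonnegative
# reals. The values need not sum to 1 -- they are not probabilities.
potentials = {
    ("A", "B"): {(0, 0): 3.0, (0, 1): 1.0, (1, 0): 1.0, (1, 1): 3.0},
    ("B", "C"): {(0, 0): 2.0, (0, 1): 1.0, (1, 0): 1.0, (1, 1): 2.0},
}
```

Here larger table entries (e.g. 3.0 for the agreeing assignments of A and B) encode that those joint assignments are more probable, in line with the interpretation of potentials described below.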

The potential functions used in a Markov network do not necessarily have a probabilistic interpretation by themselves, but a higher value indicates a more probable assignment to the variables in a clique. The network is used to represent the joint distribution over all variables represented by nodes in the graph. The probability of an assignment x to all nodes in the network is given by P(x) = (1/Z) ∏_k φ_k(x_k), where x_k is the restriction of x to the variables in clique k, and Z = Σ_x ∏_k φ_k(x_k) is a normalizing constant summing over all possible assignments to all nodes in the network.
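The product-of-potentials formula can be computed directly by enumeration on a small example. This sketch uses a hypothetical three-variable binary chain A – B – C with illustrative potential tables; the names are not from any library.

```python
from itertools import product

# Illustrative clique potentials for a chain A - B - C.
potentials = {
    ("A", "B"): {(0, 0): 3.0, (0, 1): 1.0, (1, 0): 1.0, (1, 1): 3.0},
    ("B", "C"): {(0, 0): 2.0, (0, 1): 1.0, (1, 0): 1.0, (1, 1): 2.0},
}
variables = ["A", "B", "C"]

def unnormalized(assign):
    """Product of clique potentials for one full assignment (a dict)."""
    p = 1.0
    for clique, table in potentials.items():
        p *= table[tuple(assign[v] for v in clique)]
    return p

# Z (the partition function) sums the product over every joint assignment.
Z = sum(unnormalized(dict(zip(variables, vals)))
        for vals in product([0, 1], repeat=len(variables)))

def prob(assign):
    """P(x) = (1/Z) * product of potentials."""
    return unnormalized(assign) / Z

# After dividing by Z, the values form a valid distribution (sum to 1).
total = sum(prob(dict(zip(variables, vals)))
            for vals in product([0, 1], repeat=len(variables)))
```

Note that computing Z this way takes time exponential in the number of variables, which foreshadows the intractability of exact inference discussed below.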

The Markov blanket of a node v in a Markov network is defined to be every node with an edge to v, i.e. all u such that {u, v} ∈ E. Every node v in a Markov network is conditionally independent of every other node given the Markov blanket of v.
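Because the Markov blanket in a Markov network is simply the node's graph neighbourhood, it is easy to compute from an edge set. A minimal sketch over a hypothetical four-node chain:

```python
# Illustrative undirected edge set for a chain A - B - C - D.
edges = {("A", "B"), ("B", "C"), ("C", "D")}

def markov_blanket(v, edges):
    """All nodes u sharing an edge with v, i.e. {u : {u, v} in E}."""
    return {u for e in edges for u in e if v in e and u != v}
```

For example, the blanket of B is {A, C}: given A and C, node B is conditionally independent of D.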

A Markov network is similar to a Bayesian network in its representation of dependencies, but a Markov network can represent dependencies that a Bayesian network cannot. A Bayesian network, for instance, cannot represent cyclic dependencies, while a Markov network can.

As in a Bayesian network, one may calculate the conditional distribution of a set of nodes A given values for another set of nodes B in the Markov network by summing over all possible assignments to the remaining nodes not in A or B; this is called exact inference. However, exact inference is a #P-complete problem, and thus computationally intractable in general. Approximation techniques such as Markov chain Monte Carlo and belief propagation are more feasible in practice.
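Exact inference by summation can be written out directly for a small network. This sketch computes P(A | C = 1) for a hypothetical binary chain A – B – C by summing the unnormalized product over the hidden variable B and renormalizing; the exponential loop over hidden assignments is exactly what makes this intractable at scale. All names and tables are illustrative.

```python
from itertools import product

# Illustrative clique potentials for a chain A - B - C.
potentials = {
    ("A", "B"): {(0, 0): 3.0, (0, 1): 1.0, (1, 0): 1.0, (1, 1): 3.0},
    ("B", "C"): {(0, 0): 2.0, (0, 1): 1.0, (1, 0): 1.0, (1, 1): 2.0},
}

def unnormalized(assign):
    """Product of clique potentials for one full assignment (a dict)."""
    p = 1.0
    for clique, table in potentials.items():
        p *= table[tuple(assign[v] for v in clique)]
    return p

def conditional(query_var, evidence, variables=("A", "B", "C")):
    """P(query_var | evidence) by exhaustive summation over hidden nodes."""
    hidden = [v for v in variables if v != query_var and v not in evidence]
    scores = {}
    for q in (0, 1):
        total = 0.0
        # Sum over every assignment to the hidden variables --
        # exponential in len(hidden), hence intractable in general.
        for vals in product((0, 1), repeat=len(hidden)):
            assign = dict(evidence, **{query_var: q}, **dict(zip(hidden, vals)))
            total += unnormalized(assign)
        scores[q] = total
    z = sum(scores.values())  # local normalization for the conditional
    return {q: s / z for q, s in scores.items()}

dist = conditional("A", {"C": 1})
```

For these tables the evidence C = 1 pulls A toward 1 through the intermediate node B, giving P(A = 1 | C = 1) > 1/2.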