Graphical model
In probability theory, statistics, and machine learning, a graphical model (GM) is a graph that represents independencies among random variables: each node is a random variable, and missing edges between nodes represent conditional independencies among the corresponding variables.
Two common types of GMs correspond to graphs with directed and undirected edges. If the network structure of the model is a directed acyclic graph (DAG), the GM represents a factorization of the joint probability of all random variables. More precisely, if the variables are X_1, \ldots, X_n, then the joint probability satisfies

P(X_1, \ldots, X_n) = \prod_{i=1}^{n} P(X_i \mid \mathrm{parents}(X_i)),

where \mathrm{parents}(X_i) denotes the set of parents of the node X_i in the DAG.
In other words, the joint distribution factors into a product of conditional distributions. Any two nodes that are not connected by an arrow are conditionally independent given the values of their parents. In general, any two sets of nodes are conditionally independent given a third set if a criterion called d-separation holds in the graph.
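For concreteness, here is a minimal Python sketch of such a factorization. The DAG (the chain Rain → Sprinkler → GrassWet) and its conditional probability tables are made-up illustrations, not taken from the article:

```python
from itertools import product

# Hypothetical conditional probability tables for the DAG
# Rain -> Sprinkler -> GrassWet; each maps a parent assignment
# to P(variable = True | parent).
p_rain = 0.2                              # P(Rain = True), no parents
p_sprinkler = {True: 0.01, False: 0.40}   # P(Sprinkler = True | Rain)
p_grass_wet = {True: 0.90, False: 0.05}   # P(GrassWet = True | Sprinkler)

def bernoulli(p_true, value):
    """P(X = value) for a binary variable with P(X = True) = p_true."""
    return p_true if value else 1.0 - p_true

def joint(rain, sprinkler, grass_wet):
    """Joint probability via the factorization P(R) * P(S | R) * P(G | S)."""
    return (bernoulli(p_rain, rain)
            * bernoulli(p_sprinkler[rain], sprinkler)
            * bernoulli(p_grass_wet[sprinkler], grass_wet))

# Because each factor is a normalized conditional distribution,
# the joint sums to 1 over all 2^3 assignments.
total = sum(joint(*v) for v in product([True, False], repeat=3))
assert abs(total - 1.0) < 1e-12
print(joint(True, False, False))  # P(rain, sprinkler off, grass dry) = 0.1881
```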
This type of graphical model is known as a directed graphical model, Bayesian network, or belief network. Classic machine learning models such as hidden Markov models and neural networks, as well as newer models such as variable-order Markov models, can be regarded as special cases of Bayesian networks.
Graphical models with undirected edges are generally called Markov random fields or Markov networks.
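As a sketch of the undirected case (a hypothetical three-variable chain A – B – C with made-up pairwise potentials, not from the article), the joint distribution of a Markov random field is proportional to a product of non-negative potential functions, normalized by a partition function Z:

```python
from itertools import product

def phi_ab(a, b):
    """Pairwise potential on edge A - B, favoring agreement (made-up values)."""
    return 2.0 if a == b else 1.0

def phi_bc(b, c):
    """Pairwise potential on edge B - C, favoring agreement (made-up values)."""
    return 3.0 if b == c else 1.0

def unnormalized(a, b, c):
    """Product of edge potentials; proportional to the joint probability."""
    return phi_ab(a, b) * phi_bc(b, c)

# Partition function: normalizes the product of potentials into a distribution.
Z = sum(unnormalized(a, b, c) for a, b, c in product([0, 1], repeat=3))

def p(a, b, c):
    return unnormalized(a, b, c) / Z

print(p(0, 0, 0))  # 6/24 = 0.25; (1, 1, 1) is equally probable by symmetry
```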
A third type of graphical model is a factor graph, which is an undirected bipartite graph connecting variables and factor nodes. Each factor represents a probability distribution over the variables it is connected to. In contrast to a Bayesian network, a factor may be connected to more than two nodes.
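A factor graph can be sketched directly as a bipartite structure: each factor stores the list of variables it touches, and those lists play the role of the graph's edges. The factors and numbers below are hypothetical, chosen only to show a factor (f2) connected to three variables at once:

```python
from itertools import product

# Bipartite structure: each factor is a (variables, function) pair, where the
# tuple of variable names lists the factor node's neighbors.
factors = [
    (("x1",),            lambda x1: 0.6 if x1 else 0.4),                      # f1 touches x1 only
    (("x1", "x2", "x3"), lambda x1, x2, x3: 2.0 if x1 == x2 == x3 else 1.0),  # f2 touches three variables
]

def factor_product(assignment):
    """Evaluate the product of all factors at one full assignment."""
    value = 1.0
    for variables, f in factors:
        value *= f(*(assignment[name] for name in variables))
    return value

# Normalizing the product over all assignments recovers the encoded distribution.
names = ("x1", "x2", "x3")
grid = [dict(zip(names, values)) for values in product([False, True], repeat=3)]
Z = sum(factor_product(a) for a in grid)
print(factor_product({"x1": True, "x2": True, "x3": True}) / Z)  # 1.2 / 5.0 = 0.24
```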
Applications of graphical models include speech recognition, computer vision, decoding of low-density parity-check codes, modeling of gene regulatory networks, gene finding and diagnosis of diseases.
A good reference for learning the basics of graphical models is Neapolitan's Learning Bayesian Networks (2004). A more advanced and statistically oriented book is Probabilistic Networks and Expert Systems (1999) by Cowell, Dawid, Lauritzen and Spiegelhalter.
A computational reasoning approach is provided in Pearl's Probabilistic Reasoning in Intelligent Systems (1988),[1] where the relationships between graphs and probabilities were formally introduced.
References
- Pearl, J. (1988). Probabilistic Reasoning in Intelligent Systems (revised second printing). San Mateo, CA: Morgan Kaufmann.