Law of total expectation
The proposition in probability theory known as the law of total expectation,[1] the law of iterated expectations, Adam's law, the tower rule, the smoothing theorem, among other names, states that if X is an integrable random variable (i.e., a random variable satisfying E( | X | ) < ∞) and Y is any random variable, not necessarily integrable, on the same probability space, then

    E( E( X | Y ) ) = E( X ),

i.e., the expected value of the conditional expected value of X given Y is the same as the expected value of X.
The nomenclature used here parallels the phrase law of total probability. See also law of total variance.
(The conditional expected value E( X | Y ) is a random variable in its own right, whose value depends on the value of Y. Notice that the conditional expected value of X given the event Y = y is a function of y; this is where adherence to the conventional, rigidly case-sensitive notation of probability theory becomes important. If we write E( X | Y = y ) = g(y), then the random variable E( X | Y ) is just g(Y).)
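As a quick numerical check of the identity, the following minimal Python sketch computes E( E( X | Y ) ) and E( X ) for a small discrete pair and confirms they agree. The joint distribution is invented purely for illustration:

    # Verify E(E(X|Y)) = E(X) for a small, made-up discrete joint distribution.
    # p[(x, y)] = P(X = x, Y = y); the numbers are illustrative only.
    p = {(0, 0): 0.1, (0, 1): 0.2, (1, 0): 0.3, (1, 1): 0.4}

    # Marginal distribution of Y: P(Y = y).
    p_y = {}
    for (x, y), pr in p.items():
        p_y[y] = p_y.get(y, 0.0) + pr

    # g(y) = E(X | Y = y) = sum_x x * P(X = x, Y = y) / P(Y = y).
    g = {y: sum(x * pr for (x, yy), pr in p.items() if yy == y) / p_y[y]
         for y in p_y}

    # E(E(X|Y)) = sum_y g(y) * P(Y = y), versus E(X) computed directly.
    lhs = sum(g[y] * p_y[y] for y in p_y)
    rhs = sum(x * pr for (x, y), pr in p.items())
    print(lhs, rhs)  # both approximately 0.7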
Proof in the discrete case
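A standard argument runs as follows, assuming for concreteness that X and Y both take at most countably many values, with g(y) = E( X | Y = y ) as above:

    E( E( X | Y ) ) = Σy E( X | Y = y ) · P( Y = y )
                    = Σy ( Σx x · P( X = x | Y = y ) ) · P( Y = y )
                    = Σy Σx x · P( X = x, Y = y )
                    = Σx x · Σy P( X = x, Y = y )
                    = Σx x · P( X = x )
                    = E( X ).

The interchange of the two sums in the middle step is justified because X is integrable, so the double sum converges absolutely.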
Iterated expectations with nested conditioning sets
The following formulation of the law of iterated expectations plays an important role in many economic and finance models:

    E( X | I1 ) = E( E( X | I2 ) | I1 ),

where the value of I1 is determined by that of I2. To build intuition, imagine an investor who forecasts a random stock price X based on the limited information set I1. The law of iterated expectations says that the investor can never gain a more precise forecast of X by conditioning on more specific information (I2), if the more specific forecast must itself be forecast with the original information (I1).
This formulation is often applied in a time-series context, where Et denotes expectations conditional on only the information observed up to and including time period t. In typical models the information set at time t + 1 contains all information available through time t, plus additional information revealed at time t + 1. One can then write[2]:

    Et( X ) = Et( Et+1( X ) ).
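The two-period Python sketch below makes this concrete, taking t = 0: the unconditional forecast E0( X ) coincides with the forecast of the time-1 forecast, E0( E1( X ) ). The shock probability and payoff table are assumed values, chosen only to make the arithmetic concrete:

    # Two-period illustration of Et(X) = Et(Et+1(X)) with t = 0.
    # A payoff X is revealed after two binary shocks; p_up and the payoffs
    # are invented for illustration.
    import itertools

    p_up = 0.6                                  # P(shock = 'u') in each period
    payoff = {('u', 'u'): 120, ('u', 'd'): 105,
              ('d', 'u'): 95,  ('d', 'd'): 80}  # X as a function of the shocks

    def pr(shock):
        return p_up if shock == 'u' else 1 - p_up

    # E1(X): expectation of X given the first shock (the time-1 information set).
    e1 = {s1: sum(payoff[(s1, s2)] * pr(s2) for s2 in 'ud') for s1 in 'ud'}

    # E0(X): nothing observed yet, so this is the unconditional expectation.
    lhs = sum(payoff[path] * pr(path[0]) * pr(path[1])
              for path in itertools.product('ud', repeat=2))
    # E0(E1(X)): average the time-1 forecasts over the first shock.
    rhs = sum(e1[s1] * pr(s1) for s1 in 'ud')
    print(lhs, rhs)  # both approximately 104.0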
See also

- Law of total probability
- Rule of Average Conditional Expectations
References
- ^ Neil A. Weiss, A Course in Probability, Addison–Wesley, 2005, pages 380–383.
- ^ Lars Ljungqvist and Thomas J. Sargent, Recursive Macroeconomic Theory, MIT Press. ISBN 978-0-262-12274-0.
- Patrick Billingsley, Probability and Measure, John Wiley & Sons, New York, 1995. ISBN 0-471-00710-2. (Theorem 34.4)
- http://sims.princeton.edu/yftp/Bubbles2007/ProbNotes.pdf, especially equations (16) through (18)