Distributed lag

From Wikipedia, the free encyclopedia
Revision as of 23:20, 10 March 2009

In statistics, a distributed lag model explains a time series by a series of lags of the same variable. In general <math>y_t=\sum \beta_i y_{t-i} +\epsilon_t</math>, where <math>y_t</math> is the time series and <math>\epsilon_t</math> is the error.

[1]

  1. ^ Jeff B. Cromwell et al. (1994). Multivariate Tests For Time Series Models. SAGE Publications, Inc. ISBN 0-8039-5440-9.
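The model above can be illustrated with a short simulation. The following Python sketch (not part of the article; the function name, lag coefficients, and noise scale are illustrative assumptions) generates a series from <math>y_t=\sum \beta_i y_{t-i} +\epsilon_t</math> with Gaussian errors:

```python
import random

def simulate_lagged_series(betas, n, noise_scale=0.1, seed=0):
    """Simulate y_t = sum_i beta_i * y_{t-i} + eps_t.

    betas: assumed lag coefficients (beta_1, ..., beta_p)
    n: length of the simulated series
    eps_t is drawn as Gaussian noise with standard deviation noise_scale.
    """
    rng = random.Random(seed)
    p = len(betas)
    # Seed the first p observations with pure noise.
    y = [rng.gauss(0, noise_scale) for _ in range(p)]
    for t in range(p, n):
        # Weighted sum of the p most recent lagged values.
        lagged = sum(b * y[t - i - 1] for i, b in enumerate(betas))
        y.append(lagged + rng.gauss(0, noise_scale))
    return y

# Example: a lag-2 model with assumed coefficients 0.5 and 0.2.
series = simulate_lagged_series([0.5, 0.2], n=20)
```

Each new observation is a linear combination of its own recent past plus an error term, which is the defining feature of the model.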