Distributed lag

In statistics, a distributed lag model is a model for time series data in which a regression-like equation is used to predict current values of a dependent variable from both the current values of an explanatory variable and its lagged (past-period) values.[1]
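As a minimal sketch of this idea, a finite distributed lag model of order n can be written as follows (the notation here is chosen for illustration: y_t is the dependent variable, x_t the explanatory variable, a the intercept, w_0, ..., w_n the lag weights, and \varepsilon_t the error term):

    y_t = a + w_0 x_t + w_1 x_{t-1} + w_2 x_{t-2} + \cdots + w_n x_{t-n} + \varepsilon_t

The lag weights w_0, ..., w_n describe how the effect of a change in the explanatory variable is distributed across the current and the n preceding periods.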

References

  1. ^ Cromwell, Jeff B., et al. (1994). Multivariate Tests For Time Series Models. SAGE Publications, Inc. ISBN 0-8039-5440-9.