Singular spectrum analysis

Singular spectrum analysis (SSA) combines elements of classical time series analysis, multivariate statistics, multivariate geometry, dynamical systems and signal processing. The origins of SSA lie in principal component analysis, in the classical Karhunen–Loève theorem, which provides a spectral decomposition of time series and random fields, and in the embedding theorem of dynamical systems.

The areas where SSA can be applied are very broad: climatology, marine science, geophysics, engineering, image processing, medicine and econometrics, among others. Hence different modifications of SSA have been proposed, and different methodologies of SSA are used in practical applications. The two main directions are SSA for general purposes (Golyandina et al., 2011), such as trend extraction, periodicity detection, seasonal adjustment, smoothing and noise reduction, and SSA for spectral analysis of stationary time series (Vautard and Ghil, 1989).

SSA as a model-free tool (Basic SSA)

SSA can be used as a model-free technique so that it can be applied to arbitrary time series including non-stationary time series. The basic aim of SSA is to decompose the time series into the sum of interpretable components such as trend, periodic components and noise with no a-priori assumptions about the parametric form of these components.

Consider a real-valued time series $\mathbb{X} = (x_1, \ldots, x_N)$ of length $N$. Let $L$ ($1 < L < N$) be some integer called the window length and $K = N - L + 1$.

Main algorithm of SSA

1st step: Embedding.

Form the trajectory matrix of the series $\mathbb{X}$, which is the $L \times K$ matrix

$$\mathbf{X} = [X_1 : \ldots : X_K] = (x_{ij})_{i,j=1}^{L,K} = \begin{pmatrix} x_1 & x_2 & x_3 & \ldots & x_K \\ x_2 & x_3 & x_4 & \ldots & x_{K+1} \\ x_3 & x_4 & x_5 & \ldots & x_{K+2} \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ x_L & x_{L+1} & x_{L+2} & \ldots & x_N \end{pmatrix}$$

where $X_i = (x_i, \ldots, x_{i+L-1})^{\mathrm{T}}$ ($1 \le i \le K$) are lagged vectors of size $L$. The matrix $\mathbf{X}$ is a Hankel matrix, which means that it has equal elements $x_{ij}$ on the anti-diagonals $i + j = \mathrm{const}$.


2nd step: Singular value decomposition (SVD).

Perform the singular value decomposition (SVD) of the trajectory matrix $\mathbf{X}$. Set $\mathbf{S} = \mathbf{X}\mathbf{X}^{\mathrm{T}}$ and denote by $\lambda_1, \ldots, \lambda_L$ the eigenvalues of $\mathbf{S}$ taken in decreasing order of magnitude ($\lambda_1 \ge \ldots \ge \lambda_L \ge 0$) and by $U_1, \ldots, U_L$ the orthonormal system of eigenvectors of the matrix $\mathbf{S}$ corresponding to these eigenvalues.

Set $d = \operatorname{rank} \mathbf{X} = \max\{i : \lambda_i > 0\}$ (note that $d = L$ for a typical real-life series) and $V_i = \mathbf{X}^{\mathrm{T}} U_i / \sqrt{\lambda_i}$ ($i = 1, \ldots, d$). In this notation, the SVD of the trajectory matrix can be written as

$$\mathbf{X} = \mathbf{X}_1 + \ldots + \mathbf{X}_d,$$

where the matrices $\mathbf{X}_i = \sqrt{\lambda_i} U_i V_i^{\mathrm{T}}$ have rank 1 and are called elementary matrices. The collection $(\sqrt{\lambda_i}, U_i, V_i)$ will be called the $i$th eigentriple (abbreviated as ET) of the SVD. The vectors $U_i$ are the left singular vectors of the matrix $\mathbf{X}$, the numbers $\sqrt{\lambda_i}$ are called the singular values and provide the singular spectrum of $\mathbf{X}$; this gives the name to SSA. The vectors $\sqrt{\lambda_i} V_i = \mathbf{X}^{\mathrm{T}} U_i$ are called vectors of principal components (PCs).


3rd step: Eigentriple grouping.

Partition the set of indices $\{1, \ldots, d\}$ into $m$ disjoint subsets $I_1, \ldots, I_m$.

Let $I = \{i_1, \ldots, i_p\}$. Then the resultant matrix $\mathbf{X}_I$ corresponding to the group $I$ is defined as $\mathbf{X}_I = \mathbf{X}_{i_1} + \ldots + \mathbf{X}_{i_p}$. The resultant matrices are computed for the groups $I = I_1, \ldots, I_m$, and the grouped SVD expansion of $\mathbf{X}$ can now be written as

$$\mathbf{X} = \mathbf{X}_{I_1} + \ldots + \mathbf{X}_{I_m}.$$


4th step: Diagonal averaging.

Each matrix $\mathbf{X}_{I_k}$ of the grouped decomposition is hankelized and then the obtained Hankel matrix is transformed into a new series of length $N$ using the one-to-one correspondence between Hankel matrices and time series. Diagonal averaging applied to a resultant matrix $\mathbf{X}_{I_k}$ produces a reconstructed series $\widetilde{\mathbb{X}}^{(k)} = (\widetilde{x}^{(k)}_1, \ldots, \widetilde{x}^{(k)}_N)$. In this way, the initial series $x_1, \ldots, x_N$ is decomposed into a sum of $m$ reconstructed subseries:

$$x_n = \sum_{k=1}^{m} \widetilde{x}^{(k)}_n \qquad (n = 1, 2, \ldots, N).$$

This decomposition is the main result of the SSA algorithm. It is meaningful if each reconstructed subseries can be classified as a part of either the trend, some periodic component, or the noise.
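
The four steps above translate directly into a few lines of linear algebra. The following is a minimal NumPy sketch of Basic SSA; the function name `ssa_decompose` and its interface are illustrative, not taken from the cited literature, and groups are given as 0-based index lists into the eigentriples.

```python
import numpy as np

def ssa_decompose(x, L, groups):
    """Basic SSA: embedding, SVD, grouping, diagonal averaging.

    x      : 1-D array (the series x_1, ..., x_N)
    L      : window length, 1 < L < N
    groups : list of 0-based eigentriple index lists, e.g. [[0], [1, 2]]
    Returns one reconstructed series (length N) per group.
    """
    x = np.asarray(x, dtype=float)
    N = len(x)
    K = N - L + 1

    # Step 1: embedding -- the L x K Hankel trajectory matrix.
    X = np.column_stack([x[i:i + L] for i in range(K)])

    # Step 2: SVD of the trajectory matrix.
    U, s, Vt = np.linalg.svd(X, full_matrices=False)

    reconstructed = []
    for group in groups:
        # Step 3: resultant matrix of the group (sum of elementary matrices).
        Xg = (U[:, group] * s[group]) @ Vt[group, :]
        # Step 4: diagonal averaging (hankelization) back to a series;
        # flipping the rows turns anti-diagonals into diagonals.
        series = np.array([Xg[::-1].diagonal(k).mean()
                           for k in range(-L + 1, K)])
        reconstructed.append(series)
    return reconstructed
```

For instance, `ssa_decompose(x, L=120, groups=[[0], [1, 2], list(range(3, 50))])` would attempt to split a series into a trend, an oscillatory pair and a residual; the reconstructed subseries sum back to the original series when the groups cover all $d$ eigentriples.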

Theory of SSA

The two main questions which the theory of SSA attempts to answer are: (a) what time series components can be separated by SSA, and (b) how to choose the window length $L$ and make the proper grouping for extraction of a desired component. Many theoretical results can be found in Golyandina et al. (2001, Ch. 1 and 6).

Trend (which is defined as a slowly varying component of the time series), periodic components and noise are asymptotically separable as $N \to \infty$. In practice $N$ is fixed and one is interested in approximate separability between time series components. A number of indicators of approximate separability can be used, see Golyandina et al. (2001, Ch. 1). The window length $L$ determines the resolution of the method: larger values of $L$ provide a more refined decomposition into elementary components and therefore better separability. The window length also determines the longest periodicity captured by SSA. Trends can be extracted by grouping eigentriples with slowly varying eigenvectors. A sinusoid with frequency smaller than 0.5 produces two approximately equal eigenvalues and two sine-wave eigenvectors with the same frequency and $\pi/2$-shifted phases.
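
This eigenvalue pairing is easy to verify numerically. The sketch below (assumptions: NumPy, a noise-free sine of frequency 0.1; all values illustrative) shows that a single sinusoid yields a trajectory matrix with exactly two non-negligible, nearly equal singular values.

```python
import numpy as np

N, L = 400, 100
t = np.arange(N)
x = np.sin(2 * np.pi * 0.1 * t)      # pure sinusoid, frequency 0.1 < 0.5

# Trajectory matrix and its singular spectrum.
X = np.column_stack([x[i:i + L] for i in range(N - L + 1)])
s = np.linalg.svd(X, compute_uv=False)

print(s[:4] / s[0])  # ~ [1.0, 1.0, ~0, ~0]: one sine wave = two eigentriples
```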

Separation of two time series components can be considered as extraction of one component in the presence of perturbation by the other component. SSA perturbation theory is developed in Nekrutkin (2010).

SSA for stationary time series

For stationary time series, SSA can be considered as a nonparametric spectral estimation method. It has its own specifics, including a different terminology and methodology of application. In particular, centering is conventionally used as preprocessing before applying SSA to stationary time series.

The main difference in the algorithm is related to the use of the lag-covariance matrix $\mathbf{C}$ instead of $\mathbf{S} = \mathbf{X}\mathbf{X}^{\mathrm{T}}$ to obtain spectral information on the time series, assumed to be stationary in the weak sense. The matrix $\mathbf{C}$ can be estimated directly from the data as a Toeplitz matrix with constant diagonals (Vautard and Ghil, 1989):

$$c_{ij} = \frac{1}{N - |i - j|} \sum_{n=1}^{N - |i - j|} x_n x_{n + |i - j|}, \qquad 1 \le i, j \le L.$$
The eigenvectors of the lag-covariance matrix $\mathbf{C}$ are called temporal empirical orthogonal functions (EOFs).
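
In code, the Toeplitz estimate and its temporal EOFs can be computed as follows; this is a minimal sketch assuming NumPy and a centered series, and the function name `vg_lag_covariance` is illustrative.

```python
import numpy as np

def vg_lag_covariance(x, L):
    """Toeplitz lag-covariance estimate (Vautard and Ghil, 1989):
    entry (i, j) depends only on the lag m = |i - j|.
    Assumes x has been centered (zero mean)."""
    x = np.asarray(x, dtype=float)
    N = len(x)
    # c[m] = 1/(N - m) * sum_{n} x_n * x_{n+m},  m = 0, ..., L-1
    c = np.array([x[:N - m] @ x[m:] / (N - m) for m in range(L)])
    lags = np.abs(np.arange(L)[:, None] - np.arange(L)[None, :])
    return c[lags]

# Temporal EOFs = eigenvectors of the lag-covariance matrix:
# eigvals, eofs = np.linalg.eigh(vg_lag_covariance(x, L))
```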

The use of the matrix $\mathbf{S} = \mathbf{X}\mathbf{X}^{\mathrm{T}}$ in Basic SSA was introduced in Broomhead and King (1986a, 1986b), and therefore Basic SSA is sometimes called 'BK-SSA'. The use of the matrix $\mathbf{C}$ was introduced in Vautard and Ghil (1989), which gives the name 'VG-SSA'; another name for this version of SSA is 'Toeplitz SSA' (Golyandina et al., 2001, Sect. 1.7).

Signal-to-noise separation can be obtained by merely inspecting the slope break in a "scree diagram" of eigenvalues $\lambda_i$ or singular values $\sqrt{\lambda_i}$ plotted against the index $i$. The point at which this break occurs should not be confused with a "dimension" of the underlying deterministic dynamics (Vautard and Ghil, 1989).

A Monte-Carlo test (Allen and Robertson, 1996; Allen and Smith, 1996) can be applied to ascertain the statistical significance of the oscillatory pairs detected by SSA (usually the 'VG' version of SSA) in the presence of red noise.

Forecasting by SSA

If for some series $\mathbb{X}$ the SVD step in Basic SSA gives $d < L$, then this series is called a time series of rank $d$ (Golyandina et al., 2001, Ch. 5). The subspace spanned by the $d$ leading eigenvectors $U_1, \ldots, U_d$ is called the signal subspace. This subspace is used for estimating the signal parameters in signal processing, e.g. by ESPRIT for high-resolution frequency estimation. Also, this subspace determines a linear homogeneous recurrence relation (LRR) governing the series, which can be used for forecasting. Continuation of the series by the LRR is similar to forward linear prediction in signal processing.

Let the series be governed by the minimal LRR $x_n = \sum_{k=1}^{d} a_k x_{n-k}$. Let us choose $L > d$ and let $U_1, \ldots, U_d$ be the eigenvectors (left singular vectors of the $L$-trajectory matrix) provided by the SVD step of SSA. Then this series is also governed by an LRR $x_n = \sum_{k=1}^{L-1} b_k x_{n-k}$ of order $L - 1$, where the coefficients $b_k$ are expressed through $U_1, \ldots, U_d$ (Golyandina et al., 2001, Ch. 5), and it can be continued by the same LRR.

This provides the basis for the SSA recurrent and vector forecasting algorithms (Golyandina et al., 2001, Ch. 2). In practice, the signal is corrupted by a perturbation (e.g., by noise), and its subspace is estimated by SSA only approximately. Thus, SSA forecasting can be applied to a time series component that is approximately governed by an LRR and is approximately separated from the residual.
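
A sketch of the recurrent forecasting algorithm is given below, assuming the series is dominated by a rank-$r$ signal. The names and interface are illustrative; see Golyandina et al. (2001, Ch. 2) for the actual algorithm.

```python
import numpy as np

def ssa_forecast(x, L, r, steps):
    """Recurrent SSA forecast (sketch).

    x : 1-D series, L : window length,
    r : number of leading eigentriples taken as signal,
    steps : number of future points to forecast.
    """
    x = np.asarray(x, dtype=float)
    N = len(x)
    K = N - L + 1
    X = np.column_stack([x[i:i + L] for i in range(K)])
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    Ur = U[:, :r]                      # basis of the estimated signal subspace

    # Reconstruct the signal by diagonal averaging of the rank-r part.
    Xr = (Ur * s[:r]) @ Vt[:r, :]
    signal = np.array([Xr[::-1].diagonal(k).mean()
                       for k in range(-L + 1, K)])

    # LRR coefficients: R = (1 / (1 - nu^2)) * sum_i pi_i * U_i^(flat),
    # where pi_i is the last component of U_i and U_i^(flat) drops it.
    pi = Ur[-1, :]
    nu2 = pi @ pi                      # requires nu2 < 1
    R = Ur[:-1, :] @ pi / (1.0 - nu2)  # length L-1: LRR coefficients

    # Apply the LRR recursively to continue the reconstructed series.
    y = list(signal)
    for _ in range(steps):
        y.append(R @ np.asarray(y[-(L - 1):]))
    return np.asarray(y[N:])           # the forecasted points
```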

Multivariate extension

Multi-channel SSA (or M-SSA) is a natural extension of SSA for analyzing multivariate time series, where the lengths of the different univariate series do not have to be the same. The trajectory matrix of a multi-channel time series consists of stacked trajectory matrices of the separate series (see the sketch below). The rest of the algorithm is the same as in the univariate case. A system of series can be forecasted analogously to the SSA recurrent and vector algorithms (Golyandina and Stepanov, 2005).
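
One common way to build the stacked trajectory matrix is shown below; this is a sketch, conventions differ across the literature, and this variant stacks the per-channel trajectory matrices side by side so that channels may have different lengths while sharing one window length $L$.

```python
import numpy as np

def mssa_trajectory(channels, L):
    """Stacked trajectory matrix for M-SSA (sketch).

    channels : list of 1-D arrays, possibly of different lengths.
    Each channel contributes its own L-row trajectory matrix; the
    blocks are concatenated column-wise."""
    blocks = []
    for x in channels:
        x = np.asarray(x, dtype=float)
        K = len(x) - L + 1
        blocks.append(np.column_stack([x[i:i + L] for i in range(K)]))
    return np.hstack(blocks)   # shape: L x (K_1 + ... + K_s)
```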

In the meteorological literature, extended EOF (EEOF) analysis is often assumed to be synonymous with M-SSA. The two methods are both extensions of classical principal component analysis (PCA), but they differ in emphasis: EEOF analysis typically utilizes a number $L$ of spatial channels much greater than the number $M$ of temporal lags, thus limiting the temporal and spectral information. In M-SSA, on the other hand, one usually chooses $L \le M$. Often M-SSA is applied to a few leading PCs of the spatial data, with $M$ chosen large enough to extract detailed temporal and spectral information from the multivariate time series (Ghil et al., 2002).

Another multivariate extension is 2D-SSA, which can be applied to two-dimensional data such as digital images (Golyandina and Usevich, 2010). The analogue of the trajectory matrix is constructed by moving 2D windows of size $L_x \times L_y$.
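
A sketch of the 2D trajectory-matrix construction (illustrative implementation): each column is a vectorized $L_x \times L_y$ window of the image, and the rest of the algorithm (SVD, grouping, averaging over window positions) proceeds as in the 1D case.

```python
import numpy as np

def trajectory_2d(image, Lx, Ly):
    """2D-SSA trajectory matrix (sketch): each column is a vectorized
    Lx-by-Ly window of the image, moved over all positions."""
    image = np.asarray(image, dtype=float)
    Nx, Ny = image.shape
    cols = [image[i:i + Lx, j:j + Ly].ravel()
            for i in range(Nx - Lx + 1)
            for j in range(Ny - Ly + 1)]
    return np.column_stack(cols)  # (Lx*Ly) x ((Nx-Lx+1)*(Ny-Ly+1))
```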

Spatio-temporal gap filling

The gap-filling versions of SSA can be used to analyze data sets that are unevenly sampled or contain missing data (Schoellhamer, 2001; Kondrashov and Ghil, 2006; Golyandina and Osipov, 2007).

Schoellhamer (2001) shows that the straightforward idea of formally calculating approximate inner products, omitting the unknown terms, is workable for long stationary time series.

In the algorithm described in Kondrashov and Ghil (2006), the estimates of missing data points are produced iteratively and are then used to compute a self-consistent lag-covariance matrix and its EOFs; moreover, cross-validation is used to optimize the window length and the number of leading SSA modes that fill the gaps with the iteratively estimated "signal", while the noise is discarded. For a stationary univariate time series, the SSA gap-filling procedure utilizes temporal correlations to fill in the missing points. For a stationary multivariate data set, gap filling by M-SSA takes advantage of both spatial and temporal correlations.
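
A minimal sketch of such an iterative scheme follows. It is illustrative only, not the exact algorithm of Kondrashov and Ghil, which additionally cross-validates the window length and the number of modes; all names are hypothetical.

```python
import numpy as np

def iterative_gap_fill(x, L, r, n_iter=50):
    """Iterative SSA gap filling (sketch): missing values (NaN) are
    repeatedly replaced by the rank-r SSA reconstruction of the
    current series."""
    x = np.asarray(x, dtype=float)
    missing = np.isnan(x)
    mean = np.nanmean(x)
    y = np.where(missing, 0.0, x - mean)   # center, zero-fill the gaps
    N, K = len(y), len(y) - L + 1

    for _ in range(n_iter):
        X = np.column_stack([y[i:i + L] for i in range(K)])
        U, s, Vt = np.linalg.svd(X, full_matrices=False)
        Xr = (U[:, :r] * s[:r]) @ Vt[:r, :]
        recon = np.array([Xr[::-1].diagonal(k).mean()
                          for k in range(-L + 1, K)])
        y[missing] = recon[missing]        # update only the gaps
    return y + mean
```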

Golyandina and Osipov (2007) use the idea of filling in missing entries in vectors taken from the given subspace. The recurrent and vector SSA forecasting algorithms can be considered as particular cases of the filling-in algorithms described in that paper.

Detection of structural changes

SSA can be effectively used as a non-parametric method of time series monitoring and change detection. To do that, SSA performs subspace tracking in the following way. SSA is applied sequentially to the initial parts of the series, constructs the corresponding signal subspaces and checks the distances between these subspaces and the lagged vectors formed from the few most recent observations. If these distances become too large, a structural change is suspected to have occurred in the series (Golyandina et al., 2001, Ch. 3; Moskvina and Zhigljavsky, 2003).

In this way, SSA can be used for change detection not only in trends but also in the variability of the series, in the mechanism that determines dependence between different series and even in the noise structure. The method has proven useful in different engineering problems, e.g. in robotics (Mohammad and Nishida, 2011).
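
The following sketch (illustrative names and window sizes) computes one such monitoring statistic: the mean squared distance of recent lagged vectors to the signal subspace estimated on a preceding base interval, in the spirit of Moskvina and Zhigljavsky (2003).

```python
import numpy as np

def ssa_change_scores(x, L, r, base_len, test_len):
    """Monitoring statistic for SSA change detection (sketch): mean
    squared distance of test lagged vectors to the signal subspace of
    a preceding base window. Requires test_len >= L."""
    x = np.asarray(x, dtype=float)
    scores = []
    for start in range(len(x) - base_len - test_len + 1):
        base = x[start:start + base_len]
        X = np.column_stack([base[i:i + L]
                             for i in range(base_len - L + 1)])
        U, _, _ = np.linalg.svd(X, full_matrices=False)
        P = U[:, :r]                       # estimated signal subspace
        test = x[start + base_len:start + base_len + test_len]
        V = np.column_stack([test[i:i + L]
                             for i in range(test_len - L + 1)])
        resid = V - P @ (P.T @ V)          # parts outside the subspace
        scores.append(np.mean(np.sum(resid**2, axis=0)))
    return np.asarray(scores)  # large values suggest a structural change
```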

Relation between SSA and other methods

SSA and autoregression. A typical model for SSA is $x_n = s_n + e_n$, where $s_n$ is a signal satisfying an LRR and $e_n$ is noise. The AR model is $x_n = \sum_{k=1}^{p} a_k x_{n-k} + e_n$. Although these two models look similar, they are very different: SSA considers an AR process as a noise component only. AR(1), which is red noise, is the typical noise model for Monte-Carlo SSA (Allen and Smith, 1996).


SSA and spectral Fourier analysis. In contrast with Fourier analysis, which uses a fixed basis of sine and cosine functions, SSA uses an adaptive basis generated by the time series itself. As a result, the underlying model in SSA is more general, and SSA can extract amplitude-modulated sine-wave components with frequencies different from the Fourier frequencies $k/N$. SSA-related methods like ESPRIT can estimate frequencies with higher resolution than spectral Fourier analysis.


SSA and linear recurrence relations. Let the signal be modeled by a series satisfying a linear recurrence relation $s_n = \sum_{k=1}^{r} a_k s_{n-k}$; that is, a series that can be represented as a sum of products of exponential, polynomial and sine-wave functions. This includes the sum of damped sinusoids model, whose complex-valued form is $s_n = \sum_{k=1}^{r} c_k \mu_k^n$ with $\mu_k = \rho_k e^{\mathrm{i} 2\pi \omega_k}$. SSA-related methods allow estimation of the frequencies $\omega_k$ and exponential factors $\rho_k$ (Golyandina and Zhigljavsky, 2013, Sect. 2.8). The coefficients $c_k$ can be estimated by the least-squares method. Extensions of the model, where the $c_k$ are replaced by polynomials in $n$, can also be considered within the SSA-related methods (Badeau et al., 2008).
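
A least-squares ESPRIT sketch (one of several variants; names illustrative) that estimates the $\mu_k$, and hence $\omega_k$ and $\rho_k$, from the shift-invariance of the estimated signal subspace:

```python
import numpy as np

def esprit(x, L, r):
    """ESPRIT-style estimation of frequencies and damping factors from
    the SSA signal subspace (least-squares variant, sketch)."""
    x = np.asarray(x, dtype=float)
    K = len(x) - L + 1
    X = np.column_stack([x[i:i + L] for i in range(K)])
    U, _, _ = np.linalg.svd(X, full_matrices=False)
    Ur = U[:, :r]
    # Shift-invariance: rows 1..L-1 of Ur, mapped by A, give rows 2..L.
    A, *_ = np.linalg.lstsq(Ur[:-1], Ur[1:], rcond=None)
    mu = np.linalg.eigvals(A)           # mu_k = rho_k * exp(2*pi*1j*omega_k)
    freqs = np.angle(mu) / (2 * np.pi)  # frequencies omega_k
    damps = np.abs(mu)                  # exponential factors rho_k
    return freqs, damps
```

For a real-valued sinusoid the estimates come in conjugate pairs, so a single sine wave with $r = 2$ yields frequencies $\pm\omega$.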


SSA and signal subspace methods. SSA can be considered as a subspace-based method, since it allows estimation of the signal subspace of dimension $d$ by $\operatorname{span}(U_1, \ldots, U_d)$.


SSA and state space models. The main model behind SSA is $x_n = s_n + e_n$, where $s_n$ is a series satisfying an LRR and $e_n$ is noise. Formally, this model belongs to the general class of state space models. The specifics of SSA are that parameter estimation is a problem of secondary importance and that the data analysis procedures in SSA are nonlinear, as they are based on the SVD of either the trajectory matrix or the lag-covariance matrix.


SSA and Independent Component Analysis (ICA). SSA is used in blind source separation by ICA as a preprocessing step (Pietilä et al, 2006). On the other hand, ICA can be used as a replacement of the SVD step in the SSA algorithm for achieving better separability (Golyandina and Zhigljavsky, 2013, Sect. 1.5.4).


SSA and regression. SSA is able to extract polynomial and exponential trends. However, unlike regression, SSA does not assume any parametric model, which may give a significant advantage when exploratory data analysis is performed with no obvious model in hand (Golyandina et al., 2001, Ch. 1).


SSA and linear filters. The reconstruction of the series by SSA can be considered as adaptive linear filtration. If the window length $L$ is small, then each eigenvector $U_i$ generates a linear filter of width $2L - 1$ for reconstruction of the middle part of the series, $\widetilde{x}_s$ with $L \le s \le K$, where $K = N - L + 1$. The filtration is non-causal. However, the so-called Last-point SSA can be used as a causal filter (Golyandina and Zhigljavsky, 2013, Sect. 2.9).
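
Under this interpretation, the middle-point filter generated by an eigenvector is, up to the factor $1/L$, the autocorrelation of the eigenvector; a sketch is given below (see Harris and Yan, 2010, for the full filtering analysis).

```python
import numpy as np

def middle_point_filter(u):
    """Linear filter of width 2L-1 generated by an eigenvector u of
    length L for reconstructing the middle part of the series:
    (1/L) * autocorrelation of u. Being symmetric, the filter is
    zero-phase and non-causal."""
    return np.correlate(u, u, mode='full') / len(u)
```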


SSA and density estimation. Since SSA can be used as a method of data smoothing, it can also be used as a method of non-parametric density estimation (Golyandina et al., 2012).

Brief history

The first publication that can be considered one of the origins of SSA (and, more generally, of subspace-based methods of signal processing) can be traced back to the eighteenth century (Prony's method).

Broomhead and King (1986a, b) and Fraedrich (1986) proposed to use SSA and M-SSA in the context of nonlinear dynamics for the purpose of reconstructing the attractor of a system from measured time series. These authors provided an extension and a more robust application of the idea of reconstructing dynamics from a single time series based on the embedding theorem.

Ghil, Vautard and their colleagues (Vautard and Ghil, 1989; Ghil and Vautard, 1991; Vautard et al., 1992) noticed the analogy between the trajectory matrix of Broomhead and King, on the one hand, and the Karhunen–Loève decomposition (principal component analysis in the time domain), on the other. Thus, SSA can be used as a time-and-frequency domain method for time series analysis, independently from attractor reconstruction and including cases in which the latter may fail.

Note also the so-called 'Caterpillar' methodology, see Danilov and Zhigljavsky (Eds.) (1997) and Golyandina et al. (2001). This methodology is a version of SSA that was developed in the former Soviet Union independently of the mainstream SSA. The main difference between the mainstream SSA and 'Caterpillar-SSA' is in the emphasis placed on the study of SSA properties. The central concept behind the theoretical studies and methodological principles of 'Caterpillar-SSA' is separability. Careful consideration of the separability requirements in 'Caterpillar-SSA' leads, for example, to specific recommendations concerning the choice of SSA parameters.

At present, there are several dozen papers dealing with methodological aspects of SSA and many more with applications of SSA. A very basic introduction to SSA can be found in Elsner and Tsonis (1996). More advanced texts are the monograph by Golyandina et al. (2001), the survey by Ghil et al. (2002), a recent special issue of 'Statistics and Its Interface' (Zhigljavsky, Ed., 2010) and the recent book by Golyandina and Zhigljavsky (2013).

References

  • Allen, M.R., and A.W. Robertson (1996): "Distinguishing modulated oscillations from coloured noise in multivariate datasets", Clim. Dyn., 12, 775–784.
  • Allen, M.R., and L.A. Smith (1996): "Monte Carlo SSA: detecting irregular oscillations in the presence of colored noise", Journal of Climate, 9(12), 3373–3404.
  • Badeau, R., G. Richard, and B. David (2008): "Performance of ESPRIT for estimating mixtures of complex exponentials modulated by polynomials", IEEE Transactions on Signal Processing, 56(2), 492–504.
  • Bozzo, E., R. Carniel, and D. Fasino (2010): "Relationship between singular spectrum analysis and Fourier analysis: theory and application to the monitoring of volcanic activity", Comput. Math. Appl., 60(3), 812–820.
  • Broomhead, D.S., and G.P. King (1986a): "Extracting qualitative dynamics from experimental data", Physica D, 20, 217–236.
  • Broomhead, D.S., and G.P. King (1986b): "On the qualitative analysis of experimental dynamical systems", in Nonlinear Phenomena and Chaos, Sarkar, S. (Ed.), Adam Hilger, Bristol, 113–144.
  • Elsner, J.B., and A.A. Tsonis (1996): Singular Spectrum Analysis: A New Tool in Time Series Analysis, Plenum Press.
  • Fraedrich, K. (1986): "Estimating dimensions of weather and climate attractors", J. Atmos. Sci., 43, 419–432.
  • Ghil, M., and R. Vautard (1991): "Interdecadal oscillations and the warming trend in global temperature time series", Nature, 350, 324–327.
  • Golyandina, N., V. Nekrutkin, and A. Zhigljavsky (2001): Analysis of Time Series Structure: SSA and Related Techniques, Chapman and Hall/CRC. ISBN 1-58488-194-1.
  • Golyandina, N., and E. Osipov (2007): "The 'Caterpillar'-SSA method for analysis of time series with missing values", J. Stat. Plan. Inference, 137(8), 2642–2653.
  • Golyandina, N., A. Pepelyshev, and A. Steland (2012): "New approaches to nonparametric density estimation and selection of smoothing parameters", Comput. Stat. Data Anal., 56(7), 2206–2218.
  • Golyandina, N., and K. Usevich (2010): "2D-extension of singular spectrum analysis: algorithm and elements of theory", in Matrix Methods: Theory, Algorithms and Applications, Olshevsky, V., and E. Tyrtyshnikov (Eds.), World Scientific Publishing, 449–473.
  • Harris, T., and H. Yan (2010): "Filtering and frequency interpretations of singular spectrum analysis", Physica D, 239, 1958–1967.
  • Mohammad, Y., and T. Nishida (2011): "On comparing SSA-based change point discovery algorithms", IEEE SII, 938–945.
  • Moskvina, V., and A. Zhigljavsky (2003): "An algorithm based on singular spectrum analysis for change-point detection", Commun. Stat. Simul. Comput., 32, 319–352.
  • Nekrutkin, V. (2010): "Perturbation expansions of signal subspaces for long signals", J. Stat. Interface, 3, 297–319.
  • Pietilä, A., M. El-Segaier, R. Vigário, and E. Pesonen (2006): "Blind source separation of cardiac murmurs from heart recordings", in Rosca, J., et al. (Eds.), Independent Component Analysis and Blind Signal Separation, Lecture Notes in Computer Science, vol. 3889, Springer, 470–477.
  • de Prony, G. (1795): "Essai expérimental et analytique sur les lois de la dilatabilité des fluides élastiques et sur celles de la force expansive de la vapeur de l’eau et la vapeur de l’alkool à différentes températures", J. de l’Ecole Polytechnique, 1(2), 24–76.
  • Schoellhamer, D. (2001): "Singular spectrum analysis for time series with missing data", Geophys. Res. Lett., 28(16), 3187–3190.
  • Vautard, R., and M. Ghil (1989): "Singular spectrum analysis in nonlinear dynamics, with applications to paleoclimatic time series", Physica D, 35, 395–424.
  • Zhigljavsky, A. (Ed.) (2010): Statistics and Its Interface (special issue on singular spectrum analysis in time series), vol. 3.
