Spectral density

From Wikipedia, the free encyclopedia

This is an old revision of this page, as edited by 98.81.0.222 (talk) at 23:39, 8 July 2012 (Electrical engineering). The present address (URL) is a permanent link to this revision, which may differ significantly from the current revision.

In statistical signal processing and physics, the spectral density, power spectral density (PSD), or energy spectral density (ESD), is a positive real function of a frequency variable associated with a stationary stochastic process, or a deterministic function of time, which has dimensions of power per hertz (Hz), or energy per hertz. It is often called simply the spectrum of the signal. Intuitively, the spectral density measures the frequency content of a stochastic process and helps identify periodicities.

Explanation

In physics, the signal is usually a wave, such as an electromagnetic wave, random vibration, or an acoustic wave. The spectral density of the wave, when multiplied by an appropriate factor, will give the power carried by the wave, per unit frequency, known as the power spectral density (PSD) of the signal. Power spectral density is commonly expressed in watts per hertz (W/Hz).[1]

For voltage signals, it is customary to use units of V² Hz⁻¹ for PSD, and V² s Hz⁻¹ for ESD.[2]

For random vibration analysis, units of g² Hz⁻¹ are sometimes used for acceleration spectral density.[3]

Although it is not necessary to assign physical dimensions to the signal or its argument, in the following discussion the terms used will assume that the signal varies in time.

Definition

Energy spectral density

The energy spectral density describes how the energy of a signal or a time series is distributed with frequency. Here, the term energy is used in the generalized sense of signal processing to denote a variance of the signal. This energy spectral density is most suitable for pulse-like signals characterized by a finite total energy; mathematically, we require that the signal is described by a square integrable function. In this case, the energy spectral density Φ(ω) of the signal x(t) is the square of the magnitude of the continuous Fourier transform of the signal:

Φ(ω) = \left| \frac{1}{\sqrt{2\pi}} \int_{-\infty}^{\infty} x(t) e^{-i\omega t}\, dt \right|^2 = \frac{F(\omega) F^*(\omega)}{2\pi}

where ω is the angular frequency (2π times the ordinary frequency), F(ω) is the continuous Fourier transform of x(t), and F*(ω) is its complex conjugate. As is always the case, the multiplicative factor 1/(2π) is not absolute, but rather depends on the particular normalizing constants used in the definition of the various Fourier transforms.

As an example, if x(t) represents the potential (in volts) of an electrical signal propagating across a transmission line, then the units of measure for the spectral density Φ(ω) would appear as V²·s², which is per se not yet dimensionally correct for a spectral energy density in the sense of the physical sciences. However, after dividing by the characteristic impedance Z (in ohms) of the transmission line, the dimensions of Φ(ω)/Z become V²·s² per ohm, which is equivalent to joules per hertz, the SI unit for spectral energy density as defined in the physical sciences.

This definition generalizes in a straightforward manner to a discrete signal with an infinite number of values x_n, such as a signal sampled at discrete times t_n = nΔt:

Φ(ω) = \frac{(\Delta t)^2}{2\pi} \left| \sum_{n=-\infty}^{\infty} x_n e^{-i\omega n \Delta t} \right|^2

where the sum is the discrete-time Fourier transform of x_n. In the mathematical sciences, the sampling interval Δt is often set to one. It is needed, however, to keep the correct physical units and to ensure that we recover the continuous case in the limit Δt → 0.
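As a sketch, the discrete sum above can be evaluated directly; the pulse values, the sampling interval, and the helper function below are illustrative choices, not part of the article:

```python
import cmath
import math

def energy_spectral_density(x, dt, omega):
    """Discrete ESD: (dt^2 / (2*pi)) * |sum_n x_n * exp(-i*omega*n*dt)|^2."""
    f = sum(xn * cmath.exp(-1j * omega * n * dt) for n, xn in enumerate(x))
    return (dt ** 2) * abs(f) ** 2 / (2 * math.pi)

# A short sampled pulse (hypothetical toy data), sampled every dt = 0.1 s.
pulse = [0.0, 1.0, 2.0, 1.0, 0.0]
esd_dc = energy_spectral_density(pulse, dt=0.1, omega=0.0)
# At omega = 0 the complex exponentials are all 1, so the sum is just
# sum(pulse) = 4 and the ESD is (0.1 * 4)^2 / (2*pi).
```

Note the factor Δt² outside the squared sum: it is what makes the discrete result converge to the continuous integral as Δt → 0.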

Power spectral density

The above definition of energy spectral density is most suitable for pulse-like signals for which the Fourier transforms of the signals exist. For continuous signals that describe, for example, stationary physical processes, it makes more sense to define a power spectral density (PSD), which describes how the power of a signal or time series is distributed with frequency. Here, power can be the actual physical power or, more often, for convenience with abstract signals, can be defined as the squared value of the signal. This instantaneous power is then given by

P(t) = s(t)^2

for a signal s(t). The mean (or expected value) of P(t) is the total power, which is the integral of the power spectral density over all frequencies.

We can use a normalized Fourier transform:

F_T(\omega) = \frac{1}{\sqrt{T}} \int_0^T x(t) e^{-i\omega t}\, dt

and define the power spectral density as:[4][5]

S_{xx}(\omega) = \lim_{T\to\infty} \mathbf{E}\left[\, |F_T(\omega)|^2 \,\right]

For stochastic signals, the squared magnitude of the Fourier transform typically does not approach a limit, but its expectation does; see periodogram.
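This convergence of the expectation can be checked numerically. Below is a minimal sketch using a discrete periodogram averaged over independent realizations of white noise; the signal length, number of repetitions, and helper function are illustrative choices, not part of the article:

```python
import cmath
import random

def periodogram(x):
    """Single-realization PSD estimate: |DFT(x)[k]|^2 / N."""
    N = len(x)
    return [abs(sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / N)
                    for n in range(N))) ** 2 / N
            for k in range(N)]

random.seed(0)
N, reps = 32, 200
avg = [0.0] * N
for _ in range(reps):
    x = [random.gauss(0.0, 1.0) for _ in range(N)]
    for k, p in enumerate(periodogram(x)):
        avg[k] += p / reps

# For unit-variance white noise the true PSD is flat and equal to 1, so the
# averaged periodogram should hover around 1 at every frequency bin.
mean_level = sum(avg) / N
```

A single periodogram does not converge bin by bin (its relative standard deviation stays near 100%); only the average over realizations settles toward the true spectrum.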

Remark: Many signals of interest are not integrable, and the non-normalized (ordinary) Fourier transform of the signal does not exist. Some authors (e.g. Risken[6]) still use the non-normalized Fourier transform x̂(ω) in a formal way to formulate a definition of the power spectral density:

\mathbf{E}\left[\hat{x}(\omega)\hat{x}^*(\omega')\right] = 2\pi S_{xx}(\omega)\, \delta(\omega - \omega').

Such formal statements may sometimes be useful to guide the intuition, but should always be used with utmost care.

Using such formal reasoning, one may already guess that for a stationary random process, the power spectral density S_{xx}(ω) and the autocorrelation function R_{xx}(τ) = E[x(t) x(t+τ)] of the signal should be a Fourier pair. This is indeed true and represents a deep theorem due to Wiener and Khinchin:

S_{xx}(\omega) = \int_{-\infty}^{\infty} R_{xx}(\tau) e^{-i\omega\tau}\, d\tau.

Many authors use this equality to actually define the power spectral density.[7]
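In the discrete, circular setting this Fourier-pair relationship holds exactly and can be verified in a few lines (a sketch of the discrete analogue, not of the continuous-time theorem itself; the test signal is arbitrary):

```python
import cmath

def dft(x):
    """Plain O(N^2) discrete Fourier transform."""
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / N) for n in range(N))
            for k in range(N)]

def circular_autocorrelation(x):
    """R[m] = (1/N) * sum_n x[n] * x[(n + m) mod N]."""
    N = len(x)
    return [sum(x[n] * x[(n + m) % N] for n in range(N)) / N for m in range(N)]

x = [1.0, 2.0, 0.0, -1.0, 3.0, -2.0]
# Route 1: periodogram, directly from the squared DFT magnitude.
psd_direct = [abs(X) ** 2 / len(x) for X in dft(x)]
# Route 2: DFT of the circular autocorrelation (discrete Wiener-Khinchin).
psd_from_acf = [F.real for F in dft(circular_autocorrelation(x))]
```

Both routes produce the same values, which is the discrete counterpart of the theorem: the DFT of the circular autocorrelation equals |X_k|²/N.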

The power of the signal in a given frequency band can be calculated by integrating over positive and negative frequencies,

P = \frac{1}{2\pi} \int_{\omega_1}^{\omega_2} S_{xx}(\omega)\, d\omega + \frac{1}{2\pi} \int_{-\omega_2}^{-\omega_1} S_{xx}(\omega)\, d\omega.

The power spectral density of a signal exists if the signal is a wide-sense stationary process. If the signal is not wide-sense stationary, then the autocorrelation function must be a function of two variables. In some cases, such as wide-sense cyclostationary processes, a PSD may still exist.[8] More generally, similar techniques may be used to estimate a time-varying spectral density.

The definition of the power spectral density generalizes in a straightforward manner to a finite time-series x_n with 1 ≤ n ≤ N, such as a signal sampled at discrete times t_n = nΔt for a total measurement period T = NΔt:

S_{xx}(\omega) = \frac{(\Delta t)^2}{T} \left| \sum_{n=1}^{N} x_n e^{-i\omega n \Delta t} \right|^2.

In a real-world application, one would typically average this single-measurement PSD over several repetitions of the measurement to obtain a more accurate estimate of the true PSD underlying the observed physical process. This computed PSD is sometimes called a periodogram. One can prove that this periodogram converges to the true PSD as the averaging time interval T goes to infinity (Brown & Hwang[9]).

If two signals both possess power spectra (the correct terminology), then a cross-power spectrum can be calculated by using their cross-correlation function.

Properties of the power spectral density

Some properties of the PSD include:[10]

  • the spectrum of a real-valued process is symmetric: S(−f) = S(f); in other words, it is an even function
  • it is continuous and differentiable on [−1/2, +1/2]
  • its derivative is zero at f = 0 (this is required by the fact that the power spectrum is an even function); otherwise the derivative does not exist at f = 0
  • the autocovariance function γ(τ) can be reconstructed by using the inverse Fourier transform of the spectrum
  • it describes the distribution of the variance across time scales; in particular, the total variance is the integral of the PSD over all frequencies
  • it is a linear function of the autocovariance function:
    if γ is decomposed into two functions γ(τ) = α₁ γ₁(τ) + α₂ γ₂(τ), then
    S(f) = α₁ S₁(f) + α₂ S₂(f)
    where S_i denotes the spectral density corresponding to γ_i

The power spectrum G(f) is defined as[11]

G(f) = \int_{-\infty}^{f} S_{xx}(f')\, df'.

Cross-spectral density

"Just as the Power Spectral Density (PSD) is the Fourier transform of the auto-covariance function we may define the Cross Spectral Density (CSD) as the Fourier transform of the cross-covariance function."[12]

The PSD is a special case of the cross-spectral density (CPSD) function, defined between two signals x_n and y_n as the Fourier transform of their cross-covariance:

S_{xy}(\omega) = \sum_{n=-\infty}^{\infty} R_{xy}(n)\, e^{-i\omega n}, \quad \text{where } R_{xy}(n) = \mathbf{E}\left[x_m\, y_{m+n}\right].
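A minimal DFT-based sketch of the "special case" claim: estimating the cross-spectrum of a signal with itself recovers an ordinary PSD, which must be real and non-negative at every frequency. The estimator and test signal here are illustrative, not taken from the article:

```python
import cmath

def dft(x):
    """Plain O(N^2) discrete Fourier transform."""
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / N) for n in range(N))
            for k in range(N)]

def cross_spectrum(x, y):
    """DFT-based CPSD estimate: S_xy[k] = conj(X[k]) * Y[k] / N."""
    N = len(x)
    X, Y = dft(x), dft(y)
    return [X[k].conjugate() * Y[k] / N for k in range(N)]

x = [1.0, -2.0, 3.0, 0.5]
# Special case y = x: the cross-spectrum reduces to the ordinary PSD,
# so every value is (numerically) real and non-negative.
s_xx = cross_spectrum(x, x)
```

For two different signals the cross-spectrum is in general complex; its phase carries the relative timing (lag) between the two signals.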

Estimation

The goal of spectral density estimation is to estimate the spectral density of a random signal from a sequence of time samples. Depending on what is known about the signal, estimation techniques can involve parametric or non-parametric approaches, and may be based on time-domain or frequency-domain analysis. For example, a common parametric technique involves fitting the observations to an autoregressive model. A common non-parametric technique is the periodogram.

The spectral density is usually estimated using Fourier transform methods, but other techniques such as Welch's method and the maximum entropy method can also be used.
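As a sketch of how Welch's method reduces periodogram variance, the following splits a signal into windowed segments and averages their periodograms. The segment length, Hann window, non-overlapping segmentation, and power correction are illustrative choices (practical implementations usually also overlap segments):

```python
import cmath
import math
import random

def periodogram(x):
    """Single-segment PSD estimate: |DFT(x)[k]|^2 / N."""
    N = len(x)
    return [abs(sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / N)
                    for n in range(N))) ** 2 / N
            for k in range(N)]

def welch_psd(x, seg_len):
    """Average Hann-windowed, non-overlapping segment periodograms."""
    hann = [0.5 - 0.5 * math.cos(2 * math.pi * n / (seg_len - 1))
            for n in range(seg_len)]
    win_power = sum(w * w for w in hann) / seg_len  # window power correction
    segments = [x[i:i + seg_len]
                for i in range(0, len(x) - seg_len + 1, seg_len)]
    avg = [0.0] * seg_len
    for seg in segments:
        windowed = [w * v for w, v in zip(hann, seg)]
        for k, p in enumerate(periodogram(windowed)):
            avg[k] += p / (len(segments) * win_power)
    return avg

random.seed(1)
noise = [random.gauss(0.0, 1.0) for _ in range(256)]
psd_est = welch_psd(noise, seg_len=32)  # should be roughly flat near 1
```

Averaging over segments trades frequency resolution (bin spacing widens from 1/N to 1/seg_len) for a lower-variance estimate.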

Properties

  • The spectral density of x(t) and the autocorrelation of x(t) form a Fourier transform pair (for PSD versus ESD, different definitions of autocorrelation function are used).
  • One of the results of Fourier analysis is Parseval's theorem, which states that the area under the energy spectral density curve is equal to the area under the square of the magnitude of the signal, the total energy:

    \int_{-\infty}^{\infty} |x(t)|^2\, dt = \int_{-\infty}^{\infty} |X(f)|^2\, df

    The above theorem holds true in the discrete cases as well. A similar result holds for the total power in a power spectral density being equal to the corresponding mean total signal power, which is the autocorrelation function at zero lag.
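The discrete form of Parseval's theorem is easy to verify directly: the energy summed over time samples equals the energy summed over DFT bins divided by N. The test signal below is arbitrary:

```python
import cmath

x = [3.0, -1.0, 2.5, 0.0, -4.0]
N = len(x)
X = [sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / N) for n in range(N))
     for k in range(N)]

time_energy = sum(v * v for v in x)              # sum_n |x[n]|^2
freq_energy = sum(abs(Xk) ** 2 for Xk in X) / N  # (1/N) * sum_k |X[k]|^2
# Parseval's theorem (discrete form): the two sums agree.
```

The 1/N factor is tied to the DFT normalization used here; with a unitary DFT convention it disappears.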
  • Most "frequency" graphs really display only the spectral density. Sometimes the complete frequency spectrum is graphed in two parts, "amplitude" versus frequency (which is the spectral density) and "phase" versus frequency (which contains the rest of the information from the frequency spectrum). The signal x(t) cannot be recovered from the spectral density part alone; the "temporal information" is lost.
  • The spectral centroid of a signal is the midpoint of its spectral density function, i.e. the frequency that divides the distribution into two equal parts.
  • The spectral edge frequency of a signal is an extension of the previous concept to any proportion instead of two equal parts.
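Both of the preceding bullet points reduce to finding where the cumulative spectral power crosses a given fraction of the total. A minimal sketch, using the article's "two equal parts" definition and a hypothetical toy spectrum:

```python
def spectral_edge(freqs, psd, fraction):
    """Smallest listed frequency below which `fraction` of the total
    spectral power lies (cumulative sum over discrete PSD bins)."""
    total = sum(psd)
    cumulative = 0.0
    for f, p in zip(freqs, psd):
        cumulative += p
        if cumulative >= fraction * total:
            return f
    return freqs[-1]

# Toy one-sided spectrum (hypothetical values).
freqs = [0, 1, 2, 3, 4]
psd = [1.0, 3.0, 4.0, 1.0, 1.0]

centroid = spectral_edge(freqs, psd, 0.5)   # divides power into two equal parts
edge_95 = spectral_edge(freqs, psd, 0.95)   # spectral edge frequency at 95%
```

The spectral edge frequency is simply the same computation with an arbitrary fraction in place of one half.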
  • Spectral density is a function of frequency, not a function of time. However, the spectral density of small "windows" of a longer signal may be calculated, and plotted versus time associated with the window. Such a graph is called a spectrogram. This is the basis of a number of spectral analysis techniques such as the short-time Fourier transform and wavelets.
  • In radiometry and colorimetry (or color science more generally), the spectral power distribution (SPD) of a light source is a measure of the power carried by each frequency or "color" in a light source. The light spectrum is usually measured at points (often 31) along the visible spectrum, in wavelength space instead of frequency space, which makes it not strictly a spectral density. Some spectrophotometers can measure increments as fine as one to two nanometers. Values are used to calculate other specifications and then plotted to demonstrate the spectral attributes of the source. This can be a helpful tool in analyzing the color characteristics of a particular source.

Applications

Electrical engineering

The concept and use of the power spectrum of a signal is fundamental in electrical engineering, especially in electronic communication systems, including radio communications, radars, and related systems, plus passive remote sensing technology. Much effort has been expended, and millions of dollars spent, on developing and producing electronic instruments called "spectrum analyzers" to aid electrical engineers and technicians in observing and measuring the power spectra of signals. The cost of a spectrum analyzer varies with its frequency range, its bandwidth, and its accuracy. The higher the frequency range (S-band, C-band, X-band, Ku-band, K-band, Ka-band, etc.), the more difficult the components are to make, assemble, and test, and the more expensive the spectrum analyzer. The wider the bandwidth a spectrum analyzer possesses, the more costly it is, and the capability for more accurate measurements increases cost as well.

The spectrum analyzer measures the magnitude of the short-time Fourier transform (STFT) of an input signal. If the signal being analyzed can be considered a stationary process, the STFT is a good smoothed estimate of its power spectral density. These devices work at low frequencies and with small bandwidths.

Coherence

See Coherence (signal processing) for use of the cross-spectral density.

See also

References

  1. ^ Gérard Maral (2003). VSAT Networks. John Wiley and Sons. ISBN 0-470-86684-5.
  2. ^ Michael Peter Norton and Denis G. Karczub (2003). Fundamentals of Noise and Vibration Analysis for Engineers. Cambridge University Press. ISBN 0-521-49913-5.
  3. ^ Alessandro Birolini (2007). Reliability Engineering. Springer. p. 83. ISBN 978-3-540-49388-4.
  4. ^ Fred Rieke, William Bialek, and David Warland (1999). Spikes: Exploring the Neural Code (Computational Neuroscience). MIT Press. ISBN 978-0262681087.
  5. ^ Scott Miller and Donald Childers (2012). Probability and Random Processes. Academic Press.
  6. ^ Hannes Risken (1996). The Fokker–Planck Equation: Methods of Solution and Applications (2nd ed.). Springer. p. 30. ISBN 9783540615309.
  7. ^ Dennis Ward Ricker (2003). Echo Signal Processing. Springer. ISBN 1-4020-7395-X.
  8. ^ Andreas F. Molisch (2011). Wireless Communications (2nd ed.). John Wiley and Sons. p. 194. ISBN 978-0-470-74187-0.
  9. ^ Robert Grover Brown & Patrick Y.C. Hwang (1997). Introduction to Random Signals and Applied Kalman Filtering. John Wiley & Sons. ISBN 0-471-12839-2.
  10. ^ Storch, H. von (2001). Statistical Analysis in Climate Research. Cambridge University Press. ISBN 0-521-01230-9.
  11. ^ An Introduction to the Theory of Random Signals and Noise, Wilbur B. Davenport and Willian L. Root, IEEE Press, New York, 1987, ISBN 0-87942-235-1
  12. ^ http://www.fil.ion.ucl.ac.uk/~wpenny/course/course.html, chapter 7