Spectral density
{{Short description|Relative importance of certain frequencies in a composite signal}}
{{about|signal processing and relation of spectra to time-series|further applications in the physical sciences|Spectrum (physical sciences)}}
{{Use American English|date=March 2019}}
{{distinguish-redirect|Spectral power density|Spectral power}}
{{Too technical|date=June 2024}}
[[File:Fluorescent lighting spectrum peaks labelled.svg|thumb|right|The spectral density of a [[fluorescent light]] as a function of optical wavelength shows peaks at atomic transitions, indicated by the numbered arrows.]]
[[File:Voice waveform and spectrum.png|thumb|right|The voice waveform over time (left) has a broad audio power spectrum (right).]]
In [[signal processing]], the power spectrum <math>S_{xx}(f)</math> of a [[continuous time]] [[signal]] <math>x(t)</math> describes the distribution of [[Power (physics)|power]] into frequency components <math>f</math> composing that signal.<ref name="P Stoica">{{cite web | url = http://user.it.uu.se/~ps/SAS-new.pdf | title = Spectral Analysis of Signals | author1 = P Stoica | author-link = Peter Stoica | author2 = R Moses | name-list-style = amp | year = 2005 }}</ref> According to [[Fourier analysis]], any physical signal can be decomposed into a number of discrete frequencies, or a spectrum of frequencies over a continuous range. The statistical average of any sort of signal (including [[Noise (electronics)|noise]]) as analyzed in terms of its frequency content, is called its [[spectrum]].

When the energy of the signal is concentrated around a finite time interval, especially if its total energy is finite, one may compute the '''energy spectral density'''. More commonly used is the '''power spectral density''' (PSD, or simply '''power spectrum'''), which applies to signals existing over ''all'' time, or over a time period large enough (especially in relation to the duration of a measurement) that it could as well have been over an infinite time interval. The PSD then refers to the spectral energy distribution that would be found per unit time, since the total energy of such a signal over all time would generally be infinite. [[Summation]] or integration of the spectral components yields the total power (for a physical process) or variance (in a statistical process), identical to what would be obtained by integrating <math>x^2(t)</math> over the time domain, as dictated by [[Parseval's theorem]].<ref name="P Stoica" />
== Explanation ==

The spectrum of a physical process <math>x(t)</math> often contains essential information about the nature of <math>x</math>. For instance, the [[Pitch (music)|pitch]] and [[timbre]] of a musical instrument are immediately determined from a spectral analysis. The [[color]] of a light source is determined by the spectrum of the electromagnetic wave's electric field <math>E(t)</math> as it fluctuates at an extremely high frequency. Obtaining a spectrum from time series such as these involves the [[Fourier transform]], and generalizations based on Fourier analysis. In many cases the time domain is not specifically employed in practice, such as when a [[dispersive prism]] is used to obtain a spectrum of light in a [[spectrograph]], or when a sound is perceived through its effect on the auditory receptors of the inner ear, each of which is sensitive to a particular frequency.

However, this article concentrates on situations in which the time series is known (at least in a statistical sense) or directly measured (such as by a microphone sampled by a computer). The power spectrum is important in [[statistical signal processing]] and in the statistical study of [[stochastic process]]es, as well as in many other branches of [[physics]] and [[engineering]]. Typically the process is a function of time, but one can similarly discuss data in the spatial domain being decomposed in terms of [[spatial frequency]].<ref name="P Stoica" />
== Units ==
{{see also|Fourier transform#Units}}

In [[physics]], the signal might be a wave, such as an [[electromagnetic wave]], an [[sound wave|acoustic wave]], or the vibration of a mechanism. The ''power spectral density'' (PSD) of the signal describes the [[power (physics)|power]] present in the signal as a function of frequency, per unit frequency. Power spectral density is commonly expressed in [[SI units]] of [[Watt|watts]] per [[hertz]] (abbreviated as W/Hz).{{sfn | Maral | 2004}}

When a signal is defined in terms only of a [[voltage]], for instance, there is no unique power associated with the stated amplitude. In this case "power" is simply reckoned in terms of the square of the signal, as this would always be ''proportional'' to the actual power delivered by that signal into a given [[Electrical impedance|impedance]]. So one might use units of V<sup>2</sup> Hz<sup>−1</sup> for the PSD. ''Energy spectral density'' (ESD) would have units of V<sup>2</sup> s Hz<sup>−1</sup>, since [[energy (physics)|energy]] has units of power multiplied by time (e.g., [[watt-hour]]).{{sfn | Norton | Karczub | 2003}}

In the analysis of random [[vibration]]s, units of ''g''<sup>2</sup> Hz<sup>−1</sup> are frequently used for the PSD of [[acceleration]], where ''g'' denotes the [[g-force]].{{sfn | Birolini | 2007 | p=83}}

In the general case, the units of PSD will be the ratio of units of variance per unit of frequency; so, for example, a series of displacement values (in meters) over time (in seconds) will have PSD in units of meters squared per hertz, m<sup>2</sup>/Hz.

Mathematically, it is not necessary to assign physical dimensions to the signal or to the independent variable. In the following discussion the meaning of ''x''(''t'') will remain unspecified, but the independent variable will be assumed to be that of time.

=== One-sided vs two-sided ===

A PSD can be either a ''one-sided'' function of only positive frequencies or a ''two-sided'' function of both positive and [[Negative frequency|negative frequencies]] but with only half the amplitude. Noise PSDs are generally one-sided in engineering and two-sided in physics.<ref>{{Cite web |last=Paschotta |first=Rüdiger |title=Power Spectral Density |url=https://www.rp-photonics.com/power_spectral_density.html |archive-url=https://web.archive.org/web/20240415070408/https://www.rp-photonics.com/power_spectral_density.html |archive-date=2024-04-15 |access-date=2024-06-26 |website=rp-photonics.com |language=en}}</ref>
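The factor of two relating the two conventions can be checked numerically. The following is a minimal sketch, assuming the NumPy and SciPy libraries are available; the sampling rate and record length are arbitrary illustrations:

<syntaxhighlight lang="python">
import numpy as np
from scipy.signal import periodogram

rng = np.random.default_rng(0)
fs = 1000.0                       # sampling rate in Hz (illustrative)
x = rng.standard_normal(100_000)  # a white-noise record

# One-sided PSD: power at +f and -f is folded onto positive frequencies.
f1, p1 = periodogram(x, fs=fs, return_onesided=True)
# Two-sided PSD: the same power spread over negative and positive frequencies.
f2, p2 = periodogram(x, fs=fs, return_onesided=False)

# Away from DC and Nyquist, the one-sided density is twice the two-sided one.
print(p1[1:-1].mean() / p2.mean())  # approximately 2
</syntaxhighlight>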
== Definition ==

=== Energy spectral density{{Anchor|Energy}} ===
{{distinguish-redirect|Energy spectral density|energy spectrum}}

Energy spectral density describes how the [[Energy (signal processing)|energy]] of a signal or a [[time series]] is distributed with frequency. Here, the term [[Energy (signal processing)|energy]] is used in the generalized sense of signal processing;{{sfn | Oppenheim | Verghese | 2016 | p=12}} that is, the energy <math>E</math> of a signal <math>x(t)</math> is:
<math display="block"> E \triangleq \int_{-\infty}^\infty \left|x(t)\right|^2\ dt.</math>

The energy spectral density is most suitable for transients—that is, pulse-like signals—having a finite total energy. Finite or not, [[Parseval's theorem]] (or Plancherel's theorem) gives us an alternate expression for the energy of the signal:{{sfn | Stein | 2000 | pp=108,115}}
<math display="block">\int_{-\infty}^\infty |x(t)|^2\, dt = \int_{-\infty}^\infty \left|\hat{x}(f)\right|^2\, df,</math>
where
<math display="block">\hat{x}(f) \triangleq \int_{-\infty}^\infty e^{-i 2\pi ft} x(t) \ dt</math>
is the value of the [[Fourier transform]] of <math>x(t)</math> at [[frequency]] <math>f</math> (in [[hertz|Hz]]). The theorem also holds in the discrete-time case. Since the integral on the left-hand side is the energy of the signal, the value of <math>\left| \hat{x}(f) \right|^2 df</math> can be interpreted as a [[density function]] multiplied by an infinitesimally small frequency interval, describing the energy contained in the signal in the frequency interval from <math>f</math> to <math>f + df</math>.

Therefore, the '''energy spectral density''' of <math>x(t)</math> is defined as:{{sfn | Oppenheim | Verghese | 2016 | p=14}}

{{Equation box 1
|indent = :
|title =
|equation = {{NumBlk||<math> \bar{S}_{xx}(f) \triangleq \left| \hat{x}(f) \right|^2 </math>|{{EquationRef|Eq.1}}}}
|cellpadding = 6
|border
|border colour = #0073CF
|background colour = #F5FFFA
}}

The function <math>\bar{S}_{xx}(f)</math> and the [[autocorrelation]] of <math>x(t)</math> form a Fourier transform pair, a result also known as the [[Wiener–Khinchin theorem]] (see also [[Periodogram#Definition|Periodogram]]).
As a physical example of how one might measure the energy spectral density of a signal, suppose <math>V(t)</math> represents the [[electric potential|potential]] (in [[volt]]s) of an electrical pulse propagating along a [[transmission line]] of [[Electrical impedance|impedance]] <math>Z</math>, and suppose the line is terminated with a [[impedance matching|matched]] resistor (so that all of the pulse energy is delivered to the resistor and none is reflected back). By [[Ohm's law]], the power delivered to the resistor at time <math>t</math> is equal to <math>V(t)^2/Z</math>, so the total energy is found by integrating <math>V(t)^2/Z</math> with respect to time over the duration of the pulse. To find the value of the energy spectral density <math>\bar{S}_{xx}(f)</math> at frequency <math>f</math>, one could insert between the transmission line and the resistor a [[bandpass filter]] which passes only a narrow range of frequencies (<math>\Delta f</math>, say) near the frequency of interest and then measure the total energy <math>E(f)</math> dissipated across the resistor. The value of the energy spectral density at <math>f</math> is then estimated to be <math>E(f)/\Delta f</math>. In this example, since the power <math>V(t)^2/Z</math> has units of V<sup>2</sup> Ω<sup>−1</sup>, the energy <math>E(f)</math> has units of V<sup>2</sup> s Ω<sup>−1</sup> = [[Joule|J]], and hence the estimate <math>E(f)/\Delta f</math> of the energy spectral density has units of J Hz<sup>−1</sup>, as required. In many situations, it is common to forget the step of dividing by <math>Z</math>, so that the energy spectral density instead has units of V<sup>2</sup> Hz<sup>−1</sup>.

This definition generalizes in a straightforward manner to a discrete signal with a [[countably infinite]] number of values <math>x_n</math>, such as a signal sampled at discrete times <math>t_n = t_0 + (n\,\Delta t)</math>:
<math display="block">\bar{S}_{xx}(f) = \lim_{N\to \infty} (\Delta t)^2 \underbrace{\left|\sum_{n=-N}^N x_n e^{-i 2\pi f n \, \Delta t}\right|^2}_{\left|\hat x_d(f)\right|^2},</math>
where <math>\hat x_d(f)</math> is the [[discrete-time Fourier transform]] of <math>x_n.</math> The sampling interval <math>\Delta t</math> is needed to keep the correct physical units and to ensure that we recover the continuous case in the limit <math>\Delta t\to 0.</math> In the mathematical sciences, however, the interval is often set to 1, which simplifies the results at the expense of generality (see also [[Normalized frequency (unit)|normalized frequency]]).
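As a numerical illustration of the discrete form of Eq.1, the following sketch (NumPy only; the pulse shape and sampling interval are arbitrary) computes the energy spectral density of a sampled pulse and verifies Parseval's theorem:

<syntaxhighlight lang="python">
import numpy as np

dt = 1e-3                          # sampling interval in seconds (illustrative)
x = np.zeros(4096)
x[100:200] = 1.0                   # a finite-energy, pulse-like signal

X = np.fft.fft(x) * dt             # approximates the continuous Fourier transform
esd = np.abs(X) ** 2               # discrete counterpart of Eq.1, including (dt)^2
f = np.fft.fftfreq(x.size, d=dt)   # frequency axis in Hz

# Parseval check: the time-domain energy equals the integral of the ESD.
energy_time = np.sum(np.abs(x) ** 2) * dt
energy_freq = np.sum(esd) / (x.size * dt)   # bin width df = 1/(N*dt)
print(energy_time, energy_freq)             # identical to numerical precision
</syntaxhighlight>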
=== Power spectral density ===
{{distinguish|spectral power distribution}}
[[File:PowerSpectrumExt.svg|thumb|right|300px|The power spectrum of the measured [[cosmic microwave background radiation]] temperature anisotropy in terms of the angular scale. The solid line is a theoretical model, for comparison.]]

The above definition of energy spectral density is suitable for transients (pulse-like signals) whose energy is concentrated around one time window; then the Fourier transforms of the signals generally exist. For continuous signals over all time, one must rather define the ''power spectral density'' (PSD) which exists for [[stationary process]]es; this describes how the [[power (physics)|power]] of a signal or time series is distributed over frequency, as in the simple example given previously. Here, power can be the actual physical power, or more often, for convenience with abstract signals, is simply identified with the squared value of the signal. For example, statisticians study the [[variance]] of a function over time <math>x(t)</math> (or over another independent variable), and using an analogy with electrical signals (among other physical processes), it is customary to refer to it as the ''power spectrum'' even when there is no physical power involved. If one were to create a physical [[voltage]] source which followed <math>x(t)</math> and applied it to the terminals of a one [[ohm]] [[resistor]], then indeed the instantaneous power dissipated in that resistor would be given by <math>x^2(t)</math> [[watt]]s.

The average power <math>P</math> of a signal <math>x(t)</math> over all time is therefore given by the following time average, where the period <math>T</math> is centered about some arbitrary time <math>t=t_0</math>:
<math display="block"> P = \lim_{T\to \infty} \frac 1 {T} \int_{t_0-T/2}^{t_0+T/2} \left|x(t)\right|^2\,dt.</math>

However, for the sake of dealing with the math that follows, it is more convenient to deal with time limits in the signal itself rather than time limits in the bounds of the integral. As such, we have an alternative representation of the average power, where <math>x_T(t) = x(t)w_T(t)</math> and <math>w_T(t)</math> is unity within the arbitrary period and zero elsewhere:
<math display="block"> P = \lim_{T\to \infty} \frac 1 {T} \int_{-\infty}^{\infty} \left|x_T(t)\right|^2\,dt.</math>

Clearly, in cases where the above expression for <math>P</math> is non-zero, the integral must grow without bound as <math>T</math> grows without bound. That is the reason why we cannot use the energy of the signal, which ''is'' that diverging integral, in such cases.

In analyzing the frequency content of the signal <math>x(t)</math>, one might like to compute the ordinary Fourier transform <math>\hat{x}(f)</math>; however, for many signals of interest the Fourier transform does not formally exist.<ref group=nb>Some authors, e.g., {{harv| Risken | Frank | 1996 | p=30}}, still use the non-normalized Fourier transform in a formal way to formulate a definition of the power spectral density
<math display="block"> \langle \hat x(\omega) \hat x^\ast(\omega') \rangle = 2\pi f(\omega) \delta(\omega - \omega'),</math>
where <math> \delta(\omega-\omega')</math> is the [[Dirac delta function]]. Such formal statements may sometimes be useful to guide the intuition, but should always be used with utmost care.</ref> Regardless, [[Parseval's theorem]] tells us that we can re-write the average power as follows:
<math display="block"> P = \lim_{T\to \infty} \frac 1 {T} \int_{-\infty}^{\infty} |\hat{x}_T(f)|^2\,df.</math>

Then the power spectral density is simply defined as the integrand above.{{sfn | Oppenheim | Verghese | 2016 | pp=422-423}}{{sfn | Miller | Childers | 2012 | pp=429-431}}

{{Equation box 1
|indent = :
|title =
|equation = {{NumBlk||<math> S_{xx}(f) = \lim_{T\to \infty} \frac 1 {T} \left|\hat{x}_T(f)\right|^2\,</math>|{{EquationRef|Eq.2}}}}
|cellpadding = 6
|border
|border colour = #0073CF
|background colour = #F5FFFA
}}
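For a record of finite length, the limit in Eq.2 is dropped and the expression becomes an estimator, the periodogram. The following sketch (assuming NumPy and SciPy; the signal parameters are arbitrary) implements it directly and checks it against SciPy's implementation:

<syntaxhighlight lang="python">
import numpy as np
from scipy.signal import periodogram

rng = np.random.default_rng(1)
fs = 500.0                        # sampling rate in Hz (illustrative)
dt = 1.0 / fs
N = 2048
t = np.arange(N) * dt
x = np.sin(2 * np.pi * 50.0 * t) + rng.standard_normal(N)

# Finite-T version of Eq.2: S(f) ~ (dt^2 / T) |DFT(x)|^2 with T = N*dt.
T = N * dt
S = (dt ** 2 / T) * np.abs(np.fft.fft(x)) ** 2
f = np.fft.fftfreq(N, d=dt)

# The same estimator via SciPy's two-sided periodogram.
f_ref, S_ref = periodogram(x, fs=fs, window='boxcar', detrend=False,
                           return_onesided=False, scaling='density')
print(np.allclose(S, S_ref))      # True
</syntaxhighlight>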
From here, due to the [[convolution theorem]], we can also view <math> |\hat{x}_{T}(f)|^2</math> as the [[Fourier transform]] of the time [[convolution]] of <math> x_{T}^*(-t)</math> and <math> x_{T}(t)</math>, where * represents the complex conjugate. Taking into account that
<math display="block"> \begin{align}
\mathcal{F}\left\{ x_{T}^* ( - t ) \right\} &= \int ^\infty _{ - \infty} x_{T}^* ( - t ) e^{-i 2\pi f t } dt \\
&= \int ^\infty _{ - \infty} x_{T}^* ( t ) e^{ i 2\pi f t } dt \\
&= \int ^\infty _{ - \infty} x_{T}^* ( t ) [ e^{ - i 2\pi f t } ]^* dt \\
&= \left[\int ^\infty _{ - \infty} x_{T} ( t ) e^{ - i 2\pi f t } dt \right]^* \\
&= \left[\mathcal{F} \left\{ x_{T} ( t )\right\}\right] ^* \\
&= \left[\hat{x}_T(f) \right] ^*
\end{align}</math>
and letting <math> u(t) = x_T^{*}(-t)</math>, we have:
<math display="block">\begin{align}
\left|\hat{x}_{T}(f)\right|^2 &= [ \hat{x}_{T}(f) ] ^* \cdot \hat{x}_{T}(f) \\
&= \mathcal{F}\left\{ x_{T}^* ( - t ) \right\} \cdot \mathcal{F}\left\{ x_{T} ( t ) \right\} \\
&= \mathcal{F}\left\{ u(t) \right\} \cdot \mathcal{F}\left\{ x_{T} ( t ) \right\} \\
&= \mathcal{F}\left\{ u(t) \mathbin{\mathbf{*}} x_{T}(t) \right\} \\
&= \int_{-\infty}^\infty \left[ \int_{-\infty}^\infty u (\tau - t) x_T ( t ) dt \right] e^{-i 2\pi f\tau} d\tau \\
&= \int_{-\infty}^\infty \left[\int_{-\infty}^\infty x_{T}^*(t - \tau)x_{T}(t) dt \right]e^{-i 2\pi f\tau} \ d\tau,
\end{align}</math>
where the [[convolution theorem]] has been used when passing from the 3rd to the 4th line.

Now, if we divide the time convolution above by the period <math>T</math> and take the limit as <math>T \rightarrow \infty</math>, it becomes the [[autocorrelation]] function of the non-windowed signal <math> x(t)</math>, which is denoted as <math>R_{xx}(\tau)</math>, provided that <math>x(t)</math> is [[ergodic]], which is true in most, but not all, practical cases.<ref group=nb> The [[Wiener–Khinchin theorem]] makes sense of this formula for any [[wide-sense stationary process]] under weaker hypotheses: <math>R_{xx}</math> does not need to be absolutely integrable, it only needs to exist. But the integral can no longer be interpreted as usual. The formula also makes sense if interpreted as involving [[Distribution (mathematics)|distributions]] (in the sense of [[Laurent Schwartz]], not in the sense of a statistical [[Cumulative distribution function]]) instead of functions. If <math>R_{xx}</math> is continuous, [[Bochner's theorem]] can be used to prove that its Fourier transform exists as a positive [[Measure (mathematics)|measure]], whose distribution function is F (but not necessarily as a function and not necessarily possessing a probability density).</ref>
<math display="block"> \lim_{T\to \infty} \frac{1}{T} \left|\hat{x}_{T}(f)\right|^2 = \int_{-\infty}^\infty \left[\lim_{T\to \infty} \frac{1}{T}\int_{-\infty}^\infty x_{T}^*(t - \tau)x_{T}(t) dt \right]e^{-i 2\pi f\tau} \ d\tau = \int_{-\infty}^\infty R_{xx}(\tau)e^{-i 2\pi f\tau} d\tau </math>

From here we see, again assuming the ergodicity of <math>x(t)</math>, that the power spectral density can be found as the Fourier transform of the autocorrelation function ([[Wiener–Khinchin theorem]]).{{sfn | Miller | Childers | 2012 | p=433}}

{{Equation box 1
|indent = :
|title =
|equation = {{NumBlk||<math>S_{xx}(f) = \int_{-\infty}^\infty R_{xx}(\tau) e^{-i 2 \pi f \tau}\,d \tau = \hat R_{xx}(f)</math>|{{EquationRef|Eq.3}}}}
|cellpadding = 6
|border
|border colour = #0073CF
|background colour = #F5FFFA
}}

Many authors use this equality to actually ''define'' the power spectral density.<ref>{{cite book | title = Echo Signal Processing | author = Dennis Ward Ricker | publisher = Springer | year = 2003 | isbn = 978-1-4020-7395-3 | url = https://books.google.com/books?id=NF2Tmty9nugC&q=%22power+spectral+density%22+%22energy+spectral+density%22&pg=PA23 }}</ref>

The power of the signal in a given frequency band <math>[f_1, f_2]</math>, where <math> 0<f_1 < f_2</math>, can be calculated by integrating over frequency. Since <math>S_{xx}(-f) = S_{xx}(f)</math>, an equal amount of power can be attributed to positive and negative frequency bands, which accounts for the factor of 2 in the following form (such trivial factors depend on the conventions used):
<math display="block"> P_\textsf{bandlimited} = 2 \int_{f_1}^{f_2} S_{xx}(f) \, df</math>
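Both relations lend themselves to a numerical sanity check. The sketch below (NumPy and SciPy; the sampling rate and band edges are arbitrary) verifies the discrete Wiener–Khinchin identity and then computes the power in a band from a two-sided PSD with the factor of 2:

<syntaxhighlight lang="python">
import numpy as np
from scipy.signal import periodogram

rng = np.random.default_rng(2)
N = 1024
x = rng.standard_normal(N)

# Discrete Wiener-Khinchin: the DFT of the circular autocorrelation
# equals the squared magnitude of the DFT of the signal.
r = np.array([np.dot(x, np.roll(x, -k)) for k in range(N)])
print(np.allclose(np.fft.fft(r).real, np.abs(np.fft.fft(x)) ** 2))  # True

# Band-limited power from a two-sided PSD, with the factor of 2.
fs = 200.0
f, S = periodogram(x, fs=fs, return_onesided=False, scaling='density')
band = (f >= 10.0) & (f <= 50.0)       # the positive-frequency band [f1, f2]
df = fs / N
p_band = 2 * np.sum(S[band]) * df      # P = 2 * integral of S_xx over the band
print(p_band)                          # a fraction of the total variance (~1)
</syntaxhighlight>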
More generally, similar techniques may be used to estimate a time-varying spectral density. In this case the time interval <math>T</math> is finite rather than approaching infinity. This results in decreased spectral coverage and resolution, since frequencies of less than <math>1/T</math> are not sampled, and results at frequencies which are not an integer multiple of <math>1/T</math> are not independent. Just using a single such time series, the estimated power spectrum will be very "noisy"; however, this can be alleviated if it is possible to evaluate the expected value (in the above equation) using a large (or infinite) number of short-term spectra corresponding to [[statistical ensemble]]s of realizations of <math>x(t)</math> evaluated over the specified time window.

Just as with the energy spectral density, the definition of the power spectral density can be generalized to [[discrete time]] variables <math>x_n</math>. As before, we can consider a window of <math>-N\le n\le N</math> with the signal sampled at discrete times <math>t_n = t_0 + (n\,\Delta t)</math> for a total measurement period <math>T = (2N + 1) \,\Delta t</math>:
<math display="block">S_{xx}(f) = \lim_{N\to \infty}\frac{(\Delta t)^2}{T}\left|\sum_{n=-N}^N x_n e^{-i 2\pi f n \,\Delta t}\right|^2</math>

Note that a single estimate of the PSD can be obtained through a finite number of samplings. As before, the actual PSD is achieved when <math>N</math> (and thus <math>T</math>) approaches infinity and the expected value is formally applied. In a real-world application, one would typically average a finite-measurement PSD over many trials to obtain a more accurate estimate of the theoretical PSD of the physical process underlying the individual measurements. This computed PSD is sometimes called a [[periodogram]]. This periodogram converges to the true PSD as the number of estimates as well as the averaging time interval <math>T</math> approach infinity.{{sfn | Brown | Hwang | 1997}}

If two signals both possess power spectral densities, then the [[#Cross-spectral density|cross-spectral density]] can similarly be calculated; as the PSD is related to the autocorrelation, so is the cross-spectral density related to the [[cross-correlation]].

The power spectral density of a signal exists if the signal is a wide-sense [[stationary process]]. If the signal is not wide-sense stationary, then the autocorrelation function must be a function of two variables. In some cases, such as wide-sense [[cyclostationary process]]es, a PSD may still exist.<ref>{{cite book | title = Wireless Communications | edition = 2nd | author = Andreas F. Molisch | publisher = John Wiley and Sons | year = 2011 | isbn = 978-0-470-74187-0 | page = 194 | url = http://books.google.com/books?id=vASyH5-jfMYC&pg=PA194 }}</ref>
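The averaging described above can be sketched as follows (NumPy and SciPy; the number of trials and record length are arbitrary). Each realization of a white-noise process yields a noisy periodogram, but the average over trials approaches the flat theoretical density:

<syntaxhighlight lang="python">
import numpy as np
from scipy.signal import periodogram

rng = np.random.default_rng(3)
fs = 1000.0
N = 1024
trials = 200                          # number of independent measurements

# Average single-measurement periodograms of many realizations of the
# same white-noise process (true one-sided PSD = 2*sigma^2/fs).
psd_sum = np.zeros(N // 2 + 1)
for _ in range(trials):
    x = rng.standard_normal(N)        # one realization, sigma = 1
    f, p = periodogram(x, fs=fs)
    psd_sum += p
psd_avg = psd_sum / trials

print(np.mean(psd_avg[1:-1]), 2.0 / fs)  # averaged estimate vs. true density
</syntaxhighlight>

SciPy's <code>welch</code> function performs a comparable average over overlapping, windowed segments of a single long record.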
==== Properties of the power spectral density ====

Some properties of the PSD include:{{sfn | Miller | Childers | 2012 | p=431}}
{{bulleted list
| The power spectrum is always real and non-negative, and the spectrum of a real valued process is also an [[even function]] of frequency: <math>S_{xx}(-f) = S_{xx}(f)</math>.
| For a continuous [[stochastic process]] ''x''(''t''), the autocorrelation function ''R''<sub>''xx''</sub>(''t'') can be reconstructed from its power spectrum ''S''<sub>''xx''</sub>(''f'') by using the [[inverse Fourier transform]].
| Using [[Parseval's theorem]], one can compute the [[variance]] (average power) of a process by integrating the power spectrum over all frequency:
<math display="block">P = \operatorname{Var}(x) = \int_{-\infty}^{\infty}\! S_{xx}(f) \, df.</math>
| For a real process ''x''(''t'') with power spectral density <math>S_{xx}(f)</math>, one can compute the ''integrated spectrum'' or ''power spectral distribution'' <math>F(f)</math>, which specifies the average ''bandlimited'' power contained in frequencies from DC to ''f'' using:{{sfn | Davenport | Root | 1987}}
<math display="block">F(f) = 2 \int _0^f S_{xx}(f')\, df'.</math>
Note that the previous expression for total power (signal variance) is a special case where {{math|''f'' → ∞}}.
}}
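The variance property above lends itself to a direct numerical check (NumPy and SciPy; the process parameters are arbitrary): integrating a one-sided PSD estimate over frequency recovers the variance of the process.

<syntaxhighlight lang="python">
import numpy as np
from scipy.signal import welch

rng = np.random.default_rng(4)
fs = 250.0
x = 3.0 * rng.standard_normal(200_000)   # zero-mean process with variance 9

f, S = welch(x, fs=fs, nperseg=4096)     # one-sided PSD estimate
power = np.sum(S) * (f[1] - f[0])        # integral of the PSD over frequency
print(power, np.var(x))                  # both approximately 9
</syntaxhighlight>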
=== Cross power spectral density {{anchor|Cross|Cross spectral density|Cross-spectral density}} ===
{{See also|Coherence (signal processing)}}

Given two signals <math>x(t)</math> and <math>y(t)</math>, each of which possess power spectral densities <math>S_{xx}(f)</math> and <math>S_{yy}(f)</math>, it is possible to define a '''cross power spectral density''' ('''CPSD''') or '''cross spectral density''' ('''CSD'''). To begin, let us consider the average power of such a combined signal:
<math display="block">\begin{align}
P &= \lim_{T\to \infty} \frac{1}{T} \int_{-\infty}^{\infty} \left[x_T(t) + y_T(t)\right]^*\left[x_T(t) + y_T(t)\right] dt \\
&= \lim_{T\to \infty} \frac{1}{T} \int_{-\infty}^{\infty} |x_T(t)|^2 + x^*_T(t) y_T(t) + y^*_T(t) x_{T}(t) + |y_T(t)|^2 \, dt.
\end{align}</math>

Using the same notation and methods as used for the power spectral density derivation, we exploit Parseval's theorem and obtain
<math display="block">\begin{align}
S_{xy}(f) &= \lim_{T\to\infty} \frac{1}{T} \left[\hat{x}^*_T(f) \hat{y}_T(f)\right] &
S_{yx}(f) &= \lim_{T\to\infty} \frac{1}{T} \left[\hat{y}^*_T(f) \hat{x}_T(f)\right]
\end{align}</math>
where, again, the contributions of <math>S_{xx}(f)</math> and <math>S_{yy}(f)</math> are already understood. Note that <math>S^*_{xy}(f) = S_{yx}(f)</math>, so the full contribution to the cross power is, generally, from twice the real part of either individual '''CPSD'''. Just as before, from here we recast these products as the Fourier transform of a time convolution, which when divided by the period and taken to the limit <math>T\to\infty</math> becomes the Fourier transform of a [[cross-correlation]] function:<ref>{{cite web | author = William D Penny | year = 2009 | title = Signal Processing Course, chapter 7 | url = http://www.fil.ion.ucl.ac.uk/~wpenny/course/course.html }}</ref>
<math display="block">\begin{align}
S_{xy}(f) &= \int_{-\infty}^{\infty} \left[\lim_{T\to\infty} \frac 1 {T} \int_{-\infty}^{\infty} x^*_{T}(t-\tau) y_{T}(t) dt \right] e^{-i 2 \pi f \tau} d\tau = \int_{-\infty}^{\infty} R_{xy}(\tau) e^{-i 2 \pi f \tau} d\tau \\
S_{yx}(f) &= \int_{-\infty}^{\infty} \left[\lim_{T\to\infty} \frac 1 {T} \int_{-\infty}^{\infty} y^*_{T}(t-\tau) x_{T}(t) dt \right] e^{-i 2 \pi f \tau} d\tau = \int_{-\infty}^{\infty} R_{yx}(\tau) e^{-i 2 \pi f \tau} d\tau,
\end{align}</math>
where <math>R_{xy}(\tau)</math> is the [[cross-correlation]] of <math>x(t)</math> with <math>y(t)</math> and <math>R_{yx}(\tau)</math> is the cross-correlation of <math>y(t)</math> with <math>x(t)</math>. In light of this, the PSD is seen to be a special case of the CSD for <math>x(t) = y(t)</math>. If <math>x(t)</math> and <math>y(t)</math> are real signals (e.g. voltage or current), their Fourier transforms <math>\hat{x}(f)</math> and <math>\hat{y}(f)</math> are usually restricted to positive frequencies by convention. Therefore, in typical signal processing, the full '''CPSD''' is just one of the '''CPSD'''s scaled by a factor of two:
<math display="block">\operatorname{CPSD}_\text{Full} = 2S_{xy}(f) = 2 S_{yx}(f).</math>

For discrete signals {{math|''x<sub>n</sub>''}} and {{math|''y<sub>n</sub>''}}, the relationship between the cross-spectral density and the cross-covariance is
<math display="block">S_{xy}(f) = \sum_{n=-\infty}^\infty R_{xy}(\tau_n) e^{-i 2 \pi f \tau_n}\,\Delta\tau.</math>
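A short numerical sketch (NumPy and SciPy; the coupling coefficient is arbitrary) illustrates the conjugate symmetry <math>S^*_{xy}(f) = S_{yx}(f)</math> and the fact that the CSD of a signal with itself reduces to its PSD:

<syntaxhighlight lang="python">
import numpy as np
from scipy.signal import csd, welch

rng = np.random.default_rng(5)
fs = 100.0
n = 50_000
x = rng.standard_normal(n)
y = 0.5 * x + rng.standard_normal(n)     # y partially correlated with x

f, S_xy = csd(x, y, fs=fs, nperseg=1024)
f, S_yx = csd(y, x, fs=fs, nperseg=1024)
print(np.allclose(S_xy, np.conj(S_yx)))  # True: conjugate symmetry

# The CSD of a signal with itself is its PSD.
f, S_xx = csd(x, x, fs=fs, nperseg=1024)
f, P_xx = welch(x, fs=fs, nperseg=1024)
print(np.allclose(S_xx.real, P_xx))      # True
</syntaxhighlight>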
== Estimation ==

The goal of spectral density estimation is to [[estimation theory|estimate]] the spectral density of a [[random signal]] from a sequence of time samples. Depending on what is known about the signal, estimation techniques can involve [[parametric statistics|parametric]] or [[non-parametric statistics|non-parametric]] approaches, and may be based on time-domain or frequency-domain analysis. For example, a common parametric technique involves fitting the observations to an [[autoregressive model]]. A common non-parametric technique is the [[periodogram]].

The spectral density is usually estimated using [[Fourier transform]] methods (such as the [[Welch method]]), but other techniques such as the [[Maximum entropy spectral estimation|maximum entropy]] method can also be used.
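As an illustration of the parametric approach, the following sketch (NumPy only; the model order and coefficients are arbitrary) fits an autoregressive model by least squares and evaluates the PSD implied by the fitted model:

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(6)
n = 10_000
# Synthesize an AR(2) process: x[t] = 1.0*x[t-1] - 0.5*x[t-2] + e[t].
x = np.zeros(n)
e = rng.standard_normal(n)
for t in range(2, n):
    x[t] = 1.0 * x[t - 1] - 0.5 * x[t - 2] + e[t]

# Fit the AR(2) coefficients by least squares on lagged regressors.
X = np.column_stack([x[1:-1], x[:-2]])
coef, *_ = np.linalg.lstsq(X, x[2:], rcond=None)
sigma2 = np.var(x[2:] - X @ coef)        # innovation variance

# Parametric (model-implied) PSD: sigma^2 / |A(e^{-i 2 pi f})|^2.
f = np.linspace(0, 0.5, 512)             # normalized frequency
A = 1.0 - coef[0] * np.exp(-2j * np.pi * f) - coef[1] * np.exp(-4j * np.pi * f)
S_ar = sigma2 / np.abs(A) ** 2

print(coef)                              # approximately [1.0, -0.5]
</syntaxhighlight>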
== Related concepts ==
{{distinguish|spectral density (physical science)}}

* The ''[[spectral centroid]]'' of a signal is the midpoint of its spectral density function, i.e. the frequency that divides the distribution into two equal parts (a minimal computation is sketched after this list).
* The spectral density of <math>f(t)</math> and the [[autocorrelation]] of <math>f(t)</math> form a Fourier transform pair (for PSD versus ESD, different definitions of autocorrelation function are used).
* {{anchor|Spectral edge frequency}} The '''spectral edge frequency''' ('''SEF'''), usually expressed as "SEF ''x''", represents the [[frequency]] below which ''x'' percent of the total power of a given signal is located; typically, ''x'' is in the range 75 to 95. In particular, it is a popular measure used in [[EEG]] monitoring, in which case SEF has variously been used to estimate the depth of [[anesthesia]] and stages of [[sleep]].{{sfn | Iranmanesh | Rodriguez-Villegas | 2017 }}{{sfn | Imtiaz | Rodriguez-Villegas | 2014}}
* {{anchor|Envelope}} A '''spectral envelope''' is the [[envelope curve]] of the spectrum density. It describes one point in time (one window, to be precise). For example, in [[remote sensing]] using a [[spectrometer]], the spectral envelope of a feature is the boundary of its [[electromagnetic spectrum|spectral]] properties, as defined by the range of brightness levels in each of the [[spectral bands]] of interest.
* The spectral density is a function of frequency, not a function of time. However, the spectral density of a small window of a longer signal may be calculated, and plotted versus time associated with the window. Such a graph is called a ''[[spectrogram]]''. This is the basis of a number of spectral analysis techniques such as the [[short-time Fourier transform]] and [[wavelets]].
* {{anchor|Phase spectrum}} A "spectrum" generally means the power spectral density, as discussed above, which depicts the distribution of signal content over frequency. For [[transfer function]]s (e.g., [[Bode plot]], [[Chirp#Relation to an impulse signal|chirp]]) the complete frequency response may be graphed in two parts: power versus frequency and [[phase (waves)|phase]] versus frequency—the '''phase spectral density''', '''phase spectrum''', or '''spectral phase'''. Less commonly, the two parts may be the [[real and imaginary parts]] of the transfer function. This is not to be confused with the ''[[frequency response]]'' of a transfer function, which also includes a phase (or equivalently, a real and imaginary part) as a function of frequency. The time-domain [[impulse response]] <math>h(t)</math> cannot generally be uniquely recovered from the power spectral density alone without the phase part. Although these are also Fourier transform pairs, there is no symmetry (as there is for the [[autocorrelation]]) forcing the Fourier transform to be real-valued. See [[Ultrashort pulse#Spectral phase]], [[phase noise]], [[group delay]].
* {{anchor|Amplitude}} Sometimes one encounters an '''amplitude spectral density''' ('''ASD'''), which is the square root of the PSD; the ASD of a voltage signal has units of V Hz<sup>−1/2</sup>.<ref>{{cite web | url = http://www.lumerink.com/courses/ece697/docs/Papers/The%20Fundamentals%20of%20FFT-Based%20Signal%20Analysis%20and%20Measurements.pdf | title = The Fundamentals of FFT-Based Signal Analysis and Measurement | author1 = Michael Cerna | author2 = Audrey F. Harvey | name-list-style = amp | year = 2000 }}</ref> This is useful when the ''shape'' of the spectrum is rather constant, since variations in the ASD will then be proportional to variations in the signal's voltage level itself. But it is mathematically preferred to use the PSD, since only in that case is the area under the curve meaningful in terms of actual power over all frequency or over a specified bandwidth.
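Several of these quantities can be read directly off a one-sided PSD estimate. The sketch below (NumPy and SciPy; the test signal is arbitrary) computes the spectral centroid as defined above (the frequency that splits the power in half), a spectral edge frequency, and the amplitude spectral density:

<syntaxhighlight lang="python">
import numpy as np
from scipy.signal import welch

rng = np.random.default_rng(7)
fs = 2000.0
t = np.arange(100_000) / fs
x = np.sin(2 * np.pi * 300.0 * t) + 0.5 * rng.standard_normal(t.size)

f, S = welch(x, fs=fs, nperseg=2048)                # one-sided PSD estimate
cum = np.cumsum(S)

centroid = f[np.searchsorted(cum, 0.50 * cum[-1])]  # splits power in half
sef95 = f[np.searchsorted(cum, 0.95 * cum[-1])]     # SEF 95
asd = np.sqrt(S)                                    # amplitude spectral density
print(centroid, sef95)
</syntaxhighlight>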
== Applications ==
{{further|Spectrum}}

Any signal that can be represented as a variable that varies in time has a corresponding frequency spectrum. This includes familiar entities such as [[visible light]] (perceived as [[color]]), musical notes (perceived as [[Pitch (music)|pitch]]), [[radio frequency|radio/TV]] (specified by their frequency, or sometimes [[wavelength]]) and even the regular rotation of the earth. When these signals are viewed in the form of a frequency spectrum, certain aspects of the received signals or the underlying processes producing them are revealed. In some cases the frequency spectrum may include a distinct peak corresponding to a [[sine wave]] component. In addition, there may be peaks corresponding to [[harmonics]] of a fundamental peak, indicating a periodic signal which is ''not'' simply sinusoidal. Alternatively, a continuous spectrum may show narrow frequency intervals which are strongly enhanced, corresponding to resonances, or frequency intervals containing almost zero power, as would be produced by a [[notch filter]].

=== Electrical engineering ===
[[File:Spectrogram-fm-radio.png|thumb|right|Spectrogram of an [[FM radio]] signal with frequency on the horizontal axis and time increasing upwards on the vertical axis.]]

The concept and use of the power spectrum of a signal is fundamental in [[electrical engineering]], especially in [[communication systems|electronic communication system]]s, including [[radio communication]]s, [[radar]]s, and related systems, plus passive [[remote sensing]] technology. Electronic instruments called [[spectrum analyzer]]s are used to observe and measure the '''''power spectra''''' of signals.

The spectrum analyzer measures the magnitude of the [[short-time Fourier transform]] (STFT) of an input signal. If the signal being analyzed can be considered a stationary process, the STFT is a good smoothed estimate of its power spectral density.
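A minimal software analogue of this measurement can be sketched as follows (NumPy and SciPy; the chirp parameters are arbitrary): the squared-magnitude STFT over sliding windows yields one PSD estimate per time slice.

<syntaxhighlight lang="python">
import numpy as np
from scipy.signal import spectrogram

rng = np.random.default_rng(8)
fs = 8000.0
t = np.arange(int(2 * fs)) / fs
# A chirp-like test signal whose frequency rises over time, plus noise.
x = np.sin(2 * np.pi * (500.0 + 1000.0 * t) * t) + 0.1 * rng.standard_normal(t.size)

# Squared-magnitude STFT over sliding windows: a PSD estimate per time slice.
f, tau, Sxx = spectrogram(x, fs=fs, nperseg=256, noverlap=128)
print(Sxx.shape)   # (frequency bins, time windows)
</syntaxhighlight>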
=== Cosmology ===

[[Primordial fluctuations]], density variations in the early universe, are quantified by a power spectrum which gives the power of the variations as a function of spatial scale.
== See also ==
* [[Bispectrum]]
* [[Brightness temperature]]
* [[Colors of noise]]
* [[Least-squares spectral analysis]]
* [[Noise spectral density]]
* [[Spectral density estimation]]
* [[Spectral efficiency]]
* [[Spectral leakage]]
* [[Spectral power distribution]]
* [[Whittle likelihood]]
* [[Window function]]
== Notes ==
{{Reflist|group="nb"}}

== References ==
{{Reflist}}
* {{cite book | last=Birolini | first=Alessandro | title=Reliability Engineering | publisher=Springer Science & Business Media | publication-place=Berlin; New York | date=2007 | isbn=978-3-540-49388-4}}
* {{cite book | last1=Brown | first1=Robert Grover | last2=Hwang | first2=Patrick Y. C. | title=Introduction to Random Signals and Applied Kalman Filtering with Matlab Exercises and Solutions | publisher=Wiley-Liss | publication-place=New York | date=1997 | isbn=978-0-471-12839-7}}
* {{cite book | last1=Davenport | first1=Wilbur B. (Jr) | last2=Root | first2=William L. | title=An Introduction to the Theory of Random Signals and Noise | publisher=Wiley-IEEE Press | publication-place=New York | date=1987 | isbn=978-0-87942-235-6}}
* {{cite journal | last1=Imtiaz | first1=Syed Anas | last2=Rodriguez-Villegas | first2=Esther | title=A Low Computational Cost Algorithm for REM Sleep Detection Using Single Channel EEG | journal=Annals of Biomedical Engineering | date=2014 | volume=42 | issue=11 | pages=2344–59 | doi=10.1007/s10439-014-1085-6 | pmid=25113231 | pmc=4204008}}
* {{cite journal | last1=Iranmanesh | first1=Saam | last2=Rodriguez-Villegas | first2=Esther | author-link2=Esther Rodriguez-Villegas | date=2017 | title=An Ultralow-Power Sleep Spindle Detection System on Chip | journal=IEEE Transactions on Biomedical Circuits and Systems | volume=11 | issue=4 | pages=858–866 | doi=10.1109/TBCAS.2017.2690908 | pmid=28541914 | hdl=10044/1/46059 | s2cid=206608057 | hdl-access=free}}
* {{cite book | last=Maral | first=Gerard | title=VSAT Networks | publisher=Wiley | publication-place=West Sussex, England; Hoboken, NJ | date=2004 | isbn=978-0-470-86684-9}}
* {{cite book | last1=Miller | first1=Scott | last2=Childers | first2=Donald | title=Probability and Random Processes | publisher=Academic Press | publication-place=Boston, MA | date=2012 | isbn=978-0-12-386981-4 | oclc=696092052}}
* {{cite book | last1=Norton | first1=M. P. | last2=Karczub | first2=D. G. | title=Fundamentals of Noise and Vibration Analysis for Engineers | publisher=Cambridge University Press | publication-place=Cambridge | date=2003 | isbn=978-0-521-49913-2}}
* {{cite book | last1=Oppenheim | first1=Alan V. | last2=Verghese | first2=George C. | title=Signals, Systems & Inference | publisher=Pearson | publication-place=Boston | date=2016 | isbn=978-0-13-394328-3}}
* {{cite book | last1=Risken | first1=Hannes | last2=Frank | first2=Till | title=The Fokker–Planck Equation | publisher=Springer Science & Business Media | publication-place=New York | date=1996 | isbn=978-3-540-61530-9}}
* {{cite book | last=Stein | first=Jonathan Y. | title=Digital Signal Processing | publisher=Wiley-Interscience | publication-place=New York; Weinheim | date=2000 | isbn=978-0-471-29546-4}}
== External links ==
* [http://vibrationdata.wordpress.com/category/power-spectral-density/ Power Spectral Density Matlab scripts]

{{decibel}}
{{DEFAULTSORT:Spectral Density}}
[[Category:Frequency-domain analysis]]
[[Category:Signal processing]]
[[Category:Waves]]
[[Category:Spectroscopy]]
[[Category:Scattering]]
[[Category:Fourier analysis]]
[[Category:Radio spectrum]]
[[Category:Spectrum (physical sciences)]]
Latest revision as of 05:12, 24 September 2024
This article may be too technical for most readers to understand.(June 2024) |
In signal processing, the power spectrum $S_{xx}(f)$ of a continuous time signal $x(t)$ describes the distribution of power into frequency components $f$ composing that signal.[1] According to Fourier analysis, any physical signal can be decomposed into a number of discrete frequencies, or a spectrum of frequencies over a continuous range. The statistical average of any sort of signal (including noise), as analyzed in terms of its frequency content, is called its spectrum.
When the energy of the signal is concentrated around a finite time interval, especially if its total energy is finite, one may compute the energy spectral density. More commonly used is the power spectral density (PSD, or simply power spectrum), which applies to signals existing over all time, or over a time period large enough (especially in relation to the duration of a measurement) that it could as well have been over an infinite time interval. The PSD then refers to the spectral energy distribution that would be found per unit time, since the total energy of such a signal over all time would generally be infinite. Summation or integration of the spectral components yields the total power (for a physical process) or variance (in a statistical process), identical to what would be obtained by integrating $x^2(t)$ over the time domain, as dictated by Parseval's theorem.[1]
The spectrum of a physical process $x(t)$ often contains essential information about the nature of $x$. For instance, the pitch and timbre of a musical instrument are immediately determined from a spectral analysis. The color of a light source is determined by the spectrum of the electromagnetic wave's electric field $E(t)$ as it fluctuates at an extremely high frequency. Obtaining a spectrum from time series such as these involves the Fourier transform, and generalizations based on Fourier analysis. In many cases the time domain is not specifically employed in practice, such as when a dispersive prism is used to obtain a spectrum of light in a spectrograph, or when a sound is perceived through its effect on the auditory receptors of the inner ear, each of which is sensitive to a particular frequency.
However, this article concentrates on situations in which the time series is known (at least in a statistical sense) or directly measured (such as by a microphone sampled by a computer). The power spectrum is important in statistical signal processing and in the statistical study of stochastic processes, as well as in many other branches of physics and engineering. Typically the process is a function of time, but one can similarly discuss data in the spatial domain being decomposed in terms of spatial frequency.[1]
Units
In physics, the signal might be a wave, such as an electromagnetic wave, an acoustic wave, or the vibration of a mechanism. The power spectral density (PSD) of the signal describes the power present in the signal as a function of frequency, per unit frequency. Power spectral density is commonly expressed in SI units of watts per hertz (abbreviated as W/Hz).[2]
When a signal is defined in terms only of a voltage, for instance, there is no unique power associated with the stated amplitude. In this case "power" is simply reckoned in terms of the square of the signal, as this would always be proportional to the actual power delivered by that signal into a given impedance. So one might use units of V² Hz⁻¹ for the PSD. Energy spectral density (ESD) would have units of V² s Hz⁻¹, since energy has units of power multiplied by time (e.g., watt-hour).[3]
In the general case, the units of PSD will be the ratio of units of variance per unit of frequency; so, for example, a series of displacement values (in meters) over time (in seconds) will have PSD in units of meters squared per hertz, m²/Hz. In the analysis of random vibrations, units of g² Hz⁻¹ are frequently used for the PSD of acceleration, where g denotes the g-force.[4]
Mathematically, it is not necessary to assign physical dimensions to the signal or to the independent variable. In the following discussion the meaning of x(t) will remain unspecified, but the independent variable will be assumed to be that of time.
One-sided vs two-sided
A PSD can be either a one-sided function of only positive frequencies or a two-sided function of both positive and negative frequencies but with only half the amplitude. Noise PSDs are generally one-sided in engineering and two-sided in physics.[5]
Definition
Energy spectral density
Energy spectral density describes how the energy of a signal or a time series is distributed with frequency. Here, the term energy is used in the generalized sense of signal processing;[6] that is, the energy of a signal $x(t)$ is: $E \triangleq \int_{-\infty}^{\infty} \left|x(t)\right|^2\, dt.$
The energy spectral density is most suitable for transients—that is, pulse-like signals—having a finite total energy. Finite or not, Parseval's theorem (or Plancherel's theorem) gives us an alternate expression for the energy of the signal:[7] $\int_{-\infty}^{\infty} |x(t)|^2\, dt = \int_{-\infty}^{\infty} \left|\hat{x}(f)\right|^2\, df,$ where $\hat{x}(f) \triangleq \int_{-\infty}^{\infty} e^{-i 2\pi f t} x(t)\, dt$ is the value of the Fourier transform of $x(t)$ at frequency $f$ (in Hz). The theorem also holds true in the discrete-time cases. Since the integral on the left-hand side is the energy of the signal, the value of $\left|\hat{x}(f)\right|^2 df$ can be interpreted as a density function multiplied by an infinitesimally small frequency interval, describing the energy contained in the signal at frequency $f$ in the frequency interval $f + df$.
Therefore, the energy spectral density of $x(t)$ is defined as:[8]

$\bar{S}_{xx}(f) \triangleq \left|\hat{x}(f)\right|^2$ (Eq. 1)
The function $\bar{S}_{xx}(f)$ and the autocorrelation of $x(t)$ form a Fourier transform pair, a result also known as the Wiener–Khinchin theorem (see also Periodogram).
As a physical example of how one might measure the energy spectral density of a signal, suppose $V(t)$ represents the potential (in volts) of an electrical pulse propagating along a transmission line of impedance $Z$, and suppose the line is terminated with a matched resistor (so that all of the pulse energy is delivered to the resistor and none is reflected back). By Ohm's law, the power delivered to the resistor at time $t$ is equal to $V(t)^2/Z$, so the total energy is found by integrating $V(t)^2/Z$ with respect to time over the duration of the pulse. To find the value of the energy spectral density at frequency $f$, one could insert between the transmission line and the resistor a bandpass filter which passes only a narrow range of frequencies ($\Delta f$, say) near the frequency of interest and then measure the total energy $E(f)$ dissipated across the resistor. The value of the energy spectral density at $f$ is then estimated to be $E(f)/\Delta f$. In this example, since the power $V(t)^2/Z$ has units of V² Ω⁻¹, the energy $E(f)$ has units of V² s Ω⁻¹ = J, and hence the estimate $E(f)/\Delta f$ of the energy spectral density has units of J Hz⁻¹, as required. In many situations, it is common to forget the step of dividing by $Z$ so that the energy spectral density instead has units of V² Hz⁻¹.
This definition generalizes in a straightforward manner to a discrete signal with a countably infinite number of values $x_n$ such as a signal sampled at discrete times $x_n = x(n\, \Delta t)$: $\bar{S}_{xx}(f) = \lim_{N\to\infty} (\Delta t)^2 \left|\sum_{n=-N}^{N} x_n e^{-i 2\pi f n \Delta t}\right|^2 = (\Delta t)^2 \left|\hat{x}_d(f)\right|^2,$ where $\hat{x}_d(f)$ is the discrete-time Fourier transform of $x_n.$ The sampling interval $\Delta t$ is needed to keep the correct physical units and to ensure that we recover the continuous case in the limit $\Delta t \to 0.$ But in the mathematical sciences the interval is often set to 1, which simplifies the results at the expense of generality. (See also normalized frequency.)
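As a numerical illustration of these definitions, consider the following minimal Python sketch (the pulse shape, sampling interval, and all variable names are assumptions of the example, not taken from the cited references): the energy of a sampled transient is computed in both domains and compared via Parseval's theorem.

```python
import numpy as np

dt = 1e-3                                # sampling interval in seconds (assumed)
t = np.arange(-1.0, 1.0, dt)             # time grid covering the transient
x = np.exp(-t**2 / (2 * 0.05**2))        # pulse-like signal of finite energy

xf = dt * np.fft.fft(x)                  # Delta-t scaling approximates the continuous transform
esd = np.abs(xf)**2                      # energy spectral density samples
df = 1.0 / (len(x) * dt)                 # frequency spacing of the DFT grid

energy_time = np.sum(np.abs(x)**2) * dt  # energy computed in the time domain
energy_freq = np.sum(esd) * df           # energy computed from the ESD
print(energy_time, energy_freq)          # the two agree to numerical precision
```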
Power spectral density
The above definition of energy spectral density is suitable for transients (pulse-like signals) whose energy is concentrated around one time window; then the Fourier transforms of the signals generally exist. For continuous signals over all time, one must rather define the power spectral density (PSD) which exists for stationary processes; this describes how the power of a signal or time series is distributed over frequency, as in the simple example given previously. Here, power can be the actual physical power, or more often, for convenience with abstract signals, is simply identified with the squared value of the signal. For example, statisticians study the variance of a function over time (or over another independent variable), and using an analogy with electrical signals (among other physical processes), it is customary to refer to it as the power spectrum even when there is no physical power involved. If one were to create a physical voltage source which followed $x(t)$ and applied it to the terminals of a one ohm resistor, then indeed the instantaneous power dissipated in that resistor would be given by $x^2(t)$ watts.
The average power $P$ of a signal $x(t)$ over all time is therefore given by the following time average, where the period $T$ is centered about some arbitrary time $t = t_0$: $P = \lim_{T\to\infty} \frac{1}{T} \int_{t_0 - T/2}^{t_0 + T/2} \left|x(t)\right|^2\, dt.$
However, for the sake of dealing with the math that follows, it is more convenient to deal with time limits in the signal itself rather than time limits in the bounds of the integral. As such, we have an alternative representation of the average power, where $x_T(t) = x(t)\, w_T(t)$ and $w_T(t)$ is unity within the arbitrary period and zero elsewhere: $P = \lim_{T\to\infty} \frac{1}{T} \int_{-\infty}^{\infty} \left|x_T(t)\right|^2\, dt.$ Clearly, in cases where the above expression for P is non-zero, the integral must grow without bound as T grows without bound. That is the reason why we cannot use the energy of the signal, which is that diverging integral, in such cases.
In analyzing the frequency content of the signal $x(t)$, one might like to compute the ordinary Fourier transform $\hat{x}(f)$; however, for many signals of interest the Fourier transform does not formally exist.[nb 1] Regardless, Parseval's theorem tells us that we can re-write the average power as follows: $P = \lim_{T\to\infty} \frac{1}{T} \int_{-\infty}^{\infty} \left|\hat{x}_T(f)\right|^2\, df.$
Then the power spectral density is simply defined as the integrand above:[9][10]

$S_{xx}(f) = \lim_{T\to\infty} \frac{1}{T} \operatorname{E}\left[\left|\hat{x}_T(f)\right|^2\right]$ (Eq. 2)
From here, due to the convolution theorem, we can also view $\left|\hat{x}_T(f)\right|^2$ as the Fourier transform of the time convolution of $x_T^*(-t)$ and $x_T(t)$, where * represents the complex conjugate. Taking into account that $\mathcal{F}\left\{x_T^*(-t)\right\} = \hat{x}_T^*(f)$ and making $u(t) = x_T^*(-t)$, we have: $\left|\hat{x}_T(f)\right|^2 = \hat{x}_T^*(f)\, \hat{x}_T(f) = \mathcal{F}\{u(t)\}\, \mathcal{F}\{x_T(t)\} = \mathcal{F}\{u(t) \ast x_T(t)\} = \int_{-\infty}^{\infty} \left[\int_{-\infty}^{\infty} x_T^*(t - \tau)\, x_T(t)\, dt\right] e^{-i 2\pi f \tau}\, d\tau,$ where the convolution theorem has been used when passing from the third to the fourth expression.
Now, if we divide the time convolution above by the period $T$ and take the limit as $T \rightarrow \infty$, it becomes the autocorrelation function of the non-windowed signal $x(t)$, which is denoted as $R_{xx}(\tau)$, provided that $x(t)$ is ergodic, which is true in most, but not all, practical cases.[nb 2]
From here we see, again assuming the ergodicity of $x(t)$, that the power spectral density can be found as the Fourier transform of the autocorrelation function (Wiener–Khinchin theorem):[11]

$S_{xx}(f) = \int_{-\infty}^{\infty} R_{xx}(\tau)\, e^{-i 2\pi f \tau}\, d\tau = \hat{R}_{xx}(f)$ (Eq. 3)
Many authors use this equality to actually define the power spectral density.[12]
The power of the signal in a given frequency band $[f_1, f_2]$, where $0 < f_1 < f_2$, can be calculated by integrating over frequency. Since $S_{xx}(-f) = S_{xx}(f)$, an equal amount of power can be attributed to positive and negative frequency bands, which accounts for the factor of 2 in the following form (such trivial factors depend on the conventions used): $P_\text{bandlimited} = 2 \int_{f_1}^{f_2} S_{xx}(f)\, df.$

More generally, similar techniques may be used to estimate a time-varying spectral density. In this case the time interval $T$ is finite rather than approaching infinity. This results in decreased spectral coverage and resolution, since frequencies of less than $1/T$ are not sampled, and results at frequencies which are not an integer multiple of $1/T$ are not independent. Just using a single such time series, the estimated power spectrum will be very "noisy"; however this can be alleviated if it is possible to evaluate the expected value (in the above equation) using a large (or infinite) number of short-term spectra corresponding to statistical ensembles of realizations of $x(t)$ evaluated over the specified time window.
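The band-power integral can be sketched numerically as follows; this example assumes SciPy's welch estimator, whose default one-sided output already folds the factor of 2 for negative frequencies into positive frequencies, and the 50 Hz test tone and band edges are illustrative choices.

```python
import numpy as np
from scipy.signal import welch

rng = np.random.default_rng(0)
fs = 1000.0                                                # sampling rate in Hz (assumed)
t = np.arange(0, 10, 1 / fs)
x = np.sin(2 * np.pi * 50 * t) + rng.normal(size=t.size)   # 50 Hz tone in white noise

f, pxx = welch(x, fs=fs, nperseg=2048)   # one-sided PSD estimate
band = (f >= 40) & (f <= 60)             # frequency band of interest
p_band = np.trapz(pxx[band], f[band])    # integrate S_xx over the band
print(p_band)                            # ~0.5 (the tone's average power) plus a small in-band noise term
```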
Just as with the energy spectral density, the definition of the power spectral density can be generalized to discrete time variables $x_n$. As before, we can consider a window of $-N \le n \le N$ with the signal sampled at discrete times $x_n = x(n\, \Delta t)$ for a total measurement period $T = (2N + 1)\, \Delta t$: $S_{xx}(f) = \frac{(\Delta t)^2}{T} \left|\sum_{n=-N}^{N} x_n e^{-i 2\pi f n \Delta t}\right|^2.$ Note that a single estimate of the PSD can be obtained through a finite number of samplings. As before, the actual PSD is achieved when $T$ (and thus $N$) approaches infinity and the expected value is formally applied. In a real-world application, one would typically average a finite-measurement PSD over many trials to obtain a more accurate estimate of the theoretical PSD of the physical process underlying the individual measurements. This computed PSD is sometimes called a periodogram. This periodogram converges to the true PSD as the number of estimates as well as the averaging time interval approach infinity.[13]
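The averaging of single-realization periodograms can be sketched as follows (white Gaussian noise is an assumed test process, chosen because its theoretical two-sided PSD is the constant σ²Δt, which makes the convergence easy to check):

```python
import numpy as np

rng = np.random.default_rng(1)
dt, n, trials = 1e-3, 1024, 500          # assumed sampling interval and sizes
psd_avg = np.zeros(n)
for _ in range(trials):
    x = rng.normal(0.0, 1.0, n)          # one realization of the process
    xf = dt * np.fft.fft(x)              # finite-window Fourier transform
    psd_avg += np.abs(xf)**2 / (n * dt)  # single (noisy) periodogram, |x_T(f)|^2 / T
psd_avg /= trials                        # ensemble average over realizations
print(psd_avg.mean(), 1.0 * dt)          # both approach sigma^2 * dt = 1e-3
```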
If two signals both possess power spectral densities, then the cross-spectral density can similarly be calculated; as the PSD is related to the autocorrelation, so is the cross-spectral density related to the cross-correlation.
Properties of the power spectral density
Some properties of the PSD include:[14]
- The power spectrum is always real and non-negative, and the spectrum of a real valued process is also an even function of frequency: $S_{xx}(-f) = S_{xx}(f)$.
- For a continuous stochastic process $x(t)$, the autocorrelation function $R_{xx}(\tau)$ can be reconstructed from its power spectrum $S_{xx}(f)$ by using the inverse Fourier transform: $R_{xx}(\tau) = \int_{-\infty}^{\infty} S_{xx}(f)\, e^{i 2\pi f \tau}\, df.$
- Using Parseval's theorem, one can compute the variance (average power) of a process by integrating the power spectrum over all frequency: $P = \int_{-\infty}^{\infty} S_{xx}(f)\, df.$
- For a real process $x(t)$ with power spectral density $S_{xx}(f)$, one can compute the integrated spectrum or power spectral distribution $F(f)$, which specifies the average bandlimited power contained in frequencies from DC to $f$ using:[15] $F(f) = 2 \int_0^f S_{xx}(f')\, df'.$ Note that the previous expression for total power (signal variance) is a special case where $f \to \infty$.
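A short sketch of the last property, the integrated spectrum, as a cumulative integral of a one-sided PSD estimate (the white-noise input, SciPy periodogram, and sampling rate are assumptions of the example):

```python
import numpy as np
from scipy.signal import periodogram

rng = np.random.default_rng(2)
fs = 1000.0                        # sampling rate in Hz (assumed)
x = rng.normal(size=8192)          # zero-mean test process
f, pxx = periodogram(x, fs=fs)     # one-sided PSD estimate S_xx(f)
F = np.cumsum(pxx) * (f[1] - f[0]) # F(f): running integral from DC to f
print(F[-1], x.var())              # the total power approaches the variance
```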
Cross power spectral density
Given two signals $x(t)$ and $y(t)$, each of which possess power spectral densities $S_{xx}(f)$ and $S_{yy}(f)$, it is possible to define a cross power spectral density (CPSD) or cross spectral density (CSD). To begin, let us consider the average power of such a combined signal: $P = \lim_{T\to\infty} \frac{1}{T} \int_{-\infty}^{\infty} \left[x_T(t) + y_T(t)\right]^* \left[x_T(t) + y_T(t)\right] dt.$
Using the same notation and methods as used for the power spectral density derivation, we exploit Parseval's theorem and obtain $S_{xy}(f) = \lim_{T\to\infty} \frac{1}{T} \operatorname{E}\left[\hat{x}_T^*(f)\, \hat{y}_T(f)\right] \qquad S_{yx}(f) = \lim_{T\to\infty} \frac{1}{T} \operatorname{E}\left[\hat{y}_T^*(f)\, \hat{x}_T(f)\right],$ where, again, the contributions of $S_{xx}(f)$ and $S_{yy}(f)$ are already understood. Note that $S_{xy}^*(f) = S_{yx}(f)$, so the full contribution to the cross power is, generally, from twice the real part of either individual CPSD. Just as before, from here we recast these products as the Fourier transform of a time convolution, which when divided by the period and taken to the limit becomes the Fourier transform of a cross-correlation function:[16] $S_{xy}(f) = \int_{-\infty}^{\infty} R_{xy}(\tau)\, e^{-i 2\pi f \tau}\, d\tau \qquad S_{yx}(f) = \int_{-\infty}^{\infty} R_{yx}(\tau)\, e^{-i 2\pi f \tau}\, d\tau,$ where $R_{xy}(\tau)$ is the cross-correlation of $x(t)$ with $y(t)$ and $R_{yx}(\tau)$ is the cross-correlation of $y(t)$ with $x(t)$. In light of this, the PSD is seen to be a special case of the CSD for $x(t) = y(t)$. If $x(t)$ and $y(t)$ are real signals (e.g. voltage or current), their Fourier transforms $\hat{x}(f)$ and $\hat{y}(f)$ are usually restricted to positive frequencies by convention. Therefore, in typical signal processing, the full CPSD is just one of the CPSDs scaled by a factor of two.
For discrete signals $x_n$ and $y_n$, the relationship between the cross-spectral density and the cross-covariance is $S_{xy}(f) = \sum_{n=-\infty}^{\infty} R_{xy}(\tau_n)\, e^{-i 2\pi f \tau_n}\, \Delta\tau.$
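As a sketch of cross-spectral estimation in practice (the delayed-copy setup, delay length, and noise level are assumptions of the example), SciPy's csd function computes a Welch-averaged CPSD:

```python
import numpy as np
from scipy.signal import csd

rng = np.random.default_rng(3)
fs = 1000.0                                      # sampling rate in Hz (assumed)
x = rng.normal(size=4096)
y = np.roll(x, 5) + 0.1 * rng.normal(size=4096)  # y: delayed, noisy copy of x

f, pxy = csd(x, y, fs=fs, nperseg=512)           # Welch-averaged CPSD S_xy(f)
phase = np.angle(pxy)                            # approximately linear in f, slope set by the delay
```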
Estimation
The goal of spectral density estimation is to estimate the spectral density of a random signal from a sequence of time samples. Depending on what is known about the signal, estimation techniques can involve parametric or non-parametric approaches, and may be based on time-domain or frequency-domain analysis. For example, a common parametric technique involves fitting the observations to an autoregressive model. A common non-parametric technique is the periodogram.
The spectral density is usually estimated using Fourier transform methods (such as the Welch method), but other techniques such as the maximum entropy method can also be used.
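As a minimal sketch of the trade-off involved (both estimators are SciPy's; the 50 Hz test tone, noise level, and segment length are assumptions of the example), Welch's method averages windowed segments, exchanging frequency resolution for a much less erratic estimate:

```python
import numpy as np
from scipy.signal import periodogram, welch

rng = np.random.default_rng(4)
fs = 1000.0                                                # sampling rate in Hz (assumed)
t = np.arange(0, 10, 1 / fs)
x = np.sin(2 * np.pi * 50 * t) + rng.normal(size=t.size)   # tone in white noise

f_raw, p_raw = periodogram(x, fs=fs)      # single, high-variance periodogram
f_w, p_w = welch(x, fs=fs, nperseg=1024)  # averaged over overlapping segments
# Both estimates show the 50 Hz line; the noise floor of p_w is far smoother.
```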
Related concepts
- The spectral centroid of a signal is the midpoint of its spectral density function, i.e. the frequency that divides the distribution into two equal parts.
- The spectral edge frequency (SEF), usually expressed as "SEF x", represents the frequency below which x percent of the total power of a given signal is located; typically, x is in the range 75 to 95. It is a popular measure in EEG monitoring, in which case SEF has variously been used to estimate the depth of anesthesia and stages of sleep.[17][18]
- A spectral envelope is the envelope curve of the spectrum density. It describes one point in time (one window, to be precise). For example, in remote sensing using a spectrometer, the spectral envelope of a feature is the boundary of its spectral properties, as defined by the range of brightness levels in each of the spectral bands of interest.
- The spectral density is a function of frequency, not a function of time. However, the spectral density of a small window of a longer signal may be calculated, and plotted versus time associated with the window. Such a graph is called a spectrogram. This is the basis of a number of spectral analysis techniques such as the short-time Fourier transform and wavelets.
- A "spectrum" generally means the power spectral density, as discussed above, which depicts the distribution of signal content over frequency. For transfer functions (e.g., Bode plot, chirp) the complete frequency response may be graphed in two parts: power versus frequency and phase versus frequency—the phase spectral density, phase spectrum, or spectral phase. Less commonly, the two parts may be the real and imaginary parts of the transfer function. This is not to be confused with the frequency response of a transfer function, which also includes a phase (or equivalently, a real and imaginary part) as a function of frequency. The time-domain impulse response cannot generally be uniquely recovered from the power spectral density alone without the phase part. Although these are also Fourier transform pairs, there is no symmetry (as there is for the autocorrelation) forcing the Fourier transform to be real-valued. See Ultrashort pulse#Spectral phase, phase noise, group delay.
- Sometimes one encounters an amplitude spectral density (ASD), which is the square root of the PSD; the ASD of a voltage signal has units of $\mathrm{V}\,\mathrm{Hz}^{-1/2}$.[19] This is useful when the shape of the spectrum is rather constant, since variations in the ASD will then be proportional to variations in the signal's voltage level itself. But it is mathematically preferred to use the PSD, since only in that case is the area under the curve meaningful in terms of actual power over all frequency or over a specified bandwidth.
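A one-line sketch of the last relation (the welch call, noise input, and sampling rate are assumptions of the example): the ASD is simply the element-wise square root of the PSD.

```python
import numpy as np
from scipy.signal import welch

rng = np.random.default_rng(5)
f, psd = welch(rng.normal(size=4096), fs=1000.0)  # PSD in V^2/Hz for a voltage signal
asd = np.sqrt(psd)                                # ASD in V/sqrt(Hz)
```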
Applications
Any signal that can be represented as a variable that varies in time has a corresponding frequency spectrum. This includes familiar entities such as visible light (perceived as color), musical notes (perceived as pitch), radio/TV (specified by their frequency, or sometimes wavelength), and even the regular rotation of the Earth. When these signals are viewed in the form of a frequency spectrum, certain aspects of the received signals or the underlying processes producing them are revealed. In some cases the frequency spectrum may include a distinct peak corresponding to a sine wave component. Additionally, there may be peaks corresponding to harmonics of a fundamental peak, indicating a periodic signal which is not simply sinusoidal. Alternatively, a continuous spectrum may show narrow frequency intervals which are strongly enhanced, corresponding to resonances, or frequency intervals containing almost zero power, as would be produced by a notch filter.
Electrical engineering
The concept and use of the power spectrum of a signal is fundamental in electrical engineering, especially in electronic communication systems, including radio communications, radars, and related systems, plus passive remote sensing technology. Electronic instruments called spectrum analyzers are used to observe and measure the power spectra of signals.
The spectrum analyzer measures the magnitude of the short-time Fourier transform (STFT) of an input signal. If the signal being analyzed can be considered a stationary process, the STFT is a good smoothed estimate of its power spectral density.
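What a spectrum analyzer displays can be sketched with SciPy's short-time Fourier transform (the rising chirp test signal, noise level, and segment length are assumptions of the example):

```python
import numpy as np
from scipy.signal import stft

rng = np.random.default_rng(6)
fs = 1000.0                            # sampling rate in Hz (assumed)
t = np.arange(0, 5, 1 / fs)
x = np.sin(2 * np.pi * (50 + 20 * t) * t) + 0.5 * rng.normal(size=t.size)  # rising chirp in noise

f, seg_t, Zxx = stft(x, fs=fs, nperseg=256)  # STFT grid: frequencies x segment times
power = np.abs(Zxx)**2                       # short-time power spectrum per segment
```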
Cosmology
Primordial fluctuations, density variations in the early universe, are quantified by a power spectrum which gives the power of the variations as a function of spatial scale.
See also
- Bispectrum
- Brightness temperature
- Colors of noise
- Least-squares spectral analysis
- Noise spectral density
- Spectral density estimation
- Spectral efficiency
- Spectral leakage
- Spectral power distribution
- Whittle likelihood
- Window function
Notes
- ^ Some authors, e.g., (Risken & Frank 1996, p. 30) still use the non-normalized Fourier transform in a formal way to formulate a definition of the power spectral density $\langle \hat{x}(\omega)\, \hat{x}^*(\omega') \rangle = 2\pi\, f(\omega)\, \delta(\omega - \omega'),$ where $\delta(\omega - \omega')$ is the Dirac delta function. Such formal statements may sometimes be useful to guide the intuition, but should always be used with utmost care.
- ^ The Wiener–Khinchin theorem makes sense of this formula for any wide-sense stationary process under weaker hypotheses: $R_{xx}(\tau)$ does not need to be absolutely integrable, it only needs to exist. But the integral can no longer be interpreted as usual. The formula also makes sense if interpreted as involving distributions (in the sense of Laurent Schwartz, not in the sense of a statistical Cumulative distribution function) instead of functions. If $R_{xx}(\tau)$ is continuous, Bochner's theorem can be used to prove that its Fourier transform exists as a positive measure, whose distribution function is F (but not necessarily as a function and not necessarily possessing a probability density).
- ^ a b c P Stoica & R Moses (2005). "Spectral Analysis of Signals" (PDF).
- ^ Maral 2004.
- ^ Norton & Karczub 2003.
- ^ Birolini 2007, p. 83.
- ^ Paschotta, Rüdiger. "Power Spectral Density". rp-photonics.com. Archived from the original on 2024-04-15. Retrieved 2024-06-26.
- ^ Oppenheim & Verghese 2016, p. 12.
- ^ Stein 2000, pp. 108, 115.
- ^ Oppenheim & Verghese 2016, p. 14.
- ^ Oppenheim & Verghese 2016, pp. 422–423.
- ^ Miller & Childers 2012, pp. 429–431.
- ^ Miller & Childers 2012, p. 433.
- ^ Dennis Ward Ricker (2003). Echo Signal Processing. Springer. ISBN 978-1-4020-7395-3.
- ^ Brown & Hwang 1997.
- ^ Miller & Childers 2012, p. 431.
- ^ Davenport & Root 1987.
- ^ William D Penny (2009). "Signal Processing Course, chapter 7".
- ^ Iranmanesh & Rodriguez-Villegas 2017.
- ^ Imtiaz & Rodriguez-Villegas 2014.
- ^ Michael Cerna & Audrey F. Harvey (2000). "The Fundamentals of FFT-Based Signal Analysis and Measurement" (PDF).
References
- Birolini, Alessandro (2007). Reliability Engineering. Berlin; New York: Springer Science & Business Media. ISBN 978-3-540-49388-4.
- Brown, Robert Grover; Hwang, Patrick Y. C. (1997). Introduction to Random Signals and Applied Kalman Filtering with Matlab Exercises and Solutions. New York: Wiley-Liss. ISBN 978-0-471-12839-7.
- Davenport, Wilbur B. (Jr); Root, William L. (1987). An Introduction to the Theory of Random Signals and Noise. New York: Wiley-IEEE Press. ISBN 978-0-87942-235-6.
- Imtiaz, Syed Anas; Rodriguez-Villegas, Esther (2014). "A Low Computational Cost Algorithm for REM Sleep Detection Using Single Channel EEG". Annals of Biomedical Engineering. 42 (11): 2344–59. doi:10.1007/s10439-014-1085-6. PMC 4204008. PMID 25113231.
- Iranmanesh, Saam; Rodriguez-Villegas, Esther (2017). "An Ultralow-Power Sleep Spindle Detection System on Chip". IEEE Transactions on Biomedical Circuits and Systems. 11 (4): 858–866. doi:10.1109/TBCAS.2017.2690908. hdl:10044/1/46059. PMID 28541914. S2CID 206608057.
- Maral, Gerard (2004). VSAT Networks. West Sussex, England ; Hoboken, NJ: Wiley. ISBN 978-0-470-86684-9.
- Miller, Scott; Childers, Donald (2012). Probability and Random Processes. Boston, MA: Academic Press. ISBN 978-0-12-386981-4. OCLC 696092052.
- Norton, M. P.; Karczub, D. G. (2003). Fundamentals of Noise and Vibration Analysis for Engineers. Cambridge: Cambridge University Press. ISBN 978-0-521-49913-2.
- Oppenheim, Alan V.; Verghese, George C. (2016). Signals, Systems & Inference. Boston: Pearson. ISBN 978-0-13-394328-3.
- Risken, Hannes; Frank, Till (1996). The Fokker-Planck Equation. New York: Springer Science & Business Media. ISBN 978-3-540-61530-9.
- Stein, Jonathan Y. (2000). Digital Signal Processing. New York Weinheim: Wiley-Interscience. ISBN 978-0-471-29546-4.