{{Short description|Mathematical transform that expresses a function of time as a function of frequency}} |
{{Not to be confused with|Sine and cosine transforms|text=Fourier's original [[sine and cosine transforms]], which may be a simpler introduction to the Fourier transform}} |
{{Fourier transforms}} |
[[File:CQT-piano-chord.png|thumb|An example application of the Fourier transform is determining the constituent pitches in a [[music]]al [[waveform]]. This image is the result of applying a [[constant-Q transform]] (a [[Fourier-related transform]]) to the waveform of a [[C major]] [[piano]] [[chord (music)|chord]]. The first three peaks on the left correspond to the frequencies of the [[fundamental frequency]] of the chord (C, E, G). The remaining smaller peaks are higher-frequency [[overtone]]s of the fundamental pitches. A [[pitch detection algorithm]] could use the relative intensity of these peaks to infer which notes the pianist pressed.]] |
In [[mathematics]], the '''Fourier transform''' ('''FT''') is an [[integral transform]] that takes a [[function (mathematics)|function]] as input and outputs another function that describes the extent to which various [[Frequency|frequencies]] are present in the original function. The output of the transform is a [[complex number|complex]]-valued function of frequency. The term ''Fourier transform'' refers to both this complex-valued function and the [[Operation (mathematics)|mathematical operation]]. When a distinction needs to be made, the output of the operation is sometimes called the [[frequency domain]] representation of the original function. The Fourier transform is analogous to decomposing the [[sound]] of a musical [[Chord (music)|chord]] into the [[sound intensity|intensities]] of its constituent [[Pitch (music)|pitches]]. |
[[File:Fourier transform time and frequency domains (small).gif|thumb|right|The Fourier transform relates the time domain, in red, with a function in the domain of the frequency, in blue. The component frequencies, extended for the whole frequency spectrum, are shown as peaks in the domain of the frequency.]] |
{{multiple image
| total_width = 300
| align = right
| image1 = Sine voltage.svg
| image2 = Phase shift.svg
| footer = The red [[sine wave|sinusoid]] can be described by peak amplitude (1), peak-to-peak (2), [[root mean square|RMS]] (3), and [[wavelength]] (4). The red and blue sinusoids have a phase difference of {{mvar|θ}}.
}}Functions that are localized in the time domain have Fourier transforms that are spread out across the frequency domain and vice versa, a phenomenon known as the [[#Uncertainty principle|uncertainty principle]]. The [[critical point (mathematics)|critical]] case for this principle is the [[Gaussian function]], of substantial importance in [[probability theory]] and [[statistics]] as well as in the study of physical phenomena exhibiting [[normal distribution]] (e.g., [[diffusion]]). The Fourier transform of a Gaussian function is another Gaussian function. [[Joseph Fourier]] introduced [[sine and cosine transforms]] (which [[Sine and cosine transforms#Relation with complex exponentials|correspond to the imaginary and real components]] of the modern Fourier transform) in his study of [[heat transfer]], where Gaussian functions appear as solutions of the [[heat equation]]. |
The Fourier transform can be formally defined as an [[improper integral|improper]] [[Riemann integral]], making it an integral transform, although this definition is not suitable for many applications requiring a more sophisticated integration theory.<ref group=note>Depending on the application a [[Lebesgue integral]], [[distribution (mathematics)|distributional]], or other approach may be most appropriate.</ref> For example, many relatively simple applications use the [[Dirac delta function]], which can be treated formally as if it were a function, but the justification requires a mathematically more sophisticated viewpoint.<ref group=note>{{harvtxt|Vretblad|2000}} provides solid justification for these formal procedures without going too deeply into [[functional analysis]] or the [[distribution (mathematics)|theory of distributions]].</ref> |
The Fourier transform can also be generalized to functions of several variables on [[Euclidean space]], sending a function of {{nowrap|3-dimensional}} 'position space' to a function of {{nowrap|3-dimensional}} momentum (or a function of space and time to a function of [[4-momentum]]). This idea makes the spatial Fourier transform very natural in the study of waves, as well as in [[quantum mechanics]], where it is important to be able to represent wave solutions as functions of either position or momentum and sometimes both. In general, functions to which Fourier methods are applicable are complex-valued, and possibly [[vector-valued function|vector-valued]].<ref group=note>In [[relativistic quantum mechanics]] one encounters vector-valued Fourier transforms of multi-component wave functions. In [[quantum field theory]], operator-valued Fourier transforms of operator-valued functions of spacetime are in frequent use, see for example {{harvtxt|Greiner|Reinhardt|1996}}.</ref> Still further generalization is possible to functions on [[group (mathematics)|groups]], which, besides the original Fourier transform on [[Real number#Arithmetic|{{math|'''R'''}}]] or {{math|'''R'''<sup>''n''</sup>}}, notably includes the [[discrete-time Fourier transform]] (DTFT, group = {{math|[[integers|'''Z''']]}}), the [[discrete Fourier transform]] (DFT, group = [[cyclic group|{{math|'''Z''' mod ''N''}}]]) and the [[Fourier series]] or circular Fourier transform (group = {{math|[[circle group|''S''<sup>1</sup>]]}}, the unit circle ≈ closed finite interval with endpoints identified). The latter is routinely employed to handle [[periodic function]]s. The [[fast Fourier transform]] (FFT) is an algorithm for computing the DFT. |
== Definition == |
The Fourier transform is an ''analysis'' process, decomposing a complex-valued function <math>\textstyle f(x)</math> into its constituent frequencies and their amplitudes. The inverse process is ''synthesis'', which recreates <math>\textstyle f(x)</math> from its transform. |
We can start with an analogy, the [[Fourier series]], which analyzes <math>\textstyle f(x)</math> over a bounded interval <math>[-P/2, P/2]</math> on the real line. The constituent frequencies at <math>\tfrac{n}{P}, n \in \mathbb Z,</math> form a discrete set of ''harmonics'' whose amplitude and phase are given by the '''analysis formula:'''<math display="block">c_n = \frac{1}{P} \int_{-P/2}^{P/2} f(x) \, e^{-i 2\pi \frac{n}{P}x} \, dx.</math>The actual '''Fourier series''' is the '''synthesis formula:'''<math display="block">f(x) = \sum_{n=-\infty}^\infty c_n\, e^{i 2\pi \tfrac{n}{P}x},\quad \textstyle x \in [-P/2, P/2].</math>On an unbounded interval, <math>P\to\infty,</math> the constituent frequencies are a continuum''':''' <math>\tfrac{n}{P} \to \xi \in \mathbb R,</math><ref>{{harvnb|Khare|Butola|Rajora|2023|pp=13–14}}</ref><ref>{{harvnb|Kaiser|1994|p=29}}</ref><ref>{{harvnb|Rahman|2011|p=11}}</ref> and <math>c_n</math> is replaced by a function''':'''<ref>{{harvnb|Dym|McKean|1985}}</ref>{{Equation box 1|title =Fourier transform |
|indent =:|cellpadding= 6 |border |border colour = #0073CF |background colour=#F5FFFA
|equation = {{NumBlk||
<math>\widehat{f}(\xi) = \int_{-\infty}^{\infty} f(x)\ e^{-i 2\pi \xi x}\,dx.</math>
|{{EquationRef|Eq.1}}}}
}}
Evaluating the Fourier transform for all values of <math>\xi</math> produces the ''frequency-domain'' function. In general, the integral can diverge at some frequencies; however, if <math>f(x)</math> decays with all derivatives, i.e.,

<math display="block">\lim_{|x|\to\infty} f^{(n)}(x) = 0, \quad \forall n\in \mathbb{N},</math> then <math>\widehat f</math> converges for all frequencies and, by the [[Riemann–Lebesgue lemma]], <math>\widehat f</math> also decays with all derivatives.
The complex number <math>\widehat{f}(\xi)</math>, in polar coordinates, conveys both [[amplitude]] and [[phase offset|phase]] of frequency <math>\xi.</math> The intuitive interpretation of {{EquationNote|Eq.1}} is that the effect of multiplying <math>f(x)</math> by <math>e^{-i 2\pi \xi x}</math> is to subtract <math>\xi</math> from every frequency component of function <math>f(x).</math><ref group="note">A possible source of confusion is the [[#Frequency shifting|frequency-shifting property]]; i.e. the transform of function <math>f(x)e^{-i 2\pi \xi_0 x}</math> is <math>\widehat{f}(\xi+\xi_0).</math> The value of this function at <math>\xi=0</math> is <math>\widehat{f}(\xi_0),</math> meaning that a frequency <math>\xi_0</math> has been shifted to zero (also see [[Negative frequency#Simplifying the Fourier transform|Negative frequency]]).</ref> Only the component that was at frequency <math>\xi</math> can produce a non-zero value of the infinite integral, because (at least formally) all the other shifted components are oscillatory and integrate to zero. (see {{slink||Example}}) |
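The behaviour of {{EquationNote|Eq.1}} can be illustrated numerically. The following sketch (illustrative only, not part of the formal development; it assumes NumPy, an arbitrary grid spacing, and truncation of the real line to <math>[-10,10]</math>) approximates the integral by a Riemann sum for <math>f(x) = e^{-\pi x^2}</math>, whose Fourier transform is the same Gaussian.
<syntaxhighlight lang="python">
import numpy as np

dx = 0.01
x = np.arange(-10, 10, dx)            # truncated, discretized real line
f = np.exp(-np.pi * x**2)             # f(x) = exp(-pi x^2) is its own transform

def ft(xi):
    """Riemann-sum approximation of Eq.1 at a single frequency xi."""
    return np.sum(f * np.exp(-2j * np.pi * xi * x)) * dx

for xi in (0.0, 0.5, 1.0):
    print(xi, ft(xi).real, np.exp(-np.pi * xi**2))   # the last two columns agree closely
</syntaxhighlight>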
The corresponding synthesis formula is: |
{{Equation box 1|title = Inverse transform |
|indent =:|cellpadding= 6 |border |border colour = #0073CF |background colour=#F5FFFA
|equation = {{NumBlk||
<math display="block">f(x) = \int_{-\infty}^{\infty} \widehat f(\xi)\ e^{i 2 \pi \xi x}\,d\xi,\quad \forall\ x \in \mathbb R.</math>
|{{EquationRef|Eq.2}}}}
}}
{{EquationNote|Eq.2}} is a representation of <math>f(x)</math> as a weighted summation of complex exponential functions. |
This is also known as the [[Fourier inversion theorem]], and was first introduced in [[Joseph Fourier|Fourier's]] ''Analytical Theory of Heat''.<ref>{{harvnb|Fourier|1822|p=525}}</ref><ref>{{harvnb|Fourier|1878|p=408}}</ref><ref>{{harvtxt|Jordan|1883}} proves on pp. 216–226 the [[Fourier inversion theorem#Fourier integral theorem|Fourier integral theorem]] before studying Fourier series.</ref><ref>{{harvnb|Titchmarsh|1986|p=1}}</ref> |
The functions <math>f</math> and <math>\widehat{f}</math> are referred to as a '''Fourier transform pair'''.<ref>{{harvnb|Rahman|2011|p=10}}.</ref> A common notation for designating transform pairs is''':'''<ref>{{harvnb|Oppenheim|Schafer|Buck|1999|p=58}}</ref> |
<math display="block">f(x)\ \stackrel{\mathcal{F}}{\longleftrightarrow}\ \widehat f(\xi),</math> for example <math>\operatorname{rect}(x)\ \stackrel{\mathcal{F}}{\longleftrightarrow}\ \operatorname{sinc}(\xi).</math> |
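The rect/sinc pair can be checked the same way. The sketch below (illustrative; a midpoint Riemann sum over the support of <math>\operatorname{rect}</math> with an arbitrary step size) evaluates {{EquationNote|Eq.1}} for the rectangular function and compares the result with <math>\operatorname{sinc}(\xi) = \sin(\pi\xi)/(\pi\xi)</math>.
<syntaxhighlight lang="python">
import numpy as np

dx = 1e-3
x = np.arange(-0.5, 0.5, dx) + dx/2       # midpoints covering the support of rect(x)
for xi in (0.25, 1.5, 3.0):
    approx = np.sum(np.exp(-2j * np.pi * xi * x)) * dx   # rect(x) = 1 on its support
    print(xi, approx.real, np.sinc(xi))                  # np.sinc is the normalized sinc
</syntaxhighlight>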
=== Lebesgue integrable functions === |
{{see also|Lp space#Lp spaces and Lebesgue integrals}} |
A [[measurable function]] <math>f:\mathbb R\to\mathbb C</math> is called (Lebesgue) integrable if the [[Lebesgue integral]] of its absolute value is finite: |
<math display="block">\|f\|_1 = \int_{\mathbb R}|f(x)|\,dx < \infty.</math> |
For a Lebesgue integrable function <math>f</math> the Fourier transform is defined by {{EquationNote|Eq.1}}.{{sfn|Stade|2005|pp=298-299}}

The integral {{EquationNote|Eq.1}} is well-defined for all <math>\xi\in\mathbb R,</math> because of the assumption <math>\|f\|_1<\infty</math>. (It can be shown that the function <math>\widehat f\in L^\infty\cap C(\mathbb R)</math> is bounded and [[uniformly continuous]] in the frequency domain, and moreover, by the [[Riemann–Lebesgue lemma]], it is zero at infinity.)
The space <math>L^1(\mathbb R)</math> is the space of measurable functions for which the norm <math>\|f\|_1</math> is finite, modulo the [[Equivalence_class|equivalence relation]] of equality [[almost everywhere]]. The Fourier transform is [[Bijection,_injection_and_surjection|one-to-one]] on <math>L^1(\mathbb R)</math>. However, there is no easy characterization of the image, and thus no easy characterization of the inverse transform. In particular, {{EquationNote|Eq.2}} is no longer valid, as it was stated only under the hypothesis that <math>f(x)</math> decayed with all derivatives. |
Moreover, while {{EquationNote|Eq.1}} defines the Fourier transform for (complex-valued) functions in <math>L^1(\mathbb R)</math>, it is easy to see that it is not well-defined for other integrability classes, most importantly the space of [[square-integrable function]]s <math>L^2(\mathbb R)</math>. For example, the function <math>f(x)=(1+x^2)^{-1/2}</math> is in <math>L^2</math> but not <math>L^1</math>, so the integral {{EquationNote|Eq.1}} diverges. However, the Fourier transform on the dense subspace <math>L^1\cap L^2(\mathbb R) \subset L^2(\mathbb R)</math> admits a unique continuous extension to a [[unitary operator]] on <math>L^2(\mathbb R)</math>. This extension is important in part because the Fourier transform preserves the space <math>L^2(\mathbb R)</math>. That is, unlike the case of <math>L^1</math>, both the Fourier transform and its inverse act on the same function space <math>L^2(\mathbb R)</math>.
In such cases, the Fourier transform can be obtained explicitly by regularizing the integral, and then passing to a limit. In practice, the integral is often regarded as an [[improper integral]] instead of a proper Lebesgue integral, but sometimes for convergence one needs to use [[weak limit]] or [[Cauchy principal value|principal value]] instead of the (pointwise) limits implicit in an improper integral. {{harvtxt|Titchmarsh|1986}} and {{harvtxt|Dym|McKean|1985}} each gives three rigorous ways of extending the Fourier transform to square integrable functions using this procedure. A general principle in working with the <math>L^2</math> Fourier transform is that Gaussians are dense in <math>L^1\cap L^2</math>, and the various features of the Fourier transform, such as its unitarity, are easily inferred for Gaussians. Many of the properties of the Fourier transform can then be proven from two facts about Gaussians:{{sfn|Howe|1980}}
* that <math>e^{-\pi x^2}</math> is its own Fourier transform; and |
* that the Gaussian integral <math>\int_{-\infty}^\infty e^{-\pi x^2}\,dx = 1.</math>
A feature of the <math>L^1</math> Fourier transform is that it is a homomorphism of Banach algebras from <math>L^1</math> equipped with the convolution operation to the Banach algebra of continuous functions under the <math>L^\infty</math> (supremum) norm. The conventions chosen in this article are those of [[harmonic analysis]], and are characterized as the unique conventions such that the Fourier transform is both [[Unitary operator|unitary]] on {{math|''L''<sup>2</sup>}} and an algebra homomorphism from {{math|''L''<sup>1</sup>}} to {{math|''L''<sup>∞</sup>}}, without renormalizing the Lebesgue measure.<ref>{{harvnb|Folland|1989}}</ref> |
=== Angular frequency (''ω'') === |
When the independent variable (<math>x</math>) represents ''time'' (often denoted by <math>t</math>), the transform variable (<math>\xi</math>) represents [[frequency]] (often denoted by <math>f</math>). For example, if time is measured in [[second]]s, then frequency is in [[hertz]]. The Fourier transform can also be written in terms of [[angular frequency]], <math>\omega = 2\pi \xi,</math> whose units are [[radian]]s per second.

The substitution <math>\xi = \tfrac{\omega}{2 \pi}</math> into {{EquationNote|Eq.1}} produces this convention, where function <math>\widehat f</math> is relabeled <math>\widehat {f_1}:</math>
<math display="block">\begin{align}
\widehat {f_3}(\omega) &\triangleq \int_{-\infty}^{\infty} f(x)\cdot e^{-i\omega x}\, dx = \widehat{f_1}\left(\tfrac{\omega}{2\pi}\right),\\
f(x) &= \frac{1}{2\pi} \int_{-\infty}^{\infty} \widehat{f_3}(\omega)\cdot e^{i\omega x}\, d\omega.
\end{align}
</math>

Unlike the {{EquationNote|Eq.1}} definition, the Fourier transform is no longer a [[unitary transformation]], and there is less symmetry between the formulas for the transform and its inverse. Those properties are restored by splitting the <math>2 \pi</math> factor evenly between the transform and its inverse, which leads to another convention:
<math display="block">\begin{align}
\widehat{f_2}(\omega) &\triangleq \frac{1}{\sqrt{2\pi}} \int_{-\infty}^{\infty} f(x)\cdot e^{- i\omega x}\, dx = \frac{1}{\sqrt{2\pi}}\ \ \widehat{f_1}\left(\tfrac{\omega}{2\pi}\right), \\
f(x) &= \frac{1}{\sqrt{2\pi}} \int_{-\infty}^{\infty} \widehat{f_2}(\omega)\cdot e^{ i\omega x}\, d\omega.
\end{align}</math>
Variations of all three conventions can be created by conjugating the complex-exponential [[integral kernel|kernel]] of both the forward and the reverse transform. The signs must be opposites. |
{| class="wikitable"
|+ Summary of popular forms of the Fourier transform, one-dimensional
|-
! ordinary frequency {{mvar|ξ}} (Hz)
! unitary
| <math>\begin{align}
\widehat{f_1}(\xi)\ &\triangleq\ \int_{-\infty}^{\infty} f(x)\, e^{-i 2\pi \xi x}\, dx = \sqrt{2\pi}\ \ \widehat{f_2}(2 \pi \xi) = \widehat{f_3}(2 \pi \xi) \\
f(x) &= \int_{-\infty}^{\infty} \widehat{f_1}(\xi)\, e^{i 2\pi x \xi}\, d\xi \end{align}</math>
|-
! rowspan="2" | angular frequency {{mvar|ω}} (rad/s)
! unitary
| <math>\begin{align}
\widehat{f_2}(\omega)\ &\triangleq\ \frac{1}{\sqrt{2\pi}}\ \int_{-\infty}^{\infty} f(x)\, e^{-i \omega x}\, dx = \frac{1}{\sqrt{2\pi}}\ \ \widehat{f_1} \! \left(\frac{\omega}{2 \pi}\right) = \frac{1}{\sqrt{2\pi}}\ \ \widehat{f_3}(\omega) \\
f(x) &= \frac{1}{\sqrt{2\pi}}\ \int_{-\infty}^{\infty} \widehat{f_2}(\omega)\, e^{i \omega x}\, d\omega \end{align}</math>
|-
! non-unitary
| <math>\begin{align}
\widehat{f_3}(\omega) \ &\triangleq\ \int_{-\infty}^{\infty} f(x)\, e^{-i\omega x}\, dx = \widehat{f_1} \left(\frac{\omega}{2 \pi}\right) = \sqrt{2\pi}\ \ \widehat{f_2}(\omega) \\
f(x) &= \frac{1}{2 \pi} \int_{-\infty}^{\infty} \widehat{f_3}(\omega)\, e^{i \omega x}\, d\omega \end{align}</math>
|}
{| class="wikitable"
|+ Generalization for {{math|''n''}}-dimensional functions
|-
! ordinary frequency {{mvar|ξ}} (Hz)
! unitary
| <math>\begin{align}
\widehat{f_1}(\xi)\ &\triangleq\ \int_{\mathbb{R}^n} f(x) e^{-i 2\pi \xi\cdot x}\, dx = (2 \pi)^\frac{n}{2}\widehat{f_2}(2\pi \xi) = \widehat{f_3}(2\pi \xi) \\
f(x) &= \int_{\mathbb{R}^n} \widehat{f_1}(\xi) e^{i 2\pi \xi\cdot x}\, d\xi \end{align}</math>
|-
! rowspan="2" | angular frequency {{mvar|ω}} (rad/s)
! unitary
| <math>\begin{align}
\widehat{f_2}(\omega)\ &\triangleq\ \frac{1}{(2 \pi)^\frac{n}{2}} \int_{\mathbb{R}^n} f(x) e^{-i \omega\cdot x}\, dx = \frac{1}{(2 \pi)^\frac{n}{2}} \widehat{f_1} \! \left(\frac{\omega}{2 \pi}\right) = \frac{1}{(2 \pi)^\frac{n}{2}} \widehat{f_3}(\omega) \\
f(x) &= \frac{1}{(2 \pi)^\frac{n}{2}} \int_{\mathbb{R}^n} \widehat{f_2}(\omega)e^{i \omega\cdot x}\, d\omega \end{align}</math>
|-
! non-unitary
| <math>\begin{align}
\widehat{f_3}(\omega) \ &\triangleq\ \int_{\mathbb{R}^n} f(x) e^{-i\omega\cdot x}\, dx = \widehat{f_1} \left(\frac{\omega}{2 \pi}\right) = (2 \pi)^\frac{n}{2} \widehat{f_2}(\omega) \\
f(x) &= \frac{1}{(2 \pi)^n} \int_{\mathbb{R}^n} \widehat{f_3}(\omega) e^{i \omega\cdot x}\, d\omega \end{align}</math>
|}
== Background ==

=== History ===

{{Main|Fourier analysis#History|Fourier series#History}}

In 1822, Fourier claimed (see {{Slink|Joseph Fourier|The Analytic Theory of Heat}}) that any function, whether continuous or discontinuous, can be expanded into a series of sines.<ref>{{harvnb|Fourier|1822}}</ref> That important work was corrected and expanded upon by others to provide the foundation for the various forms of the Fourier transform used since.

[[File:unfasor.gif|thumb|right|Fig.1 When function <math>A \cdot e^{i 2\pi \xi t}</math> is depicted in the complex plane, the vector formed by its [[complex number|imaginary and real parts]] rotates around the origin. Its real part <math>y(t)</math> is a cosine wave.]]

=== Complex sinusoids ===
In general, the coefficients <math>\widehat f(\xi)</math> are complex numbers, which have two equivalent forms (see [[Euler's formula]]): |
<math display="block"> \widehat f(\xi) = \underbrace{A e^{i \theta}}_{\text{polar coordinate form}}
= \underbrace{A \cos(\theta) + i A \sin(\theta)}_{\text{rectangular coordinate form}}.</math>

The product with <math>e^{i 2 \pi \xi x}</math> ({{EquationNote|Eq.2}}) has these forms:
<math display="block">\begin{aligned}\widehat f(\xi)\cdot e^{i 2 \pi \xi x}
&= A e^{i \theta} \cdot e^{i 2 \pi \xi x}\\
&= \underbrace{A e^{i (2 \pi \xi x+\theta)}}_{\text{polar coordinate form}}\\
&= \underbrace{A\cos(2\pi \xi x +\theta) + i A\sin(2\pi \xi x +\theta)}_{\text{rectangular coordinate form}}.\end{aligned}</math>
It is noteworthy how easily the product was simplified using the polar form, and how easily the rectangular form was deduced by an application of Euler's formula. |
=== Negative frequency ===

{{See also|Negative frequency#Simplifying the Fourier transform|l1=Negative frequency § Simplifying the Fourier transform}}

Euler's formula introduces the possibility of negative <math>\xi,</math> and {{EquationNote|Eq.1}} is defined for all <math>\xi \in \mathbb{R}.</math> Only certain complex-valued <math>f(x)</math> have transforms <math>\widehat f =0, \ \forall \ \xi < 0</math> (see [[Analytic signal]]; a simple example is <math>e^{i 2 \pi \xi_0 x}\ (\xi_0 > 0)</math>). But negative frequency is necessary to characterize all other complex-valued <math>f(x),</math> found in [[signal processing]], [[partial differential equations]], [[radar]], [[nonlinear optics]], [[quantum mechanics]], and others.

For a real-valued <math>f(x),</math> {{EquationNote|Eq.1}} has the symmetry property <math>\widehat f(-\xi) = \widehat {f}^* (\xi)</math> (see {{slink||Conjugation}} below). This redundancy enables {{EquationNote|Eq.2}} to distinguish <math>f(x) = \cos(2 \pi \xi_0 x)</math> from <math>e^{i2 \pi \xi_0 x}.</math> But of course it cannot tell us the actual sign of <math>\xi_0,</math> because <math>\cos(2 \pi \xi_0 x)</math> and <math>\cos(2 \pi (-\xi_0) x)</math> are indistinguishable on the real number line.

=== Fourier transform for periodic functions ===

The Fourier transform of a periodic function cannot be defined using the integral formula directly. In order for the integral in {{EquationNote|Eq.1}} to be defined, the function must be [[Absolutely integrable function|absolutely integrable]]. Instead it is common to use [[Fourier series]]. It is possible to extend the definition to include periodic functions by viewing them as [[Distribution (mathematics)#Tempered distributions|tempered distributions]].
This makes it possible to see a connection between the [[Fourier series]] and the Fourier transform for periodic functions that have a [[Convergence of Fourier series|convergent Fourier series]]. If <math>f(x)</math> is a [[periodic function]], with period <math>P</math>, that has a convergent Fourier series, then: |
<math display="block">
\widehat{f}(\xi) = \sum_{n=-\infty}^\infty c_n \cdot \delta \left(\xi - \tfrac{n}{P}\right),
</math>
where <math>c_n</math> are the Fourier series coefficients of <math>f</math>, and <math>\delta</math> is the [[Dirac delta function]]. In other words, the Fourier transform is a [[Dirac comb]] function whose ''teeth'' are multiplied by the Fourier series coefficients. |
=== Sampling the Fourier transform ===

{{Broader|Poisson summation formula}}

The Fourier transform of an [[Absolutely integrable function|integrable]] function <math>f</math> can be sampled at regular intervals of arbitrary length <math>\tfrac{1}{P}.</math> These samples can be deduced from one cycle of a periodic function <math>f_P</math> which has [[Fourier series]] coefficients proportional to those samples by the [[Poisson summation formula]]:

<math display="block">f_P(x) \triangleq \sum_{n=-\infty}^{\infty} f(x+nP) = \frac{1}{P}\sum_{k=-\infty}^{\infty} \widehat f\left(\tfrac{k}{P}\right) e^{i2\pi \frac{k}{P} x}</math>

The integrability of <math>f</math> ensures the periodic summation converges. Therefore, the samples <math>\widehat f\left(\tfrac{k}{P}\right)</math> can be determined by Fourier series analysis:

<math display="block">\widehat f\left(\tfrac{k}{P}\right) = \int_{P} f_P(x) \cdot e^{-i2\pi \frac{k}{P} x} \,dx.</math>
When <math>f(x)</math> has [[compact support]], <math>f_P(x)</math> has a finite number of terms within the interval of integration. When <math>f(x)</math> does not have compact support, numerical evaluation of <math>f_P(x)</math> requires an approximation, such as tapering <math>f(x)</math> or truncating the number of terms. |
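The sampling relation can be verified numerically. The following sketch (illustrative; it assumes <math>f(x)=e^{-\pi x^2}</math>, so that <math>\widehat f(\xi)=e^{-\pi \xi^2}</math>, and truncates the periodization to a finite number of copies) constructs <math>f_P</math> on one period and recovers the samples <math>\widehat f\left(\tfrac{k}{P}\right)</math> by Fourier-series analysis.
<syntaxhighlight lang="python">
import numpy as np

P, dx = 3.0, 1e-3
x = np.arange(0, P, dx)
# periodization of f(x) = exp(-pi x^2); far-away copies are negligible
fP = sum(np.exp(-np.pi * (x + n*P)**2) for n in range(-8, 9))
for k in (0, 1, 2):
    sample = np.sum(fP * np.exp(-2j * np.pi * k * x / P)) * dx   # Fourier-series analysis
    print(k, sample.real, np.exp(-np.pi * (k/P)**2))             # matches the samples of the transform
</syntaxhighlight>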
== Units == |
{{see also|Spectral density#Units}} |
The frequency variable must have inverse units to the units of the original function's domain (typically named <math>t</math> or <math>x</math>). For example, if <math>t</math> is measured in seconds, <math>\xi</math> should be in cycles per second or [[hertz]]. If the scale of time is in units of <math>2\pi</math> seconds, then another Greek letter <math>\omega</math> is typically used instead to represent [[angular frequency]] (where <math>\omega=2\pi \xi</math>) in units of [[radian]]s per second. If using <math>x</math> for units of length, then <math>\xi</math> must be in inverse length, e.g., [[wavenumber]]s. That is to say, there are two versions of the real line: one which is the [[Range of a function|range]] of <math>t</math> and measured in units of <math>t,</math> and the other which is the range of <math>\xi</math> and measured in inverse units to the units of <math>t.</math> These two distinct versions of the real line cannot be equated with each other. Therefore, the Fourier transform goes from one space of functions to a different space of functions: functions which have a different domain of definition. |
In general, <math>\xi</math> must always be taken to be a [[linear form]] on the space of its domain, which is to say that the second real line is the [[dual space]] of the first real line. See the article on [[linear algebra]] for a more formal explanation and for more details. This point of view becomes essential in generalizations of the Fourier transform to general [[symmetry group]]s, including the case of Fourier series. |
That there is no one preferred way (often, one says "no canonical way") to compare the two versions of the real line which are involved in the Fourier transform—fixing the units on one line does not force the scale of the units on the other line—is the reason for the plethora of rival conventions on the definition of the Fourier transform. The various definitions resulting from different choices of units differ by various constants. |
In other conventions, the Fourier transform has {{mvar|i}} in the exponent instead of {{math|−''i''}}, and vice versa for the inversion formula. This convention is common in modern physics<ref>{{harvnb|Arfken|1985}}</ref> and is the default for [https://www.wolframalpha.com Wolfram Alpha], and does not mean that the frequency has become negative, since there is no canonical definition of positivity for frequency of a complex wave. It simply means that <math>\hat f(\xi)</math> is the amplitude of the wave <math>e^{-i 2\pi \xi x}</math> instead of the wave <math>e^{i 2\pi \xi x}</math> (the former, with its minus sign, is often seen in the time dependence for [[Sinusoidal plane-wave solutions of the electromagnetic wave equation]], or in the [[Wave function#Time dependence|time dependence for quantum wave functions]]). Many of the identities involving the Fourier transform remain valid in those conventions, provided all terms that explicitly involve {{math|''i''}} have it replaced by {{math|−''i''}}. In [[Electrical engineering]] the letter {{math|''j''}} is typically used for the [[imaginary unit]] instead of {{math|''i''}} because {{math|''i''}} is used for current. |
When using [[dimensionless units]], the constant factors might not even be written in the transform definition. For instance, in [[probability theory]], the characteristic function {{mvar|Φ}} of the probability density function {{mvar|f}} of a random variable {{mvar|X}} of continuous type is defined without a negative sign in the exponential, and since the units of {{mvar|x}} are ignored, there is no 2{{pi}} either: |
<math display="block">\phi (\lambda) = \int_{-\infty}^\infty f(x) e^{i\lambda x} \,dx.</math> |
(In probability theory, and in mathematical statistics, the use of the Fourier–Stieltjes transform is preferred, because so many random variables are not of continuous type, and do not possess a density function, and one must treat not functions but [[Distribution (mathematics)|distributions]], i.e., measures which possess "atoms".)
From the higher point of view of [[character theory|group characters]], which is much more abstract, all these arbitrary choices disappear, as will be explained in the later section of this article, which treats the notion of the Fourier transform of a function on a [[locally compact abelian group|locally compact Abelian group]]. |
== Properties == |
Let <math>f(x)</math> and <math>h(x)</math> represent ''integrable functions'', i.e., [[Lebesgue-measurable]] functions on the real line satisfying:

<math display="block">\int_{-\infty}^\infty |f(x)| \, dx < \infty.</math>

We denote the Fourier transforms of these functions as <math>\hat f(\xi)</math> and <math>\hat h(\xi)</math> respectively.
=== Basic properties === |
The Fourier transform has the following basic properties:<ref name="Pinsky-2002">{{harvnb|Pinsky|2002}}</ref>

==== Linearity ====

<math display="block">a\ f(x) + b\ h(x)\ \ \stackrel{\mathcal{F}}{\Longleftrightarrow}\ \ a\ \widehat f(\xi) + b\ \widehat h(\xi);\quad \ a,b \in \mathbb C</math>
==== Time shifting ==== |
<math display="block">f(x-x_0)\ \ \stackrel{\mathcal{F}}{\Longleftrightarrow}\ \ e^{-i 2\pi x_0 \xi}\ \widehat f(\xi);\quad \ x_0 \in \mathbb R</math>

==== Frequency shifting ====

<math display="block">e^{i 2\pi \xi_0 x} f(x)\ \ \stackrel{\mathcal{F}}{\Longleftrightarrow}\ \ \widehat f(\xi - \xi_0);\quad \ \xi_0 \in \mathbb R</math>

==== Time scaling ====

<math display="block">f(ax)\ \ \stackrel{\mathcal{F}}{\Longleftrightarrow}\ \ \frac{1}{|a|}\widehat{f}\left(\frac{\xi}{a}\right);\quad \ a \ne 0 </math>

The case <math>a=-1</math> leads to the ''time-reversal property'':
<math display="block">f(-x)\ \ \stackrel{\mathcal{F}}{\Longleftrightarrow}\ \ \widehat f (-\xi)</math> |
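These rules are easy to check numerically. The sketch below (illustrative; a Gaussian, a Riemann-sum approximation of {{EquationNote|Eq.1}}, and arbitrary parameter values) compares both sides of the time-shifting and time-scaling relations.
<syntaxhighlight lang="python">
import numpy as np

dx = 0.01
x = np.arange(-12, 12, dx)
ft = lambda g, xi: np.sum(g * np.exp(-2j * np.pi * xi * x)) * dx  # Eq.1 by Riemann sum

f = np.exp(-np.pi * x**2)
x0, a, xi = 1.3, 2.0, 0.7
print(ft(np.exp(-np.pi * (x - x0)**2), xi),            # transform of f(x - x0)
      np.exp(-2j * np.pi * x0 * xi) * ft(f, xi))       # e^{-i 2 pi x0 xi} times transform of f
print(ft(np.exp(-np.pi * (a * x)**2), xi),             # transform of f(a x)
      ft(f, xi / a) / abs(a))                          # transform of f at xi/a, divided by |a|
</syntaxhighlight>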
{{Annotated image |
| caption=The transform of an even-symmetric real-valued function <math>(f(t) = f_{RE})</math> is also an even-symmetric real-valued function <math>(\hat f_{RE}).</math> The time-shift, <math>(g(t) = g_{RE} + g_{RO}),</math> creates an imaginary component, <math>i\cdot \hat g_{IO}.</math> (see {{slink||Symmetry}}).
| image=Fourier_unit_pulse.svg
| image-width = 300
| annotations =
{{Annotation|20|40|<math>\scriptstyle f(t)</math>}}
{{Annotation|170|40|<math>\scriptstyle \widehat{f}(\omega)</math>}}
{{Annotation|20|140|<math>\scriptstyle g(t)</math>}}
{{Annotation|170|140|<math>\scriptstyle \widehat{g}(\omega)</math>}}
{{Annotation|130|80|<math>\scriptstyle t</math>}}
{{Annotation|280|85|<math>\scriptstyle \omega</math>}}
{{Annotation|130|192|<math>\scriptstyle t</math>}}
{{Annotation|280|180|<math>\scriptstyle \omega</math>}}
}} |
==== Symmetry ====
When the real and imaginary parts of a complex function are decomposed into their [[Even and odd functions#Even–odd decomposition|even and odd parts]], there are four components, denoted below by the subscripts RE, RO, IE, and IO. And there is a one-to-one mapping between the four components of a complex time function and the four components of its complex frequency transform:<ref name="ProakisManolakis1996">{{cite book|last1=Proakis|first1=John G. |last2=Manolakis|first2=Dimitris G.|author2-link= Dimitris Manolakis |title=Digital Signal Processing: Principles, Algorithms, and Applications|url=https://archive.org/details/digitalsignalpro00proa|url-access=registration|year=1996|publisher=Prentice Hall|isbn=978-0-13-373762-2|edition=3rd|page=[https://archive.org/details/digitalsignalpro00proa/page/291 291]}}</ref> |
<math>
\begin{array}{rlcccccccc}
\mathsf{Time\ domain} & f & = & f_{_{\text{RE}}} & + & f_{_{\text{RO}}} & + & i\ f_{_{\text{IE}}} & + & \underbrace{i\ f_{_{\text{IO}}}} \\
&\Bigg\Updownarrow\mathcal{F} & &\Bigg\Updownarrow\mathcal{F} & &\ \ \Bigg\Updownarrow\mathcal{F} & &\ \ \Bigg\Updownarrow\mathcal{F} & &\ \ \Bigg\Updownarrow\mathcal{F}\\
\mathsf{Frequency\ domain} & \widehat f & = & \widehat f_{_\text{RE}} & + & \overbrace{i\ \widehat f_{_\text{IO}}\,} & + & i\ \widehat f_{_\text{IE}} & + & \widehat f_{_\text{RO}}
\end{array}
</math>
From this, various relationships are apparent, for example''':'''
*The transform of a real-valued function <math>(f_{_{RE}}+f_{_{RO}})</math> is the ''[[Even and odd functions#Complex-valued functions|conjugate symmetric]]'' function <math>\hat f_{RE}+i\ \hat f_{IO}.</math> Conversely, a ''conjugate symmetric'' transform implies a real-valued time-domain. |
*The transform of an imaginary-valued function <math>(i\ f_{_{IE}}+i\ f_{_{IO}})</math> is the ''[[Even and odd functions#Complex-valued functions|conjugate antisymmetric]]'' function <math>\hat f_{RO}+i\ \hat f_{IE},</math> and the converse is true. |
*The transform of a ''[[Even and odd functions#Complex-valued functions|conjugate symmetric]]'' function <math>(f_{_{RE}}+i\ f_{_{IO}})</math> is the real-valued function <math>\hat f_{RE}+\hat f_{RO},</math> and the converse is true. |
*The transform of a ''[[Even and odd functions#Complex-valued functions|conjugate antisymmetric]]'' function <math>(f_{_{RO}}+i\ f_{_{IE}})</math> is the imaginary-valued function <math>i\ \hat f_{IE}+i\hat f_{IO},</math> and the converse is true. |
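The discrete Fourier transform obeys the same symmetry relations. The following sketch (illustrative; a random real-valued sequence) verifies the discrete analogue of the first relationship: the DFT of a real sequence is conjugate symmetric.
<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(16)                        # a real-valued "time domain" sequence
X = np.fft.fft(x)
print(np.allclose(X[1:][::-1], np.conj(X[1:])))    # True: X[N-k] equals conj(X[k])
</syntaxhighlight>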
==== Conjugation ====
<math display="block">\bigl(f(x)\bigr)^*\ \ \stackrel{\mathcal{F}}{\Longleftrightarrow}\ \ \left(\widehat{f}(-\xi)\right)^*</math> |
(Note: the ∗ denotes [[Complex conjugate|complex conjugation]].)
In particular, if <math>f</math> is '''real''', then <math>\widehat f</math> is [[Even and odd functions#Complex-valued functions|even symmetric]] (aka [[Hermitian function]]): |
<math display="block">\widehat{f}(-\xi)=\bigl(\widehat f(\xi)\bigr)^*.</math> |
And if <math>f</math> is purely imaginary, then <math>\widehat f</math> is [[Even and odd functions#Complex-valued functions|odd symmetric]]:
<math display="block">\widehat f(-\xi) = -(\widehat f(\xi))^*.</math> |
==== Real and imaginary parts ====
<math display="block">\operatorname{Re}\{f(x)\}\ \ \stackrel{\mathcal{F}}{\Longleftrightarrow}\ \ |
\tfrac{1}{2} \left( \widehat f(\xi) + \bigl(\widehat f (-\xi) \bigr)^* \right)</math> |
<math display="block">\operatorname{Im}\{f(x)\}\ \ \stackrel{\mathcal{F}}{\Longleftrightarrow}\ \ |
\tfrac{1}{2i} \left( \widehat f(\xi) - \bigl(\widehat f (-\xi) \bigr)^* \right)</math> |
==== Zero frequency component ====
Substituting <math>\xi = 0</math> in the definition, we obtain: |
<math display="block">\widehat{f}(0) = \int_{-\infty}^{\infty} f(x)\,dx.</math> |
The integral of <math>f</math> over its domain is known as the average value or [[DC bias]] of the function. |
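Numerically, the zero-frequency value is just the (Riemann) sum of the samples. The sketch below (illustrative; a Gaussian whose integral is exactly 1) compares the zero-frequency bin of a DFT with the direct sum.
<syntaxhighlight lang="python">
import numpy as np

dx = 0.01
x = np.arange(-10, 10, dx)
f = np.exp(-np.pi * x**2)                 # integrates to 1
print(np.fft.fft(f)[0].real * dx)         # zero-frequency bin, scaled: close to 1.0
print(np.sum(f) * dx)                     # the same Riemann sum written directly
</syntaxhighlight>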
=== Uniform continuity and the Riemann–Lebesgue lemma === |
[[File:Rectangular function.svg|thumb|The [[rectangular function]] is [[Lebesgue integrable]].]] |
[[File:Sinc function (normalized).svg|thumb|The [[sinc function]], which is the Fourier transform of the rectangular function, is bounded and continuous, but not Lebesgue integrable.]] |
The Fourier transform may be defined in some cases for non-integrable functions, but the Fourier transforms of integrable functions have several strong properties. |
The Fourier transform <math>\hat{f}</math> of any integrable function <math>f</math> is [[uniformly continuous]] and<ref name="Katznelson-1976">{{harvnb|Katznelson|1976}}</ref> |
<math display="block">\left\|\hat{f}\right\|_\infty \leq \left\|f\right\|_1</math> |
By the ''[[Riemann–Lebesgue lemma]]'',<ref name="Stein-Weiss-1971">{{harvnb|Stein|Weiss|1971}}</ref> |
<math display="block">\hat{f}(\xi) \to 0\text{ as }|\xi| \to \infty.</math> |
However, <math>\hat{f}</math> need not be integrable. For example, the Fourier transform of the [[rectangular function]], which is integrable, is the [[sinc function]], which is not [[Lebesgue integrable]], because its [[improper integral]]s behave analogously to the [[alternating harmonic series]], in converging to a sum without being [[absolutely convergent]]. |
It is not generally possible to write the ''inverse transform'' as a [[Lebesgue integral]]. However, when both <math>f</math> and <math>\hat{f}</math> are integrable, the inverse equality |
<math display="block">f(x) = \int_{-\infty}^\infty \hat f(\xi) e^{i 2\pi x \xi} \, d\xi</math> holds for almost every {{mvar|x}}. As a result, the Fourier transform is [[injective]] on {{math|[[Lp space|''L''<sup>1</sup>('''R''')]]}}. |
=== Plancherel theorem and Parseval's theorem === |
{{main|Plancherel theorem|Parseval's theorem}} |
Let {{math|''f''(''x'')}} and {{math|''g''(''x'')}} be integrable, and let {{math|''f̂''(''ξ'')}} and {{math|''ĝ''(''ξ'')}} be their Fourier transforms. If {{math|''f''(''x'')}} and {{math|''g''(''x'')}} are also [[square-integrable]], then the Parseval formula follows:<ref>{{harvnb|Rudin|1987|p=187}}</ref> |
<math display="block">\langle f, g\rangle_{L^{2}} = \int_{-\infty}^{\infty} f(x) \overline{g(x)} \,dx = \int_{-\infty}^\infty \hat{f}(\xi) \overline{\hat{g}(\xi)} \,d\xi,</math> |
where the bar denotes [[complex conjugation]]. |
The [[Plancherel theorem]], which follows from the above, states that<ref>{{harvnb|Rudin|1987|p=186}}</ref> |
<math display="block">\|f\|^2_{L^{2}} = \int_{-\infty}^\infty \left| f(x) \right|^2\,dx = \int_{-\infty}^\infty \left| \hat{f}(\xi) \right|^2\,d\xi. </math> |
Plancherel's theorem makes it possible to extend the Fourier transform, by a continuity argument, to a [[unitary operator]] on {{math|''L''<sup>2</sup>('''R''')}}. On {{math|''L''<sup>1</sup>('''R''') ∩ ''L''<sup>2</sup>('''R''')}}, this extension agrees with the original Fourier transform defined on {{math|''L''<sup>1</sup>('''R''')}}, thus enlarging the domain of the Fourier transform to {{math|''L''<sup>1</sup>('''R''') + ''L''<sup>2</sup>('''R''')}} (and consequently to {{math|''L''{{i sup|''p''}}('''R''')}} for {{math|1 ≤ ''p'' ≤ 2}}). Plancherel's theorem has the interpretation in the sciences that the Fourier transform preserves the [[energy]] of the original quantity. The terminology of these formulas is not quite standardised. Parseval's theorem was originally stated only for Fourier series (it was first proved by Lyapunov). But Parseval's formula makes sense for the Fourier transform as well, and so even though in the context of the Fourier transform it was proved by Plancherel, it is still often referred to as Parseval's formula, or Parseval's relation, or even Parseval's theorem.
See [[Pontryagin duality]] for a general formulation of this concept in the context of locally compact abelian groups. |
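A discrete analogue of the Parseval/Plancherel identities holds for the DFT: with NumPy's unnormalized transform, <math display="inline">\sum_n |x_n|^2 = \tfrac{1}{N}\sum_k |X_k|^2</math>. The sketch below is illustrative only (a random complex sequence).
<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(1)
x = rng.standard_normal(128) + 1j * rng.standard_normal(128)
X = np.fft.fft(x)                            # unnormalized DFT
print(np.sum(np.abs(x)**2))                  # "energy" in the time domain
print(np.sum(np.abs(X)**2) / len(x))         # equal, up to rounding
</syntaxhighlight>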
=== Convolution theorem === |
{{Main|Convolution theorem}} |
The Fourier transform translates between [[convolution]] and multiplication of functions. If {{math|''f''(''x'')}} and {{math|''g''(''x'')}} are integrable functions with Fourier transforms {{math|''f̂''(''ξ'')}} and {{math|''ĝ''(''ξ'')}} respectively, then the Fourier transform of the convolution is given by the product of the Fourier transforms {{math|''f̂''(''ξ'')}} and {{math|''ĝ''(''ξ'')}} (under other conventions for the definition of the Fourier transform a constant factor may appear). |
This means that if: |
<math display="block">h(x) = (f*g)(x) = \int_{-\infty}^\infty f(y)g(x - y)\,dy,</math> |
where {{math|∗}} denotes the convolution operation, then: |
<math display="block">\hat{h}(\xi) = \hat{f}(\xi)\, \hat{g}(\xi).</math> |
In [[LTI system theory|linear time invariant (LTI) system theory]], it is common to interpret {{math|''g''(''x'')}} as the [[impulse response]] of an LTI system with input {{math|''f''(''x'')}} and output {{math|''h''(''x'')}}, since substituting the [[Dirac delta function|unit impulse]] for {{math|''f''(''x'')}} yields {{math|1=''h''(''x'') = ''g''(''x'')}}. In this case, {{math|''ĝ''(''ξ'')}} represents the [[frequency response]] of the system. |
Conversely, if {{math|''f''(''x'')}} can be decomposed as the product of two square integrable functions {{math|''p''(''x'')}} and {{math|''q''(''x'')}}, then the Fourier transform of {{math|''f''(''x'')}} is given by the convolution of the respective Fourier transforms {{math|''p̂''(''ξ'')}} and {{math|''q̂''(''ξ'')}}. |
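The discrete (circular) analogue of the convolution theorem can be verified directly. The following sketch (illustrative; random sequences and circular convolution) compares a convolution computed through products of DFTs with the defining sum.
<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(2)
N = 64
f, g = rng.standard_normal(N), rng.standard_normal(N)
h = np.real(np.fft.ifft(np.fft.fft(f) * np.fft.fft(g)))             # product of transforms
h_direct = np.array([sum(f[m] * g[(k - m) % N] for m in range(N))   # circular convolution
                     for k in range(N)])
print(np.allclose(h, h_direct))                                     # True
</syntaxhighlight>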
|||
=== Cross-correlation theorem === |
=== Cross-correlation theorem === |
||
{{Main|Cross-correlation|Wiener–Khinchin_theorem}} |
||
In an analogous manner, it can be shown that if {{math|''h''(''x'')}} is the [[cross-correlation]] of {{math|''f''(''x'')}} and {{math|''g''(''x'')}}: |
||
<math display="block">h(x) = (f \star g)(x) = \int_{-\infty}^\infty \overline{f(y)}g(x + y)\,dy</math> |
|||
then the Fourier transform of {{math|''h''(''x'')}} is: |
|||
<math display="block">\hat{h}(\xi) = \overline{\hat{f}(\xi)} \, \hat{g}(\xi).</math> |
|||
As a special case, the [[autocorrelation]] of function {{math|''f''(''x'')}} is: |
|||
|||
<math display="block">h(x) = (f \star f)(x) = \int_{-\infty}^\infty \overline{f(y)}f(x + y)\,dy</math> |
|||
for which |
|||
<math display="block">\hat{h}(\xi) = \overline{\hat{f}(\xi)}\hat{f}(\xi) = \left|\hat{f}(\xi)\right|^2.</math> |
|||
=== Differentiation === |
|||
|||
Suppose {{math|''f''(''x'')}} is an absolutely continuous differentiable function, and both {{math|''f''}} and its derivative {{math|''f′''}} are integrable. Then the Fourier transform of the derivative is given by |
|||
<math display="block">\widehat{f'\,}(\xi) = \mathcal{F}\left\{ \frac{d}{dx} f(x)\right\} = i 2\pi \xi\hat{f}(\xi).</math> |
|||
More generally, the Fourier transformation of the {{mvar|n}}th derivative {{math|''f''{{isup|(''n'')}}}} is given by |
|||
<math display="block">\widehat{f^{(n)}}(\xi) = \mathcal{F}\left\{ \frac{d^n}{dx^n} f(x) \right\} = (i 2\pi \xi)^n\hat{f}(\xi).</math> |
|||
Analogously, <math>\mathcal{F}\left\{ \frac{d^n}{d\xi^n} \hat{f}(\xi)\right\} = (i 2\pi x)^n f(-x)</math>, so <math>\mathcal{F}\left\{ x^n f(x)\right\} = \left(\frac{i}{2\pi}\right)^n \frac{d^n}{d\xi^n} \hat{f}(\xi).</math>
|||
By applying the Fourier transform and using these formulas, some [[ordinary differential equation]]s can be transformed into algebraic equations, which are much easier to solve. These formulas also give rise to the rule of thumb "{{math|''f''(''x'')}} is smooth [[if and only if]] {{math|''f̂''(''ξ'')}} quickly falls to 0 for {{math|{{abs|''ξ''}} → ∞}}." By using the analogous rules for the inverse Fourier transform, one can also say "{{math|''f''(''x'')}} quickly falls to 0 for {{math|{{abs|''x''}} → ∞}} if and only if {{math|''f̂''(''ξ'')}} is smooth." |
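For example, the ordinary differential equation {{math|1=''f''(''x'') − ''f''″(''x'') = ''g''(''x'')}} becomes, after taking Fourier transforms of both sides, the algebraic relation

<math display="block">\left(1 + 4\pi^2\xi^2\right)\hat{f}(\xi) = \hat{g}(\xi), \qquad \text{so} \qquad \hat{f}(\xi) = \frac{\hat{g}(\xi)}{1 + 4\pi^2\xi^2},</math>

and {{mvar|f}} can then be recovered by the inverse transform; by the convolution theorem above, this amounts to convolving {{mvar|g}} with {{math|{{sfrac|1|2}}''e''<sup>−{{abs|''x''}}</sup>}}, whose Fourier transform is {{math|1/(1 + 4π<sup>2</sup>''ξ''<sup>2</sup>)}}.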
|||
|||
=== Eigenfunctions === |
|||
|||
{{see also|Mehler kernel|Hermite polynomials#Hermite functions as eigenfunctions of the Fourier transform}} |
|||
The Fourier transform is a linear transform which has eigenfunctions obeying <math>\mathcal{F}[\psi] = \lambda \psi,</math> with <math> \lambda \in \mathbb{C}.</math> |
|||
A set of eigenfunctions is found by noting that the homogeneous differential equation |
|||
|||
<math display="block">\left[ U\left( \frac{1}{2\pi}\frac{d}{dx} \right) + U( x ) \right] \psi(x) = 0</math> |
|||
leads to eigenfunctions <math>\psi(x)</math> of the Fourier transform <math>\mathcal{F}</math> as long as the form of the equation remains invariant under Fourier transform.<ref group=note>The operator <math>U\left( \frac{1}{2\pi}\frac{d}{dx} \right)</math> is defined by replacing <math>x</math> by <math>\frac{1}{2\pi}\frac{d}{dx}</math> in the [[Taylor series|Taylor expansion]] of <math>U(x).</math></ref> In other words, every solution <math>\psi(x)</math> and its Fourier transform <math>\hat\psi(\xi)</math> obey the same equation. Assuming [[Ordinary differential equation#Existence and uniqueness of solutions|uniqueness]] of the solutions, every solution <math>\psi(x)</math> must therefore be an eigenfunction of the Fourier transform. The form of the equation remains unchanged under Fourier transform if <math>U(x)</math> can be expanded in a power series in which for all terms the same factor of either one of <math>\pm 1, \pm i</math> arises from the factors <math>i^n</math> introduced by the [[#Differentiation|differentiation]] rules upon Fourier transforming the homogeneous differential equation because this factor may then be cancelled. The simplest allowable <math>U(x)=x</math> leads to the [[Normal distribution#Fourier transform and characteristic function|standard normal distribution]].<ref>{{harvnb|Folland|1992|p=216}}</ref> |
|||
More generally, a set of eigenfunctions is also found by noting that the [[#Differentiation|differentiation]] rules imply that the [[ordinary differential equation]] |
|||
|||
<math display="block">\left[ W\left( \frac{i}{2\pi}\frac{d}{dx} \right) + W(x) \right] \psi(x) = C \psi(x)</math> |
|||
with <math>C</math> constant and <math>W(x)</math> being a non-constant even function remains invariant in form when applying the Fourier transform <math>\mathcal{F}</math> to both sides of the equation. The simplest example is provided by <math>W(x) = x^2</math> which is equivalent to considering the Schrödinger equation for the [[Quantum harmonic oscillator#Natural length and energy scales|quantum harmonic oscillator]].<ref>{{harvnb|Wolf|1979|p=307ff}}</ref> The corresponding solutions provide an important choice of an orthonormal basis for {{math|[[Square-integrable function|''L''<sup>2</sup>('''R''')]]}} and are given by the "physicist's" [[Hermite polynomials#Hermite functions as eigenfunctions of the Fourier transform|Hermite functions]]. Equivalently one may use |
|||
<math display="block">\psi_n(x) = \frac{\sqrt[4]{2}}{\sqrt{n!}} e^{-\pi x^2}\mathrm{He}_n\left(2x\sqrt{\pi}\right),</math> |
|||
where {{math|He<sub>''n''</sub>(''x'')}} are the "probabilist's" [[Hermite polynomial]]s, defined as |
|||
<math display="block">\mathrm{He}_n(x) = (-1)^n e^{\frac{1}{2}x^2}\left(\frac{d}{dx}\right)^n e^{-\frac{1}{2}x^2}.</math> |
|||
Under this convention for the Fourier transform, we have that |
|||
|||
<math display="block">\hat\psi_n(\xi) = (-i)^n \psi_n(\xi).</math> |
|||
In other words, the Hermite functions form a complete [[orthonormal]] system of [[eigenfunctions]] for the Fourier transform on {{math|''L''<sup>2</sup>('''R''')}}.<ref name="Pinsky-2002" /><ref>{{harvnb|Folland|1989|p=53}}</ref> However, this choice of eigenfunctions is not unique. Because of <math>\mathcal{F}^4 = \mathrm{id}</math> there are only four different [[eigenvalue]]s of the Fourier transform (the fourth roots of unity ±1 and ±{{mvar|i}}) and any linear combination of eigenfunctions with the same eigenvalue gives another eigenfunction.<ref>{{harvnb|Celeghini|Gadella|del Olmo|2021}}</ref> As a consequence of this, it is possible to decompose {{math|''L''<sup>2</sup>('''R''')}} as a direct sum of four spaces {{math|''H''<sub>0</sub>}}, {{math|''H''<sub>1</sub>}}, {{math|''H''<sub>2</sub>}}, and {{math|''H''<sub>3</sub>}} where the Fourier transform acts on {{math|''H''<sub>''k''</sub>}} simply by multiplication by {{math|''i''<sup>''k''</sup>}}.
|||
|||
Since the complete set of Hermite functions {{math|''ψ<sub>n</sub>''}} provides a resolution of the identity they diagonalize the Fourier operator, i.e. the Fourier transform can be represented by such a sum of terms weighted by the above eigenvalues, and these sums can be explicitly summed: |
|||
|||
<math display="block">\mathcal{F}[f](\xi) = \int dx f(x) \sum_{n \geq 0} (-i)^n \psi_n(x) \psi_n(\xi) ~.</math> |
|||
This approach to define the Fourier transform was first proposed by [[Norbert Wiener]].<ref name="Duoandikoetxea-2001">{{harvnb|Duoandikoetxea|2001}}</ref> Among other properties, Hermite functions decrease exponentially fast in both frequency and time domains, and they are thus used to define a generalization of the Fourier transform, namely the [[fractional Fourier transform]] used in time–frequency analysis.<ref name="Boashash-2003">{{harvnb|Boashash|2003}}</ref> In [[physics]], this transform was introduced by [[Edward Condon]].<ref>{{harvnb|Condon|1937}}</ref> This change of basis functions becomes possible because the Fourier transform is a unitary transform when using the right [[#Other conventions|conventions]]. Consequently, under the proper conditions it may be expected to result from a self-adjoint generator <math>N</math> via<ref>{{harvnb|Wolf|1979|p=320}}</ref> |
|||
|||
<math display="block">\mathcal{F}[\psi] = e^{-i t N} \psi.</math> |
|||
The operator <math>N</math> is the [[Quantum harmonic oscillator#Ladder operator method|number operator]] of the quantum harmonic oscillator written as<ref name="auto">{{harvnb|Wolf|1979|p=312}}</ref><ref>{{harvnb|Folland|1989|p=52}}</ref> |
|||
|||
<math display="block">N \equiv \frac{1}{2}\left(x - \frac{\partial}{\partial x}\right)\left(x + \frac{\partial}{\partial x}\right) = \frac{1}{2}\left(-\frac{\partial^2}{\partial x^2} + x^2 - 1\right).</math> |
|||
It can be interpreted as the [[symmetry in quantum mechanics|generator]] of [[Mehler kernel#Fractional Fourier transform|fractional Fourier transforms]] for arbitrary values of {{mvar|t}}, and of the conventional continuous Fourier transform <math>\mathcal{F}</math> for the particular value <math>t = \pi/2,</math> with the [[Mehler kernel#Physics version|Mehler kernel]] implementing the corresponding [[active and passive transformation#In abstract vector spaces|active transform]]. The eigenfunctions of <math> N</math> are the [[Hermite polynomials#Hermite functions|Hermite functions]] <math>\psi_n(x)</math> which are therefore also eigenfunctions of <math>\mathcal{F}.</math> |
|||
|||
Upon extending the Fourier transform to [[distribution (mathematics)|distributions]] the [[Dirac comb#Fourier transform|Dirac comb]] is also an eigenfunction of the Fourier transform. |
|||
|||
=== Inversion and periodicity === |
|||
|||
{{Further|Fourier inversion theorem|Fractional Fourier transform}} |
|||
Under suitable conditions on the function <math>f</math>, it can be recovered from its Fourier transform <math>\hat{f}</math>. Indeed, denoting the Fourier transform operator by <math>\mathcal{F}</math>, so <math>\mathcal{F} f := \hat{f}</math>, then for suitable functions, applying the Fourier transform twice simply flips the function: <math>\left(\mathcal{F}^2 f\right)(x) = f(-x)</math>, which can be interpreted as "reversing time". Since reversing time is two-periodic, applying this twice yields <math>\mathcal{F}^4(f) = f</math>, so the Fourier transform operator is four-periodic, and similarly the inverse Fourier transform can be obtained by applying the Fourier transform three times: <math>\mathcal{F}^3\left(\hat{f}\right) = f</math>. In particular the Fourier transform is invertible (under suitable conditions). |
|||
|||
More precisely, defining the ''parity operator'' <math>\mathcal{P}</math> such that <math>(\mathcal{P} f)(x) = f(-x)</math>, we have: |
|||
|||
<math display="block">\begin{align} |
|||
\mathcal{F}^0 &= \mathrm{id}, \\ |
|||
\mathcal{F}^1 &= \mathcal{F}, \\ |
|||
\mathcal{F}^2 &= \mathcal{P}, \\ |
|||
\mathcal{F}^3 &= \mathcal{F}^{-1} = \mathcal{P} \circ \mathcal{F} = \mathcal{F} \circ \mathcal{P}, \\ |
|||
\mathcal{F}^4 &= \mathrm{id} |
|||
\end{align}</math> |
|||
These equalities of operators require careful definition of the space of functions in question, defining equality of functions (equality at every point? equality [[almost everywhere]]?) and defining equality of operators – that is, defining the topology on the function space and operator space in question. These are not true for all functions, but are true under various conditions, which are the content of the various forms of the [[Fourier inversion theorem]]. |
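For example, under the convention of this article the shifted Gaussian {{math|''e''<sup>−π(''x'' − 1)<sup>2</sup></sup>}} transforms as

<math display="block">e^{-\pi(x-1)^2} \;\xrightarrow{\ \mathcal{F}\ }\; e^{-i 2\pi\xi}\, e^{-\pi\xi^2} \;\xrightarrow{\ \mathcal{F}\ }\; e^{-\pi(x+1)^2},</math>

so that applying the Fourier transform twice returns the original function reflected about the origin, as <math>\mathcal{F}^2 = \mathcal{P}</math> requires.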
|||
This fourfold periodicity of the Fourier transform is similar to a rotation of the plane by 90°, particularly as the two-fold iteration yields a reversal, and in fact this analogy can be made precise. While the Fourier transform can simply be interpreted as switching the time domain and the frequency domain, with the inverse Fourier transform switching them back, more geometrically it can be interpreted as a rotation by 90° in the [[time–frequency domain]] (considering time as the {{mvar|x}}-axis and frequency as the {{mvar|y}}-axis), and the Fourier transform can be generalized to the [[fractional Fourier transform]], which involves rotations by other angles. This can be further generalized to [[linear canonical transformation]]s, which can be visualized as the action of the [[special linear group]] {{math|[[SL2(R)|SL<sub>2</sub>('''R''')]]}} on the time–frequency plane, with the preserved symplectic form corresponding to the [[#Uncertainty principle|uncertainty principle]], below. This approach is particularly studied in [[signal processing]], under [[time–frequency analysis]]. |
|||
|||
=== Connection with the Heisenberg group === |
|||
|||
The [[Heisenberg group]] is a certain [[group (mathematics)|group]] of [[unitary operator]]s on the [[Hilbert space]] {{math|''L''<sup>2</sup>('''R''')}} of square integrable complex valued functions {{mvar|f}} on the real line, generated by the translations {{math|1=(''T<sub>y</sub> f'')(''x'') = ''f'' (''x'' + ''y'')}} and multiplication by {{math|''e''<sup>''i''2π''ξx''</sup>}}, {{math|1=(''M<sub>ξ</sub> f'')(''x'') = ''e''<sup>''i''2π''ξx''</sup> ''f'' (''x'')}}. These operators do not commute, as their (group) commutator is |
|||
<math display="block">\left(M^{-1}_\xi T^{-1}_y M_\xi T_yf\right)(x) = e^{i 2\pi\xi y}f(x)</math> |
|||
which is multiplication by the constant (independent of {{mvar|x}}) {{math|''e''<sup>''i''2π''ξy''</sup> ∈ ''U''(1)}} (the [[circle group]] of unit modulus complex numbers). As an abstract group, the Heisenberg group is the three-dimensional [[Lie group]] of triples {{math|(''x'', ''ξ'', ''z'') ∈ '''R'''<sup>2</sup> × ''U''(1)}}, with the group law |
|||
<math display="block">\left(x_1, \xi_1, t_1\right) \cdot \left(x_2, \xi_2, t_2\right) = \left(x_1 + x_2, \xi_1 + \xi_2, t_1 t_2 e^{i 2\pi \left(x_1 \xi_1 + x_2 \xi_2 + x_1 \xi_2\right)}\right).</math> |
|||
Denote the Heisenberg group by {{math|''H''<sub>1</sub>}}. The above procedure describes not only the group structure, but also a standard [[unitary representation]] of {{math|''H''<sub>1</sub>}} on a Hilbert space, which we denote by {{math|''ρ'' : ''H''<sub>1</sub> → ''B''(''L''<sup>2</sup>('''R'''))}}. Define the linear automorphism of {{math|'''R'''<sup>2</sup>}} by |
|||
|||
<math display="block">J \begin{pmatrix} |
|||
x \\ |
|||
\xi |
|||
\end{pmatrix} = \begin{pmatrix} |
|||
-\xi \\ |
|||
x |
|||
\end{pmatrix}</math> |
|||
so that {{math|1=''J''{{isup|2}} = −''I''}}. This {{mvar|J}} can be extended to a unique automorphism of {{math|''H''<sub>1</sub>}}: |
|||
<math display="block">j\left(x, \xi, t\right) = \left(-\xi, x, te^{-i 2\pi\xi x}\right).</math> |
|||
According to the [[Stone–von Neumann theorem]], the unitary representations {{mvar|ρ}} and {{math|''ρ'' ∘ ''j''}} are unitarily equivalent, so there is a unique intertwiner {{math|''W'' ∈ ''U''(''L''<sup>2</sup>('''R'''))}} such that |
|||
|||
<math display="block">\rho \circ j = W \rho W^*.</math> |
|||
This operator {{mvar|W}} is the Fourier transform. |
|||
Many of the standard properties of the Fourier transform are immediate consequences of this more general framework.<ref>{{harvnb|Howe|1980}}</ref> For example, the square of the Fourier transform, {{math|''W''{{isup|2}}}}, is an intertwiner associated with {{math|1=''J''{{isup|2}} = −''I''}}, and so we have {{math|1=(''W''{{i sup|2}}''f'')(''x'') = ''f'' (−''x'')}} is the reflection of the original function {{mvar|f}}. |
|||
|||
== Complex domain == |
|||
|||
The [[integral]] for the Fourier transform |
|||
<math display="block"> \hat f (\xi) = \int _{-\infty}^\infty e^{-i 2\pi \xi t} f(t) \, dt </math> |
|||
can be studied for [[complex number|complex]] values of its argument {{mvar|ξ}}. Depending on the properties of {{mvar|f}}, this might not converge off the real axis at all, or it might converge to a [[complex analysis|complex]] [[analytic function]] for all values of {{math|''ξ'' {{=}} ''σ'' + ''iτ''}}, or something in between.<ref>{{harvnb|Paley|Wiener|1934}}</ref> |
|||
The [[Paley–Wiener theorem]] says that {{mvar|f}} is smooth (i.e., {{mvar|n}}-times differentiable for all positive integers {{mvar|n}}) and compactly supported if and only if {{math|''f̂'' (''σ'' + ''iτ'')}} is a [[holomorphic function]] for which there exists a [[constant (mathematics)|constant]] {{math|''a'' > 0}} such that for any [[integer]] {{math|''n'' ≥ 0}}, |
|||
|||
<math display="block"> \left\vert \xi ^n \hat f(\xi) \right\vert \leq C e^{a\vert\tau\vert} </math> |
|||
for some constant {{mvar|C}}. (In this case, {{mvar|f}} is supported on {{math|[−''a'', ''a'']}}.) This can be expressed by saying that {{math|''f̂''}} is an [[entire function]] which is [[rapidly decreasing]] in {{mvar|σ}} (for fixed {{mvar|τ}}) and of exponential growth in {{mvar|τ}} (uniformly in {{mvar|σ}}).<ref>{{harvnb|Gelfand|Vilenkin|1964}}</ref> |
|||
(If {{mvar|f}} is not smooth, but only {{math|''L''<sup>2</sup>}}, the statement still holds provided {{math|''n'' {{=}} 0}}.<ref>{{harvnb|Kirillov|Gvishiani|1982}}</ref>) The space of such functions of a [[complex analysis|complex variable]] is called the Paley—Wiener space. This theorem has been generalised to semisimple [[Lie group]]s.<ref>{{harvnb|Clozel|Delorme|1985|pp=331–333}}</ref> |
|||
If {{mvar|f}} is supported on the half-line {{math|''t'' ≥ 0}}, then {{mvar|f}} is said to be "causal" because the [[impulse response function]] of a physically realisable [[Filter (mathematics)|filter]] must have this property, as no effect can precede its cause. [[Raymond Paley|Paley]] and Wiener showed that then {{math|''f̂''}} extends to a [[holomorphic function]] on the complex lower half-plane {{math|''τ'' < 0}} which tends to zero as {{mvar|τ}} goes to infinity.<ref>{{harvnb|de Groot|Mazur|1984|p=146}}</ref> The converse is false and it is not known how to characterise the Fourier transform of a causal function.<ref>{{harvnb|Champeney|1987|p=80}}</ref> |
|||
=== Laplace transform === |
|||
{{See also|Laplace transform#Fourier transform}} |
|||
The Fourier transform {{math|''f̂''(''ξ'')}} is related to the [[Laplace transform]] {{math|''F''(''s'')}}, which is also used for the solution of [[differential equation]]s and the analysis of [[Filter (signal processing)|filter]]s. |
|||
It may happen that a function {{mvar|f}} for which the Fourier integral does not converge on the real axis at all, nevertheless has a complex Fourier transform defined in some region of the [[complex plane]]. |
|||
For example, if {{math|''f''(''t'')}} is of exponential growth, i.e., |
|||
<math display="block"> \vert f(t) \vert < C e^{a\vert t\vert} </math> |
|||
for some constants {{math|''C'', ''a'' ≥ 0}}, then<ref name="Kolmogorov-Fomin-1999">{{harvnb|Kolmogorov|Fomin|1999}}</ref> |
|||
<math display="block"> \hat f (i\tau) = \int _{-\infty}^\infty e^{ 2\pi \tau t} f(t) \, dt, </math> |
|||
convergent for all {{math|2π''τ'' < −''a''}}, is the [[two-sided Laplace transform]] of {{mvar|f}}. |
|||
The more usual version ("one-sided") of the Laplace transform is |
|||
<math display="block"> F(s) = \int_0^\infty f(t) e^{-st} \, dt.</math> |
|||
If {{mvar|f}} is also causal, and analytical, then: <math> \hat f(i\tau) = F(-2\pi\tau).</math> Thus, extending the Fourier transform to the complex domain means it includes the Laplace transform as a special case in the case of causal functions—but with the change of variable {{math|''s'' {{=}} ''i''2π''ξ''}}. |
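For example, for the causal decaying exponential {{math|1=''f''(''t'') = ''e''<sup>−''at''</sup>}} for {{math|''t'' ≥ 0}} (and zero for {{math|''t'' < 0}}), with {{math|''a'' > 0}},

<math display="block">\hat f(\xi) = \int_0^\infty e^{-at}\, e^{-i 2\pi \xi t}\,dt = \frac{1}{a + i 2\pi\xi}, \qquad F(s) = \int_0^\infty e^{-at}\, e^{-st}\,dt = \frac{1}{s + a},</math>

so that indeed {{math|1=''f̂''(''ξ'') = ''F''(''i''2π''ξ'')}}.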
|||
From another, perhaps more classical viewpoint, the Laplace transform by its form involves an additional exponential regulating term which lets it converge outside of the imaginary line where the Fourier transform is defined. As such, it can converge for series and integrals that diverge at most exponentially, whereas the original Fourier decomposition cannot, enabling analysis of systems with divergent or critical elements. Two particular examples from linear signal processing are the construction of allpass filter networks from critical comb and mitigating filters via exact pole-zero cancellation on the unit circle. Such designs are common in audio processing, where highly nonlinear phase response is sought, as in reverb.
|||
Furthermore, when extended pulselike impulse responses are sought for signal processing work, the easiest way to produce them is to have one circuit which produces a divergent time response, and then to cancel its divergence through a delayed opposite and compensatory response. There, only the delay circuit in-between admits a classical Fourier description, which is critical. Both the circuits to the side are unstable, and do not admit a convergent Fourier decomposition. However, they do admit a Laplace domain description, with identical half-planes of convergence in the complex plane (or in the discrete case, the Z-plane), wherein their effects cancel. |
|||
In modern mathematics the Laplace transform is conventionally subsumed under the aegis Fourier methods. Both of them are subsumed by the far more general, and more abstract, idea of [[harmonic analysis]]. |
|||
=== Inversion === |
|||
Still with <math>\xi = \sigma+ i\tau</math>, if <math>\widehat f</math> is complex analytic for {{math|''a'' ≤ ''τ'' ≤ ''b''}}, then |
|||
<math display="block"> \int _{-\infty}^\infty \hat f (\sigma + ia) e^{ i 2\pi \xi t} \, d\sigma = \int _{-\infty}^\infty \hat f (\sigma + ib) e^{ i 2\pi \xi t} \, d\sigma </math> |
|||
by [[Cauchy's integral theorem]]. Therefore, the Fourier inversion formula can use integration along different lines, parallel to the real axis.<ref>{{harvnb|Wiener|1949}}</ref> |
|||
Theorem: If {{math|1=''f''(''t'') = 0}} for {{math|''t'' < 0}}, and {{math|{{abs|''f''(''t'')}} < ''Ce''<sup>''a''{{abs|''t''}}</sup>}} for some constants {{math|''C'', ''a'' > 0}}, then |
|||
<math display="block"> f(t) = \int_{-\infty}^\infty \hat f(\sigma + i\tau) e^{i 2 \pi \xi t} \, d\sigma,</math> |
|||
for any {{math|''τ'' < −{{sfrac|''a''|2π}}}}. |
|||
This theorem implies the [[inverse Laplace transform#Mellin's_inverse_formula|Mellin inversion formula]] for the Laplace transformation,<ref name="Kolmogorov-Fomin-1999" /> |
|||
<math display="block"> f(t) = \frac 1 {i 2\pi} \int_{b-i\infty}^{b+i\infty} F(s) e^{st}\, ds</math> |
|||
for any {{math|''b'' > ''a''}}, where {{math|''F''(''s'')}} is the Laplace transform of {{math|''f''(''t'')}}. |
|||
The hypotheses can be weakened, as in the results of Carleson and Hunt, to {{math|''f''(''t'') ''e''<sup>−''at''</sup>}} being {{math|''L''<sup>1</sup>}}, provided that {{mvar|f}} be of bounded variation in a closed neighborhood of {{mvar|t}} (cf. [[Dini test]]), the value of {{mvar|f}} at {{mvar|t}} be taken to be the [[arithmetic mean]] of the left and right limits, and that the integrals be taken in the sense of Cauchy principal values.<ref>{{harvnb|Champeney|1987|p=63}}</ref> |
|||
{{math|''L''<sup>2</sup>}} versions of these inversion formulas are also available.<ref>{{harvnb|Widder|Wiener|1938|p=537}}</ref> |
|||
== Fourier transform on Euclidean space == |
|||
The Fourier transform can be defined in any number of dimensions {{mvar|n}}. As with the one-dimensional case, there are many conventions. For an integrable function {{math|''f''('''x''')}}, this article takes the definition:
|||
<math display="block">\hat{f}(\boldsymbol{\xi}) = \mathcal{F}(f)(\boldsymbol{\xi}) = \int_{\R^n} f(\mathbf{x}) e^{-i 2\pi \boldsymbol{\xi}\cdot\mathbf{x}} \, d\mathbf{x}</math> |
|||
where {{math|'''x'''}} and {{math|'''ξ'''}} are {{mvar|n}}-dimensional [[vector (mathematics)|vectors]], and {{math|'''x''' · '''ξ'''}} is the [[dot product]] of the vectors. Alternatively, {{math|'''ξ'''}} can be viewed as belonging to the [[dual space|dual vector space]] <math>\R^{n\star}</math>, in which case the dot product becomes the [[tensor contraction|contraction]] of {{math|'''x'''}} and {{math|'''ξ'''}}, usually written as {{math|{{angbr|'''x''', '''ξ'''}}}}. |
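For example, because the exponential factorises over the coordinates, the {{mvar|n}}-dimensional Gaussian satisfies

<math display="block">\int_{\R^n} e^{-\pi|\mathbf{x}|^2}\, e^{-i 2\pi \boldsymbol{\xi}\cdot\mathbf{x}} \, d\mathbf{x} = \prod_{k=1}^n \int_{-\infty}^\infty e^{-\pi x_k^2}\, e^{-i 2\pi \xi_k x_k}\,dx_k = e^{-\pi|\boldsymbol{\xi}|^2},</math>

so, just as in one dimension, it is its own Fourier transform.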
|||
All of the basic properties listed above hold for the {{mvar|n}}-dimensional Fourier transform, as do Plancherel's and Parseval's theorem. When the function is integrable, the Fourier transform is still uniformly continuous and the [[Riemann–Lebesgue lemma]] holds.<ref name="Stein-Weiss-1971" /> |
|||
=== Uncertainty principle === |
|||
{{Further|Uncertainty principle}} |
|||
Generally speaking, the more concentrated {{math|''f''(''x'')}} is, the more spread out its Fourier transform {{math|''f̂''(''ξ'')}} must be. In particular, the scaling property of the Fourier transform may be seen as saying: if we squeeze a function in {{mvar|x}}, its Fourier transform stretches out in {{mvar|ξ}}. It is not possible to arbitrarily concentrate both a function and its Fourier transform. |
|||
The trade-off between the compaction of a function and its Fourier transform can be formalized in the form of an [[uncertainty principle]] by viewing a function and its Fourier transform as [[conjugate variables]] with respect to the [[symplectic form]] on the [[time–frequency representation|time–frequency domain]]: from the point of view of the [[linear canonical transformation]], the Fourier transform is rotation by 90° in the time–frequency domain, and preserves the [[Symplectic vector space|symplectic form]]. |
|||
Suppose {{math|''f''(''x'')}} is an integrable and [[square-integrable]] function. Without loss of generality, assume that {{math|''f''(''x'')}} is normalized: |
|||
<math display="block">\int_{-\infty}^\infty |f(x)|^2 \,dx=1.</math> |
|||
It follows from the [[Plancherel theorem]] that {{math|''f̂''(''ξ'')}} is also normalized. |
|||
The spread around {{math|''x'' {{=}} 0}} may be measured by the ''dispersion about zero''<ref>{{harvnb|Pinsky|2002|p=131}}</ref> defined by |
|||
<math display="block">D_0(f)=\int_{-\infty}^\infty x^2|f(x)|^2\,dx.</math> |
|||
In probability terms, this is the [[Moment (mathematics)|second moment]] of {{math|{{abs|''f''(''x'')}}<sup>2</sup>}} about zero. |
|||
The uncertainty principle states that, if {{math|''f''(''x'')}} is absolutely continuous and the functions {{math|''x''·''f''(''x'')}} and {{math|''f''{{′}}(''x'')}} are square integrable, then<ref name="Pinsky-2002" /> |
|||
<math display="block">D_0(f)D_0(\hat{f}) \geq \frac{1}{16\pi^2}.</math> |
|||
The equality is attained only in the case |
|||
<math display="block">\begin{align} f(x) &= C_1 \, e^{-\pi \frac{x^2}{\sigma^2} }\\ |
|||
\therefore \hat{f}(\xi) &= \sigma C_1 \, e^{-\pi\sigma^2\xi^2} \end{align} </math> |
|||
where {{math|''σ'' > 0}} is arbitrary and {{math|1=''C''<sub>1</sub> = {{sfrac|{{radic|2|4}}|{{sqrt|''σ''}}}}}} so that {{mvar|f}} is {{math|''L''<sup>2</sup>}}-normalized.<ref name="Pinsky-2002" /> In other words, where {{mvar|f}} is a (normalized) [[Gaussian function]] with variance {{math|''σ''<sup>2</sup>/2{{pi}}}}, centered at zero, and its Fourier transform is a Gaussian function with variance {{math|''σ''<sup>−2</sup>/2{{pi}}}}. |
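For example, for the normalised Gaussian with {{math|1=''σ'' = 1}}, that is {{math|1=''f''(''x'') = 2<sup>1/4</sup>''e''<sup>−π''x''<sup>2</sup></sup>}}, a direct computation gives

<math display="block">D_0(f) = D_0\left(\hat{f}\right) = \int_{-\infty}^\infty x^2 \sqrt{2}\, e^{-2\pi x^2}\,dx = \frac{1}{4\pi},</math>

so the product of the two dispersions is exactly {{math|{{sfrac|1|16π<sup>2</sup>}}}}.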
|||
In fact, this inequality implies that: |
<math display="block">\left(\int_{-\infty}^\infty (x-x_0)^2|f(x)|^2\,dx\right)\left(\int_{-\infty}^\infty(\xi-\xi_0)^2\left|\hat{f}(\xi)\right|^2\,d\xi\right)\geq \frac{1}{16\pi^2}</math> |
|||
for any {{math|''x''<sub>0</sub>}}, {{math|''ξ''<sub>0</sub> ∈ '''R'''}}.<ref name="Stein-Shakarchi-2003">{{harvnb|Stein|Shakarchi|2003}}</ref> |
|||
In [[quantum mechanics]], the [[momentum]] and position [[wave function]]s are Fourier transform pairs, up to a factor of the [[Planck constant]]. With this constant properly taken into account, the inequality above becomes the statement of the [[Heisenberg uncertainty principle]].<ref>{{harvnb|Stein|Shakarchi|2003|p=158}}</ref> |
|||
|||
A stronger uncertainty principle is the [[Hirschman uncertainty|Hirschman uncertainty principle]], which is expressed as: |
|||
|||
<math display="block">H\left(\left|f\right|^2\right)+H\left(\left|\hat{f}\right|^2\right)\ge \log\left(\frac{e}{2}\right)</math> |
|||
where {{math|''H''(''p'')}} is the [[differential entropy]] of the [[probability density function]] {{math|''p''(''x'')}}: |
|||
<math display="block">H(p) = -\int_{-\infty}^\infty p(x)\log\bigl(p(x)\bigr) \, dx</math> |
|||
where the logarithms may be in any base that is consistent. The equality is attained for a Gaussian, as in the previous case. |
|||
=== Sine and cosine transforms === |
|||
|||
{{Main|Sine and cosine transforms}} |
|||
Fourier's original formulation of the transform did not use complex numbers, but rather sines and cosines. Statisticians and others still use this form. An absolutely integrable function {{mvar|f}} for which Fourier inversion holds can be expanded in terms of genuine frequencies (avoiding negative frequencies, which are sometimes considered hard to interpret physically<ref>{{harvnb|Chatfield|2004|p=113}}</ref>) {{mvar|λ}} by |
|||
|||
<math display="block">f(t) = \int_0^\infty \bigl( a(\lambda ) \cos( 2\pi \lambda t) + b(\lambda ) \sin( 2\pi \lambda t)\bigr) \, d\lambda.</math> |
|||
This is called an expansion as a trigonometric integral, or a Fourier integral expansion. The coefficient functions {{mvar|a}} and {{mvar|b}} can be found by using variants of the Fourier cosine transform and the Fourier sine transform (the normalisations are, again, not standardised): |
|||
|||
<math display="block"> a (\lambda) = 2\int_{-\infty}^\infty f(t) \cos(2\pi\lambda t) \, dt</math> |
|||
and |
|||
<math display="block"> b (\lambda) = 2\int_{-\infty}^\infty f(t) \sin(2\pi\lambda t) \, dt. </math> |
|||
Older literature refers to the two transform functions, the Fourier cosine transform, {{mvar|a}}, and the Fourier sine transform, {{mvar|b}}. |
|||
|||
The function {{mvar|f}} can be recovered from the sine and cosine transform using |
|||
|||
<math display="block"> f(t) = 2\int_0 ^{\infty} \int_{-\infty}^{\infty} f(\tau) \cos\bigl( 2\pi \lambda(\tau-t)\bigr) \, d\tau \, d\lambda.</math> |
|||
together with trigonometric identities. This is referred to as Fourier's integral formula.<ref name="Kolmogorov-Fomin-1999" /><ref>{{harvnb|Fourier|1822|p=441}}</ref><ref>{{harvnb|Poincaré|1895|p=102}}</ref><ref>{{harvnb|Whittaker|Watson|1927|p=188}}</ref> |
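For example, for the even function {{math|1=''f''(''t'') = ''e''<sup>−{{abs|''t''}}</sup>}} the sine coefficient vanishes, and

<math display="block">a(\lambda) = 2\int_{-\infty}^\infty e^{-|t|}\cos(2\pi\lambda t)\,dt = \frac{4}{1 + 4\pi^2\lambda^2}, \qquad b(\lambda) = 0,</math>

so the expansion <math display="inline">f(t) = \int_0^\infty a(\lambda)\cos(2\pi\lambda t)\,d\lambda</math> recovers {{math|''e''<sup>−{{abs|''t''}}</sup>}}, by the classical integral <math display="inline">\int_0^\infty \frac{\cos(u t)}{1 + u^2}\,du = \tfrac{\pi}{2} e^{-|t|}</math>.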
|||
=== Spherical harmonics === |
|||
|||
Let the set of [[Homogeneous polynomial|homogeneous]] [[Harmonic function|harmonic]] [[polynomial]]s of degree {{mvar|k}} on {{math|'''R'''<sup>''n''</sup>}} be denoted by {{math|'''A'''<sub>''k''</sub>}}. The set {{math|'''A'''<sub>''k''</sub>}} consists of the [[solid spherical harmonics]] of degree {{mvar|k}}. The solid spherical harmonics play a similar role in higher dimensions to the Hermite polynomials in dimension one. Specifically, if {{math|1=''f''(''x'') = ''e''<sup>−π{{abs|''x''}}<sup>2</sup></sup>''P''(''x'')}} for some {{math|''P''(''x'')}} in {{math|'''A'''<sub>''k''</sub>}}, then {{math|1=''f̂''(''ξ'') = ''i''{{isup|−''k''}} ''f''(''ξ'')}}. Let the set {{math|'''H'''<sub>''k''</sub>}} be the closure in {{math|''L''<sup>2</sup>('''R'''<sup>''n''</sup>)}} of linear combinations of functions of the form {{math|''f''({{abs|''x''}})''P''(''x'')}} where {{math|''P''(''x'')}} is in {{math|'''A'''<sub>''k''</sub>}}. The space {{math|''L''<sup>2</sup>('''R'''<sup>''n''</sup>)}} is then a direct sum of the spaces {{math|'''H'''<sub>''k''</sub>}}; the Fourier transform maps each space {{math|'''H'''<sub>''k''</sub>}} to itself, and it is possible to characterize the action of the Fourier transform on each space {{math|'''H'''<sub>''k''</sub>}}.<ref name="Stein-Weiss-1971" />
|||
|||
Let {{math|1=''f''(''x'') = ''f''<sub>0</sub>({{abs|''x''}})''P''(''x'')}} (with {{math|''P''(''x'')}} in {{math|'''A'''<sub>''k''</sub>}}), then |
|||
|||
<math display="block">\hat{f}(\xi)=F_0(|\xi|)P(\xi)</math> |
|||
where |
|||
<math display="block">F_0(r) = 2\pi i^{-k}r^{-\frac{n+2k-2}{2}} \int_0^\infty f_0(s)J_\frac{n+2k-2}{2}(2\pi rs)s^\frac{n+2k}{2}\,ds.</math> |
|||
Here {{math|''J''<sub>(''n'' + 2''k'' − 2)/2</sub>}} denotes the [[Bessel function]] of the first kind with order {{math|{{sfrac|''n'' + 2''k'' − 2|2}}}}. When {{math|''k'' {{=}} 0}} this gives a useful formula for the Fourier transform of a radial function.<ref>{{harvnb|Grafakos|2004}}</ref> This is essentially the [[Hankel transform]]. Moreover, there is a simple recursion relating the cases {{math|''n'' + 2}} and {{mvar|n}}<ref>{{harvnb|Grafakos|Teschl|2013}}</ref> allowing to compute, e.g., the three-dimensional Fourier transform of a radial function from the one-dimensional one. |
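For example, in three dimensions ({{math|1=''n'' = 3}}, {{math|1=''k'' = 0}}) the Bessel function {{math|''J''<sub>1/2</sub>}} reduces to a sine, {{math|1=''J''<sub>1/2</sub>(''z'') = {{sqrt|2/(π''z'')}} sin ''z''}}, and the formula above becomes the familiar expression for the Fourier transform of a radial function on {{math|'''R'''<sup>3</sup>}}:

<math display="block">F_0(r) = \frac{2}{r}\int_0^\infty f_0(s)\, \sin(2\pi r s)\, s\,ds.</math>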
|||
|||
=== Restriction problems === |
|||
|||
In higher dimensions it becomes interesting to study ''restriction problems'' for the Fourier transform. The Fourier transform of an integrable function is continuous and the restriction of this function to any set is defined. But for a square-integrable function the Fourier transform could be a general ''class'' of square integrable functions. As such, the restriction of the Fourier transform of an {{math|''L''<sup>2</sup>('''R'''<sup>''n''</sup>)}} function cannot be defined on sets of measure 0. It is still an active area of study to understand restriction problems in {{math|''L''{{isup|''p''}}}} for {{math|1 < ''p'' < 2}}. It is possible in some cases to define the restriction of a Fourier transform to a set {{mvar|S}}, provided {{mvar|S}} has non-zero curvature. The case when {{mvar|S}} is the unit sphere in {{math|'''R'''<sup>''n''</sup>}} is of particular interest. In this case the Tomas–[[Elias Stein|Stein]] restriction theorem states that the restriction of the Fourier transform to the unit sphere in {{math|'''R'''<sup>''n''</sup>}} is a bounded operator on {{math|''L''{{isup|''p''}}}} provided {{math|1 ≤ ''p'' ≤ {{sfrac|2''n'' + 2|''n'' + 3}}}}. |
|||
One notable difference between the Fourier transform in 1 dimension versus higher dimensions concerns the partial sum operator. Consider an increasing collection of measurable sets {{math|''E''<sub>''R''</sub>}} indexed by {{math|''R'' ∈ (0,∞)}}: such as balls of radius {{mvar|R}} centered at the origin, or cubes of side {{math|2''R''}}. For a given integrable function {{mvar|f}}, consider the function {{mvar|f<sub>R</sub>}} defined by: |
|||
|||
<math display="block">f_R(x) = \int_{E_R}\hat{f}(\xi) e^{i 2\pi x\cdot\xi}\, d\xi, \quad x \in \mathbb{R}^n.</math> |
|||
|||
Suppose in addition that {{math|''f'' ∈ ''L''{{isup|''p''}}('''R'''<sup>''n''</sup>)}}. For {{math|''n'' {{=}} 1}} and {{math|1 < ''p'' < ∞}}, if one takes {{math|''E<sub>R</sub>'' {{=}} (−''R'', ''R'')}}, then {{mvar|f<sub>R</sub>}} converges to {{mvar|f}} in {{math|''L''{{isup|''p''}}}} as {{mvar|R}} tends to infinity, by the boundedness of the [[Hilbert transform]]. Naively one may hope the same holds true for {{math|''n'' > 1}}. In the case that {{mvar|E<sub>R</sub>}} is taken to be a cube with side length {{mvar|R}}, then convergence still holds. Another natural candidate is the Euclidean ball {{math|''E''<sub>''R''</sub> {{=}} {''ξ'' : {{abs|''ξ''}} < ''R''{{)}}}}. In order for this partial sum operator to converge, it is necessary that the multiplier for the unit ball be bounded in {{math|''L''{{isup|''p''}}('''R'''<sup>''n''</sup>)}}. For {{math|''n'' ≥ 2}} it is a celebrated theorem of [[Charles Fefferman]] that the multiplier for the unit ball is never bounded unless {{math|''p'' {{=}} 2}}.<ref name="Duoandikoetxea-2001" /> In fact, when {{math|''p'' ≠ 2}}, this shows that not only may {{mvar|f<sub>R</sub>}} fail to converge to {{mvar|f}} in {{math|''L''{{isup|''p''}}}}, but for some functions {{math|''f'' ∈ ''L''{{isup|''p''}}('''R'''<sup>''n''</sup>)}}, {{mvar|f<sub>R</sub>}} is not even an element of {{math|''L''{{isup|''p''}}}}. |
|||
|||
== Fourier transform on function spaces == |
|||
{{see also|Riesz–Thorin theorem}} |
|||
The definition of the Fourier transform naturally extends from <math>L^1(\mathbb R)</math> to <math>L^1(\mathbb R^n)</math> as, |
|||
<math display="block">\hat{f}(\xi) = \int_{\mathbb{R}^n} f(x)e^{-i 2\pi \xi\cdot x}\,dx,</math> |
|||
for {{math|''f'' ∈ ''L''<sup>1</sup>('''R'''<sup>''n''</sup>)}} whereby the [[Riemann–Lebesgue lemma]] may be formulated as the Fourier transform {{math|{{mathcal|F}} : ''L''<sup>1</sup>('''R'''<sup>''n''</sup>) → ''L''<sup>∞</sup>('''R'''<sup>''n''</sup>)}}. This operator is [[bounded operator|bounded]] as |
|||
<math display="block">\left\vert\hat{f}(\xi)\right\vert \leq \int_{\mathbb{R}^n} \vert f(x)\vert \,dx,</math> |
|||
which shows that its [[operator norm]] is bounded by {{math|1}}. The image of {{math|''L''<sup>1</sup>}} is a strict subset of {{math|''C''<sub>0</sub>('''R'''<sup>''n''</sup>)}}, the [[Function_space#Functional_analysis|space of continuous functions]] which vanish at infinity. |
|||
Similarly to the case of one variable, the Fourier transform can be defined on <math>L^2(\mathbb R^n)</math>. Since the [[Function_space#Functional_analysis|space of compactly supported smooth functions]] {{math|''C''{{su|b=c|p=∞|lh=1}}('''R'''<sup>''n''</sup>)}} is dense in {{math|''L''<sup>2</sup>('''R'''<sup>''n''</sup>)}}, the [[Plancherel theorem]] allows one to extend the definition of the Fourier transform to general functions in {{math|''L''<sup>2</sup>('''R'''<sup>''n''</sup>)}} by continuity arguments. The Fourier transform in {{math|''L''<sup>2</sup>('''R'''<sup>''n''</sup>)}} is no longer given by an ordinary Lebesgue integral, although it can be computed by an [[improper integral]], i.e., |
|||
|||
<math display="block">\hat{f}(\xi) = \lim_{R\to\infty}\int_{|x|\le R} f(x) e^{-i 2\pi\xi\cdot x}\,dx</math> |
|||
where the limit is taken in the {{math|''L''<sup>2</sup>}} sense.<ref>More generally, one can take a sequence of functions that are in the intersection of {{math|''L''<sup>1</sup>}} and {{math|''L''<sup>2</sup>}} and that converges to {{mvar|f}} in the {{math|''L''<sup>2</sup>}}-norm, and define the Fourier transform of {{mvar|f}} as the {{math|''L''<sup>2</sup>}} -limit of the Fourier transforms of these functions.</ref><ref>{{cite web|url=https://statweb.stanford.edu/~candes/teaching/math262/Lectures/Lecture03.pdf|title=Applied Fourier Analysis and Elements of Modern Signal Processing Lecture 3 |date= January 12, 2016|access-date=2019-10-11}}</ref> |
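For example, the [[sinc function]] {{math|sin(π''x'')/(π''x'')}} is square integrable but not integrable, so its Fourier transform is only defined in this limiting sense: the truncated integrals converge in {{math|''L''<sup>2</sup>('''R''')}} to the indicator function of the interval {{math|[−{{sfrac|1|2}}, {{sfrac|1|2}}]}}, consistent with the fact that this indicator function transforms to the sinc function.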
|||
Furthermore, {{math|{{mathcal|F}} : ''L''<sup>2</sup>('''R'''<sup>''n''</sup>) → ''L''<sup>2</sup>('''R'''<sup>''n''</sup>)}} is a [[unitary operator]].<ref>{{harvnb|Stein|Weiss|1971|loc=Thm. 2.3}}</ref> For an operator to be unitary it is sufficient to show that it is bijective and preserves the inner product, so in this case these follow from the Fourier inversion theorem combined with the fact that for any {{math|''f'', ''g'' ∈ ''L''<sup>2</sup>('''R'''<sup>''n''</sup>)}} we have |
|||
|||
<math display="block">\int_{\mathbb{R}^n} f(x)\mathcal{F}g(x)\,dx = \int_{\mathbb{R}^n} \mathcal{F}f(x)g(x)\,dx. </math> |
|||
In particular, the image of {{math|''L''<sup>2</sup>('''R'''<sup>''n''</sup>)}} is itself under the Fourier transform. |
|||
|||
=== On other ''L''<sup>''p''</sup> === |
|||
|||
For <math>1<p<2</math>, the Fourier transform can be defined on <math>L^p(\mathbb R)</math> by [[Marcinkiewicz interpolation]], which amounts to decomposing such functions into a fat tail part in {{math|''L''<sup>2</sup>}} plus a fat body part in {{math|''L''<sup>1</sup>}}. In each of these spaces, the Fourier transform of a function in {{math|''L''{{isup|''p''}}('''R'''<sup>''n''</sup>)}} is in {{math|''L''{{isup|''q''}}('''R'''<sup>''n''</sup>)}}, where {{math|1=''q'' = {{sfrac|''p''|''p'' − 1}}}} is the [[Hölder conjugate]] of {{mvar|p}} (by the [[Hausdorff–Young inequality]]). However, except for {{math|1=''p'' = 2}}, the image is not easily characterized. Further extensions become more technical. The Fourier transform of functions in {{math|''L''{{isup|''p''}}}} for the range {{math|2 < ''p'' < ∞}} requires the study of distributions.<ref name="Katznelson-1976" /> In fact, it can be shown that there are functions in {{math|''L''{{isup|''p''}}}} with {{math|''p'' > 2}} so that the Fourier transform is not defined as a function.<ref name="Stein-Weiss-1971" /> |
|||
=== Tempered distributions === |
||
{{Main|Distribution (mathematics)#Tempered distributions and Fourier transform}} |
||
One might consider enlarging the domain of the Fourier transform from {{math|''L''<sup>1</sup> + ''L''<sup>2</sup>}} by considering [[generalized function]]s, or distributions. A distribution on {{math|'''R'''<sup>''n''</sup>}} is a continuous linear functional on the space {{math|''C''{{su|b=c|p=∞|lh=1}}('''R'''<sup>''n''</sup>)}} of compactly supported smooth functions, equipped with a suitable topology. The strategy is then to consider the action of the Fourier transform on {{math|''C''{{su|b=c|p=∞|lh=1}}('''R'''<sup>''n''</sup>)}} and pass to distributions by duality. The obstruction to doing this is that the Fourier transform does not map {{math|''C''{{su|b=c|p=∞|lh=1}}('''R'''<sup>''n''</sup>)}} to {{math|''C''{{su|b=c|p=∞|lh=1}}('''R'''<sup>''n''</sup>)}}. In fact the Fourier transform of an element in {{math|''C''{{su|b=c|p=∞|lh=1}}('''R'''<sup>''n''</sup>)}} can not vanish on an open set; see the above discussion on the uncertainty principle. |
|||
|||
The Fourier transform can also be defined for [[tempered distribution]]s <math>\mathcal S'(\mathbb R^n)</math>, dual to the space of [[Schwartz function]]s <math>\mathcal S(\mathbb R^n)</math>. A Schwartz function is a smooth function that decays at infinity, along with all of its derivatives, hence <math>C_{c}^{\infty}(\mathbb{R}^n)\subseteq \mathcal S(\mathbb R^n)</math>. The Fourier transform is an automorphism on the Schwartz space, as a topological vector space, and thus induces an automorphism on its dual, the space of tempered distributions.<ref name="Stein-Weiss-1971" /> The tempered distributions include well-behaved functions of polynomial growth, distributions of compact support as well as all the integrable functions mentioned above. |
|||
|||
For the definition of the Fourier transform of a tempered distribution, let {{mvar|f}} and {{mvar|g}} be integrable functions, and let {{math|''f̂''}} and {{math|''ĝ''}} be their Fourier transforms respectively. Then the Fourier transform obeys the following multiplication formula,<ref name="Stein-Weiss-1971" /> |
|||
|||
<math display="block">\int_{\mathbb{R}^n}\hat{f}(x)g(x)\,dx=\int_{\mathbb{R}^n}f(x)\hat{g}(x)\,dx.</math> |
|||
|||
Every integrable function {{mvar|f}} defines (induces) a distribution {{mvar|T<sub>f</sub>}} by the relation |
|||
<math display="block">T_f(\phi)=\int_{\mathbb{R}^n}f(x)\phi(x)\,dx,\quad \forall \phi\in\mathcal S(\mathbb R^n).</math> |
|||
So it makes sense to define the Fourier transform of a tempered distribution <math>T_{f}\in\mathcal S'(\mathbb R)</math> by the duality: |
|||
<math display="block">\langle \widehat T_{f}, \phi\rangle = \langle T_{f},\widehat \phi\rangle,\quad \forall \phi\in\mathcal S(\mathbb R^n).</math> |
|||
Extending this to all tempered distributions {{mvar|T}} gives the general definition of the Fourier transform. In particular, it follows from the multiplication formula that <math>\hat{T}_f = T_{\hat{f}}</math> for every integrable function {{mvar|f}}, so the distributional definition is consistent with the classical one.
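For example, the Fourier transform of the [[Dirac delta function|Dirac delta distribution]] follows immediately from this duality: for every Schwartz function <math>\phi</math>,

<math display="block">\langle \widehat{\delta}, \phi\rangle = \langle \delta, \widehat{\phi}\,\rangle = \widehat{\phi}(0) = \int_{\mathbb{R}^n}\phi(x)\,dx = \langle 1, \phi\rangle,</math>

so {{math|1=''δ̂'' = 1}}, the constant function.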
|||
Distributions can be differentiated and the above-mentioned compatibility of the Fourier transform with differentiation and convolution remains true for tempered distributions. |
|||
:<math>\hat{T}(\varphi)=T(\hat{\varphi})</math> for all Schwartz functions ''φ''. |
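As a brief illustration of this duality, the [[Dirac delta function]] {{mvar|δ}} is a tempered distribution, and for every Schwartz function {{mvar|φ}},

<math display="block">\langle \hat\delta, \phi\rangle = \langle \delta, \hat\phi\rangle = \hat\phi(0) = \int_{\mathbb{R}^n}\phi(x)\,dx = \langle 1, \phi\rangle,</math>

so {{math|''δ̂''}} is the constant function {{math|1}}; the same conclusion is reached below by viewing {{mvar|δ}} as a finite measure.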
== Generalizations == |
=== Fourier–Stieltjes transform === |
{{see also|Bochner–Minlos theorem|Riesz–Markov–Kakutani representation theorem}} |
The Fourier transform of a [[finite measure|finite]] [[Borel measure]] {{mvar|μ}} on {{math|'''R'''<sup>''n''</sup>}} is given by:{{sfn|Pinsky|2002|p=256}} |
<math display="block">\hat\mu(\xi)=\int_{\mathbb{R}^n} e^{-i 2\pi x \cdot \xi}\,d\mu,</math> |
and called the ''Fourier–Stieltjes transform'' due to its connection with the [[Riemann–Stieltjes_integral#Application_to_functional_analysis|Riemann–Stieltjes integral]] representation of [[Radon_measure|(Radon) measures]].{{sfn|Edwards|1982|pp=53,67,72-73}}
One notable difference with the Fourier transform of integrable functions is that the [[Riemann–Lebesgue lemma]] fails for measures.<ref name="Katznelson-1976" /> In the case that {{math|''dμ'' {{=}} ''f''(''x'') ''dx''}}, then the formula above reduces to the usual definition for the Fourier transform of {{mvar|f}}. In the case that {{mvar|μ}} is the probability distribution associated to a random variable {{mvar|X}}, the Fourier–Stieltjes transform is closely related to the [[Characteristic function (probability theory)|characteristic function]], but the typical conventions in probability theory take {{math|''e''<sup>''iξx''</sup>}} instead of {{math|''e''<sup>−''i''2π''ξx''</sup>}}.<ref name="Pinsky-2002" /> In the case when the distribution has a [[probability density function]] this definition reduces to the Fourier transform applied to the probability density function, again with a different choice of constants. |
The Fourier transform may be used to give a characterization of measures. [[Bochner's theorem]] characterizes which functions may arise as the Fourier–Stieltjes transform of a positive measure on the circle.<ref name="Katznelson-1976" /> |
Furthermore, the [[Dirac_delta_function#As_a_measure|Dirac delta function]] is a finite Borel measure. Its Fourier transform is a constant function (whose value depends on the form of the Fourier transform used). |
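As a simple illustration, the symmetric two-point measure {{math|''μ'' {{=}} {{sfrac|1|2}}(''δ''<sub>−''a''</sub> + ''δ''<sub>''a''</sub>)}} on {{math|'''R'''}}, with {{math|''a'' > 0}}, has Fourier–Stieltjes transform

<math display="block">\hat\mu(\xi)=\tfrac{1}{2}\left(e^{i 2\pi a \xi} + e^{-i 2\pi a \xi}\right)=\cos(2\pi a\xi),</math>

which is bounded but does not tend to zero as {{mvar|ξ}} → ±∞, making concrete the failure of the Riemann–Lebesgue lemma noted above.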
=== Locally compact abelian groups === |
{{Main|Pontryagin duality}} |
The Fourier transform may be generalized to any locally compact abelian group. A locally compact abelian group is an [[abelian group]] that is at the same time a [[locally compact]] [[Hausdorff space|Hausdorff topological space]] so that the group operation is continuous. If {{mvar|G}} is a locally compact abelian group, it has a translation invariant measure {{mvar|μ}}, called [[Haar measure]]. For a locally compact abelian group {{mvar|G}}, the set of irreducible, i.e. one-dimensional, unitary representations are called its [[character group|characters]]. With its natural group structure and the topology of uniform convergence on compact sets (that is, the topology induced by the [[compact-open topology]] on the space of all continuous functions from <math>G</math> to the [[circle group]]), the set of characters {{mvar|Ĝ}} is itself a locally compact abelian group, called the ''Pontryagin dual'' of {{mvar|G}}. For a function {{mvar|f}} in {{math|''L''<sup>1</sup>(''G'')}}, its Fourier transform is defined by<ref name="Katznelson-1976" /> |
<math display="block">\hat{f}(\xi) = \int_G \xi(x)f(x)\,d\mu\quad \text{for any }\xi \in \hat{G}.</math> |
The Riemann–Lebesgue lemma holds in this case; {{math|''f̂''(''ξ'')}} is a function vanishing at infinity on {{mvar|Ĝ}}. |
The Fourier transform on {{nobr|{{mvar|T}} {{=}} R/Z}} is an example; here {{mvar|T}} is a locally compact abelian group, and the Haar measure {{mvar|μ}} on {{mvar|T}} can be thought of as the Lebesgue measure on [0,1). Consider the representations of {{mvar|T}} on the complex plane {{mvar|C}}, viewed as a one-dimensional complex vector space. There is a family of representations (which are irreducible since {{mvar|C}} is one-dimensional) <math>\{e_{k}: T \rightarrow GL_{1}(C) = C^{*} \mid k \in Z\}</math>, where <math>e_{k}(x) = e^{i 2\pi kx}</math> for <math>x\in T</math>.

The character of such a representation, that is the trace of <math>e_{k}(x)</math> for each <math>x\in T</math> and <math>k\in Z</math>, is <math>e^{i 2\pi kx}</math> itself. In the case of a representation of a finite group, the character table of the group {{mvar|G}} consists of rows of vectors such that each row is the character of one irreducible representation of {{mvar|G}}, and these vectors form an orthonormal basis of the space of class functions that map from {{mvar|G}} to {{mvar|C}} by Schur's lemma. Now the group {{mvar|T}} is no longer finite but still compact, and the orthonormality of the character table is preserved. Each row of the table is the function <math>e_{k}(x)</math> of <math>x\in T,</math> and the inner product between two class functions (all functions being class functions since {{mvar|T}} is abelian) <math>f,g \in L^{2}(T, d\mu)</math> is defined as <math display="inline">\langle f, g \rangle = \frac{1}{|T|}\int_{[0,1)}f(y)\overline{g}(y)d\mu(y)</math> with the normalizing factor <math>|T|=1</math>. The sequence <math>\{e_{k}\mid k\in Z\}</math> is an orthonormal basis of the space of class functions <math>L^{2}(T,d\mu)</math>.

For any representation {{mvar|V}} of a finite group {{mvar|G}}, <math>\chi_{v}</math> can be expressed as the span <math display="inline">\sum_{i} \left\langle \chi_{v},\chi_{v_{i}} \right\rangle \chi_{v_{i}}</math> (<math>V_{i}</math> being the irreducible representations of {{mvar|G}}), such that <math display="inline">\left\langle \chi_{v}, \chi_{v_{i}} \right\rangle = \frac{1}{|G|}\sum_{g\in G}\chi_{v}(g)\overline{\chi}_{v_{i}}(g)</math>. Similarly, for <math>G = T</math> and <math>f\in L^{2}(T, d\mu)</math>, <math display="inline">f(x) = \sum_{k\in Z}\hat{f}(k)e_{k}</math>. The Pontryagin dual <math>\hat{T}</math> is <math>\{e_{k}\}(k\in Z)</math> and for <math>f \in L^{2}(T, d\mu)</math>, <math display="inline">\hat{f}(k) = \frac{1}{|T|}\int_{[0,1)}f(y)e^{-i 2\pi ky}dy</math> is its Fourier transform at <math>e_{k} \in \hat{T}</math>.
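The following short numerical sketch (an illustration only; the helper <code>fourier_coefficient</code> and the sample function are hypothetical choices) approximates these Fourier coefficients on {{math|''T'' {{=}} [0,1)}} by an equally spaced Riemann sum:

<syntaxhighlight lang="python">
import numpy as np

# Approximate f_hat(k) = integral over [0,1) of f(y) * exp(-i 2 pi k y) dy.
def fourier_coefficient(f, k, num_points=4096):
    y = np.arange(num_points) / num_points           # sample points in [0, 1)
    return np.mean(f(y) * np.exp(-2j * np.pi * k * y))

f = lambda y: 3 * np.cos(2 * np.pi * 2 * y) + 1.0    # f = (3/2) e_2 + (3/2) e_{-2} + e_0

print(fourier_coefficient(f, 0).real)    # ~ 1.0, the coefficient of e_0
print(fourier_coefficient(f, 2).real)    # ~ 1.5, the coefficient of e_2
print(abs(fourier_coefficient(f, 5)))    # ~ 0.0, a frequency not present in f
</syntaxhighlight>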
=== Gelfand transform === |
{{Main|Gelfand representation}} |
The Fourier transform is also a special case of the [[Gelfand transform]]. In this particular context, it is closely related to the Pontryagin duality map defined above.

Given an abelian [[locally compact space|locally compact]] [[Hausdorff space|Hausdorff]] [[topological group]] {{mvar|G}}, as before we consider the space {{math|''L''<sup>1</sup>(''G'')}}, defined using a Haar measure. With convolution as multiplication, {{math|''L''<sup>1</sup>(''G'')}} is an abelian [[Banach algebra]]. It also has an [[Involution (mathematics)|involution]] * given by
<math display="block">f^*(g) = \overline{f\left(g^{-1}\right)}.</math> |
Taking the completion with respect to the largest possible {{math|''C''*}}-norm gives its enveloping {{math|''C''*}}-algebra, called the group {{math|''C''*}}-algebra {{math|''C''*(''G'')}} of {{mvar|G}}. (Any {{math|''C''*}}-norm on {{math|''L''<sup>1</sup>(''G'')}} is bounded by the {{math|''L''<sup>1</sup>}} norm, therefore their supremum exists.)
Given any abelian {{math|''C''*}}-algebra {{mvar|A}}, the Gelfand transform gives an isomorphism between {{mvar|A}} and {{math|''C''<sub>0</sub>(''A''^)}}, where {{math|''A''^}} is the set of multiplicative linear functionals, i.e. one-dimensional representations, on {{mvar|A}} with the weak-* topology. The map is simply given by
<math display="block">a \mapsto \bigl( \varphi \mapsto \varphi(a) \bigr)</math> |
It turns out that the multiplicative linear functionals of {{math|''C''*(''G'')}}, after suitable identification, are exactly the characters of {{mvar|G}}, and the Gelfand transform, when restricted to the dense subset {{math|''L''<sup>1</sup>(''G'')}}, is the Fourier–Pontryagin transform.

=== Compact non-abelian groups ===
The Fourier transform can also be defined for functions on a non-abelian group, provided that the group is [[compact space|compact]]. Removing the assumption that the underlying group is abelian, irreducible unitary representations need not always be one-dimensional. This means the Fourier transform on a non-abelian group takes values as Hilbert space operators.<ref>{{harvnb|Hewitt|Ross|1970|loc=Chapter 8}}</ref> The Fourier transform on compact groups is a major tool in [[representation theory]]<ref>{{harvnb|Knapp|2001}}</ref> and [[non-commutative harmonic analysis]]. |
Let {{mvar|G}} be a compact [[Hausdorff space|Hausdorff]] [[topological group]]. Let {{math|Σ}} denote the collection of all isomorphism classes of finite-dimensional irreducible [[unitary representation]]s, along with a definite choice of representation {{math|''U''{{isup|(''σ'')}}}} on the [[Hilbert space]] {{math|''H<sub>σ</sub>''}} of finite dimension {{math|''d<sub>σ</sub>''}} for each {{math|''σ'' ∈ Σ}}. If {{mvar|μ}} is a finite [[Borel measure]] on {{mvar|G}}, then the Fourier–Stieltjes transform of {{mvar|μ}} is the operator on {{math|''H<sub>σ</sub>''}} defined by |
<math display="block">\left\langle \hat{\mu}\xi,\eta\right\rangle_{H_\sigma} = \int_G \left\langle \overline{U}^{(\sigma)}_g\xi,\eta\right\rangle\,d\mu(g)</math> |
where {{math|{{overline|''U''}}{{isup|(''σ'')}}}} is the complex-conjugate representation of {{math|''U''<sup>(''σ'')</sup>}} acting on {{math|''H<sub>σ</sub>''}}. If {{mvar|μ}} is [[absolutely continuous]] with respect to the [[Haar measure|left-invariant probability measure]] {{mvar|λ}} on {{mvar|G}}, [[Radon–Nikodym theorem|represented]] as |
<math display="block">d\mu = f \, d\lambda</math> |
for some {{math|''f'' ∈ [[Lp space|''L''<sup>1</sup>(''λ'')]]}}, one identifies the Fourier transform of {{mvar|f}} with the Fourier–Stieltjes transform of {{mvar|μ}}. |
The mapping |
<math display="block">\mu\mapsto\hat{\mu}</math> |
defines an isomorphism between the [[Banach space]] {{math|''M''(''G'')}} of finite Borel measures (see [[rca space]]) and a closed subspace of the Banach space {{math|'''C'''<sub>∞</sub>(Σ)}} consisting of all sequences {{math|''E'' {{=}} (''E<sub>σ</sub>'')}} indexed by {{math|Σ}} of (bounded) linear operators {{math|''E<sub>σ</sub>'' : ''H<sub>σ</sub>'' → ''H<sub>σ</sub>''}} for which the norm |
<math display="block">\|E\| = \sup_{\sigma\in\Sigma}\left\|E_\sigma\right\|</math> |
is finite. The "[[convolution theorem]]" asserts that, furthermore, this isomorphism of Banach spaces is in fact an isometric isomorphism of [[C*-algebra]]s into a subspace of {{math|'''C'''<sub>∞</sub>(Σ)}}. Multiplication on {{math|''M''(''G'')}} is given by [[convolution]] of measures and the involution * defined by |
<math display="block">f^*(g) = \overline{f\left(g^{-1}\right)},</math> |
and {{math|'''C'''<sub>∞</sub>(Σ)}} has a natural {{math|''C''*}}-algebra structure as Hilbert space operators. |
The [[Peter–Weyl theorem]] holds, and a version of the Fourier inversion formula ([[Plancherel's theorem]]) follows: if {{math|''f'' ∈ ''L''<sup>2</sup>(''G'')}}, then |
<math display="block">f(g) = \sum_{\sigma\in\Sigma} d_\sigma \operatorname{tr}\left(\hat{f}(\sigma)U^{(\sigma)}_g\right)</math> |
where the summation is understood as convergent in the {{math|''L''<sup>2</sup>}} sense. |
The generalization of the Fourier transform to the noncommutative situation has also in part contributed to the development of [[noncommutative geometry]].{{Citation needed|date=May 2009}} In this context, a categorical generalization of the Fourier transform to noncommutative groups is [[Tannaka–Krein duality]], which replaces the group of characters with the category of representations. However, this loses the connection with harmonic functions. |
== Alternatives == |
In [[signal processing]] terms, a function (of time) is a representation of a signal with perfect ''time resolution'', but no frequency information, while the Fourier transform has perfect ''frequency resolution'', but no time information: the magnitude of the Fourier transform at a point is how much frequency content there is, but location is only given by phase (argument of the Fourier transform at a point), and [[standing wave]]s are not localized in time – a sine wave continues out to infinity, without decaying. This limits the usefulness of the Fourier transform for analyzing signals that are localized in time, notably [[transient (acoustics)|transients]], or any signal of finite extent. |
As alternatives to the Fourier transform, in [[time–frequency analysis]], one uses time–frequency transforms or time–frequency distributions to represent signals in a form that has some time information and some frequency information – by the uncertainty principle, there is a trade-off between these. These can be generalizations of the Fourier transform, such as the [[short-time Fourier transform]], [[fractional Fourier transform]], Synchrosqueezing Fourier transform,<ref>{{cite journal |last1=Correia |first1=L. B. |last2=Justo |first2=J. F. |last3=Angélico |first3=B. A. |title=Polynomial Adaptive Synchrosqueezing Fourier Transform: A method to optimize multiresolution |journal=Digital Signal Processing |date=2024 |volume=150 |page=104526 |doi=10.1016/j.dsp.2024.104526|bibcode=2024DSPRJ.15004526C }}</ref> or other functions to represent signals, as in [[wavelet transform]]s and [[chirplet transform]]s, with the wavelet analog of the (continuous) Fourier transform being the [[continuous wavelet transform]].<ref name="Boashash-2003" /> |
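As an illustrative sketch of this trade-off (the signal and parameters below are arbitrary choices, and [[SciPy]]'s <code>stft</code> is used only as one readily available implementation of the short-time Fourier transform):

<syntaxhighlight lang="python">
import numpy as np
from scipy import signal

fs = 1000                                    # sampling rate in Hz (arbitrary)
t = np.arange(0, 2.0, 1 / fs)
# 50 Hz during the first second, 120 Hz during the second one.
x = np.where(t < 1.0, np.sin(2 * np.pi * 50 * t), np.sin(2 * np.pi * 120 * t))

# The (discrete analogue of the) Fourier transform shows both frequencies,
# but gives no indication of when each of them occurs.
spectrum = np.abs(np.fft.rfft(x))

# A short-time Fourier transform trades frequency resolution for time
# localization: Zxx[i, j] describes frequency f[i] near time t_seg[j].
f, t_seg, Zxx = signal.stft(x, fs=fs, nperseg=256)
</syntaxhighlight>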
== Example == |
The following figures provide a visual illustration of how the Fourier transform's integral measures whether a frequency is present in a particular function. The first image depicts the function <math>f(t) = \cos(2\pi\ 3 t) \ e^{-\pi t^2},</math> which is a 3 [[hertz|Hz]] cosine wave (the first term) shaped by a [[Gaussian function|Gaussian]] [[Envelope (waves)|envelope function]] (the second term) that smoothly turns the wave on and off. The next 2 images show the product <math>f(t) e^{-i 2\pi 3 t},</math> which must be integrated to calculate the Fourier transform at +3 Hz. The real part of the integrand has a non-negative average value, because the alternating signs of <math>f(t)</math> and <math>\operatorname{Re}(e^{-i 2\pi 3 t})</math> oscillate at the same rate and in phase, whereas <math>f(t)</math> and <math>\operatorname{Im} (e^{-i 2\pi 3 t})</math> oscillate at the same rate but with orthogonal phase. The absolute value of the Fourier transform at +3 Hz is 0.5, which is relatively large. When added to the Fourier transform at -3 Hz (which is identical because we started with a real signal), we find that the amplitude of the 3 Hz frequency component is 1. |
[[File:Onfreq.png|left|thumb|695x695px|Original function, which has a strong 3 Hz component. Real and imaginary parts of the integrand of its Fourier transform at +3 Hz.]] |
{{clear}}However, when you try to measure a frequency that is not present, both the real and imaginary component of the integral vary rapidly between positive and negative values. For instance, the red curve is looking for 5 Hz. The absolute value of its integral is nearly zero, indicating that almost no 5 Hz component was in the signal. The general situation is usually more complicated than this, but heuristically this is how the Fourier transform measures how much of an individual frequency is present in a function <math> f(t).</math><gallery widths="360px" heights="360px"> |
File:Offfreq i2p.svg| Real and imaginary parts of the integrand for its Fourier transform at +5 Hz. |
File:Fourier transform of oscillating function.svg| Magnitude of its Fourier transform, with +3 and +5 Hz labeled. |
</gallery> |
To reinforce an earlier point, the reason for the response at <math>\xi=-3</math> Hz is that <math>\cos(2\pi 3t)</math> and <math>\cos(2\pi(-3)t)</math> are indistinguishable. The transform of <math>e^{i2\pi 3t}\cdot e^{-\pi t^2}</math> would have just one response, whose amplitude is the integral of the smooth envelope: <math>e^{-\pi t^2},</math> whereas <math>\operatorname{Re}(f(t)\cdot e^{-i2\pi 3t})</math> is <math>e^{-\pi t^2} (1 + \cos(2\pi 6t))/2.</math>
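The figures' numbers can be checked with a short numerical sketch (illustrative only; the grid and integration rule are arbitrary choices):

<syntaxhighlight lang="python">
import numpy as np

t = np.linspace(-10, 10, 200001)                 # fine grid; f decays rapidly
f = np.cos(2 * np.pi * 3 * t) * np.exp(-np.pi * t**2)

def ft(xi):
    """Approximate the Fourier transform of f at frequency xi (trapezoid rule)."""
    return np.trapz(f * np.exp(-2j * np.pi * xi * t), t)

print(abs(ft(3.0)))              # ~ 0.5: a strong +3 Hz component
print(abs(ft(5.0)))              # ~ 2e-6: essentially no 5 Hz component
print(abs(ft(3.0) + ft(-3.0)))   # ~ 1.0: the amplitude of the 3 Hz component
</syntaxhighlight>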
== Applications == |
{{see also|Spectral density#Applications}} |
[[File:Commutative diagram illustrating problem solving via the Fourier transform.svg|thumb|400px|Some problems, such as certain differential equations, become easier to solve when the Fourier transform is applied. In that case the solution to the original problem is recovered using the inverse Fourier transform.]] |
Linear operations performed in one domain (time or frequency) have corresponding operations in the other domain, which are sometimes easier to perform. The operation of [[derivative|differentiation]] in the time domain corresponds to multiplication by the frequency,<ref group="note">Up to an imaginary constant factor whose magnitude depends on what Fourier transform convention is used.</ref> so some [[differential equation]]s are easier to analyze in the frequency domain. Also, [[convolution]] in the time domain corresponds to ordinary multiplication in the frequency domain (see [[Convolution theorem]]). After performing the desired operations, transformation of the result can be made back to the time domain. [[Harmonic analysis]] is the systematic study of the relationship between the frequency and time domains, including the kinds of functions or operations that are "simpler" in one or the other, and has deep connections to many areas of modern mathematics. |
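For example, the discrete (circular) form of the convolution theorem can be verified numerically; the sketch below is illustrative and uses arbitrary test data:

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(1)
a = rng.normal(size=64)
b = rng.normal(size=64)

# Multiplying the DFTs and transforming back ...
via_fft = np.real(np.fft.ifft(np.fft.fft(a) * np.fft.fft(b)))

# ... agrees with the circular convolution computed directly from its definition.
direct = np.zeros(64)
for k in range(64):
    for m in range(64):
        direct[k] += a[m] * b[(k - m) % 64]

print(np.allclose(via_fft, direct))   # True
</syntaxhighlight>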
=== Analysis of differential equations === |
Perhaps the most important use of the Fourier transformation is to solve [[partial differential equation]]s. |
Many of the equations of the mathematical physics of the nineteenth century can be treated this way. Fourier studied the heat equation, which in one dimension and in dimensionless units is |
<math display="block">\frac{\partial^2 y(x, t)}{\partial x^2} = \frac{\partial y(x, t)}{\partial t}.</math>

The example we will give, a slightly more difficult one, is the wave equation in one dimension,

<math display="block">\frac{\partial^2 y(x, t)}{\partial x^2} = \frac{\partial^2 y(x, t)}{\partial t^2}.</math>
As usual, the problem is not to find a solution: there are infinitely many. The problem is that of the so-called "boundary problem": find a solution which satisfies the "boundary conditions" |
<math display="block">y(x, 0) = f(x),\qquad \frac{\partial y(x, 0)}{\partial t} = g(x).</math> |
Here, {{mvar|f}} and {{mvar|g}} are given functions. For the heat equation, only one boundary condition can be required (usually the first one). But for the wave equation, there are still infinitely many solutions {{mvar|y}} which satisfy the first boundary condition. But when one imposes both conditions, there is only one possible solution. |
It is easier to find the Fourier transform {{mvar|ŷ}} of the solution than to find the solution directly. This is because the Fourier transformation takes differentiation into multiplication by the Fourier-dual variable, and so a partial differential equation applied to the original function is transformed into multiplication by polynomial functions of the dual variables applied to the transformed function. After {{mvar|ŷ}} is determined, we can apply the inverse Fourier transformation to find {{mvar|y}}. |
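As an illustrative numerical sketch of this idea (assuming, for simplicity, a periodic domain so that the discrete Fourier transform can stand in for the Fourier transform), the heat equation mentioned above decouples into one ordinary differential equation per frequency:

<syntaxhighlight lang="python">
import numpy as np

# u_t = u_xx on a periodic domain: in the frequency domain each mode obeys
# d/dt u_hat_k = (i k)^2 u_hat_k = -k^2 u_hat_k, which is solved by
# u_hat_k(t) = u_hat_k(0) * exp(-k**2 * t).
n = 256
x = 2 * np.pi * np.arange(n) / n
u0 = np.exp(np.cos(x))                       # arbitrary smooth periodic initial data
k = np.fft.fftfreq(n, d=1.0 / n)             # integer wavenumbers

def heat_solution(t):
    u_hat = np.fft.fft(u0) * np.exp(-(k**2) * t)
    return np.real(np.fft.ifft(u_hat))

u = heat_solution(0.5)                       # the temperature profile at time t = 0.5
</syntaxhighlight>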
Fourier's method is as follows. First, note that any function of the forms |
<math display="block"> \cos\bigl(2\pi\xi(x\pm t)\bigr) \text{ or } \sin\bigl(2\pi\xi(x \pm t)\bigr)</math> |
satisfies the wave equation. These are called the elementary solutions. |
Second, note that therefore any integral |
<math display="block">\begin{align} |
|||
y(x, t) = \int_{0}^{\infty} d\xi \Bigl[ &a_+(\xi)\cos\bigl(2\pi\xi(x + t)\bigr) + a_-(\xi)\cos\bigl(2\pi\xi(x - t)\bigr) +{} \\ |
|||
&b_+(\xi)\sin\bigl(2\pi\xi(x + t)\bigr) + b_-(\xi)\sin\left(2\pi\xi(x - t)\right) \Bigr] |
|||
\end{align}</math> |
satisfies the wave equation for arbitrary {{math|''a''<sub>+</sub>, ''a''<sub>−</sub>, ''b''<sub>+</sub>, ''b''<sub>−</sub>}}. This integral may be interpreted as a continuous linear combination of solutions for the linear equation. |
Now this resembles the formula for the Fourier synthesis of a function. In fact, this is the real inverse Fourier transform of {{math|''a''<sub>±</sub>}} and {{math|''b''<sub>±</sub>}} in the variable {{mvar|x}}. |
The third step is to examine how to find the specific unknown coefficient functions {{math|''a''<sub>±</sub>}} and {{math|''b''<sub>±</sub>}} that will lead to {{mvar|y}} satisfying the boundary conditions. We are interested in the values of these solutions at {{math|1=''t'' = 0}}. So we will set {{math|1=''t'' = 0}}. Assuming that the conditions needed for Fourier inversion are satisfied, we can then find the Fourier sine and cosine transforms (in the variable {{mvar|x}}) of both sides and obtain |
<math display="block"> 2\int_{-\infty}^\infty y(x,0) \cos(2\pi\xi x) \, dx = a_+ + a_-</math> |
and |
<math display="block">2\int_{-\infty}^\infty y(x,0) \sin(2\pi\xi x) \, dx = b_+ + b_-.</math> |
Similarly, taking the derivative of {{mvar|y}} with respect to {{mvar|t}} and then applying the Fourier sine and cosine transformations yields |
<math display="block">2\int_{-\infty}^\infty \frac{\partial y(x,0)}{\partial t} \sin (2\pi\xi x) \, dx = (2\pi\xi)\left(-a_+ + a_-\right)</math>
and |
<math display="block">2\int_{-\infty}^\infty \frac{\partial y(x,0)}{\partial t} \cos (2\pi\xi x) \, dx = (2\pi\xi)\left(b_+ - b_-\right).</math>
These are four linear equations for the four unknowns {{math|''a''<sub>±</sub>}} and {{math|''b''<sub>±</sub>}}, in terms of the Fourier sine and cosine transforms of the boundary conditions, which are easily solved by elementary algebra, provided that these transforms can be found. |
In summary, we chose a set of elementary solutions, parametrized by {{mvar|ξ}}, of which the general solution would be a (continuous) linear combination in the form of an integral over the parameter {{mvar|ξ}}. But this integral was in the form of a Fourier integral. The next step was to express the boundary conditions in terms of these integrals, and set them equal to the given functions {{mvar|f}} and {{mvar|g}}. But these expressions also took the form of a Fourier integral because of the properties of the Fourier transform of a derivative. The last step was to exploit Fourier inversion by applying the Fourier transformation to both sides, thus obtaining expressions for the coefficient functions {{math|''a''<sub>±</sub>}} and {{math|''b''<sub>±</sub>}} in terms of the given boundary conditions {{mvar|f}} and {{mvar|g}}. |
From a higher point of view, Fourier's procedure can be reformulated more conceptually. Since there are two variables, we will use the Fourier transformation in both {{mvar|x}} and {{mvar|t}} rather than operate as Fourier did, who only transformed in the spatial variables. Note that {{mvar|ŷ}} must be considered in the sense of a distribution since {{math|''y''(''x'', ''t'')}} is not going to be {{math|''L''<sup>1</sup>}}: as a wave, it will persist through time and thus is not a transient phenomenon. But it will be bounded and so its Fourier transform can be defined as a distribution. The operational properties of the Fourier transformation that are relevant to this equation are that it takes differentiation in {{mvar|x}} to multiplication by {{math|''i''2π''ξ''}} and differentiation with respect to {{mvar|t}} to multiplication by {{math|''i''2π''f''}} where {{mvar|f}} is the frequency. Then the wave equation becomes an algebraic equation in {{mvar|ŷ}}: |
<math display="block">\xi^2 \hat y (\xi, f) = f^2 \hat y (\xi, f).</math> |
This is equivalent to requiring {{math|1=''ŷ''(''ξ'', ''f'') = 0}} unless {{math|1=''ξ'' = ±''f''}}. Right away, this explains why the choice of elementary solutions we made earlier worked so well: obviously {{math|1=''f̂'' = ''δ''(''ξ'' ± ''f'')}} will be solutions. Applying Fourier inversion to these delta functions, we obtain the elementary solutions we picked earlier. But from the higher point of view, one does not pick elementary solutions, but rather considers the space of all distributions which are supported on the (degenerate) conic {{math|1=''ξ''{{isup|2}} − ''f''{{isup|2}} = 0}}. |
We may as well consider the distributions supported on the conic that are given by distributions of one variable on the line {{math|1=''ξ'' = ''f''}} plus distributions on the line {{math|''ξ'' {{=}} −''f''}} as follows: if {{mvar|Φ}} is any test function, |
<math display="block">\iint \hat y \phi(\xi,f) \, d\xi \, df = \int s_+ \phi(\xi,\xi) \, d\xi + \int s_- \phi(\xi,-\xi) \, d\xi,</math> |
where {{math|''s''<sub>+</sub>}}, and {{math|''s''<sub>−</sub>}}, are distributions of one variable. |
Then Fourier inversion gives, for the boundary conditions, something very similar to what we had more concretely above (put {{math|1=''Φ''(''ξ'', ''f'') = ''e''<sup>''i''2π(''xξ''+''tf'')</sup>}}, which is clearly of polynomial growth): |
<math display="block"> y(x,0) = \int\bigl\{s_+(\xi) + s_-(\xi)\bigr\} e^{i 2\pi \xi x+0} \, d\xi </math> |
and |
<math display="block"> \frac{\partial y(x,0)}{\partial t} = \int\bigl\{s_+(\xi) - s_-(\xi)\bigr\} i 2\pi \xi e^{i 2\pi\xi x+0} \, d\xi.</math> |
Now, as before, applying the one-variable Fourier transformation in the variable {{mvar|x}} to these functions of {{mvar|x}} yields two equations in the two unknown distributions {{math|''s''<sub>±</sub>}} (which can be taken to be ordinary functions if the boundary conditions are {{math|''L''<sup>1</sup>}} or {{math|''L''<sup>2</sup>}}). |
From a calculational point of view, the drawback of course is that one must first calculate the Fourier transforms of the boundary conditions, then assemble the solution from these, and then calculate an inverse Fourier transform. Closed form formulas are rare, except when there is some geometric symmetry that can be exploited, and the numerical calculations are difficult because of the oscillatory nature of the integrals, which makes convergence slow and hard to estimate. For practical calculations, other methods are often used. |
The twentieth century has seen the extension of these methods to all linear partial differential equations with polynomial coefficients, and by extending the notion of Fourier transformation to include Fourier integral operators, some non-linear equations as well. |
=== Fourier-transform spectroscopy === |
{{Main|Fourier-transform spectroscopy}} |
The Fourier transform is also used in [[nuclear magnetic resonance]] (NMR) and in other kinds of [[spectroscopy]], e.g. infrared ([[Fourier-transform infrared spectroscopy|FTIR]]). In NMR an exponentially shaped free induction decay (FID) signal is acquired in the time domain and Fourier-transformed to a Lorentzian line-shape in the frequency domain. The Fourier transform is also used in [[magnetic resonance imaging]] (MRI) and [[mass spectrometry]]. |
=== Quantum mechanics === |
The Fourier transform is useful in [[quantum mechanics]] in at least two different ways. To begin with, the basic conceptual structure of quantum mechanics postulates the existence of pairs of [[complementary variables]], connected by the [[Heisenberg uncertainty principle]]. For example, in one dimension, the spatial variable {{mvar|q}} of, say, a particle, can only be measured by the quantum mechanical "[[position operator]]" at the cost of losing information about the momentum {{mvar|p}} of the particle. Therefore, the physical state of the particle can either be described by a function, called "the wave function", of {{mvar|q}} or by a function of {{mvar|p}} but not by a function of both variables. The variable {{mvar|p}} is called the conjugate variable to {{mvar|q}}. In classical mechanics, the physical state of a particle (existing in one dimension, for simplicity of exposition) would be given by assigning definite values to both {{mvar|p}} and {{mvar|q}} simultaneously. Thus, the set of all possible physical states is the two-dimensional real vector space with a {{mvar|p}}-axis and a {{mvar|q}}-axis called the [[phase space]]. |
In contrast, quantum mechanics chooses a polarisation of this space in the sense that it picks a subspace of one-half the dimension, for example, the {{mvar|q}}-axis alone, but instead of considering only points, takes the set of all complex-valued "wave functions" on this axis. Nevertheless, choosing the {{mvar|p}}-axis is an equally valid polarisation, yielding a different representation of the set of possible physical states of the particle. Both representations of the wavefunction are related by a Fourier transform, such that |
<math display="block">\phi(p) = \int dq\, \psi (q) e^{-i pq/h} ,</math> |
or, equivalently, |
<math display="block">\psi(q) = \int dp \, \phi (p) e^{i pq/h}.</math> |
Physically realisable states are {{math|''L''<sup>2</sup>}}, and so by the [[Plancherel theorem]], their Fourier transforms are also {{math|''L''<sup>2</sup>}}. (Note that since {{mvar|q}} is in units of distance and {{mvar|p}} is in units of momentum, the presence of the Planck constant in the exponent makes the exponent [[Nondimensionalization|dimensionless]], as it should be.) |
Therefore, the Fourier transform can be used to pass from one way of representing the state of the particle, by a wave function of position, to another way of representing the state of the particle: by a wave function of momentum. Infinitely many different polarisations are possible, and all are equally valid. Being able to transform states from one representation to another by the Fourier transform is not only convenient but also the underlying reason of the Heisenberg [[#Uncertainty principle|uncertainty principle]]. |
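A minimal numerical sketch of this reciprocity (using the ordinary-frequency convention of this article rather than the physicists' convention above; the specific wave function is an arbitrary Gaussian):

<syntaxhighlight lang="python">
import numpy as np

a = 10.0                                             # large a: narrow in position
q = np.linspace(-5, 5, 20001)
psi = (2 * a) ** 0.25 * np.exp(-np.pi * a * q**2)    # normalized Gaussian wave function

def momentum_wave_function(p):
    return np.trapz(psi * np.exp(-2j * np.pi * p * q), q)

phi = np.array([momentum_wave_function(p) for p in np.linspace(-5, 5, 201)])
# |psi|^2 has width proportional to a**-0.5 while |phi|^2 has width
# proportional to a**0.5: squeezing one representation spreads the other.
</syntaxhighlight>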
The other use of the Fourier transform in both quantum mechanics and [[quantum field theory]] is to solve the applicable wave equation. In non-relativistic quantum mechanics, [[Schrödinger's equation]] for a time-varying wave function in one-dimension, not subject to external forces, is |
<math display="block">-\frac{\partial^2}{\partial x^2} \psi(x,t) = i \frac h{2\pi} \frac{\partial}{\partial t} \psi(x,t).</math> |
This is the same as the heat equation except for the presence of the imaginary unit {{mvar|i}}. Fourier methods can be used to solve this equation. |
In the presence of a potential, given by the potential energy function {{math|''V''(''x'')}}, the equation becomes |
<math display="block">-\frac{\partial^2}{\partial x^2} \psi(x,t) + V(x)\psi(x,t) = i \frac h{2\pi} \frac{\partial}{\partial t} \psi(x,t).</math> |
The "elementary solutions", as we referred to them above, are the so-called "stationary states" of the particle, and Fourier's algorithm, as described above, can still be used to solve the boundary value problem of the future evolution of {{mvar|ψ}} given its values for {{math|''t'' {{=}} 0}}. Neither of these approaches is of much practical use in quantum mechanics. Boundary value problems and the time-evolution of the wave function are not of much practical interest: it is the stationary states that are most important.
In relativistic quantum mechanics, Schrödinger's equation becomes a wave equation as was usual in classical physics, except that complex-valued waves are considered. A simple example, in the absence of interactions with other particles or fields, is the free one-dimensional Klein–Gordon–Schrödinger–Fock equation, this time in dimensionless units, |
<math display="block">\left (\frac{\partial^2}{\partial x^2} +1 \right) \psi(x,t) = \frac{\partial^2}{\partial t^2} \psi(x,t).</math> |
This is, from the mathematical point of view, the same as the wave equation of classical physics solved above (but with a complex-valued wave, which makes no difference in the methods). This is of great use in quantum field theory: each separate Fourier component of a wave can be treated as a separate harmonic oscillator and then quantized, a procedure known as "second quantization". Fourier methods have been adapted to also deal with non-trivial interactions. |
Finally, the [[Quantum harmonic oscillator#Ladder operator method|number operator]] of the [[quantum harmonic oscillator]] can be interpreted, for example via the [[Mehler kernel#Physics version|Mehler kernel]], as the [[Symmetry in quantum mechanics|generator]] of the [[#Eigenfunctions|Fourier transform]] <math>\mathcal{F}</math>.<ref name="auto"/> |
=== Signal processing === |
The Fourier transform is used for the spectral analysis of time-series. The subject of statistical signal processing does not, however, usually apply the Fourier transformation to the signal itself. Even if a real signal is indeed transient, it has been found in practice advisable to model a signal by a function (or, alternatively, a stochastic process) which is stationary in the sense that its characteristic properties are constant over all time. The Fourier transform of such a function does not exist in the usual sense, and it has been found more useful for the analysis of signals to instead take the Fourier transform of its autocorrelation function. |
The autocorrelation function {{mvar|R}} of a function {{mvar|f}} is defined by |
<math display="block">R_f (\tau) = \lim_{T\rightarrow \infty} \frac{1}{2T} \int_{-T}^T f(t) f(t+\tau) \, dt. </math> |
This function is a function of the time-lag {{mvar|τ}} elapsing between the values of {{mvar|f}} to be correlated. |
For most functions {{mvar|f}} that occur in practice, {{mvar|R}} is a bounded even function of the time-lag {{mvar|τ}} and for typical noisy signals it turns out to be uniformly continuous with a maximum at {{math|''τ'' {{=}} 0}}. |
The autocorrelation function, more properly called the autocovariance function unless it is normalized in some appropriate fashion, measures the strength of the correlation between the values of {{mvar|f}} separated by a time lag. This is a way of searching for the correlation of {{mvar|f}} with its own past. It is useful even for other statistical tasks besides the analysis of signals. For example, if {{math|''f''(''t'')}} represents the temperature at time {{mvar|t}}, one expects a strong correlation with the temperature at a time lag of 24 hours. |
It possesses a Fourier transform, |
<math display="block"> P_f(\xi) = \int_{-\infty}^\infty R_f (\tau) e^{-i 2\pi \xi\tau} \, d\tau. </math> |
This Fourier transform is called the [[Spectral density#Power spectral density|power spectral density]] function of {{mvar|f}}. (Unless all periodic components are first filtered out from {{mvar|f}}, this integral will diverge, but it is easy to filter out such periodicities.) |
The power spectrum, as indicated by this density function {{mvar|P}}, measures the amount of variance contributed to the data by the frequency {{mvar|ξ}}. In electrical signals, the variance is proportional to the average power (energy per unit time), and so the power spectrum describes how much the different frequencies contribute to the average power of the signal. This process is called the spectral analysis of time-series and is analogous to the usual analysis of variance of data that is not a time-series ([[ANOVA]]). |
Knowledge of which frequencies are "important" in this sense is crucial for the proper design of filters and for the proper evaluation of measuring apparatuses. It can also be useful for the scientific analysis of the phenomena responsible for producing the data. |
The power spectrum of a signal can also be approximately measured directly by measuring the average power that remains in a signal after all the frequencies outside a narrow band have been filtered out. |
Spectral analysis is carried out for visual signals as well. The power spectrum ignores all phase relations, which is good enough for many purposes, but for video signals other types of spectral analysis must also be employed, still using the Fourier transform as a tool. |
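An illustrative sketch of such a spectral estimate (the signal, sampling rate and the use of [[SciPy]]'s Welch averaged-periodogram routine are arbitrary choices):

<syntaxhighlight lang="python">
import numpy as np
from scipy import signal

rng = np.random.default_rng(0)
fs = 200.0                                   # sampling rate in Hz
t = np.arange(0, 60, 1 / fs)
x = np.sin(2 * np.pi * 13 * t) + rng.normal(scale=2.0, size=t.size)

# Welch's method averages periodograms of overlapping segments, one standard
# way of estimating the power spectral density of a noisy stationary signal.
f, pxx = signal.welch(x, fs=fs, nperseg=1024)
print(f[np.argmax(pxx)])                     # ~ 13 Hz, the dominant frequency
</syntaxhighlight>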
== Other notations == |
Other common notations for <math>\hat f(\xi)</math> include: |
<math display="block">\tilde{f}(\xi),\ F(\xi),\ \mathcal{F}\left(f\right)(\xi),\ \left(\mathcal{F}f\right)(\xi),\ \mathcal{F}(f),\ \mathcal{F}\{f\},\ \mathcal{F} \bigl(f(t)\bigr),\ \mathcal{F} \bigl\{f(t)\bigr\}.</math> |
In the sciences and engineering it is also common to make substitutions like these: |
<math display="block">\xi \rightarrow f, \quad x \rightarrow t, \quad f \rightarrow x,\quad \hat f \rightarrow X. </math> |
So the transform pair <math>f(x)\ \stackrel{\mathcal{F}}{\Longleftrightarrow}\ \hat{f}(\xi)</math> can become <math>x(t)\ \stackrel{\mathcal{F}}{\Longleftrightarrow}\ X(f).</math>

A disadvantage of the capital letter notation appears when expressing a transform such as <math>\widehat{f\cdot g}</math> or <math>\widehat{f'},</math> which become the more awkward <math>\mathcal{F}\{f\cdot g\}</math> and <math>\mathcal{F} \{ f' \}.</math>

In some contexts such as particle physics, the same symbol <math>f</math> may be used both for a function and for its Fourier transform, with the two only distinguished by their [[Argument of a function|argument]]: <math>f(k_1 + k_2)</math> would refer to the Fourier transform because of the momentum argument, while <math>f(x_0 + \pi \vec r)</math> would refer to the original function because of the positional argument. Although tildes may be used as in <math>\tilde{f}</math> to indicate Fourier transforms, tildes may also be used to indicate a modification of a quantity with a more [[Lorentz invariant]] form, such as <math>\tilde{dk} = \frac{dk}{(2\pi)^32\omega}</math>, so care must be taken. Similarly, <math>\hat f</math> often denotes the [[Hilbert transform]] of <math>f</math>.
The interpretation of the complex function {{math|''f̂''(''ξ'')}} may be aided by expressing it in [[polar coordinate]] form |
<math display="block">\hat f(\xi) = A(\xi) e^{i\varphi(\xi)}</math> |
in terms of the two real functions {{math|''A''(''ξ'')}} and {{math|''φ''(''ξ'')}} where: |
<math display="block">A(\xi) = \left|\hat f(\xi)\right|,</math> |
is the [[amplitude]] and |
<math display="block">\varphi (\xi) = \arg \left( \hat f(\xi) \right), </math> |
is the [[phase (waves)|phase]] (see [[Arg (mathematics)|arg function]]). |
Then the inverse transform can be written: |
<math display="block">f(x) = \int _{-\infty}^\infty A(\xi)\ e^{ i\bigl(2\pi \xi x +\varphi (\xi)\bigr)}\,d\xi,</math> |
which is a recombination of all the frequency components of {{math|''f''(''x'')}}. Each component is a complex [[sinusoid]] of the form {{math|''e''<sup>2π''ixξ''</sup>}} whose amplitude is {{math|''A''(''ξ'')}} and whose initial [[phase (waves)|phase angle]] (at {{math|1=''x'' = 0}}) is {{math|''φ''(''ξ'')}}. |
The Fourier transform may be thought of as a mapping on function spaces. This mapping is here denoted {{mathcal|F}} and {{math|{{mathcal|F}}(''f'')}} is used to denote the Fourier transform of the function {{mvar|f}}. This mapping is linear, which means that {{mathcal|F}} can also be seen as a linear transformation on the function space and implies that the standard notation in linear algebra of applying a linear transformation to a vector (here the function {{math|''f''}}) can be used to write {{math|{{mathcal|F}} ''f''}} instead of {{math|{{mathcal|F}}(''f'')}}. Since the result of applying the Fourier transform is again a function, we can be interested in the value of this function evaluated at the value {{mvar|ξ}} for its variable, and this is denoted either as {{math|{{mathcal|F}} ''f''(''ξ'')}} or as {{math|({{mathcal|F}} ''f'')(''ξ'')}}. Notice that in the former case, it is implicitly understood that {{mathcal|F}} is applied first to {{mvar|f}} and then the resulting function is evaluated at {{mvar|ξ}}, not the other way around. |
In mathematics and various applied sciences, it is often necessary to distinguish between a function {{mvar|f}} and the value of {{mvar|f}} when its variable equals {{mvar|x}}, denoted {{math|''f''(''x'')}}. This means that a notation like {{math|{{mathcal|F}}(''f''(''x''))}} formally can be interpreted as the Fourier transform of the values of {{mvar|f}} at {{mvar|x}}. Despite this flaw, the previous notation appears frequently, often when a particular function or a function of a particular variable is to be transformed. For example, |
<math display="block">\mathcal F\bigl( \operatorname{rect}(x) \bigr) = \operatorname{sinc}(\xi)</math> |
is sometimes used to express that the Fourier transform of a [[rectangular function]] is a [[sinc function]], or |
<math display="block">\mathcal F\bigl(f(x + x_0)\bigr) = \mathcal F\bigl(f(x)\bigr)\, e^{i 2\pi x_0 \xi}</math> |
is used to express the shift property of the Fourier transform. |
|||
Notice, that the last example is only correct under the assumption that the transformed function is a function of {{mvar|x}}, not of {{math|''x''<sub>0</sub>}}. |
|||
As discussed above, the [[Characteristic function (probability theory)|characteristic function]] of a random variable is the same as the [[#Fourier–Stieltjes transform|Fourier–Stieltjes transform]] of its distribution measure, but in this context it is typical to adopt a different convention for the constants. Typically the characteristic function is defined as
|||
<math display="block">E\left(e^{it\cdot X}\right)=\int e^{it\cdot x} \, d\mu_X(x).</math> |
|||
As in the case of the "non-unitary angular frequency" convention above, the factor of 2{{pi}} appears in neither the normalizing constant nor the exponent. Unlike any of the conventions appearing above, this convention takes the opposite sign in the exponent. |
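As a rough numerical sketch of this convention (NumPy assumed; the standard normal distribution and the sample size are arbitrary choices), the characteristic function {{math|''E''(''e''<sup>''itX''</sup>)}} of a standard normal variable can be estimated by Monte Carlo and compared with its closed form {{math|''e''<sup>−''t''<sup>2</sup>/2</sup>}}:

<syntaxhighlight lang="python">
import numpy as np

# Monte Carlo estimate of E[exp(i*t*X)] for X ~ N(0,1); the closed form is exp(-t**2/2).
# Note the positive sign in the exponent, opposite to the transform conventions above.
rng = np.random.default_rng(1)
x = rng.standard_normal(200_000)
t = np.linspace(-3.0, 3.0, 13)

empirical = np.exp(1j * np.outer(t, x)).mean(axis=1)
exact = np.exp(-t**2 / 2)
assert np.allclose(empirical, exact, atol=1e-2)
</syntaxhighlight>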
|||
== Computation methods == |
|||
The appropriate computation method largely depends on how the original mathematical function is represented and on the desired form of the output function. In this section we consider both functions of a continuous variable, <math>f(x),</math> and functions of a discrete variable (i.e. ordered pairs of <math>x</math> and <math>f</math> values). For discrete-valued <math>x,</math> the transform integral becomes a summation of sinusoids, which is still a continuous function of frequency (<math>\xi</math> or <math>\omega</math>). When the sinusoids are harmonically related (i.e. when the <math>x</math>-values are spaced at integer multiples of an interval), the transform is called the [[discrete-time Fourier transform]] (DTFT).
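A minimal sketch of the summation just described (NumPy assumed; the sample values are hypothetical) evaluates the DTFT directly at arbitrarily chosen frequencies:

<syntaxhighlight lang="python">
import numpy as np

# Direct evaluation of the DTFT sum for hypothetical sample values f[n];
# the result is still a continuous (here densely sampled) function of frequency xi.
def dtft(samples, xi):
    n = np.arange(len(samples))
    return np.exp(-2j * np.pi * np.outer(xi, n)) @ samples

f_n = np.array([1.0, 2.0, 3.0, 2.0, 1.0])
xi = np.linspace(-0.5, 0.5, 1001)
F = dtft(f_n, xi)                     # one complex value per frequency
</syntaxhighlight>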
|||
=== Discrete Fourier transforms and fast Fourier transforms === |
|||
Sampling the DTFT at equally-spaced values of frequency is the most common modern method of computation. Efficient procedures, depending on the frequency resolution needed, are described at {{slink|Discrete-time Fourier transform|Sampling the DTFT|nopage=n}}. The [[discrete Fourier transform]] (DFT), used there, is usually computed by a [[fast Fourier transform]] (FFT) algorithm. |
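A short sketch of this sampling step (NumPy assumed; the sample values are hypothetical): the DFT returned by an FFT routine is the DTFT sampled at {{mvar|N}} equally spaced frequencies, and zero-padding samples the same DTFT more densely:

<syntaxhighlight lang="python">
import numpy as np

# The DFT (computed by an FFT) is the DTFT sampled at k/N; zero-padding to a larger N
# samples the same DTFT on a finer frequency grid.
f_n = np.array([1.0, 2.0, 3.0, 2.0, 1.0])
idx = np.arange(len(f_n))

coarse = np.fft.fft(f_n)          # DTFT at frequencies k/5,  k = 0..4
dense = np.fft.fft(f_n, n=64)     # DTFT at frequencies k/64, k = 0..63

k = 3
assert np.allclose(dense[k], np.sum(f_n * np.exp(-2j * np.pi * (k / 64) * idx)))
</syntaxhighlight>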
|||
=== Analytic integration of closed-form functions === |
|||
Tables of [[closed-form expression|closed-form]] Fourier transforms, such as {{slink||Square-integrable functions, one-dimensional}} and {{slink|Discrete-time Fourier transform|Table of discrete-time Fourier transforms|nopage=y}}, are created by mathematically evaluating the Fourier analysis integral (or summation) into another closed-form function of frequency (<math>\xi</math> or <math>\omega</math>).<ref name="Zwillinger-2014">{{harvnb|Gradshteyn|Ryzhik|Geronimus|Tseytlin|2015}}</ref> When mathematically possible, this provides a transform for a continuum of frequency values. |
|||
Many computer algebra systems such as [[Matlab]] and [[Mathematica]] that are capable of [[symbolic integration]] are capable of computing Fourier transforms analytically. For example, to compute the Fourier transform of {{math|1=cos(6π''t'') ''e''<sup>−π''t''<sup>2</sup></sup>}} one might enter the command {{code|integrate cos(6*pi*t) exp(−pi*t^2) exp(-i*2*pi*f*t) from -inf to inf}} into [[Wolfram Alpha]].<ref group=note>The direct command {{code|fourier transform of cos(6*pi*t) exp(−pi*t^2)}} would also work for Wolfram Alpha, although the options for the convention (see {{Section link|2=Other_conventions}}) must be changed away from the default option, which is actually equivalent to {{code|integrate cos(6*pi*t) exp(−pi*t^2) exp(i*omega*t) /sqrt(2*pi) from -inf to inf}}.</ref> |
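A comparable symbolic computation can be sketched in an open-source system. SymPy is assumed here; its <code>fourier_transform</code> uses the unitary, ordinary-frequency convention and may return an unevaluated result for some inputs:

<syntaxhighlight lang="python">
import sympy as sp

# SymPy's fourier_transform uses the kernel exp(-2*pi*I*t*f) (unitary, ordinary frequency).
# It may return an unevaluated FourierTransform object for some inputs.
t, f = sp.symbols('t f', real=True)
expr = sp.cos(6 * sp.pi * t) * sp.exp(-sp.pi * t**2)

F = sp.fourier_transform(expr, t, f)
print(sp.simplify(F))   # expected: exp(-pi*(f - 3)**2)/2 + exp(-pi*(f + 3)**2)/2
</syntaxhighlight>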
|||
=== Numerical integration of closed-form continuous functions === |
|||
Discrete sampling of the Fourier transform can also be done by [[numerical integration]] of the definition at each value of frequency for which transform is desired.<ref>{{harvnb|Press|Flannery|Teukolsky|Vetterling|1992}}</ref><ref>{{harvnb|Bailey|Swarztrauber|1994}}</ref><ref>{{harvnb|Lado|1971}}</ref> The numerical integration approach works on a much broader class of functions than the analytic approach. |
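A minimal sketch of this approach (SciPy and NumPy assumed; the infinite integral is truncated, which is adequate only for rapidly decaying integrands) evaluates the transform of the Gaussian of entry 206 one frequency at a time:

<syntaxhighlight lang="python">
import numpy as np
from scipy.integrate import quad

# Evaluate f_hat(xi) = integral of f(x) exp(-2*pi*i*xi*x) dx by quadrature,
# one frequency at a time; the integral is truncated to [-lim, lim].
def ft_at(f, xi, lim=8.0):
    re = quad(lambda x: f(x) * np.cos(2 * np.pi * xi * x), -lim, lim)[0]
    im = quad(lambda x: -f(x) * np.sin(2 * np.pi * xi * x), -lim, lim)[0]
    return re + 1j * im

gauss = lambda x: np.exp(-np.pi * x**2)       # its own transform (entry 206 with alpha = pi)
assert abs(ft_at(gauss, 0.5) - np.exp(-np.pi * 0.5**2)) < 1e-6
</syntaxhighlight>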
|||
=== Numerical integration of a series of ordered pairs === |
|||
If the input function is a series of ordered pairs, numerical integration reduces to just a summation over the set of data pairs.<ref>{{harvnb|Simonen|Olkkonen|1985}}</ref> The DTFT is a common subcase of this more general situation. |
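A minimal sketch of that reduction (NumPy assumed; the uniformly spaced data pairs are synthetic) replaces the integral by a weighted sum over the data:

<syntaxhighlight lang="python">
import numpy as np

# With only uniformly spaced data pairs (x_k, f_k), the transform integral becomes a
# weighted sum; a plain Riemann sum is used here as the simplest quadrature rule.
x = np.linspace(-6.0, 6.0, 2001)
fx = np.exp(-np.pi * x**2)                    # pretend these pairs came from measurements
dx = x[1] - x[0]

def ft_from_pairs(xi):
    return np.sum(fx * np.exp(-2j * np.pi * xi * x)) * dx

assert abs(ft_from_pairs(1.0) - np.exp(-np.pi)) < 1e-6
</syntaxhighlight>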
|||
== Tables of important Fourier transforms == |
|||
The following tables record some closed-form Fourier transforms. For functions {{math|''f''(''x'')}} and {{math|''g''(''x'')}}, their Fourier transforms are denoted by {{math|''f̂''}} and {{math|''ĝ''}}. Only the three most common conventions are included. It may be useful to notice that entry 105 gives a relationship between the Fourier transform of a function and the original function, which can be seen as relating the Fourier transform and its inverse.
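Such entries are easy to spot-check numerically. A short sketch (NumPy and SciPy assumed; the entry and the frequency are arbitrary choices) verifying entry 201 below for {{math|1=''a'' = 1}}:

<syntaxhighlight lang="python">
import numpy as np
from scipy.integrate import quad

# Spot check of entry 201 (a = 1) in the unitary, ordinary-frequency convention:
# integrating exp(-2*pi*i*xi*x) over the support of rect(x) should give sinc(xi).
xi = 0.7
re = quad(lambda x: np.cos(2 * np.pi * xi * x), -0.5, 0.5)[0]
im = quad(lambda x: -np.sin(2 * np.pi * xi * x), -0.5, 0.5)[0]

assert abs((re + 1j * im) - np.sinc(xi)) < 1e-9   # np.sinc is the normalized sinc
</syntaxhighlight>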
|||
=== Functional relationships, one-dimensional === |
|||
The Fourier transforms in this table may be found in {{harvtxt|Erdélyi|1954}} or {{harvtxt|Kammler|2000|loc=appendix}}. |
|||
{| class="wikitable"
! !! Function !! Fourier transform {{br}} unitary, ordinary frequency !! Fourier transform {{br}} unitary, angular frequency !! Fourier transform {{br}} non-unitary, angular frequency !! Remarks
|-
|
|<math> f(x)\,</math>
|<math>\begin{align} &\widehat{f}(\xi) \triangleq \widehat {f_1}(\xi) \\&= \int_{-\infty}^\infty f(x) e^{-i 2\pi \xi x}\, dx \end{align}</math>
|<math>\begin{align} &\widehat{f}(\omega) \triangleq \widehat {f_2}(\omega) \\&= \frac{1}{\sqrt{2 \pi}} \int_{-\infty}^\infty f(x) e^{-i \omega x}\, dx \end{align}</math>
|<math>\begin{align} &\widehat{f}(\omega) \triangleq \widehat {f_3}(\omega) \\&= \int_{-\infty}^\infty f(x) e^{-i \omega x}\, dx \end{align}</math>
|Definitions
|-
| 101
|<math> a\, f(x) + b\, g(x)\,</math>
|<math> a\, \widehat{f}(\xi) + b\, \widehat{g}(\xi)\,</math>
|<math> a\, \widehat{f}(\omega) + b\, \widehat{g}(\omega)\,</math>
|<math> a\, \widehat{f}(\omega) + b\, \widehat{g}(\omega)\,</math>
|Linearity
|-
| 102
|<math> f(x - a)\,</math>
|<math> e^{-i 2\pi \xi a} \widehat{f}(\xi)\,</math>
|<math> e^{- i a \omega} \widehat{f}(\omega)\,</math>
|<math> e^{- i a \omega} \widehat{f}(\omega)\,</math>
|Shift in time domain
|-
| 103
|<math> f(x)e^{iax}\,</math>
|<math> \widehat{f} \left(\xi - \frac{a}{2\pi}\right)\,</math>
|<math> \widehat{f}(\omega - a)\,</math>
|<math> \widehat{f}(\omega - a)\,</math>
|Shift in frequency domain, dual of 102
|-
| 104
|<math> f(a x)\,</math>
|<math> \frac{1}{|a|} \widehat{f}\left( \frac{\xi}{a} \right)\,</math>
|<math> \frac{1}{|a|} \widehat{f}\left( \frac{\omega}{a} \right)\,</math>
|<math> \frac{1}{|a|} \widehat{f}\left( \frac{\omega}{a} \right)\,</math>
|Scaling in the time domain. If {{math|{{abs|''a''}}}} is large, then {{math|''f''(''ax'')}} is concentrated around 0 and{{br}}<math> \frac{1}{|a|}\hat{f} \left( \frac{\omega}{a} \right)\,</math>{{br}}spreads out and flattens.
|-
| 105
|<math> \widehat {f_n}(x)\,</math>
|<math> \widehat {f_1}(x) \ \stackrel{\mathcal{F}_1}{\longleftrightarrow}\ f(-\xi)\,</math>
|<math> \widehat {f_2}(x) \ \stackrel{\mathcal{F}_2}{\longleftrightarrow}\ f(-\omega)\,</math>
|<math> \widehat {f_3}(x) \ \stackrel{\mathcal{F}_3}{\longleftrightarrow}\ 2\pi f(-\omega)\,</math>
|The same transform is applied twice, but ''x'' replaces the frequency variable (''ξ'' or ''ω'') after the first transform.
|-
| 106
|<math> \frac{d^n f(x)}{dx^n}\,</math>
|<math> (i 2\pi \xi)^n \widehat{f}(\xi)\,</math>
|<math> (i\omega)^n \widehat{f}(\omega)\,</math>
|<math> (i\omega)^n \widehat{f}(\omega)\,</math>
|''n''{{superscript|th}}-order derivative, as {{math|''f''}} is a [[Schwartz space|Schwartz function]]
|-
|106.5
|<math>\int_{-\infty}^{x} f(\tau) d \tau</math>
|<math>\frac{\widehat{f}(\xi)}{i 2 \pi \xi} + C \, \delta(\xi)</math>
|<math>\frac{\widehat{f} (\omega)}{i\omega} + \sqrt{2 \pi} C \delta(\omega)</math>
|<math>\frac{\widehat{f} (\omega)}{i\omega} + 2 \pi C \delta(\omega)</math>
|Integration.<ref>{{Cite web |date=2015 |orig-date=2010 |title=The Integration Property of the Fourier Transform |url=https://www.thefouriertransform.com/transform/integration.php |url-status=live |archive-url=https://web.archive.org/web/20220126171340/https://www.thefouriertransform.com/transform/integration.php |archive-date=2022-01-26 |access-date=2023-08-20 |website=The Fourier Transform .com}}</ref> Note: <math>\delta</math> is the [[Dirac delta function]] and <math>C</math> is the average ([[DC component|DC]]) value of <math>f(x)</math> such that <math>\int_{-\infty}^\infty (f(x) - C) \, dx = 0</math>
|-
| 107
|<math> x^n f(x)\,</math>
|<math> \left (\frac{i}{2\pi}\right)^n \frac{d^n \widehat{f}(\xi)}{d\xi^n}\,</math>
|<math> i^n \frac{d^n \widehat{f}(\omega)}{d\omega^n}</math>
|<math> i^n \frac{d^n \widehat{f}(\omega)}{d\omega^n}</math>
|This is the dual of 106
|-
| 108
|<math> (f * g)(x)\,</math>
|<math> \widehat{f}(\xi) \widehat{g}(\xi)\,</math>
|<math> \sqrt{2\pi}\ \widehat{f}(\omega) \widehat{g}(\omega)\,</math>
|<math> \widehat{f}(\omega) \widehat{g}(\omega)\,</math>
|The notation {{math|''f'' ∗ ''g''}} denotes the [[convolution]] of {{mvar|f}} and {{mvar|g}}; this rule is the [[convolution theorem]]
|-
| 109
|<math> f(x) g(x)\,</math>
|<math> \left(\widehat{f} * \widehat{g}\right)(\xi)\,</math>
|<math> \frac{1}\sqrt{2\pi}\left(\widehat{f} * \widehat{g}\right)(\omega)\,</math>
|<math> \frac{1}{2\pi}\left(\widehat{f} * \widehat{g}\right)(\omega)\,</math>
|This is the dual of 108
|-
| 110
|For {{math|''f''(''x'')}} purely real
|<math> \widehat{f}(-\xi) = \overline{\widehat{f}(\xi)}\,</math>
|<math> \widehat{f}(-\omega) = \overline{\widehat{f}(\omega)}\,</math>
|<math> \widehat{f}(-\omega) = \overline{\widehat{f}(\omega)}\,</math>
|Hermitian symmetry. {{math|{{overline|''z''}}}} indicates the [[complex conjugate]].
|-
<!-- A Symmetry section has been added instead of this.
| 111
|For {{math|''f''(''x'')}} purely real and [[even function|even]]
| colspan=3 align=center |<math>\widehat f </math> is a purely real and [[even function]].
|
|-
| 112
|For {{math|''f''(''x'')}} purely real and [[odd function|odd]]
| colspan=3 align=center |<math>\widehat f </math> is a purely [[imaginary number|imaginary]] and [[odd function]].
|
-->
| 113
|For {{math|''f''(''x'')}} purely imaginary
|<math> \widehat{f}(-\xi) = -\overline{\widehat{f}(\xi)}\,</math>
|<math> \widehat{f}(-\omega) = -\overline{\widehat{f}(\omega)}\,</math>
|<math> \widehat{f}(-\omega) = -\overline{\widehat{f}(\omega)}\,</math>
|{{math|{{overline|''z''}}}} indicates the [[complex conjugate]].
|-
| 114
| <math> \overline{f(x)}</math>|| <math> \overline{\widehat{f}(-\xi)}</math> || <math> \overline{\widehat{f}(-\omega)}</math> || <math> \overline{\widehat{f}(-\omega)}</math> || [[Complex conjugate|Complex conjugation]], generalization of 110 and 113
|-
|115
|<math> f(x) \cos (a x)</math>
|<math> \frac{ \widehat{f}\left(\xi - \frac{a}{2\pi}\right)+\widehat{f}\left(\xi+\frac{a}{2\pi}\right)}{2}</math>
|<math> \frac{\widehat{f}(\omega-a)+\widehat{f}(\omega+a)}{2}\,</math>
|<math> \frac{\widehat{f}(\omega-a)+\widehat{f}(\omega+a)}{2}</math>
|This follows from rules 101 and 103 using [[Euler's formula]]:{{br}}<math>\cos(a x) = \frac{e^{i a x} + e^{-i a x}}{2}.</math>
|-
|116
|<math> f(x)\sin( ax)</math>
|<math> \frac{\widehat{f}\left(\xi-\frac{a}{2\pi}\right)-\widehat{f}\left(\xi+\frac{a}{2\pi}\right)}{2i}</math>
|<math> \frac{\widehat{f}(\omega-a)-\widehat{f}(\omega+a)}{2i}</math>
|<math> \frac{\widehat{f}(\omega-a)-\widehat{f}(\omega+a)}{2i}</math>
|This follows from 101 and 103 using [[Euler's formula]]:{{br}}<math>\sin(a x) = \frac{e^{i a x} - e^{-i a x}}{2i}.</math>
|}
=== Square-integrable functions, one-dimensional ===

The Fourier transforms in this table may be found in {{harvtxt|Campbell|Foster|1948}}, {{harvtxt|Erdélyi|1954}}, or {{harvtxt|Kammler|2000|loc=appendix}}.

{| class="wikitable"
! !! Function !! Fourier transform {{br}} unitary, ordinary frequency !! Fourier transform {{br}} unitary, angular frequency !! Fourier transform {{br}} non-unitary, angular frequency !! Remarks
|-
|
|<math> f(x)\,</math>
|<math>\begin{align} &\hat{f}(\xi) \triangleq \hat f_1(\xi) \\&= \int_{-\infty}^\infty f(x) e^{-i 2\pi \xi x}\, dx \end{align}</math>
|<math>\begin{align} &\hat{f}(\omega) \triangleq \hat f_2(\omega) \\&= \frac{1}{\sqrt{2 \pi}} \int_{-\infty}^\infty f(x) e^{-i \omega x}\, dx \end{align}</math>
|<math>\begin{align} &\hat{f}(\omega) \triangleq \hat f_3(\omega) \\&= \int_{-\infty}^\infty f(x) e^{-i \omega x}\, dx \end{align}</math>
|Definitions
|-
|{{anchor|rect}} 201
|<math> \operatorname{rect}(a x) \,</math>
|<math> \frac{1}{|a|}\, \operatorname{sinc}\left(\frac{\xi}{a}\right)</math>
|<math> \frac{1}{\sqrt{2 \pi a^2}}\, \operatorname{sinc}\left(\frac{\omega}{2\pi a}\right)</math>
|<math> \frac{1}{|a|}\, \operatorname{sinc}\left(\frac{\omega}{2\pi a}\right)</math>
|The [[rectangular function|rectangular pulse]] and the ''normalized'' [[sinc function]], here defined as {{math|1=sinc(''x'') = {{sfrac|sin(π''x'')|π''x''}}}}
|-
| 202
|<math> \operatorname{sinc}(a x)\,</math>
|<math> \frac{1}{|a|}\, \operatorname{rect}\left(\frac{\xi}{a} \right)\,</math>
|<math> \frac{1}{\sqrt{2\pi a^2}}\, \operatorname{rect}\left(\frac{\omega}{2 \pi a}\right)</math>
|<math> \frac{1}{|a|}\, \operatorname{rect}\left(\frac{\omega}{2 \pi a}\right)</math>
|Dual of rule 201. The [[rectangular function]] is an ideal [[low-pass filter]], and the [[sinc function]] is the [[Anticausal system|non-causal]] impulse response of such a filter. The [[sinc function]] is defined here as {{math|1=sinc(''x'') = {{sfrac|sin(π''x'')|π''x''}}}}
|-
| 203
|<math> \operatorname{sinc}^2 (a x)</math>
|<math> \frac{1}{|a|}\, \operatorname{tri} \left( \frac{\xi}{a} \right) </math>
|<math> \frac{1}{\sqrt{2\pi a^2}}\, \operatorname{tri} \left( \frac{\omega}{2\pi a} \right) </math>
|<math> \frac{1}{|a|}\, \operatorname{tri} \left( \frac{\omega}{2\pi a} \right) </math>
|The function {{math|tri(''x'')}} is the [[triangular function]]
|-
| 204
|<math> \operatorname{tri} (a x)</math>
|<math> \frac{1}{|a|}\, \operatorname{sinc}^2 \left( \frac{\xi}{a} \right) \,</math>
|<math> \frac{1}{\sqrt{2\pi a^2}} \, \operatorname{sinc}^2 \left( \frac{\omega}{2\pi a} \right) </math>
|<math> \frac{1}{|a|} \, \operatorname{sinc}^2 \left( \frac{\omega}{2\pi a} \right) </math>
|Dual of rule 203.
|-
| 205
|<math> e^{- a x} u(x) \,</math>
|<math> \frac{1}{a + i 2\pi \xi}</math>
|<math> \frac{1}{\sqrt{2 \pi} (a + i \omega)}</math>
|<math> \frac{1}{a + i \omega}</math>
|The function {{math|''u''(''x'')}} is the [[Heaviside step function|Heaviside unit step function]] and {{math|''a'' > 0}}.
|-
| 206
|<math> e^{-\alpha x^2}\,</math>
|<math> \sqrt{\frac{\pi}{\alpha}}\, e^{-\frac{(\pi \xi)^2}{\alpha}}</math>
|<math> \frac{1}{\sqrt{2 \alpha}}\, e^{-\frac{\omega^2}{4 \alpha}}</math>
|<math> \sqrt{\frac{\pi}{\alpha}}\, e^{-\frac{\omega^2}{4 \alpha}}</math>
|This shows that, for the unitary Fourier transforms, the [[Gaussian function]] {{math|''e''<sup>−''αx''<sup>2</sup></sup>}} is its own Fourier transform for some choice of {{mvar|α}}. For this to be integrable we must have {{math|Re(''α'') > 0}}.
|-
| 208
|<math> e^{-a|x|} \,</math>
|<math> \frac{2 a}{a^2 + 4 \pi^2 \xi^2} </math>
|<math> \sqrt{\frac{2}{\pi}} \, \frac{a}{a^2 + \omega^2} </math>
|<math> \frac{2a}{a^2 + \omega^{2}} </math>
|For {{math|Re(''a'') > 0}}. That is, the Fourier transform of a [[Laplace distribution|two-sided decaying exponential function]] is a [[Lorentzian function]].
|-
| 209
|<math> \operatorname{sech}(a x) \,</math>
|<math> \frac{\pi}{a} \operatorname{sech} \left( \frac{\pi^2}{ a} \xi \right)</math>
|<math> \frac{1}{a}\sqrt{\frac{\pi}{2}} \operatorname{sech}\left( \frac{\pi}{2 a} \omega \right)</math>
|<math> \frac{\pi}{a}\operatorname{sech}\left( \frac{\pi}{2 a} \omega \right)</math>
|[[Hyperbolic function|Hyperbolic secant]] is its own Fourier transform
|-
| 210
|<math> e^{-\frac{a^2 x^2}2} H_n(a x)\,</math>
|<math> \frac{\sqrt{2\pi}(-i)^n}{a} e^{-\frac{2\pi^2\xi^2}{a^2}} H_n\left(\frac{2\pi\xi}a\right)</math>
|<math> \frac{(-i)^n}{a} e^{-\frac{\omega^2}{2 a^2}} H_n\left(\frac \omega a\right)</math>
|<math> \frac{(-i)^n \sqrt{2\pi}}{a} e^{-\frac{\omega^2}{2 a^2}} H_n\left(\frac \omega a \right)</math>
|{{math|''H<sub>n</sub>''}} is the {{mvar|n}}th-order [[Hermite polynomial]]. If {{math|''a'' {{=}} 1}} then the Gauss–Hermite functions are [[eigenfunction]]s of the Fourier transform operator. For a derivation, see [[Hermite polynomials#Hermite functions as eigenfunctions of the Fourier transform|Hermite polynomial]]. The formula reduces to 206 for {{math|''n'' {{=}} 0}}.
|}
=== Distributions, one-dimensional ===

The Fourier transforms in this table may be found in {{harvtxt|Erdélyi|1954}} or {{harvtxt|Kammler|2000|loc=appendix}}.

{| class="wikitable"
! !! Function !! Fourier transform {{br}} unitary, ordinary frequency !! Fourier transform {{br}} unitary, angular frequency !! Fourier transform {{br}} non-unitary, angular frequency !! Remarks
|-
|
|<math> f(x)\,</math>
|<math>\begin{align} &\hat{f}(\xi) \triangleq \hat f_1(\xi) \\&= \int_{-\infty}^\infty f(x) e^{-i 2\pi \xi x}\, dx \end{align}</math>
|<math>\begin{align} &\hat{f}(\omega) \triangleq \hat f_2(\omega) \\&= \frac{1}{\sqrt{2 \pi}} \int_{-\infty}^\infty f(x) e^{-i \omega x}\, dx \end{align}</math>
|<math>\begin{align} &\hat{f}(\omega) \triangleq \hat f_3(\omega) \\&= \int_{-\infty}^\infty f(x) e^{-i \omega x}\, dx \end{align}</math>
|Definitions
|-
| 301
|<math> 1</math>
|<math> \delta(\xi)</math>
|<math> \sqrt{2\pi}\, \delta(\omega)</math>
|<math> 2\pi\delta(\omega)</math>
|The distribution {{math|''δ''(''ξ'')}} denotes the [[Dirac delta function]].
|-
| 302
|<math> \delta(x)\,</math>
|<math> 1</math>
|<math> \frac{1}{\sqrt{2\pi}}\,</math>
|<math> 1</math>
|Dual of rule 301.
|-
| 303
|<math> e^{i a x}</math>
|<math> \delta\left(\xi - \frac{a}{2\pi}\right)</math>
|<math> \sqrt{2 \pi}\, \delta(\omega - a)</math>
|<math> 2 \pi\delta(\omega - a)</math>
|This follows from 103 and 301.
|-
| 304
|<math> \cos (a x)</math>
|<math> \frac{ \delta\left(\xi - \frac{a}{2\pi}\right)+\delta\left(\xi+\frac{a}{2\pi}\right)}{2}</math>
|<math> \sqrt{2 \pi}\,\frac{\delta(\omega-a)+\delta(\omega+a)}{2}</math>
|<math> \pi\left(\delta(\omega-a)+\delta(\omega+a)\right)</math>
|This follows from rules 101 and 303 using [[Euler's formula]]:{{br}}<math>\cos(a x) = \frac{e^{i a x} + e^{-i a x}}{2}.</math>
|-
| 305
|<math> \sin( ax)</math>
|<math> \frac{\delta\left(\xi-\frac{a}{2\pi}\right)-\delta\left(\xi+\frac{a}{2\pi}\right)}{2i}</math>
|<math> \sqrt{2 \pi}\,\frac{\delta(\omega-a)-\delta(\omega+a)}{2i}</math>
|<math> -i\pi\bigl(\delta(\omega-a)-\delta(\omega+a)\bigr)</math>
|This follows from 101 and 303 using{{br}}<math>\sin(a x) = \frac{e^{i a x} - e^{-i a x}}{2i}.</math>
|-
| 306
|<math> \cos \left( a x^2 \right) </math>
|<math> \sqrt{\frac{\pi}{a}} \cos \left( \frac{\pi^2 \xi^2}{a} - \frac{\pi}{4} \right) </math>
|<math> \frac{1}{\sqrt{2 a}} \cos \left( \frac{\omega^2}{4 a} - \frac{\pi}{4} \right) </math>
|<math> \sqrt{\frac{\pi}{a}} \cos \left( \frac{\omega^2}{4a} - \frac{\pi}{4} \right) </math>
|This follows from 101 and 308 using{{br}}<math>\cos(a x^2) = \frac{e^{i a x^2} + e^{-i a x^2}}{2}.</math>
|-
| 307
|<math> \sin \left( a x^2 \right) </math>
|<math> - \sqrt{\frac{\pi}{a}} \sin \left( \frac{\pi^2 \xi^2}{a} - \frac{\pi}{4} \right) </math>
|<math> \frac{-1}{\sqrt{2 a}} \sin \left( \frac{\omega^2}{4 a} - \frac{\pi}{4} \right) </math>
|<math> -\sqrt{\frac{\pi}{a}}\sin \left( \frac{\omega^2}{4a} - \frac{\pi}{4} \right)</math>
|This follows from 101 and 308 using{{br}}<math>\sin(a x^2) = \frac{e^{i a x^2} - e^{-i a x^2}}{2i}.</math>
|-
|308
|<math> e^{-\pi i\alpha x^2}\,</math>
|<math> \frac{1}{\sqrt{\alpha}}\, e^{-i\frac{\pi}{4}} e^{i\frac{\pi \xi^2}{\alpha}}</math>
|<math> \frac{1}{\sqrt{2\pi \alpha}}\, e^{-i\frac{\pi}{4}} e^{i\frac{\omega^2}{4\pi \alpha}}</math>
|<math> \frac{1}{\sqrt{\alpha}}\, e^{-i\frac{\pi}{4}} e^{i\frac{\omega^2}{4\pi \alpha}}</math>
|Here it is assumed that <math>\alpha</math> is real. For the case that {{mvar|α}} is complex, see table entry 206 above.
|-
| 309
|<math> x^n\,</math>
|<math> \left(\frac{i}{2\pi}\right)^n \delta^{(n)} (\xi)</math>
|<math> i^n \sqrt{2\pi} \delta^{(n)} (\omega)</math>
|<math> 2\pi i^n\delta^{(n)} (\omega)</math>
|Here, {{mvar|n}} is a [[natural number]] and {{math|''δ''{{isup|(''n'')}}(''ξ'')}} is the {{mvar|n}}th distribution derivative of the Dirac delta function. This rule follows from rules 107 and 301. Combining this rule with 101, we can transform all [[polynomial]]s.
|-
| 310
|<math> \delta^{(n)}(x)</math>
|<math> (i 2\pi \xi)^n</math>
|<math> \frac{(i\omega)^n}{\sqrt{2\pi}} </math>
|<math> (i\omega)^n</math>
|Dual of rule 309. {{math|''δ''{{isup|(''n'')}}(''ξ'')}} is the {{mvar|n}}th distribution derivative of the Dirac delta function. This rule follows from 106 and 302.
|-
| 311
|<math> \frac{1}{x}</math>
|<math> -i\pi\sgn(\xi)</math>
|<math> -i\sqrt{\frac{\pi}{2}}\sgn(\omega)</math>
|<math> -i\pi\sgn(\omega)</math>
|Here {{math|sgn(''ξ'')}} is the [[sign function]]. Note that {{math|{{sfrac|1|''x''}}}} is not a distribution. It is necessary to use the [[Cauchy principal value]] when testing against [[Schwartz functions]]. This rule is useful in studying the [[Hilbert transform]].
|-
| 312
|<math>\begin{align}
&\frac{1}{x^n} \\
&:= \frac{(-1)^{n-1}}{(n-1)!}\frac{d^n}{dx^n}\log |x|
\end{align}</math>
|<math> -i\pi \frac{(-i 2\pi \xi)^{n-1}}{(n-1)!} \sgn(\xi)</math>
|<math> -i\sqrt{\frac{\pi}{2}}\, \frac{(-i\omega)^{n-1}}{(n-1)!}\sgn(\omega)</math>
|<math> -i\pi \frac{(-i\omega)^{n-1}}{(n-1)!}\sgn(\omega)</math>
|{{math|{{sfrac|1|''x''<sup>''n''</sup>}}}} is the [[homogeneous distribution]] defined by the distributional derivative{{br}}<math>\frac{(-1)^{n-1}}{(n-1)!}\frac{d^n}{dx^n}\log|x|</math>
|-
| 313
|<math> |x|^\alpha</math>
|<math> -\frac{2\sin\left(\frac{\pi\alpha}{2}\right)\Gamma(\alpha+1)}{|2\pi\xi|^{\alpha+1}}</math>
|<math> \frac{-2}{\sqrt{2\pi}}\, \frac{\sin\left(\frac{\pi\alpha}{2}\right)\Gamma(\alpha+1)}{|\omega|^{\alpha+1}} </math>
|<math> -\frac{2\sin\left(\frac{\pi\alpha}{2}\right)\Gamma(\alpha+1)}{|\omega|^{\alpha+1}} </math>
|This formula is valid for {{math|0 > ''α'' > −1}}. For {{math|''α'' > 0}} some singular terms arise at the origin that can be found by differentiating 320. If {{math|Re ''α'' > −1}}, then {{math|{{abs|''x''}}<sup>''α''</sup>}} is a locally integrable function, and so a tempered distribution. The function {{math|''α'' ↦ {{abs|''x''}}<sup>''α''</sup>}} is a holomorphic function from the right half-plane to the space of tempered distributions. It admits a unique meromorphic extension to a tempered distribution, also denoted {{math|{{abs|''x''}}<sup>''α''</sup>}} for {{math|''α'' ≠ −1, −3, ...}} (See [[homogeneous distribution]].)
|-
|
|<math> \frac{1}{\sqrt{|x|}} </math>
|<math> \frac{1}{\sqrt{|\xi|}} </math>
|<math> \frac{1}{\sqrt{|\omega|}}</math>
|<math> \frac{\sqrt{2\pi}}{\sqrt{|\omega|}} </math>
|Special case of 313.
|-
| 314
|<math> \sgn(x)</math>
|<math> \frac{1}{i\pi \xi}</math>
|<math> \sqrt{\frac{2}{\pi}} \frac{1}{i\omega } </math>
|<math> \frac{2}{i\omega }</math>
|The dual of rule 311. This time the Fourier transforms need to be considered as a [[Cauchy principal value]].
|-
| 315
|<math> u(x)</math>
|<math> \frac{1}{2}\left(\frac{1}{i \pi \xi} + \delta(\xi)\right)</math>
|<math> \sqrt{\frac{\pi}{2}} \left( \frac{1}{i \pi \omega} + \delta(\omega)\right)</math>
|<math> \pi\left( \frac{1}{i \pi \omega} + \delta(\omega)\right)</math>
|The function {{math|''u''(''x'')}} is the Heaviside [[Heaviside step function|unit step function]]; this follows from rules 101, 301, and 314.
|-
| 316
|<math> \sum_{n=-\infty}^{\infty} \delta (x - n T)</math>
|<math> \frac{1}{T} \sum_{k=-\infty}^{\infty} \delta \left( \xi -\frac{k }{T}\right)</math>
|<math> \frac{\sqrt{2\pi }}{T}\sum_{k=-\infty}^{\infty} \delta \left( \omega -\frac{2\pi k}{T}\right)</math>
|<math> \frac{2\pi}{T}\sum_{k=-\infty}^{\infty} \delta \left( \omega -\frac{2\pi k}{T}\right)</math>
|This function is known as the [[Dirac comb]] function. This result can be derived from 302 and 102, together with the fact that{{br}}<math>\sum_{n=-\infty}^{\infty} e^{inx} = 2\pi\sum_{k=-\infty}^{\infty} \delta(x+2\pi k)</math>{{br}}as distributions.
|-
| 317
|<math> J_0 (x)</math>
|<math> \frac{2\, \operatorname{rect}(\pi\xi)}{\sqrt{1 - 4 \pi^2 \xi^2}} </math>
|<math> \sqrt{\frac{2}{\pi}} \, \frac{\operatorname{rect}\left( \frac{\omega}{2} \right)}{\sqrt{1 - \omega^2}} </math>
|<math> \frac{2\,\operatorname{rect}\left(\frac{\omega}{2} \right)}{\sqrt{1 - \omega^2}}</math>
|The function {{math|''J''<sub>0</sub>(''x'')}} is the zeroth order [[Bessel function]] of first kind.
|-
| 318
|<math> J_n (x)</math>
|<math> \frac{2 (-i)^n T_n (2 \pi \xi) \operatorname{rect}(\pi \xi)}{\sqrt{1 - 4 \pi^2 \xi^2}} </math>
|<math> \sqrt{\frac{2}{\pi}} \frac{ (-i)^n T_n (\omega) \operatorname{rect} \left( \frac{\omega}{2} \right)}{\sqrt{1 - \omega^2}} </math>
|<math> \frac{2(-i)^n T_n (\omega) \operatorname{rect} \left( \frac{\omega}{2} \right)}{\sqrt{1 - \omega^2}} </math>
|This is a generalization of 317. The function {{math|''J<sub>n</sub>''(''x'')}} is the {{mvar|n}}th order [[Bessel function]] of first kind. The function {{math|''T<sub>n</sub>''(''x'')}} is the [[Chebyshev polynomials|Chebyshev polynomial of the first kind]].
|-
| 319
|<math> \log \left| x \right|</math>
|<math> -\frac{1}{2} \frac{1}{\left| \xi \right|} - \gamma \delta \left( \xi \right) </math>
|<math> -\frac{\sqrt\frac{\pi}{2}}{\left| \omega \right|} - \sqrt{2 \pi} \gamma \delta \left( \omega \right) </math>
|<math> -\frac{\pi}{\left| \omega \right|} - 2 \pi \gamma \delta \left( \omega \right) </math>
|{{mvar|γ}} is the [[Euler–Mascheroni constant]]. It is necessary to use a finite part integral when testing {{math|{{sfrac|1|{{abs|''ξ''}}}}}} or {{math|{{sfrac|1|{{abs|''ω''}}}}}} against [[Schwartz functions]]. The details of this might change the coefficient of the delta function.
|-
| 320
|<math> \left( \mp ix \right)^{-\alpha}</math>
|<math> \frac{\left(2\pi\right)^\alpha}{\Gamma\left(\alpha\right)}u\left(\pm \xi \right)\left(\pm \xi \right)^{\alpha-1} </math>
|<math> \frac{\sqrt{2\pi}}{\Gamma\left(\alpha\right)}u\left(\pm\omega\right)\left(\pm\omega\right)^{\alpha-1} </math>
|<math> \frac{2\pi}{\Gamma\left(\alpha\right)}u\left(\pm\omega\right)\left(\pm\omega\right)^{\alpha-1} </math>
|This formula is valid for {{math|1 > ''α'' > 0}}. Use differentiation to derive formula for higher exponents. {{mvar|u}} is the Heaviside function.
|}
=== Two-dimensional functions ===

{| class="wikitable"
! !! Function !! Fourier transform {{br}} unitary, ordinary frequency !! Fourier transform {{br}} unitary, angular frequency !! Fourier transform {{br}} non-unitary, angular frequency !! Remarks
|-
|400
|<math> f(x,y)</math>
|<math>\begin{align}& \hat{f}(\xi_x, \xi_y)\triangleq \\ & \iint f(x,y) e^{-i 2\pi(\xi_x x+\xi_y y)}\,dx\,dy \end{align}</math>
|<math>\begin{align}& \hat{f}(\omega_x,\omega_y)\triangleq \\ & \frac{1}{2 \pi} \iint f(x,y) e^{-i (\omega_x x +\omega_y y)}\, dx\,dy \end{align}</math>
|<math>\begin{align}& \hat{f}(\omega_x,\omega_y)\triangleq \\ & \iint f(x,y) e^{-i(\omega_x x+\omega_y y)}\, dx\,dy \end{align}</math>
|The variables {{mvar|ξ<sub>x</sub>}}, {{mvar|ξ<sub>y</sub>}}, {{mvar|ω<sub>x</sub>}}, {{mvar|ω<sub>y</sub>}} are real numbers. The integrals are taken over the entire plane.
|-
|401
|<math> e^{-\pi\left(a^2x^2+b^2y^2\right)}</math>
|<math> \frac{1}{|ab|} e^{-\pi\left(\frac{\xi_x^2}{a^2} + \frac{\xi_y^2}{b^2}\right)}</math>
|<math> \frac{1}{2\pi\,|ab|} e^{-\frac{1}{4\pi}\left(\frac{\omega_x^2}{a^2} + \frac{\omega_y^2}{b^2}\right)}</math>
|<math> \frac{1}{|ab|} e^{-\frac{1}{4\pi}\left(\frac{\omega_x^2}{a^2} + \frac{\omega_y^2}{b^2}\right)}</math>
|Both functions are Gaussians, which may not have unit volume.
|-
|402
|<math> \operatorname{circ}\left(\sqrt{x^2+y^2}\right)</math>
|<math> \frac{J_1\left(2 \pi \sqrt{\xi_x^2+\xi_y^2}\right)}{\sqrt{\xi_x^2+\xi_y^2}}</math>
|<math> \frac{J_1\left(\sqrt{\omega_x^2+\omega_y^2}\right)}{\sqrt{\omega_x^2+\omega_y^2}}</math>
|<math> \frac{2\pi J_1\left(\sqrt{\omega_x^2+\omega_y^2}\right)}{\sqrt{\omega_x^2+\omega_y^2}}</math>
|The function is defined by {{math|1=circ(''r'') = 1}} for {{math|0 ≤ ''r'' ≤ 1}}, and is 0 otherwise. The result is the amplitude distribution of the [[Airy disk]], and is expressed using {{math|''J''<sub>1</sub>}} (the order-1 [[Bessel function]] of the first kind).<ref>{{harvnb|Stein|Weiss|1971|loc=Thm. IV.3.3}}</ref>
|-
|403
|<math> \frac{1}{\sqrt{x^2+y^2}}</math>
|<math> \frac{1}{\sqrt{\xi_x^2+\xi_y^2}}</math>
|<math> \frac{1}{\sqrt{\omega_x^2+\omega_y^2}}</math>
|<math> \frac{2\pi}{\sqrt{\omega_x^2+\omega_y^2}}</math>
|This is the [[Hankel transform]] of {{math|1=''r''<sup>−1</sup>}}, a 2-D Fourier "self-transform".<ref>{{harvnb|Easton|2010}}</ref>
|-
|404
|<math> \frac{i}{x+i y}</math>
|<math> \frac{1}{\xi_x+i\xi_y}</math>
|<math> \frac{1}{\omega_x+i\omega_y}</math>
|<math> \frac{2\pi}{\omega_x+i\omega_y}</math>
|
|}
=== Formulas for general {{math|''n''}}-dimensional functions ===

{| class="wikitable"
! !! Function !! Fourier transform {{br}} unitary, ordinary frequency !! Fourier transform {{br}} unitary, angular frequency !! Fourier transform {{br}} non-unitary, angular frequency !! Remarks
|-
|500
|<math> f(\mathbf x)\,</math>
|<math>\begin{align} &\hat{f_1}(\boldsymbol \xi) \triangleq \\ &\int_{\mathbb{R}^n}f(\mathbf x) e^{-i 2\pi \boldsymbol \xi \cdot \mathbf x }\, d \mathbf x \end{align}</math>
|<math>\begin{align} &\hat{f_2}(\boldsymbol \omega) \triangleq \\ &\frac{1}{{(2 \pi)}^\frac{n}{2}} \int_{\mathbb{R}^n} f(\mathbf x) e^{-i \boldsymbol \omega \cdot \mathbf x}\, d \mathbf x \end{align}</math>
|<math>\begin{align} &\hat{f_3}(\boldsymbol \omega) \triangleq \\ &\int_{\mathbb{R}^n}f(\mathbf x) e^{-i \boldsymbol \omega \cdot \mathbf x}\, d \mathbf x \end{align}</math>
|Definitions
|-
|501
|<math> \chi_{[0,1]}(|\mathbf x|)\left(1-|\mathbf x|^2\right)^\delta</math>
|<math> \frac{\Gamma(\delta+1)}{\pi^\delta\,|\boldsymbol \xi|^{\frac{n}{2} + \delta}} J_{\frac{n}{2}+\delta}(2\pi|\boldsymbol \xi|)</math>
|<math> 2^\delta \, \frac{\Gamma(\delta+1)}{\left|\boldsymbol \omega\right|^{\frac{n}{2}+\delta}} J_{\frac{n}{2}+\delta}(|\boldsymbol \omega|)</math>
|<math> \frac{\Gamma(\delta+1)}{\pi^\delta} \left|\frac{\boldsymbol \omega}{2\pi}\right|^{-\frac{n}{2}-\delta} J_{\frac{n}{2}+\delta}(\!|\boldsymbol \omega|\!)</math>
|The function {{math|''χ''<sub>[0, 1]</sub>}} is the [[indicator function]] of the interval {{math|[0, 1]}}. The function {{math|Γ(''x'')}} is the gamma function. The function {{math|''J''<sub>{{sfrac|''n''|2}} + ''δ''</sub>}} is a Bessel function of the first kind, with order {{math|{{sfrac|''n''|2}} + ''δ''}}. Taking {{math|1=''n'' = 2}} and {{math|1=''δ'' = 0}} produces 402.<ref>{{harvnb|Stein|Weiss|1971|loc=Thm. 4.15}}</ref>
|-
|502
|<math> |\mathbf x|^{-\alpha}, \quad 0 < \operatorname{Re} \alpha < n.</math>
|<math> \frac{(2\pi)^{\alpha}}{c_{n, \alpha}} |\boldsymbol \xi|^{-(n - \alpha)}</math>
|<math> \frac{(2\pi)^{\frac{n}{2}}}{c_{n, \alpha}} |\boldsymbol \omega|^{-(n - \alpha)}</math>
|<math> \frac{(2\pi)^{n}}{c_{n, \alpha}} |\boldsymbol \omega|^{-(n - \alpha)}</math>
|See [[Riesz potential]] where the constant is given by{{br}}<math>c_{n, \alpha} = \pi^\frac{n}{2} 2^\alpha \frac{\Gamma\left(\frac{\alpha}{2}\right)}{\Gamma\left(\frac{n - \alpha}{2}\right)}.</math>{{br}}The formula also holds for all {{math|''α'' ≠ ''n'', ''n'' + 2, ...}} by analytic continuation, but then the function and its Fourier transforms need to be understood as suitably regularized tempered distributions. See [[homogeneous distribution]].<ref group=note>In {{harvnb|Gelfand|Shilov|1964|p=363}}, with the non-unitary conventions of this table, the transform of <math>|\mathbf x|^\lambda</math> is given to be{{br}} <math>2^{\lambda+n}\pi^{\tfrac12 n}\frac{\Gamma\left(\frac{\lambda+n}{2}\right)}{\Gamma\left(-\frac{\lambda}{2}\right)}|\boldsymbol\omega|^{-\lambda-n}</math>{{br}}from which this follows, with <math>\lambda=-\alpha</math>.</ref>
|-
|503
|<math> \frac{1}{\left|\boldsymbol \sigma\right|\left(2\pi\right)^\frac{n}{2}} e^{-\frac{1}{2} \mathbf x^{\mathrm T} \boldsymbol \sigma^{-\mathrm T} \boldsymbol \sigma^{-1} \mathbf x}</math>
|<math> e^{-2\pi^2 \boldsymbol \xi^{\mathrm T} \boldsymbol \sigma \boldsymbol \sigma^{\mathrm T} \boldsymbol \xi} </math>
|<math> (2\pi)^{-\frac{n}{2}} e^{-\frac{1}{2} \boldsymbol \omega^{\mathrm T} \boldsymbol \sigma \boldsymbol \sigma^{\mathrm T} \boldsymbol \omega} </math>
|<math> e^{-\frac{1}{2} \boldsymbol \omega^{\mathrm T} \boldsymbol \sigma \boldsymbol \sigma^{\mathrm T} \boldsymbol \omega} </math>
|This is the formula for a [[multivariate normal distribution]] normalized to 1 with a mean of 0. Bold variables are vectors or matrices. Following the notation of the aforementioned page, {{math|'''Σ''' {{=}} '''σ''' '''σ'''<sup>T</sup>}} and {{math|'''Σ'''<sup>−1</sup> {{=}} '''σ'''<sup>−T</sup> '''σ'''<sup>−1</sup>}}
|-
|504
|<math> e^{-2\pi\alpha|\mathbf x|}</math>
|<math>\frac{c_n\alpha}{\left(\alpha^2+|\boldsymbol{\xi}|^2\right)^\frac{n+1}{2}}</math>
|<math>\frac{c_n (2\pi)^{\frac{n+2}{2}} \alpha}{\left(4\pi^2\alpha^2+|\boldsymbol{\omega}|^2\right)^\frac{n+1}{2}}</math>
|<math>\frac{c_n (2\pi)^{n+1} \alpha}{\left(4\pi^2\alpha^2+|\boldsymbol{\omega}|^2\right)^\frac{n+1}{2}}</math>
|Here<ref>{{harvnb|Stein|Weiss|1971|p=6}}</ref>{{br}}<math>c_n=\frac{\Gamma\left(\frac{n+1}{2}\right)}{\pi^\frac{n+1}{2}},</math> {{math|Re(''α'') > 0}}
|}
== See also ==

{{div col|colwidth=22em}}
* [[Analog signal processing]]
* [[Beevers–Lipson strip]]
* [[Constant-Q transform]]
* [[Discrete Fourier transform]]
* [[DFT matrix]]
* [[Fast Fourier transform]]
* [[Fourier integral operator]]
* [[Fourier inversion theorem]]
* [[Fourier multiplier]]
* [[Fourier series]]
* [[Fourier sine transform]]
* [[Fourier–Deligne transform]]
* [[Fourier–Mukai transform]]
* [[Fractional Fourier transform]]
* [[Indirect Fourier transform]]
* [[Integral transform]]
** [[Hankel transform]]
** [[Hartley transform]]
* [[Laplace transform]]
* [[Least-squares spectral analysis]]
* [[Linear canonical transform]]
* [[List of Fourier-related transforms]]
* [[Mellin transform]]
* [[Multidimensional transform]]
* [[NGC 4622]], especially the image NGC 4622 Fourier transform {{math|1=''m'' = 2}}.
* [[Nonlocal operator]]
* [[Quantum Fourier transform]]
* [[Quadratic Fourier transform]]
* [[Short-time Fourier transform]]
* [[Spectral density]]
** [[Spectral density estimation]]
* [[Symbolic integration]]
* [[Time stretch dispersive Fourier transform]]
* [[Transform (mathematics)]]
{{div col end}}
== Notes ==

{{reflist|group=note}}
== Citations ==

{{reflist|22em}}
== References == |
|||
{{refbegin|2|indent=yes}} |
|||
<!-- DO NOT leave blank lines between list items, they create entirely new lists --> |
|||
* {{citation |
|||
| last1 = Arfken | first1 = George |
|||
| title = Mathematical Methods for Physicists |
|||
| date = 1985 |
|||
| publisher = Academic Press |
|||
| isbn = 9780120598205 |
|||
| edition = 3rd |
|||
}} |
|||
* {{citation |
|||
| last1 = Bailey |
|||
| first1 = David H. |
|||
| last2 = Swarztrauber |
|||
| first2 = Paul N. |
|||
| title = A fast method for the numerical evaluation of continuous Fourier and Laplace transforms |
|||
| journal = [[SIAM Journal on Scientific Computing]] |
|||
| volume = 15 |
|||
| issue = 5 |
|||
| year = 1994 |
|||
| pages = 1105–1110 |
|||
| doi = 10.1137/0915067 |
|||
| bibcode = 1994SJSC...15.1105B |
|||
| url = http://crd.lbl.gov/~dhbailey/dhbpapers/fourint.pdf |
|||
| citeseerx = 10.1.1.127.1534 |
|||
| access-date = 2017-11-01 |
|||
| archive-date = 2008-07-20 |
|||
| archive-url = https://web.archive.org/web/20080720002714/http://crd.lbl.gov/~dhbailey/dhbpapers/fourint.pdf |
|||
| url-status = dead |
|||
}} |
* {{citation | editor-last = Boashash | editor-first = B. | title = Time–Frequency Signal Analysis and Processing: A Comprehensive Reference | publisher = Elsevier Science | location = Oxford | year = 2003 | isbn = 978-0-08-044335-5 }}
* {{citation | last1 = Bochner | first1 = S. | author1-link = Salomon Bochner | last2 = Chandrasekharan | first2 = K. | author2-link = K. S. Chandrasekharan | title = Fourier Transforms | publisher = [[Princeton University Press]] | year = 1949 }}
* {{citation | last = Bracewell | first = R. N. | title = The Fourier Transform and Its Applications | edition = 3rd | location = Boston | publisher = McGraw-Hill | year = 2000 | isbn = 978-0-07-116043-8 }}
* {{citation | last1 = Campbell | first1 = George | last2 = Foster | first2 = Ronald | title = Fourier Integrals for Practical Applications | publisher = D. Van Nostrand Company, Inc. | location = New York | year = 1948 }}
* {{citation | last1 = Celeghini | first1 = Enrico | last2 = Gadella | first2 = Manuel | last3 = del Olmo | first3 = Mariano A. | title = Hermite Functions and Fourier Series | journal = Symmetry | date = 2021 | volume = 13 | issue = 5 | page = 853 | doi = 10.3390/sym13050853 | arxiv = 2007.10406 | bibcode = 2021Symm...13..853C | doi-access = free }}
* {{citation | last = Champeney | first = D.C. | title = A Handbook of Fourier Theorems | year = 1987 | publisher = [[Cambridge University Press]] }}
* {{citation | last = Chatfield | first = Chris | title = The Analysis of Time Series: An Introduction | year = 2004 | edition = 6th | publisher = Chapman & Hall/CRC | series = Texts in Statistical Science | location = London | isbn = 9780203491683 | url = https://books.google.com/books?id=qKzyAbdaDFAC&q=%22Fourier+transform%22 }}
* {{citation | last1 = Clozel | first1 = Laurent | last2 = Delorme | first2 = Patrice | title = Sur le théorème de Paley-Wiener invariant pour les groupes de Lie réductifs réels | date = 1985 | journal = Comptes Rendus de l'Académie des Sciences, Série I | volume = 300 | pages = 331–333 }}
* {{citation | last = Condon | first = E. U. | author-link = Edward Condon | title = Immersion of the Fourier transform in a continuous group of functional transformations | journal = [[PNAS|Proc. Natl. Acad. Sci.]] | volume = 23 | issue = 3 | pages = 158–164 | year = 1937 | doi = 10.1073/pnas.23.3.158 | pmid = 16588141 | pmc = 1076889 | bibcode = 1937PNAS...23..158C | doi-access = free }}
* {{citation | last1 = de Groot | first1 = Sybren R. | last2 = Mazur | first2 = Peter | title = Non-Equilibrium Thermodynamics | edition = 2nd | year = 1984 | publisher = [[Dover Publications|Dover]] | location = New York }}
* {{citation | last = Duoandikoetxea | first = Javier | title = Fourier Analysis | publisher = [[American Mathematical Society]] | year = 2001 | isbn = 978-0-8218-2172-5 }}
* {{citation | last1 = Dym | first1 = H. | author1-link = Harry Dym | last2 = McKean | first2 = H. | title = Fourier Series and Integrals | publisher = [[Academic Press]] | year = 1985 | isbn = 978-0-12-226451-1 }}
* {{citation | last = Easton | first = Roger L. Jr. | title = Fourier Methods in Imaging | date = 2010 | publisher = John Wiley & Sons | isbn = 978-0-470-68983-7 | url = https://books.google.com/books?id=wCoDDQAAQBAJ | access-date = 26 May 2020 | language = en }}
* {{cite book | last=Edwards | first=R. E. | title=Fourier Series | publisher=Springer New York | publication-place=New York, NY | volume=64 | date=1979 | isbn=978-1-4612-6210-7 | doi=10.1007/978-1-4612-6208-4}} |
* {{cite book | last=Edwards | first=R. E. | title=Fourier Series | publisher=Springer New York | publication-place=New York, NY | volume=85 | date=1982 | isbn=978-1-4613-8158-7 | doi=10.1007/978-1-4613-8156-3}} |
* {{citation | editor-last = Erdélyi | editor-first = Arthur | title = Tables of Integral Transforms | volume = 1 | publisher = McGraw-Hill | year = 1954 }}
* {{citation | last = Feller | first = William | author-link = William Feller | title = An Introduction to Probability Theory and Its Applications | volume = II | publisher = [[John Wiley & Sons|Wiley]] | location = New York | edition = 2nd | mr = 0270403 | year = 1971 }}
* {{citation | last = Folland | first = Gerald | title = Harmonic analysis in phase space | publisher = [[Princeton University Press]] | year = 1989 }}
* {{citation | last = Folland | first = Gerald | title = Fourier analysis and its applications | publisher = [[Wadsworth & Brooks/Cole]] | year = 1992 }}
* {{citation | last = Fourier | first = J.B. Joseph | author-link = Joseph Fourier | title = Théorie analytique de la chaleur | location = Paris | url = https://books.google.com/books?id=TDQJAAAAIAAJ&q=%22c%27est-%C3%A0-dire+qu%27on+a+l%27%C3%A9quation%22&pg=PA525 | publisher = Firmin Didot, père et fils | year = 1822 | language = fr | oclc = 2688081 }}
* {{citation | last = Fourier | first = J.B. Joseph | author-link = Joseph Fourier | title = The Analytical Theory of Heat | url = https://books.google.com/books?id=-N8EAAAAYAAJ&q=%22that+is+to+say%2C+that+we+have+the+equation%22&pg=PA408 | year = 1878 | orig-year = 1822 | publisher = The University Press | translator = Alexander Freeman }} (translated from French)
* {{citation | last1 = Gradshteyn | first1 = Izrail Solomonovich | author1-link = Izrail Solomonovich Gradshteyn | last2 = Ryzhik | first2 = Iosif Moiseevich | author2-link = Iosif Moiseevich Ryzhik | last3 = Geronimus | first3 = Yuri Veniaminovich | author3-link = Yuri Veniaminovich Geronimus | last4 = Tseytlin | first4 = Michail Yulyevich | author4-link = Michail Yulyevich Tseytlin | last5 = Jeffrey | first5 = Alan | editor1-last = Zwillinger | editor1-first = Daniel | editor2-last = Moll | editor2-first = Victor Hugo | editor-link2 = Victor Hugo Moll | translator = Scripta Technica, Inc. | title = Table of Integrals, Series, and Products | title-link = Gradshteyn and Ryzhik | publisher = [[Academic Press]] | year = 2015 | edition = 8th | language = en | isbn = 978-0-12-384933-5 }}
* {{citation | last = Grafakos | first = Loukas | title = Classical and Modern Fourier Analysis | publisher = Prentice-Hall | year = 2004 | isbn = 978-0-13-035399-3 }}
* {{citation | last1 = Grafakos | first1 = Loukas | last2 = Teschl | first2 = Gerald | author2-link = Gerald Teschl | title = On Fourier transforms of radial functions and distributions | journal = J. Fourier Anal. Appl. | volume = 19 | issue = 1 | pages = 167–179 | year = 2013 | doi = 10.1007/s00041-012-9242-5 | arxiv = 1112.5469 | bibcode = 2013JFAA...19..167G | s2cid = 1280745 }}
* {{citation | last1 = Greiner | first1 = W. | last2 = Reinhardt | first2 = J. | title = Field Quantization | publisher = [[Springer-Verlag|Springer]] | year = 1996 | isbn = 978-3-540-59179-5 | url-access = registration | url = https://archive.org/details/fieldquantizatio0000grei }}
* {{citation | last1 = Gelfand | first1 = I.M. | author1-link = Israel Gelfand | last2 = Shilov | first2 = G.E. | author2-link = Georgiy Shilov | title = Generalized Functions | volume = 1 | publisher = [[Academic Press]] | location = New York | year = 1964 }} (translated from Russian)
* {{citation | last1 = Gelfand | first1 = I.M. | author1-link = Israel Gelfand | last2 = Vilenkin | first2 = N.Y. | author2-link = Naum Ya. Vilenkin | title = Generalized Functions | volume = 4 | publisher = [[Academic Press]] | location = New York | year = 1964 }} (translated from Russian)
* {{citation | last1 = Hewitt | first1 = Edwin | last2 = Ross | first2 = Kenneth A. | title = Abstract harmonic analysis | volume = II: Structure and analysis for compact groups. Analysis on locally compact Abelian groups | publisher = [[Springer-Verlag|Springer]] | series = Die Grundlehren der mathematischen Wissenschaften, Band 152 | mr = 0262773 | year = 1970 }}
* {{citation | last = Hörmander | first = L. | author-link = Lars Hörmander | title = Linear Partial Differential Operators | volume = 1 | publisher = [[Springer-Verlag|Springer]] | year = 1976 | isbn = 978-3-540-00662-6 }}
* {{citation | last = Howe | first = Roger | title = On the role of the Heisenberg group in harmonic analysis | journal = [[Bulletin of the American Mathematical Society]] | volume = 3 | number = 2 | pages = 821–844 | year = 1980 | doi = 10.1090/S0273-0979-1980-14825-9 | mr = 578375 | doi-access = free }}
* {{citation | last = James | first = J.F. | title = A Student's Guide to Fourier Transforms | edition = 3rd | publisher = [[Cambridge University Press]] | year = 2011 | isbn = 978-0-521-17683-5 }}
* {{citation | last = Jordan | first = Camille | author-link = Camille Jordan | title = Cours d'Analyse de l'École Polytechnique | volume = II, Calcul Intégral: Intégrales définies et indéfinies | edition = 2nd | location = Paris | year = 1883 }}
* {{citation | last = Kaiser | first = Gerald | title = A Friendly Guide to Wavelets | journal = Physics Today | volume = 48 | issue = 7 | pages = 57–58 | year = 1994 | isbn = 978-0-8176-3711-8 | url = https://books.google.com/books?id=rfRnrhJwoloC&q=%22becomes+the+Fourier+%28integral%29+transform%22&pg=PA29 | bibcode = 1995PhT....48g..57K | doi = 10.1063/1.2808105 }}
* {{citation | last = Kammler | first = David | title = A First Course in Fourier Analysis | year = 2000 | publisher = Prentice Hall | isbn = 978-0-13-578782-3 }}
* {{citation | last = Katznelson | first = Yitzhak | title = An Introduction to Harmonic Analysis | year = 1976 | publisher = [[Dover Publications|Dover]] | isbn = 978-0-486-63331-2 }}
* {{citation | last1 = Khare | first1 = Kedar | last2 = Butola | first2 = Mansi | last3 = Rajora | first3 = Sunaina | title = Fourier Optics and Computational Imaging | chapter = Chapter 2.3 Fourier Transform as a Limiting Case of Fourier Series | publisher = Springer | year = 2023 | edition = 2nd | isbn = 978-3-031-18353-9 | doi = 10.1007/978-3-031-18353-9 | s2cid = 255676773 }}
* {{citation | last1 = Kirillov | first1 = Alexandre | author1-link = Alexandre Kirillov | last2 = Gvishiani | first2 = Alexei D. | title = Theorems and Problems in Functional Analysis | year = 1982 | orig-year = 1979 | publisher = [[Springer-Verlag|Springer]] }} (translated from Russian)
* {{citation | last = Knapp | first = Anthony W. | title = Representation Theory of Semisimple Groups: An Overview Based on Examples | url = https://books.google.com/books?id=QCcW1h835pwC | publisher = [[Princeton University Press]] | year = 2001 | isbn = 978-0-691-09089-4 }}
* {{citation | last1 = Kolmogorov | first1 = Andrey Nikolaevich | author1-link = Andrey Kolmogorov | last2 = Fomin | first2 = Sergei Vasilyevich | author2-link = Sergei Fomin | title = Elements of the Theory of Functions and Functional Analysis | year = 1999 | orig-year = 1957 | publisher = [[Dover Publications|Dover]] | url = http://store.doverpublications.com/0486406830.html }} (translated from Russian)
* {{citation | last = Lado | first = F. | title = Numerical Fourier transforms in one, two, and three dimensions for liquid state calculations | journal = [[Journal of Computational Physics]] | volume = 8 | issue = 3 | year = 1971 | pages = 417–433 | doi = 10.1016/0021-9991(71)90021-0 | bibcode = 1971JCoPh...8..417L | url = http://www.lib.ncsu.edu/resolver/1840.2/2465 }}
* {{citation | last = Müller | first = Meinard | title = The Fourier Transform in a Nutshell | url = https://www.audiolabs-erlangen.de/content/05-fau/professor/00-mueller/04-bookFMP/2015_Mueller_FundamentalsMusicProcessing_Springer_Section2-1_SamplePages.pdf | publisher = [[Springer-Verlag|Springer]] | year = 2015 | doi = 10.1007/978-3-319-21945-5 | isbn = 978-3-319-21944-8 | s2cid = 8691186 | access-date = 2016-03-28 | archive-date = 2016-04-08 | archive-url = https://web.archive.org/web/20160408083515/https://www.audiolabs-erlangen.de/content/05-fau/professor/00-mueller/04-bookFMP/2015_Mueller_FundamentalsMusicProcessing_Springer_Section2-1_SamplePages.pdf | url-status = dead }}; also available at [http://www.music-processing.de Fundamentals of Music Processing], Section 2.1, pages 40–56
* {{citation | last1 = Oppenheim | first1 = Alan V. | author-link = Alan V. Oppenheim | last2 = Schafer | first2 = Ronald W. | author2-link = Ronald W. Schafer | last3 = Buck | first3 = John R. | title = Discrete-time signal processing | year = 1999 | publisher = Prentice Hall | location = Upper Saddle River, N.J. | isbn = 0-13-754920-2 | edition = 2nd | url-access = registration | url = https://archive.org/details/discretetimesign00alan }}
* {{citation | last1 = Paley | first1 = R.E.A.C. | author1-link = Raymond Paley | last2 = Wiener | first2 = Norbert | author2-link = Norbert Wiener | title = Fourier Transforms in the Complex Domain | series = American Mathematical Society Colloquium Publications | number = 19 | year = 1934 | publisher = [[American Mathematical Society]] | location = Providence, Rhode Island }}
* {{citation | last = Pinsky | first = Mark | title = Introduction to Fourier Analysis and Wavelets | year = 2002 | publisher = Brooks/Cole | isbn = 978-0-534-37660-4 | url = https://books.google.com/books?id=PyISCgAAQBAJ&q=%22The+Fourier+transform+of+the+measure%22&pg=PA256 }}
* {{citation | last = Poincaré | first = Henri | author-link = Henri Poincaré | title = Théorie analytique de la propagation de la chaleur | publisher = Carré | location = Paris | year = 1895 | url = http://gallica.bnf.fr/ark:/12148/bpt6k5500702f }}
* {{citation | last1 = Polyanin | first1 = A. D. | last2 = Manzhirov | first2 = A. V. | title = Handbook of Integral Equations | publisher = [[CRC Press]] | location = Boca Raton | year = 1998 | isbn = 978-0-8493-2876-3 }}
* {{citation | last1 = Press | first1 = William H. | last2 = Flannery | first2 = Brian P. | last3 = Teukolsky | first3 = Saul A. | last4 = Vetterling | first4 = William T. | title = Numerical Recipes in C: The Art of Scientific Computing, Second Edition | edition = 2nd | publisher = [[Cambridge University Press]] | year = 1992 }}
* {{cite book | last1 = Proakis | first1 = John G. | last2 = Manolakis | first2 = Dimitri G. | title = Digital Signal Processing: Principles, Algorithms and Applications | place = New Jersey | publisher = Prentice-Hall International | year = 1996 | edition = 3 | language = en | id = sAcfAQAAIAAJ | isbn = 9780133942897 | bibcode = 1996dspp.book.....P | url-access = registration | url = https://archive.org/details/digitalsignalpro00proa }}
* {{citation | last = Rahman | first = Matiur | title = Applications of Fourier Transforms to Generalized Functions | url = https://books.google.com/books?id=k_rdcKaUdr4C&pg=PA10 | publisher = WIT Press | year = 2011 | isbn = 978-1-84564-564-9 }}
* {{citation | last = Rudin | first = Walter | title = Real and Complex Analysis | publisher = McGraw Hill | edition = 3rd | year = 1987 | isbn = 978-0-07-100276-9 | location = Singapore }}
* {{citation | last1 = Simonen | first1 = P. | last2 = Olkkonen | first2 = H. | title = Fast method for computing the Fourier integral transform via Simpson's numerical integration | journal = Journal of Biomedical Engineering | volume = 7 | issue = 4 | year = 1985 | pages = 337–340 | doi = 10.1016/0141-5425(85)90067-6 | pmid = 4057997 }}
* {{cite web | last = Smith | first = Julius O. | url = http://ccrma.stanford.edu/~jos/mdft/Positive_Negative_Frequencies.html | title = Mathematics of the Discrete Fourier Transform (DFT), with Audio Applications --- Second Edition | website = ccrma.stanford.edu | access-date = 2022-12-29 | quote = We may think of a real sinusoid as being the sum of a positive-frequency and a negative-frequency complex sinusoid. }}
* {{cite book | last=Stade | first=Eric | title=Fourier Analysis | publisher=Wiley | date=2005 | isbn=978-0-471-66984-5 | doi=10.1002/9781118165508}} |
* {{citation | last1 = Stein | first1 = Elias | last2 = Shakarchi | first2 = Rami | title = Fourier Analysis: An introduction | publisher = [[Princeton University Press]] | year = 2003 | isbn = 978-0-691-11384-5 | url = https://books.google.com/books?id=FAOc24bTfGkC&q=%22The+mathematical+thrust+of+the+principle%22&pg=PA158 }}
* {{citation | last1 = Stein | first1 = Elias | author1-link = Elias Stein | last2 = Weiss | first2 = Guido | author2-link = Guido Weiss | title = Introduction to Fourier Analysis on Euclidean Spaces | publisher = [[Princeton University Press]] | location = Princeton, N.J. | year = 1971 | isbn = 978-0-691-08078-9 | url = https://books.google.com/books?id=YUCV678MNAIC&q=editions:xbArf-TFDSEC }}
* {{citation | last = Taneja | first = H.C. | title = Advanced Engineering Mathematics | volume = 2 | chapter = Chapter 18: Fourier integrals and Fourier transforms | chapter-url = https://books.google.com/books?id=X-RFRHxMzvYC&q=%22The+Fourier+integral+can+be+regarded+as+an+extension+of+the+concept+of+Fourier+series%22&pg=PA192 | isbn = 978-8189866563 | year = 2008 | publisher = I. K. International Pvt Ltd | location = New Delhi, India }}
* {{citation | last = Titchmarsh | first = E. | author-link = Edward Charles Titchmarsh | title = Introduction to the theory of Fourier integrals | isbn = 978-0-8284-0324-5 | orig-year = 1948 | year = 1986 | edition = 2nd | publisher = [[Clarendon Press]] | location = Oxford University }}
* {{citation | last = Vretblad | first = Anders | title = Fourier Analysis and its Applications | year = 2000 | isbn = 978-0-387-00836-3 | publisher = [[Springer-Verlag|Springer]] | series = [[Graduate Texts in Mathematics]] | volume = 223 | location = New York }}
* {{citation | last1 = Whittaker | first1 = E. T. | author1-link = E. T. Whittaker | last2 = Watson | first2 = G. N. | author2-link = G. N. Watson | title = A Course of Modern Analysis | title-link = A Course of Modern Analysis | edition = 4th | publisher = [[Cambridge University Press]] | year = 1927 }}
* {{citation | last1 = Widder | first1 = David Vernon | last2 = Wiener | first2 = Norbert | author2-link = Norbert Wiener | title = Remarks on the Classical Inversion Formula for the Laplace Integral | date = August 1938 | journal = Bulletin of the American Mathematical Society | volume = 44 | issue = 8 | pages = 573–575 | doi = 10.1090/s0002-9904-1938-06812-7 | url = http://projecteuclid.org/euclid.bams/1183500627 }}
* {{citation | last = Wiener | first = Norbert | author-link = Norbert Wiener | title = Extrapolation, Interpolation, and Smoothing of Stationary Time Series With Engineering Applications | year = 1949 | publisher = Technology Press and John Wiley & Sons and Chapman & Hall | location = Cambridge, Mass. }}
* {{citation | last = Wilson | first = R. G. | title = Fourier Series and Optical Transform Techniques in Contemporary Optics | publisher = [[John Wiley & Sons|Wiley]] | year = 1995 | isbn = 978-0-471-30357-2 | location = New York }}
* {{citation | last = Wolf | first = Kurt B. | title = Integral Transforms in Science and Engineering | publisher = [[Springer-Verlag|Springer]] | year = 1979 | doi = 10.1007/978-1-4757-0872-1 | isbn = 978-1-4757-0874-5 | url = https://www.fis.unam.mx/~bwolf/integraleng.html }}
* {{citation | last = Yosida | first = K. | author-link = Kōsaku Yosida | title = Functional Analysis | publisher = [[Springer-Verlag|Springer]] | year = 1968 | isbn = 978-3-540-58654-8 }}
{{refend}} |

== External links ==

* {{Commons category-inline}}
* [https://www.encyclopediaofmath.org/index.php/Fourier_transform Encyclopedia of Mathematics]
* {{MathWorld | urlname = FourierTransform | title = Fourier Transform}}
* [https://www.xtal.iqf.csic.es/Cristalografia/parte_05-en.html Fourier Transform in Crystallography]

{{Authority control}}

{{DEFAULTSORT:Fourier Transform}}
[[Category:Fundamental physics concepts]]
[[Category:Fourier analysis]]
[[Category:Integral transforms]]
[[Category:Unitary operators]]
[[Category:Joseph Fourier]]
[[Category:Mathematical physics]]
[[ar:تحويل فورييه]]
[[be-x-old:Пераўтварэньне Фур'е]]
[[ca:Transformada de Fourier]]
[[cs:Fourierova transformace]]
[[da:Fouriertransformation]]
[[de:Fourier-Transformation]]
[[es:Transformada de Fourier]]
[[eo:Konverto de Fourier]]
[[eu:Fourierren transformaketa]]
[[fa:تبدیل فوریه]]
[[fr:Transformée de Fourier]]
[[gl:Transformada de Fourier]]
[[ko:푸리에 변환]]
[[id:Transformasi Fourier]]
[[is:Fourier–vörpun]]
[[it:Trasformata di Fourier]]
[[lt:Furjė transformacija]]
[[mt:Trasformata ta' Fourier]]
[[nl:Fouriertransformatie]]
[[ja:フーリエ変換]]
[[no:Fouriertransformasjon]]
[[nn:Fouriertransformasjon]]
[[pl:Transformacja Fouriera]]
[[pt:Transformada de Fourier]]
[[ro:Transformata Fourier]]
[[ru:Преобразование Фурье]]
[[simple:Fourier transform]]
[[sk:Fourierova transformácia]]
[[sr:Фуријеов ред]]
[[fi:Fourier'n muunnos]]
[[sv:Fouriertransform]]
[[th:การแปลงฟูริเยร์]]
[[tr:Fourier dönüşümü]]
[[uk:Перетворення Фур'є]]
[[vi:Biến đổi Fourier]]
[[zh:傅里叶变换]]
Latest revision as of 08:15, 27 December 2024
In mathematics, the Fourier transform (FT) is an integral transform that takes a function as input and outputs another function that describes the extent to which various frequencies are present in the original function. The output of the transform is a complex-valued function of frequency. The term Fourier transform refers to both this complex-valued function and the mathematical operation. When a distinction needs to be made, the output of the operation is sometimes called the frequency domain representation of the original function. The Fourier transform is analogous to decomposing the sound of a musical chord into the intensities of its constituent pitches.
Functions that are localized in the time domain have Fourier transforms that are spread out across the frequency domain and vice versa, a phenomenon known as the uncertainty principle. The critical case for this principle is the Gaussian function, of substantial importance in probability theory and statistics as well as in the study of physical phenomena exhibiting normal distribution (e.g., diffusion). The Fourier transform of a Gaussian function is another Gaussian function. Joseph Fourier introduced sine and cosine transforms (which correspond to the imaginary and real components of the modern Fourier transform) in his study of heat transfer, where Gaussian functions appear as solutions of the heat equation.
The Fourier transform can be formally defined as an improper Riemann integral, making it an integral transform, although this definition is not suitable for many applications requiring a more sophisticated integration theory.[note 1] For example, many relatively simple applications use the Dirac delta function, which can be treated formally as if it were a function, but the justification requires a mathematically more sophisticated viewpoint.[note 2]
The Fourier transform can also be generalized to functions of several variables on Euclidean space, sending a function of 3-dimensional 'position space' to a function of 3-dimensional momentum (or a function of space and time to a function of 4-momentum). This idea makes the spatial Fourier transform very natural in the study of waves, as well as in quantum mechanics, where it is important to be able to represent wave solutions as functions of either position or momentum and sometimes both. In general, functions to which Fourier methods are applicable are complex-valued, and possibly vector-valued.[note 3] Still further generalization is possible to functions on groups, which, besides the original Fourier transform on R or Rn, notably includes the discrete-time Fourier transform (DTFT, group = Z), the discrete Fourier transform (DFT, group = Z mod N) and the Fourier series or circular Fourier transform (group = S1, the unit circle ≈ closed finite interval with endpoints identified). The latter is routinely employed to handle periodic functions. The fast Fourier transform (FFT) is an algorithm for computing the DFT.
Definition
The Fourier transform is an analysis process, decomposing a complex-valued function $f(x)$ into its constituent frequencies and their amplitudes. The inverse process is synthesis, which recreates $f(x)$ from its transform.
We can start with an analogy, the Fourier series, which analyzes $f(x)$ over a bounded interval $x \in [-P/2, P/2]$, for some positive real number $P$. The constituent frequencies are a discrete set of harmonics at frequencies $n/P$, whose amplitude and phase are given by the analysis formula: $$c_n = \frac{1}{P}\int_{-P/2}^{P/2} f(x)\, e^{-i 2\pi \frac{n}{P} x}\, dx.$$ The actual Fourier series is the synthesis formula: $$f(x) = \sum_{n=-\infty}^{\infty} c_n\, e^{i 2\pi \frac{n}{P} x}, \quad x \in [-P/2, P/2].$$ On an unbounded interval, $P \to \infty$, the constituent frequencies are a continuum: $n/P \to \xi \in \mathbb{R}$,[1][2][3] and $c_n$ is replaced by a function:[4]
$$\hat{f}(\xi) = \int_{-\infty}^{\infty} f(x)\, e^{-i 2\pi \xi x}\, dx \qquad \text{(Eq.1)}$$
Evaluating the Fourier transform for all values of $\xi$ produces the frequency-domain function. Though, in general, the integral can diverge at some frequencies, if $f(x)$ decays with all derivatives as $x \to \pm\infty$, then $\hat f(\xi)$ converges for all frequencies and, by the Riemann–Lebesgue lemma, $\hat f$ also decays with all derivatives.
The complex number $\hat f(\xi)$, in polar coordinates, conveys both amplitude and phase of frequency $\xi$. The intuitive interpretation of Eq.1 is that the effect of multiplying $f(x)$ by $e^{-i 2\pi \xi x}$ is to subtract $\xi$ from every frequency component of function $f(x)$.[note 4] Only the component that was at frequency $\xi$ can produce a non-zero value of the infinite integral, because (at least formally) all the other shifted components are oscillatory and integrate to zero. (see § Example)
The corresponding synthesis formula is:
$$f(x) = \int_{-\infty}^{\infty} \hat f(\xi)\, e^{i 2\pi \xi x}\, d\xi, \quad \forall x \in \mathbb{R} \qquad \text{(Eq.2)}$$
Eq.2 is a representation of $f(x)$ as a weighted summation of complex exponential functions.
This is also known as the Fourier inversion theorem, and was first introduced in Fourier's Analytical Theory of Heat.[5][6][7][8]
The functions $f$ and $\hat f$ are referred to as a Fourier transform pair.[9] A common notation for designating transform pairs is:[10] $f(x) \longleftrightarrow \hat f(\xi)$.
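The pair relationship can be illustrated numerically. The sketch below (Python with NumPy; the helper name `ft` and the grids are purely illustrative assumptions, not part of the article) approximates Eq.1 by a Riemann sum and confirms that the Gaussian $e^{-\pi x^2}$ reproduces itself:

```python
import numpy as np

def ft(f, xi, x):
    """Riemann-sum approximation of Eq.1 on the sample grid x."""
    dx = x[1] - x[0]
    return np.exp(-2j * np.pi * np.outer(xi, x)) @ f(x) * dx

x = np.linspace(-10.0, 10.0, 4001)          # wide enough that the integrand has decayed
xi = np.linspace(-3.0, 3.0, 13)
gauss = lambda t: np.exp(-np.pi * t**2)

numeric = ft(gauss, xi, x)
analytic = gauss(xi)                        # the Gaussian is its own Fourier transform
print(np.max(np.abs(numeric - analytic)))   # near machine precision: only quadrature error remains
```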
Lebesgue integrable functions
A measurable function $f : \mathbb{R} \to \mathbb{C}$ is called (Lebesgue) integrable if the Lebesgue integral of its absolute value is finite: $\|f\|_1 = \int_{-\infty}^{\infty} |f(x)|\, dx < \infty.$ For a Lebesgue integrable function $f$ the Fourier transform is defined by Eq.1.[11] The integral Eq.1 is well-defined for all $\xi$ because of the assumption $\|f\|_1 < \infty$. (It can be shown that the function $\hat f$ is bounded and uniformly continuous in the frequency domain, and moreover, by the Riemann–Lebesgue lemma, it is zero at infinity.)
The space $L^1(\mathbb{R})$ is the space of measurable functions for which the norm $\|f\|_1$ is finite, modulo the equivalence relation of equality almost everywhere. The Fourier transform is one-to-one on $L^1(\mathbb{R})$. However, there is no easy characterization of the image, and thus no easy characterization of the inverse transform. In particular, Eq.2 is no longer valid, as it was stated only under the hypothesis that $f(x)$ decayed with all derivatives.
Moreover, while Eq.1 defines the Fourier transform for (complex-valued) functions in $L^1(\mathbb{R})$, it is easy to see that it is not well-defined for other integrability classes, most importantly the space of square-integrable functions $L^2(\mathbb{R})$. For example, the function $f(x) = (1+x^2)^{-1/2}$ is in $L^2$ but not $L^1$, so the integral Eq.1 diverges. However, the Fourier transform on the dense subspace $L^1(\mathbb{R}) \cap L^2(\mathbb{R})$ admits a unique continuous extension to a unitary operator on $L^2(\mathbb{R})$. This extension is important in part because the Fourier transform preserves the space $L^2(\mathbb{R})$. That is, unlike the case of $L^1$, both the Fourier transform and its inverse act on the same function space $L^2(\mathbb{R})$.
In such cases, the Fourier transform can be obtained explicitly by regularizing the integral, and then passing to a limit. In practice, the integral is often regarded as an improper integral instead of a proper Lebesgue integral, but sometimes for convergence one needs to use a weak limit or principal value instead of the (pointwise) limits implicit in an improper integral. Titchmarsh (1986) and Dym & McKean (1985) each give three rigorous ways of extending the Fourier transform to square integrable functions using this procedure. A general principle in working with the Fourier transform is that Gaussians are dense in $L^2(\mathbb{R})$, and the various features of the Fourier transform, such as its unitarity, are easily inferred for Gaussians. Many of the properties of the Fourier transform can then be proven from two facts about Gaussians:[12]
- that the Gaussian function $e^{-\pi x^2}$ is its own Fourier transform; and
- that the Gaussian integral $\int_{-\infty}^{\infty} e^{-\pi x^2}\, dx = 1.$
A feature of the Fourier transform is that it is a homomorphism of Banach algebras from $L^1(\mathbb{R})$ equipped with the convolution operation to the Banach algebra of continuous functions under the $L^\infty$ (supremum) norm. The conventions chosen in this article are those of harmonic analysis, and are characterized as the unique conventions such that the Fourier transform is both unitary on L2 and an algebra homomorphism from L1 to L∞, without renormalizing the Lebesgue measure.[13]
Angular frequency (ω)
When the independent variable ($x$) represents time (often denoted by $t$), the transform variable ($\xi$) represents frequency (often denoted by $f$). For example, if time is measured in seconds, then frequency is in hertz. The Fourier transform can also be written in terms of angular frequency, $\omega = 2\pi\xi$, whose units are radians per second.
The substitution $\xi = \frac{\omega}{2\pi}$ into Eq.1 produces this convention (with the transformed function relabeled accordingly). Unlike the Eq.1 definition, the Fourier transform is no longer a unitary transformation, and there is less symmetry between the formulas for the transform and its inverse. Those properties are restored by splitting the factor $2\pi$ evenly between the transform and its inverse, which leads to another convention. Variations of all three conventions can be created by conjugating the complex-exponential kernel of both the forward and the reverse transform. The signs must be opposites.
Convention | Normalization | Transform and inverse
---|---|---
ordinary frequency ξ (Hz) | unitary | $\hat f_1(\xi) = \int_{-\infty}^{\infty} f(x)\, e^{-i 2\pi \xi x}\, dx, \qquad f(x) = \int_{-\infty}^{\infty} \hat f_1(\xi)\, e^{i 2\pi \xi x}\, d\xi$
angular frequency ω (rad/s) | unitary | $\hat f_2(\omega) = \frac{1}{\sqrt{2\pi}} \int_{-\infty}^{\infty} f(x)\, e^{-i \omega x}\, dx, \qquad f(x) = \frac{1}{\sqrt{2\pi}} \int_{-\infty}^{\infty} \hat f_2(\omega)\, e^{i \omega x}\, d\omega$
angular frequency ω (rad/s) | non-unitary | $\hat f_3(\omega) = \int_{-\infty}^{\infty} f(x)\, e^{-i \omega x}\, dx, \qquad f(x) = \frac{1}{2\pi} \int_{-\infty}^{\infty} \hat f_3(\omega)\, e^{i \omega x}\, d\omega$
Background
[edit]History
[edit]In 1822, Fourier claimed (see Joseph Fourier § The Analytic Theory of Heat) that any function, whether continuous or discontinuous, can be expanded into a series of sines.[14] That important work was corrected and expanded upon by others to provide the foundation for the various forms of the Fourier transform used since.
Complex sinusoids
[edit]In general, the coefficients are complex numbers, which have two equivalent forms (see Euler's formula):
The product with (Eq.2) has these forms:
It is noteworthy how easily the product was simplified using the polar form, and how easily the rectangular form was deduced by an application of Euler's formula.
Negative frequency
[edit]Euler's formula introduces the possibility of negative And Eq.1 is defined Only certain complex-valued have transforms (See Analytic signal. A simple example is ) But negative frequency is necessary to characterize all other complex-valued found in signal processing, partial differential equations, radar, nonlinear optics, quantum mechanics, and others.
For a real-valued Eq.1 has the symmetry property (see § Conjugation below). This redundancy enables Eq.2 to distinguish from But of course it cannot tell us the actual sign of because and are indistinguishable on just the real numbers line.
Fourier transform for periodic functions
[edit]The Fourier transform of a periodic function cannot be defined using the integral formula directly. In order for integral in Eq.1 to be defined the function must be absolutely integrable. Instead it is common to use Fourier series. It is possible to extend the definition to include periodic functions by viewing them as tempered distributions.
This makes it possible to see a connection between the Fourier series and the Fourier transform for periodic functions that have a convergent Fourier series. If $f(x)$ is a periodic function, with period $P$, that has a convergent Fourier series, then: $$\hat f(\xi) = \sum_{n=-\infty}^{\infty} c_n\, \delta\!\left(\xi - \tfrac{n}{P}\right),$$ where $c_n$ are the Fourier series coefficients of $f$, and $\delta$ is the Dirac delta function. In other words, the Fourier transform is a Dirac comb function whose teeth are multiplied by the Fourier series coefficients.
Sampling the Fourier transform
[edit]The Fourier transform of an integrable function can be sampled at regular intervals of arbitrary length These samples can be deduced from one cycle of a periodic function which has Fourier series coefficients proportional to those samples by the Poisson summation formula:
The integrability of ensures the periodic summation converges. Therefore, the samples can be determined by Fourier series analysis:
When has compact support, has a finite number of terms within the interval of integration. When does not have compact support, numerical evaluation of requires an approximation, such as tapering or truncating the number of terms.
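As a concrete check of this relationship (a sketch assuming NumPy and a Gaussian test function, whose transform is known in closed form), the periodic summation of $f$ can be built explicitly and its Fourier-series analysis compared with samples of $\hat f$:

```python
import numpy as np

P = 4.0                                     # arbitrary period for the periodic summation
x = np.linspace(-P / 2, P / 2, 2000, endpoint=False)   # one period
dx = x[1] - x[0]
f = lambda t: np.exp(-np.pi * t**2)         # f̂(ξ) = e^{-πξ²} is known in closed form

# Periodic summation: only a few shifted copies matter for a fast-decaying f.
f_P = sum(f(x + m * P) for m in range(-5, 6))

k = np.arange(-4, 5)
# Fourier-series analysis of f_P over one period; by Poisson summation these
# coefficients equal the samples f̂(k/P).
samples = np.exp(-2j * np.pi * np.outer(k, x) / P) @ f_P * dx
print(np.max(np.abs(samples - np.exp(-np.pi * (k / P)**2))))   # near machine precision
```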
Units
[edit]The frequency variable must have inverse units to the units of the original function's domain (typically named or ). For example, if is measured in seconds, should be in cycles per second or hertz. If the scale of time is in units of seconds, then another Greek letter is typically used instead to represent angular frequency (where ) in units of radians per second. If using for units of length, then must be in inverse length, e.g., wavenumbers. That is to say, there are two versions of the real line: one which is the range of and measured in units of and the other which is the range of and measured in inverse units to the units of These two distinct versions of the real line cannot be equated with each other. Therefore, the Fourier transform goes from one space of functions to a different space of functions: functions which have a different domain of definition.
In general, must always be taken to be a linear form on the space of its domain, which is to say that the second real line is the dual space of the first real line. See the article on linear algebra for a more formal explanation and for more details. This point of view becomes essential in generalizations of the Fourier transform to general symmetry groups, including the case of Fourier series.
That there is no one preferred way (often, one says "no canonical way") to compare the two versions of the real line which are involved in the Fourier transform—fixing the units on one line does not force the scale of the units on the other line—is the reason for the plethora of rival conventions on the definition of the Fourier transform. The various definitions resulting from different choices of units differ by various constants.
In other conventions, the Fourier transform has i in the exponent instead of −i, and vice versa for the inversion formula. This convention is common in modern physics[15] and is the default for Wolfram Alpha, and does not mean that the frequency has become negative, since there is no canonical definition of positivity for frequency of a complex wave. It simply means that is the amplitude of the wave instead of the wave (the former, with its minus sign, is often seen in the time dependence for Sinusoidal plane-wave solutions of the electromagnetic wave equation, or in the time dependence for quantum wave functions). Many of the identities involving the Fourier transform remain valid in those conventions, provided all terms that explicitly involve i have it replaced by −i. In Electrical engineering the letter j is typically used for the imaginary unit instead of i because i is used for current.
When using dimensionless units, the constant factors might not even be written in the transform definition. For instance, in probability theory, the characteristic function Φ of the probability density function f of a random variable X of continuous type is defined without a negative sign in the exponential, and since the units of x are ignored, there is no 2π either:
(In probability theory, and in mathematical statistics, the use of the Fourier—Stieltjes transform is preferred, because so many random variables are not of continuous type, and do not possess a density function, and one must treat not functions but distributions, i.e., measures which possess "atoms".)
From the higher point of view of group characters, which is much more abstract, all these arbitrary choices disappear, as will be explained in the later section of this article, which treats the notion of the Fourier transform of a function on a locally compact Abelian group.
Properties
Let $f(x)$ and $g(x)$ represent integrable functions Lebesgue-measurable on the real line satisfying $\int_{-\infty}^{\infty} |f(x)|\, dx < \infty.$ We denote the Fourier transforms of these functions as $\hat f(\xi)$ and $\hat g(\xi)$ respectively.
Basic properties
[edit]The Fourier transform has the following basic properties:[16]
Linearity
$$a\, f(x) + b\, g(x) \longleftrightarrow a\, \hat f(\xi) + b\, \hat g(\xi)$$
Time shifting
$$f(x - x_0) \longleftrightarrow e^{-i 2\pi x_0 \xi}\, \hat f(\xi)$$
Frequency shifting
$$e^{i 2\pi \xi_0 x} f(x) \longleftrightarrow \hat f(\xi - \xi_0)$$
Time scaling
For a non-zero real number $a$, $$f(ax) \longleftrightarrow \frac{1}{|a|}\, \hat f\!\left(\frac{\xi}{a}\right).$$ The case $a = -1$ leads to the time-reversal property: $f(-x) \longleftrightarrow \hat f(-\xi).$
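These properties are easy to verify numerically; the sketch below (NumPy; illustrative names and a Gaussian test function, an assumption of the sketch) checks the time-shifting and time-scaling rules against direct quadrature of Eq.1:

```python
import numpy as np

def ft(f, xi, x):
    dx = x[1] - x[0]
    return np.exp(-2j * np.pi * np.outer(xi, x)) @ f(x) * dx

x = np.linspace(-20.0, 20.0, 8001)
xi = np.linspace(-2.0, 2.0, 9)
f = lambda t: np.exp(-np.pi * t**2)
fhat = lambda s: np.exp(-np.pi * s**2)      # known transform of f

x0, a = 1.5, 0.5
shift = ft(lambda t: f(t - x0), xi, x)
scale = ft(lambda t: f(a * t), xi, x)

print(np.max(np.abs(shift - np.exp(-2j * np.pi * x0 * xi) * fhat(xi))))   # time shifting
print(np.max(np.abs(scale - fhat(xi / a) / abs(a))))                      # time scaling
```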
Symmetry
[edit]When the real and imaginary parts of a complex function are decomposed into their even and odd parts, there are four components, denoted below by the subscripts RE, RO, IE, and IO. And there is a one-to-one mapping between the four components of a complex time function and the four components of its complex frequency transform:[17]
From this, various relationships are apparent, for example:
- The transform of a real-valued function is the conjugate symmetric function Conversely, a conjugate symmetric transform implies a real-valued time-domain.
- The transform of an imaginary-valued function is the conjugate antisymmetric function and the converse is true.
- The transform of a conjugate symmetric function is the real-valued function and the converse is true.
- The transform of a conjugate antisymmetric function is the imaginary-valued function and the converse is true.
Conjugation
$$f^*(x) \longleftrightarrow \left(\hat f(-\xi)\right)^*$$ (Note: the ∗ denotes complex conjugation.)
In particular, if $f$ is real, then $\hat f$ is even symmetric (aka a Hermitian function): $\hat f(-\xi) = \left(\hat f(\xi)\right)^*.$
And if $f$ is purely imaginary, then $\hat f$ is odd symmetric: $\hat f(-\xi) = -\left(\hat f(\xi)\right)^*.$
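A quick numerical illustration, using the discrete transform in `numpy.fft` as a stand-in for Eq.1 (the DFT of a real-valued sequence obeys the same conjugate symmetry):

```python
import numpy as np

rng = np.random.default_rng(0)
f = rng.standard_normal(64)                 # an arbitrary real-valued signal
F = np.fft.fft(f)

# Hermitian symmetry: F[-k] == conj(F[k])  (indices taken mod N)
print(np.allclose(F[(-np.arange(64)) % 64], np.conj(F)))   # True
```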
Real and imaginary parts
[edit]
Zero frequency component
Substituting $\xi = 0$ in the definition, we obtain: $$\hat f(0) = \int_{-\infty}^{\infty} f(x)\, dx.$$
The integral of $f$ over its domain is known as the average value or DC bias of the function.
Uniform continuity and the Riemann–Lebesgue lemma
The Fourier transform may be defined in some cases for non-integrable functions, but the Fourier transforms of integrable functions have several strong properties.
The Fourier transform $\hat f$ of any integrable function $f$ is uniformly continuous and[18] $\|\hat f\|_\infty \le \|f\|_1.$
By the Riemann–Lebesgue lemma,[19] $\hat f(\xi) \to 0$ as $|\xi| \to \infty.$
However, $\hat f$ need not be integrable. For example, the Fourier transform of the rectangular function, which is integrable, is the sinc function, which is not Lebesgue integrable, because its improper integrals behave analogously to the alternating harmonic series, in converging to a sum without being absolutely convergent.
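A sketch of this example (NumPy; note that `np.sinc` uses the normalized convention $\operatorname{sinc}(t) = \sin(\pi t)/(\pi t)$, which is what the unit rectangle transforms to under this article's convention):

```python
import numpy as np

def ft(f, xi, x):
    dx = x[1] - x[0]
    return np.exp(-2j * np.pi * np.outer(xi, x)) @ f(x) * dx

x = np.linspace(-1.0, 1.0, 20001)           # rect is supported on [-1/2, 1/2]
xi = np.linspace(-6.0, 6.0, 25)
rect = lambda t: np.where(np.abs(t) <= 0.5, 1.0, 0.0)

# ~1e-4 discrepancy, limited by the jump discontinuities of rect at ±1/2
print(np.max(np.abs(ft(rect, xi, x) - np.sinc(xi))))
```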
It is not generally possible to write the inverse transform as a Lebesgue integral. However, when both and are integrable, the inverse equality holds for almost every x. As a result, the Fourier transform is injective on L1(R).
Plancherel theorem and Parseval's theorem
Let f(x) and g(x) be integrable, and let f̂(ξ) and ĝ(ξ) be their Fourier transforms. If f(x) and g(x) are also square-integrable, then the Parseval formula follows:[20] $$\int_{-\infty}^{\infty} f(x)\, \overline{g(x)}\, dx = \int_{-\infty}^{\infty} \hat f(\xi)\, \overline{\hat g(\xi)}\, d\xi,$$ where the bar denotes complex conjugation.
The Plancherel theorem, which follows from the above, states that[21] $$\int_{-\infty}^{\infty} |f(x)|^2\, dx = \int_{-\infty}^{\infty} |\hat f(\xi)|^2\, d\xi.$$
Plancherel's theorem makes it possible to extend the Fourier transform, by a continuity argument, to a unitary operator on L2(R). On L1(R) ∩ L2(R), this extension agrees with original Fourier transform defined on L1(R), thus enlarging the domain of the Fourier transform to L1(R) + L2(R) (and consequently to Lp(R) for 1 ≤ p ≤ 2). Plancherel's theorem has the interpretation in the sciences that the Fourier transform preserves the energy of the original quantity. The terminology of these formulas is not quite standardised. Parseval's theorem was proved only for Fourier series, and was first proved by Lyapunov. But Parseval's formula makes sense for the Fourier transform as well, and so even though in the context of the Fourier transform it was proved by Plancherel, it is still often referred to as Parseval's formula, or Parseval's relation, or even Parseval's theorem.
See Pontryagin duality for a general formulation of this concept in the context of locally compact abelian groups.
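A crude numerical sanity check of the Plancherel identity (illustrative quadrature sketch, assuming NumPy; not a rigorous computation):

```python
import numpy as np

def ft(f, xi, x):
    dx = x[1] - x[0]
    return np.exp(-2j * np.pi * np.outer(xi, x)) @ f(x) * dx

x = np.linspace(-10.0, 10.0, 2001)
xi = np.linspace(-4.0, 4.0, 801)
f = lambda t: np.exp(-np.pi * t**2) * (1 + np.cos(3 * t))   # a smooth, rapidly decaying test function

energy_time = np.trapz(np.abs(f(x))**2, x)
energy_freq = np.trapz(np.abs(ft(f, xi, x))**2, xi)
print(energy_time, energy_freq)             # the two printed values agree (Plancherel)
```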
Convolution theorem
[edit]The Fourier transform translates between convolution and multiplication of functions. If f(x) and g(x) are integrable functions with Fourier transforms f̂(ξ) and ĝ(ξ) respectively, then the Fourier transform of the convolution is given by the product of the Fourier transforms f̂(ξ) and ĝ(ξ) (under other conventions for the definition of the Fourier transform a constant factor may appear).
This means that if: $$h(x) = (f * g)(x) = \int_{-\infty}^{\infty} f(y)\, g(x - y)\, dy,$$ where ∗ denotes the convolution operation, then: $$\hat h(\xi) = \hat f(\xi)\, \hat g(\xi).$$
In linear time invariant (LTI) system theory, it is common to interpret g(x) as the impulse response of an LTI system with input f(x) and output h(x), since substituting the unit impulse for f(x) yields h(x) = g(x). In this case, ĝ(ξ) represents the frequency response of the system.
Conversely, if f(x) can be decomposed as the product of two square integrable functions p(x) and q(x), then the Fourier transform of f(x) is given by the convolution of the respective Fourier transforms p̂(ξ) and q̂(ξ).
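The discrete analogue is easy to demonstrate: with `numpy.fft`, circular convolution of two sequences corresponds to pointwise multiplication of their DFTs (a sketch of the discrete counterpart, not of the continuous statement above):

```python
import numpy as np

rng = np.random.default_rng(1)
f = rng.standard_normal(128)
g = rng.standard_normal(128)

# Circular convolution computed directly ...
h = np.array([sum(f[m] * g[(n - m) % 128] for m in range(128)) for n in range(128)])
# ... equals the inverse DFT of the product of the DFTs.
h_via_fft = np.fft.ifft(np.fft.fft(f) * np.fft.fft(g)).real

print(np.allclose(h, h_via_fft))            # True
```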
Cross-correlation theorem
[edit]In an analogous manner, it can be shown that if h(x) is the cross-correlation of f(x) and g(x): then the Fourier transform of h(x) is:
As a special case, the autocorrelation of function f(x) is: for which
Differentiation
Suppose f(x) is an absolutely continuous differentiable function, and both f and its derivative f′ are integrable. Then the Fourier transform of the derivative is given by $$\widehat{f'}(\xi) = i 2\pi \xi\, \hat f(\xi).$$ More generally, the Fourier transform of the nth derivative f(n) is given by $$\widehat{f^{(n)}}(\xi) = (i 2\pi \xi)^n\, \hat f(\xi).$$
Analogously, $\mathcal{F}\{x^n f(x)\}(\xi) = \left(\tfrac{i}{2\pi}\right)^{n} \tfrac{d^n}{d\xi^n} \hat f(\xi)$, so multiplication by $x$ in the time domain corresponds (up to a constant) to differentiation in the frequency domain.
By applying the Fourier transform and using these formulas, some ordinary differential equations can be transformed into algebraic equations, which are much easier to solve. These formulas also give rise to the rule of thumb "f(x) is smooth if and only if f̂(ξ) quickly falls to 0 for |ξ| → ∞." By using the analogous rules for the inverse Fourier transform, one can also say "f(x) quickly falls to 0 for |x| → ∞ if and only if f̂(ξ) is smooth."
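A rough quadrature check of the differentiation rule (NumPy; illustrative helper and test function):

```python
import numpy as np

def ft(vals, xi, x):
    dx = x[1] - x[0]
    return np.exp(-2j * np.pi * np.outer(xi, x)) @ vals * dx

x = np.linspace(-10.0, 10.0, 4001)
xi = np.linspace(-2.0, 2.0, 9)

f = np.exp(-np.pi * x**2)
f_prime = -2 * np.pi * x * np.exp(-np.pi * x**2)    # exact derivative of f

lhs = ft(f_prime, xi, x)                    # transform of the derivative
rhs = 1j * 2 * np.pi * xi * ft(f, xi, x)    # i2πξ times the transform of f
print(np.max(np.abs(lhs - rhs)))            # close to zero
```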
Eigenfunctions
The Fourier transform is a linear transform which has eigenfunctions $\psi$ obeying $\mathcal{F}[\psi] = \lambda \psi$, with $\lambda \in \{1, -i, -1, +i\}$.
A set of eigenfunctions is found by noting that the homogeneous differential equation leads to eigenfunctions of the Fourier transform as long as the form of the equation remains invariant under Fourier transform.[note 5] In other words, every solution and its Fourier transform obey the same equation. Assuming uniqueness of the solutions, every solution must therefore be an eigenfunction of the Fourier transform. The form of the equation remains unchanged under Fourier transform if can be expanded in a power series in which for all terms the same factor of either one of arises from the factors introduced by the differentiation rules upon Fourier transforming the homogeneous differential equation because this factor may then be cancelled. The simplest allowable leads to the standard normal distribution.[22]
More generally, a set of eigenfunctions is also found by noting that the differentiation rules imply that the ordinary differential equation with constant and being a non-constant even function remains invariant in form when applying the Fourier transform to both sides of the equation. The simplest example is provided by which is equivalent to considering the Schrödinger equation for the quantum harmonic oscillator.[23] The corresponding solutions provide an important choice of an orthonormal basis for L2(R) and are given by the "physicist's" Hermite functions. Equivalently one may use where Hen(x) are the "probabilist's" Hermite polynomials, defined as
Under this convention for the Fourier transform, we have that $\hat\psi_n(\xi) = (-i)^n\, \psi_n(\xi).$
In other words, the Hermite functions form a complete orthonormal system of eigenfunctions for the Fourier transform on L2(R).[16][24] However, this choice of eigenfunctions is not unique. Because of there are only four different eigenvalues of the Fourier transform (the fourth roots of unity ±1 and ±i) and any linear combination of eigenfunctions with the same eigenvalue gives another eigenfunction.[25] As a consequence of this, it is possible to decompose L2(R) as a direct sum of four spaces H0, H1, H2, and H3 where the Fourier transform acts on Hek simply by multiplication by ik.
Since the complete set of Hermite functions ψn provides a resolution of the identity they diagonalize the Fourier operator, i.e. the Fourier transform can be represented by such a sum of terms weighted by the above eigenvalues, and these sums can be explicitly summed:
This approach to define the Fourier transform was first proposed by Norbert Wiener.[26] Among other properties, Hermite functions decrease exponentially fast in both frequency and time domains, and they are thus used to define a generalization of the Fourier transform, namely the fractional Fourier transform used in time–frequency analysis.[27] In physics, this transform was introduced by Edward Condon.[28] This change of basis functions becomes possible because the Fourier transform is a unitary transform when using the right conventions. Consequently, under the proper conditions it may be expected to result from a self-adjoint generator via[29]
The operator is the number operator of the quantum harmonic oscillator written as[30][31]
It can be interpreted as the generator of fractional Fourier transforms for arbitrary values of t, and of the conventional continuous Fourier transform for the particular value with the Mehler kernel implementing the corresponding active transform. The eigenfunctions of are the Hermite functions which are therefore also eigenfunctions of
Upon extending the Fourier transform to distributions the Dirac comb is also an eigenfunction of the Fourier transform.
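The eigenfunction property can be observed numerically. The sketch below builds $H_n(\sqrt{2\pi}\,x)\,e^{-\pi x^2}$, a scaling of the physicists' Hermite functions adapted to this article's ordinary-frequency convention (the scaling and helper names are assumptions of the sketch), and checks that its transform is $(-i)^n$ times itself:

```python
import numpy as np
from numpy.polynomial.hermite import hermval    # physicists' Hermite polynomials H_n

def ft(f, xi, x):
    dx = x[1] - x[0]
    return np.exp(-2j * np.pi * np.outer(xi, x)) @ f(x) * dx

n = 3
psi = lambda t: hermval(np.sqrt(2 * np.pi) * t, [0] * n + [1]) * np.exp(-np.pi * t**2)

x = np.linspace(-8.0, 8.0, 4001)
xi = np.linspace(-2.0, 2.0, 9)
print(np.max(np.abs(ft(psi, xi, x) - (-1j)**n * psi(xi))))   # close to zero: eigenvalue (-i)^n
```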
Inversion and periodicity
[edit]Under suitable conditions on the function , it can be recovered from its Fourier transform . Indeed, denoting the Fourier transform operator by , so , then for suitable functions, applying the Fourier transform twice simply flips the function: , which can be interpreted as "reversing time". Since reversing time is two-periodic, applying this twice yields , so the Fourier transform operator is four-periodic, and similarly the inverse Fourier transform can be obtained by applying the Fourier transform three times: . In particular the Fourier transform is invertible (under suitable conditions).
More precisely, defining the parity operator such that , we have: These equalities of operators require careful definition of the space of functions in question, defining equality of functions (equality at every point? equality almost everywhere?) and defining equality of operators – that is, defining the topology on the function space and operator space in question. These are not true for all functions, but are true under various conditions, which are the content of the various forms of the Fourier inversion theorem.
This fourfold periodicity of the Fourier transform is similar to a rotation of the plane by 90°, particularly as the two-fold iteration yields a reversal, and in fact this analogy can be made precise. While the Fourier transform can simply be interpreted as switching the time domain and the frequency domain, with the inverse Fourier transform switching them back, more geometrically it can be interpreted as a rotation by 90° in the time–frequency domain (considering time as the x-axis and frequency as the y-axis), and the Fourier transform can be generalized to the fractional Fourier transform, which involves rotations by other angles. This can be further generalized to linear canonical transformations, which can be visualized as the action of the special linear group SL2(R) on the time–frequency plane, with the preserved symplectic form corresponding to the uncertainty principle, below. This approach is particularly studied in signal processing, under time–frequency analysis.
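The statement that two applications of the transform reverse the function can be checked with the same kind of quadrature sketch (NumPy; illustrative names), using an asymmetric test function:

```python
import numpy as np

def ft(vals, xi, x):
    dx = x[1] - x[0]
    return np.exp(-2j * np.pi * np.outer(xi, x)) @ vals * dx

x = np.linspace(-8.0, 8.0, 1601)            # the same symmetric grid is reused for x and ξ
f = np.exp(-np.pi * (x - 1.0)**2)           # asymmetric: a Gaussian centred at x = 1

twice = ft(ft(f, x, x), x, x)               # (F² f)(x)
print(np.max(np.abs(twice - f[::-1])))      # F² f(x) = f(−x); reversing the symmetric grid gives f(−x)
```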
Connection with the Heisenberg group
[edit]The Heisenberg group is a certain group of unitary operators on the Hilbert space L2(R) of square integrable complex valued functions f on the real line, generated by the translations (Ty f)(x) = f (x + y) and multiplication by ei2πξx, (Mξ f)(x) = ei2πξx f (x). These operators do not commute, as their (group) commutator is which is multiplication by the constant (independent of x) ei2πξy ∈ U(1) (the circle group of unit modulus complex numbers). As an abstract group, the Heisenberg group is the three-dimensional Lie group of triples (x, ξ, z) ∈ R2 × U(1), with the group law
Denote the Heisenberg group by H1. The above procedure describes not only the group structure, but also a standard unitary representation of H1 on a Hilbert space, which we denote by ρ : H1 → B(L2(R)). Define the linear automorphism of R2 by so that J2 = −I. This J can be extended to a unique automorphism of H1:
According to the Stone–von Neumann theorem, the unitary representations ρ and ρ ∘ j are unitarily equivalent, so there is a unique intertwiner W ∈ U(L2(R)) such that This operator W is the Fourier transform.
Many of the standard properties of the Fourier transform are immediate consequences of this more general framework.[32] For example, the square of the Fourier transform, W2, is an intertwiner associated with J2 = −I, and so we have (W2f)(x) = f (−x) is the reflection of the original function f.
Complex domain
[edit]The integral for the Fourier transform can be studied for complex values of its argument ξ. Depending on the properties of f, this might not converge off the real axis at all, or it might converge to a complex analytic function for all values of ξ = σ + iτ, or something in between.[33]
The Paley–Wiener theorem says that f is smooth (i.e., n-times differentiable for all positive integers n) and compactly supported if and only if f̂ (σ + iτ) is a holomorphic function for which there exists a constant a > 0 such that for any integer n ≥ 0, for some constant C. (In this case, f is supported on [−a, a].) This can be expressed by saying that f̂ is an entire function which is rapidly decreasing in σ (for fixed τ) and of exponential growth in τ (uniformly in σ).[34]
(If f is not smooth, but only L2, the statement still holds provided n = 0.[35]) The space of such functions of a complex variable is called the Paley—Wiener space. This theorem has been generalised to semisimple Lie groups.[36]
If f is supported on the half-line t ≥ 0, then f is said to be "causal" because the impulse response function of a physically realisable filter must have this property, as no effect can precede its cause. Paley and Wiener showed that then f̂ extends to a holomorphic function on the complex lower half-plane τ < 0 which tends to zero as τ goes to infinity.[37] The converse is false and it is not known how to characterise the Fourier transform of a causal function.[38]
Laplace transform
[edit]The Fourier transform f̂(ξ) is related to the Laplace transform F(s), which is also used for the solution of differential equations and the analysis of filters.
It may happen that a function f for which the Fourier integral does not converge on the real axis at all, nevertheless has a complex Fourier transform defined in some region of the complex plane.
For example, if f(t) is of exponential growth, i.e., for some constants C, a ≥ 0, then[39] convergent for all 2πτ < −a, is the two-sided Laplace transform of f.
The more usual version ("one-sided") of the Laplace transform is
If f is also causal, and analytical, then: Thus, extending the Fourier transform to the complex domain means it includes the Laplace transform as a special case in the case of causal functions—but with the change of variable s = i2πξ.
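For a causal, decaying example the relationship is easy to verify numerically: with $f(t) = e^{-t}$ for $t \ge 0$ (and zero otherwise), the one-sided Laplace transform is $F(s) = 1/(s+1)$, and evaluating it at $s = i2\pi\xi$ reproduces the Fourier transform. A sketch (NumPy; crude rectangle-rule quadrature with illustrative names):

```python
import numpy as np

t = np.linspace(0.0, 40.0, 40001)           # f is causal; e^{-40} is negligible
dt = t[1] - t[0]
f = np.exp(-t)

xi = np.linspace(-2.0, 2.0, 9)
fourier = np.exp(-2j * np.pi * np.outer(xi, t)) @ f * dt    # Eq.1 restricted to t >= 0
laplace_on_axis = 1.0 / (1j * 2 * np.pi * xi + 1.0)         # F(s) = 1/(s+1) at s = i2πξ
print(np.max(np.abs(fourier - laplace_on_axis)))            # ~1e-3: rectangle-rule error at the jump t = 0
```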
From another, perhaps more classical viewpoint, the Laplace transform by its form involves an additional exponential regulating term which lets it converge outside of the imaginary line where the Fourier transform is defined. As such it can converge for at most exponentially divergent series and integrals, whereas the original Fourier decomposition cannot, enabling analysis of systems with divergent or critical elements. Two particular examples from linear signal processing are the construction of allpass filter networks from critical comb and mitigating filters via exact pole-zero cancellation on the unit circle. Such designs are common in audio processing, where highly nonlinear phase response is sought for, as in reverb.
Furthermore, when extended pulselike impulse responses are sought for signal processing work, the easiest way to produce them is to have one circuit which produces a divergent time response, and then to cancel its divergence through a delayed opposite and compensatory response. There, only the delay circuit in-between admits a classical Fourier description, which is critical. Both the circuits to the side are unstable, and do not admit a convergent Fourier decomposition. However, they do admit a Laplace domain description, with identical half-planes of convergence in the complex plane (or in the discrete case, the Z-plane), wherein their effects cancel.
In modern mathematics the Laplace transform is conventionally subsumed under the aegis Fourier methods. Both of them are subsumed by the far more general, and more abstract, idea of harmonic analysis.
Inversion
[edit]Still with , if is complex analytic for a ≤ τ ≤ b, then
by Cauchy's integral theorem. Therefore, the Fourier inversion formula can use integration along different lines, parallel to the real axis.[40]
Theorem: If f(t) = 0 for t < 0, and |f(t)| < Cea|t| for some constants C, a > 0, then for any τ < −a/2π.
This theorem implies the Mellin inversion formula for the Laplace transformation,[39] for any b > a, where F(s) is the Laplace transform of f(t).
The hypotheses can be weakened, as in the results of Carleson and Hunt, to f(t) e−at being L1, provided that f be of bounded variation in a closed neighborhood of t (cf. Dini test), the value of f at t be taken to be the arithmetic mean of the left and right limits, and that the integrals be taken in the sense of Cauchy principal values.[41]
L2 versions of these inversion formulas are also available.[42]
Fourier transform on Euclidean space
[edit]The Fourier transform can be defined in any arbitrary number of dimensions n. As with the one-dimensional case, there are many conventions. For an integrable function f(x), this article takes the definition: where x and ξ are n-dimensional vectors, and x · ξ is the dot product of the vectors. Alternatively, ξ can be viewed as belonging to the dual vector space , in which case the dot product becomes the contraction of x and ξ, usually written as ⟨x, ξ⟩.
All of the basic properties listed above hold for the n-dimensional Fourier transform, as do Plancherel's and Parseval's theorem. When the function is integrable, the Fourier transform is still uniformly continuous and the Riemann–Lebesgue lemma holds.[19]
Uncertainty principle
[edit]Generally speaking, the more concentrated f(x) is, the more spread out its Fourier transform f̂(ξ) must be. In particular, the scaling property of the Fourier transform may be seen as saying: if we squeeze a function in x, its Fourier transform stretches out in ξ. It is not possible to arbitrarily concentrate both a function and its Fourier transform.
The trade-off between the compaction of a function and its Fourier transform can be formalized in the form of an uncertainty principle by viewing a function and its Fourier transform as conjugate variables with respect to the symplectic form on the time–frequency domain: from the point of view of the linear canonical transformation, the Fourier transform is rotation by 90° in the time–frequency domain, and preserves the symplectic form.
Suppose f(x) is an integrable and square-integrable function. Without loss of generality, assume that f(x) is normalized:
It follows from the Plancherel theorem that f̂(ξ) is also normalized.
The spread around x = 0 may be measured by the dispersion about zero[43] defined by $$D_0(f) = \int_{-\infty}^{\infty} x^2\, |f(x)|^2\, dx.$$
In probability terms, this is the second moment of |f(x)|2 about zero.
The uncertainty principle states that, if f(x) is absolutely continuous and the functions x·f(x) and f′(x) are square integrable, then[16] $$D_0(f)\, D_0(\hat f) \ge \frac{1}{16\pi^2}.$$
The equality is attained only in the case where σ > 0 is arbitrary and C1 = 4√2/√σ so that f is L2-normalized.[16] In other words, where f is a (normalized) Gaussian function with variance σ2/2π, centered at zero, and its Fourier transform is a Gaussian function with variance σ−2/2π.
In fact, this inequality implies that: for any x0, ξ0 ∈ R.[44]
In quantum mechanics, the momentum and position wave functions are Fourier transform pairs, up to a factor of the Planck constant. With this constant properly taken into account, the inequality above becomes the statement of the Heisenberg uncertainty principle.[45]
A stronger uncertainty principle is the Hirschman uncertainty principle, which is expressed as: where H(p) is the differential entropy of the probability density function p(x): where the logarithms may be in any base that is consistent. The equality is attained for a Gaussian, as in the previous case.
Sine and cosine transforms
Fourier's original formulation of the transform did not use complex numbers, but rather sines and cosines. Statisticians and others still use this form. An absolutely integrable function f for which Fourier inversion holds can be expanded in terms of genuine frequencies (avoiding negative frequencies, which are sometimes considered hard to interpret physically[46]) λ by
This is called an expansion as a trigonometric integral, or a Fourier integral expansion. The coefficient functions a and b can be found by using variants of the Fourier cosine transform and the Fourier sine transform (the normalisations are, again, not standardised): and
Older literature refers to the two transforms as the Fourier cosine transform, a, and the Fourier sine transform, b.
The function f can be recovered from the sine and cosine transform using together with trigonometric identities. This is referred to as Fourier's integral formula.[39][47][48][49]
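A rough numerical sketch of this recovery (assuming the common normalisation a(λ) = 2∫f(t)cos(2πλt) dt, b(λ) = 2∫f(t)sin(2πλt) dt and the reconstruction f(x) = ∫₀^∞ [a(λ)cos(2πλx) + b(λ)sin(2πλx)] dλ; the test function and the integration cutoff are arbitrary choices):

```python
import numpy as np
from scipy.integrate import quad

def f(t):                      # test function: a shifted two-sided decaying exponential
    return np.exp(-abs(t - 0.3))

def a(lam):                    # Fourier cosine transform, a(lam) = 2 * integral of f(t) cos(2 pi lam t)
    return 2 * quad(lambda t: f(t) * np.cos(2 * np.pi * lam * t), -np.inf, np.inf)[0]

def b(lam):                    # Fourier sine transform,  b(lam) = 2 * integral of f(t) sin(2 pi lam t)
    return 2 * quad(lambda t: f(t) * np.sin(2 * np.pi * lam * t), -np.inf, np.inf)[0]

def reconstruct(x, cutoff=40.0):   # Fourier's integral formula, truncated at a finite frequency
    integrand = lambda lam: (a(lam) * np.cos(2 * np.pi * lam * x)
                             + b(lam) * np.sin(2 * np.pi * lam * x))
    return quad(integrand, 0.0, cutoff, limit=400)[0]

for x in (0.0, 1.0):
    print(x, round(f(x), 4), round(reconstruct(x), 4))   # the two columns agree to roughly 1e-3
```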
Spherical harmonics
Let the set of homogeneous harmonic polynomials of degree k on Rn be denoted by Ak. The set Ak consists of the solid spherical harmonics of degree k. The solid spherical harmonics play a similar role in higher dimensions to the Hermite polynomials in dimension one. Specifically, if f(x) = e^(−π|x|²) P(x) for some P(x) in Ak, then f̂(ξ) = i^(−k) f(ξ). Let the set Hk be the closure in L2(Rn) of linear combinations of functions of the form f(|x|)P(x) where P(x) is in Ak. The space L2(Rn) is then a direct sum of the spaces Hk, and the Fourier transform maps each space Hk to itself; it is possible to characterize the action of the Fourier transform on each space Hk.[19]
Let f(x) = f0(|x|)P(x) (with P(x) in Ak), then where
Here J_((n + 2k − 2)/2) denotes the Bessel function of the first kind with order (n + 2k − 2)/2. When k = 0 this gives a useful formula for the Fourier transform of a radial function.[50] This is essentially the Hankel transform. Moreover, there is a simple recursion relating the cases n + 2 and n,[51] allowing one to compute, e.g., the three-dimensional Fourier transform of a radial function from the one-dimensional one.
Restriction problems
In higher dimensions it becomes interesting to study restriction problems for the Fourier transform. The Fourier transform of an integrable function is continuous and the restriction of this function to any set is defined. But for a square-integrable function the Fourier transform is, in general, just another square-integrable function, defined only almost everywhere. As such, the restriction of the Fourier transform of an L2(Rn) function cannot be defined on sets of measure 0. It is still an active area of study to understand restriction problems in Lp for 1 < p < 2. It is possible in some cases to define the restriction of a Fourier transform to a set S, provided S has non-zero curvature. The case when S is the unit sphere in Rn is of particular interest. In this case the Tomas–Stein restriction theorem states that the restriction of the Fourier transform to the unit sphere in Rn is a bounded operator on Lp provided 1 ≤ p ≤ (2n + 2)/(n + 3).
One notable difference between the Fourier transform in 1 dimension versus higher dimensions concerns the partial sum operator. Consider an increasing collection of measurable sets ER indexed by R ∈ (0,∞), such as balls of radius R centered at the origin, or cubes of side 2R. For a given integrable function f, consider the function fR defined by:
Suppose in addition that f ∈ Lp(Rn). For n = 1 and 1 < p < ∞, if one takes ER = (−R, R), then fR converges to f in Lp as R tends to infinity, by the boundedness of the Hilbert transform. Naively one may hope the same holds true for n > 1. In the case that ER is taken to be a cube with side length R, then convergence still holds. Another natural candidate is the Euclidean ball ER = {ξ : |ξ| < R}. In order for this partial sum operator to converge, it is necessary that the multiplier for the unit ball be bounded in Lp(Rn). For n ≥ 2 it is a celebrated theorem of Charles Fefferman that the multiplier for the unit ball is never bounded unless p = 2.[26] In fact, when p ≠ 2, this shows that not only may fR fail to converge to f in Lp, but for some functions f ∈ Lp(Rn), fR is not even an element of Lp.
Fourier transform on function spaces
The definition of the Fourier transform naturally extends from L1(R) to L1(Rn): for f ∈ L1(Rn) it is given by the same integral formula, and the Riemann–Lebesgue lemma may be formulated as saying that the Fourier transform is a map F : L1(Rn) → L∞(Rn). This operator is bounded, with ‖f̂‖∞ ≤ ‖f‖1, which shows that its operator norm is bounded by 1. The image of L1 is a strict subset of C0(Rn), the space of continuous functions which vanish at infinity.
Similarly to the case of one variable, the Fourier transform can be defined on L2(Rn). Since the space of compactly supported smooth functions C∞c(Rn) is dense in L2(Rn), the Plancherel theorem allows one to extend the definition of the Fourier transform to general functions in L2(Rn) by continuity arguments. The Fourier transform in L2(Rn) is no longer given by an ordinary Lebesgue integral, although it can be computed by an improper integral, i.e. as the limit of the integrals taken over balls |x| ≤ R as R → ∞, where the limit is taken in the L2 sense.[52][53]
Furthermore, F : L2(Rn) → L2(Rn) is a unitary operator.[54] For an operator to be unitary it is sufficient to show that it is bijective and preserves the inner product, so in this case these follow from the Fourier inversion theorem combined with the fact that for any f, g ∈ L2(Rn) we have
In particular, the image of L2(Rn) is itself under the Fourier transform.
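To illustrate the limiting definition above (a sketch; the test function and the truncation radii are arbitrary choices), the function sin(πx)/(πx) is square integrable but not absolutely integrable, and its Fourier transform, the indicator of the interval |ξ| < 1/2, only emerges in the limit of the truncated integrals:

```python
import numpy as np
from scipy.integrate import quad

f = np.sinc            # numpy's sinc(x) = sin(pi x)/(pi x): in L2 but not in L1

def truncated_ft(xi, R):
    """Integral of f(x) e^{-2 pi i x xi} over [-R, R]; the imaginary part vanishes since f is even."""
    return quad(lambda x: f(x) * np.cos(2 * np.pi * xi * x), -R, R, limit=1000)[0]

for xi in (0.0, 0.25, 0.75):
    print(xi, [round(truncated_ft(xi, R), 3) for R in (5, 50, 200)])
# As R grows the values approach 1 for |xi| < 1/2 and 0 for |xi| > 1/2.
```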
On other Lp
For 1 ≤ p ≤ 2, the Fourier transform can be defined on Lp(Rn) by Marcinkiewicz interpolation, which amounts to decomposing such functions into a fat tail part in L2 plus a fat body part in L1. In each of these spaces, the Fourier transform of a function in Lp(Rn) is in Lq(Rn), where q = p/(p − 1) is the Hölder conjugate of p (by the Hausdorff–Young inequality). However, except for p = 2, the image is not easily characterized. Further extensions become more technical. The Fourier transform of functions in Lp for the range 2 < p < ∞ requires the study of distributions.[18] In fact, it can be shown that there are functions in Lp with p > 2 so that the Fourier transform is not defined as a function.[19]
Tempered distributions
One might consider enlarging the domain of the Fourier transform from L1 + L2 by considering generalized functions, or distributions. A distribution on Rn is a continuous linear functional on the space C∞c(Rn) of compactly supported smooth functions, equipped with a suitable topology. The strategy is then to consider the action of the Fourier transform on C∞c(Rn) and pass to distributions by duality. The obstruction to doing this is that the Fourier transform does not map C∞c(Rn) to C∞c(Rn). In fact the Fourier transform of an element in C∞c(Rn) cannot vanish on an open set; see the above discussion on the uncertainty principle.
The Fourier transform can also be defined for tempered distributions, dual to the space of Schwartz functions. A Schwartz function is a smooth function that decays at infinity, along with all of its derivatives. The Fourier transform is an automorphism on the Schwartz space, as a topological vector space, and thus induces an automorphism on its dual, the space of tempered distributions.[19] The tempered distributions include well-behaved functions of polynomial growth, distributions of compact support, as well as all the integrable functions mentioned above.
For the definition of the Fourier transform of a tempered distribution, let f and g be integrable functions, and let f̂ and ĝ be their Fourier transforms respectively. Then the Fourier transform obeys the following multiplication formula:[19] ∫ f̂(x) g(x) dx = ∫ f(x) ĝ(x) dx.
Every integrable function f defines (induces) a distribution Tf by the relation Tf(φ) = ∫ f(x) φ(x) dx for all Schwartz functions φ. So it makes sense to define the Fourier transform of a tempered distribution by the duality T̂f(φ) = Tf(φ̂); by the multiplication formula, T̂f is then the distribution induced by f̂. Extending this to all tempered distributions T gives the general definition of the Fourier transform.
Distributions can be differentiated and the above-mentioned compatibility of the Fourier transform with differentiation and convolution remains true for tempered distributions.
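The multiplication formula underlying this definition can be checked numerically (a sketch with two arbitrarily chosen rapidly decaying functions; the quadrature-based transform below uses the e^(−i2πxξ) kernel of this article):

```python
import numpy as np
from scipy.integrate import quad

def ft(func, xi):
    """Fourier transform with the e^{-i 2 pi x xi} convention, by numerical quadrature."""
    re = quad(lambda x: func(x) * np.cos(2 * np.pi * xi * x), -np.inf, np.inf)[0]
    im = quad(lambda x: -func(x) * np.sin(2 * np.pi * xi * x), -np.inf, np.inf)[0]
    return complex(re, im)

def line_integral(func):
    """Integrate a complex-valued function over the real line."""
    re = quad(lambda x: func(x).real, -np.inf, np.inf)[0]
    im = quad(lambda x: func(x).imag, -np.inf, np.inf)[0]
    return complex(re, im)

f = lambda x: np.exp(-np.pi * x ** 2)
g = lambda x: np.exp(-2.0 * (x - 0.5) ** 2)

lhs = line_integral(lambda x: ft(f, x) * g(x))   # integral of fhat * g
rhs = line_integral(lambda x: f(x) * ft(g, x))   # integral of f * ghat
print(lhs, rhs)                                  # the two agree to quadrature accuracy
```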
Generalizations
Fourier–Stieltjes transform
The Fourier transform of a finite Borel measure μ on Rn is given by[55] μ̂(ξ) = ∫ e^(−i2π ξ·x) dμ(x), and is called the Fourier–Stieltjes transform due to its connection with the Riemann–Stieltjes integral representation of (Radon) measures.[56] One notable difference from the Fourier transform of integrable functions is that the Riemann–Lebesgue lemma fails for measures.[18] In the case that dμ = f(x) dx, the formula above reduces to the usual definition for the Fourier transform of f. In the case that μ is the probability distribution associated to a random variable X, the Fourier–Stieltjes transform is closely related to the characteristic function, but the typical conventions in probability theory take e^(iξx) instead of e^(−i2πξx).[16] When the distribution has a probability density function, this definition reduces to the Fourier transform applied to the probability density function, again with a different choice of constants.
The Fourier transform may be used to give a characterization of measures. Bochner's theorem characterizes which functions may arise as the Fourier–Stieltjes transform of a positive measure on the circle.[18]
Furthermore, the Dirac delta function is a finite Borel measure. Its Fourier transform is a constant function (whose value depends on the form of the Fourier transform used).
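A concrete numerical illustration (a sketch; the atoms, weights and sample frequencies are arbitrary choices): the Fourier–Stieltjes transform of a purely atomic measure is a trigonometric polynomial, so it visibly fails to decay at infinity.

```python
import numpy as np

# mu = (delta_0 + delta_1) / 2, so mu_hat(xi) = (1 + e^{-i 2 pi xi}) / 2.
atoms = np.array([0.0, 1.0])
weights = np.array([0.5, 0.5])

def mu_hat(xi):
    return np.sum(weights * np.exp(-2j * np.pi * xi * atoms))

for xi in (0.25, 10.0, 1000.0, 1000.25):
    print(xi, round(abs(mu_hat(xi)), 3))
# |mu_hat| keeps oscillating between 0 and 1: no decay at infinity,
# in contrast with the Riemann-Lebesgue lemma for integrable functions.
```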
Locally compact abelian groups
The Fourier transform may be generalized to any locally compact abelian group. A locally compact abelian group is an abelian group that is at the same time a locally compact Hausdorff topological space such that the group operation is continuous. If G is a locally compact abelian group, it has a translation-invariant measure μ, called Haar measure. For a locally compact abelian group G, its irreducible, i.e. one-dimensional, unitary representations are called its characters. With its natural group structure and the topology of uniform convergence on compact sets (that is, the topology induced by the compact-open topology on the space of all continuous functions from G to the circle group), the set of characters Ĝ is itself a locally compact abelian group, called the Pontryagin dual of G. For a function f in L1(G), its Fourier transform is defined by[18] f̂(ξ) = ∫G f(x) ξ(x)* dμ(x), where ξ ranges over the characters of G and the star denotes complex conjugation.
The Riemann–Lebesgue lemma holds in this case; f̂(ξ) is a function vanishing at infinity on Ĝ.
The Fourier transform on T = R/Z is an example; here T is a locally compact abelian group, and the Haar measure μ on T can be thought of as the Lebesgue measure on [0,1). Consider the representation of T on the complex plane C, viewed as a 1-dimensional complex vector space. There is a family of representations (which are irreducible since C is 1-dimensional) e_k(x) = e^(i2πkx), indexed by k ∈ Z, for x ∈ T.
The character of such a representation, that is the trace of e_k(x) for each x ∈ T and k ∈ Z, is e^(i2πkx) itself. In the case of a representation of a finite group, the character table of the group G consists of rows of vectors such that each row is the character of one irreducible representation of G, and by Schur's lemma these vectors form an orthonormal basis of the space of class functions that map from G to C. Now the group T is no longer finite but is still compact, and the orthonormality of the character table is preserved. Each row of the table is the function e_k of x ∈ T, and the inner product between two class functions (all functions being class functions since T is abelian) is defined as ⟨f, g⟩ = ∫ over [0,1) of f(y) g(y)* dy, with the normalizing factor |T| = 1. The sequence {e_k | k ∈ Z} is an orthonormal basis of the space of class functions L2(T).
For any representation V of a finite group G, its character can be expressed as a linear combination of the characters of the irreducible representations of G, with coefficients given by inner products. Similarly, for G = T and f ∈ L2(T), f can be expanded in the characters e_k, with coefficients ⟨f, e_k⟩. The Pontryagin dual of T is (identified with) Z, and for f ∈ L2(T) the coefficient ⟨f, e_k⟩ is its Fourier transform evaluated at k.
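A short numerical sketch of this picture (the test function is an arbitrary trigonometric polynomial): the characters e_k(x) = e^(i2πkx) are orthonormal for the inner product on [0, 1), and pairing a function with them produces exactly its Fourier coefficients.

```python
import numpy as np
from scipy.integrate import quad

def inner(u, v):
    """Inner product <u, v> = integral over [0, 1) of u(x) * conj(v(x))."""
    re = quad(lambda x: (u(x) * np.conj(v(x))).real, 0.0, 1.0, limit=200)[0]
    im = quad(lambda x: (u(x) * np.conj(v(x))).imag, 0.0, 1.0, limit=200)[0]
    return complex(re, im)

e = lambda k: (lambda x: np.exp(2j * np.pi * k * x))      # the character e_k

print(round(abs(inner(e(3), e(3))), 3), round(abs(inner(e(3), e(5))), 3))   # 1.0 and 0.0

f = lambda x: np.cos(2 * np.pi * x) + 0.5 * np.sin(4 * np.pi * x)
for k in (-2, -1, 0, 1, 2):
    print(k, np.round(inner(f, e(k)), 3))
# Coefficients: 0.5 at k = +/-1 (the cosine), -0.25j at k = 2 and +0.25j at k = -2 (the sine).
```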
Gelfand transform
The Fourier transform is also a special case of the Gelfand transform. In this particular context, it is closely related to the Pontryagin duality map defined above.
Given an abelian locally compact Hausdorff topological group G, as before we consider space L1(G), defined using a Haar measure. With convolution as multiplication, L1(G) is an abelian Banach algebra. It also has an involution * given by
Taking the completion with respect to the largest possible C*-norm gives its enveloping C*-algebra, called the group C*-algebra C*(G) of G. (Any C*-norm on L1(G) is bounded by the L1 norm, therefore their supremum exists.)
Given any abelian C*-algebra A, the Gelfand transform gives an isomorphism between A and C0(A^), where A^ is the set of multiplicative linear functionals, i.e. one-dimensional representations, on A with the weak-* topology. The map is simply given by a ↦ (φ ↦ φ(a)). It turns out that the multiplicative linear functionals of C*(G), after suitable identification, are exactly the characters of G, and the Gelfand transform, when restricted to the dense subset L1(G), is the Fourier–Pontryagin transform.
Compact non-abelian groups
The Fourier transform can also be defined for functions on a non-abelian group, provided that the group is compact. Once the assumption that the underlying group is abelian is removed, irreducible unitary representations need not always be one-dimensional. This means the Fourier transform on a non-abelian group takes values that are Hilbert space operators.[57] The Fourier transform on compact groups is a major tool in representation theory[58] and non-commutative harmonic analysis.
Let G be a compact Hausdorff topological group. Let Σ denote the collection of all isomorphism classes of finite-dimensional irreducible unitary representations, along with a definite choice of representation U(σ) on the Hilbert space Hσ of finite dimension dσ for each σ ∈ Σ. If μ is a finite Borel measure on G, then the Fourier–Stieltjes transform of μ is the operator on Hσ defined by where U(σ) is the complex-conjugate representation of U(σ) acting on Hσ. If μ is absolutely continuous with respect to the left-invariant probability measure λ on G, represented as for some f ∈ L1(λ), one identifies the Fourier transform of f with the Fourier–Stieltjes transform of μ.
The mapping defines an isomorphism between the Banach space M(G) of finite Borel measures (see rca space) and a closed subspace of the Banach space C∞(Σ) consisting of all sequences E = (Eσ) indexed by Σ of (bounded) linear operators Eσ : Hσ → Hσ for which the norm is finite. The "convolution theorem" asserts that, furthermore, this isomorphism of Banach spaces is in fact an isometric isomorphism of C*-algebras into a subspace of C∞(Σ). Multiplication on M(G) is given by convolution of measures and the involution * defined by and C∞(Σ) has a natural C*-algebra structure as Hilbert space operators.
The Peter–Weyl theorem holds, and a version of the Fourier inversion formula (Plancherel's theorem) follows: if f ∈ L2(G), then where the summation is understood as convergent in the L2 sense.
The generalization of the Fourier transform to the noncommutative situation has also in part contributed to the development of noncommutative geometry.[citation needed] In this context, a categorical generalization of the Fourier transform to noncommutative groups is Tannaka–Krein duality, which replaces the group of characters with the category of representations. However, this loses the connection with harmonic functions.
Alternatives
In signal processing terms, a function (of time) is a representation of a signal with perfect time resolution but no frequency information, while the Fourier transform has perfect frequency resolution but no time information: the magnitude of the Fourier transform at a point indicates how much of that frequency is present, but its location is only given by the phase (the argument of the Fourier transform at a point), and standing waves are not localized in time – a sine wave continues out to infinity without decaying. This limits the usefulness of the Fourier transform for analyzing signals that are localized in time, notably transients, or any signal of finite extent.
As alternatives to the Fourier transform, in time–frequency analysis, one uses time–frequency transforms or time–frequency distributions to represent signals in a form that has some time information and some frequency information – by the uncertainty principle, there is a trade-off between these. These can be generalizations of the Fourier transform, such as the short-time Fourier transform, fractional Fourier transform, Synchrosqueezing Fourier transform,[59] or other functions to represent signals, as in wavelet transforms and chirplet transforms, with the wavelet analog of the (continuous) Fourier transform being the continuous wavelet transform.[27]
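A minimal short-time Fourier transform, written directly with NumPy (window length, hop size and the test signal are arbitrary choices), shows the time–frequency compromise in practice: each FFT frame localises the dominant frequency only to within the window length.

```python
import numpy as np

fs = 1000                                         # sampling rate in Hz
t = np.arange(0, 2.0, 1 / fs)
signal = np.where(t < 1.0, np.sin(2 * np.pi * 50 * t), np.sin(2 * np.pi * 120 * t))

win_len, hop = 256, 64
window = np.hanning(win_len)
frames = [signal[i:i + win_len] * window for i in range(0, len(signal) - win_len, hop)]
stft = np.array([np.fft.rfft(frame) for frame in frames])   # shape: (time frames, frequencies)
freqs = np.fft.rfftfreq(win_len, d=1 / fs)

peaks = freqs[np.argmax(np.abs(stft), axis=1)]
print(peaks[:3], peaks[-3:])   # dominant frequency jumps from ~50 Hz to ~120 Hz around t = 1 s
```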
Example
The following figures provide a visual illustration of how the Fourier transform's integral measures whether a frequency is present in a particular function. The first image depicts the function f(t) = cos(2π 3t) e^(−πt²), a 3 Hz cosine wave (the first factor) shaped by a Gaussian envelope function (the second factor) that smoothly turns the wave on and off. The next two images show the product f(t) e^(−i2π 3t), which must be integrated to calculate the Fourier transform at +3 Hz. The real part of the integrand has a non-negative average value, because f(t) and the real part of the kernel oscillate at the same rate and in phase, whereas f(t) and the imaginary part of the kernel oscillate at the same rate but with orthogonal phase. The absolute value of the Fourier transform at +3 Hz is 0.5, which is relatively large. When added to the Fourier transform at −3 Hz (which is identical because we started with a real signal), we find that the amplitude of the 3 Hz frequency component is 1.
However, when you try to measure a frequency that is not present, both the real and imaginary components of the integral vary rapidly between positive and negative values. For instance, the red curve is looking for 5 Hz. The absolute value of its integral is nearly zero, indicating that almost no 5 Hz component was in the signal. The general situation is usually more complicated than this, but heuristically this is how the Fourier transform measures how much of an individual frequency is present in a function.
- Real and imaginary parts of the integrand for its Fourier transform at +5 Hz.
- Magnitude of its Fourier transform, with +3 and +5 Hz labeled.
To reinforce an earlier point, the reason for the response at −3 Hz is that, for a real signal, the +3 Hz and −3 Hz components are indistinguishable. The transform of the corresponding complex exponential would have just one response, whose amplitude is the integral of the smooth envelope, whereas the real cosine splits that amplitude equally between the responses at +3 Hz and −3 Hz.
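A quick numerical check of this example (a sketch using SciPy quadrature with the e^(−i2πξt) kernel) reproduces the values quoted above:

```python
import numpy as np
from scipy.integrate import quad

f = lambda t: np.cos(2 * np.pi * 3 * t) * np.exp(-np.pi * t ** 2)   # 3 Hz cosine under a Gaussian envelope

def ft(xi):
    re = quad(lambda t: f(t) * np.cos(2 * np.pi * xi * t), -np.inf, np.inf)[0]
    im = quad(lambda t: -f(t) * np.sin(2 * np.pi * xi * t), -np.inf, np.inf)[0]
    return complex(re, im)

print(round(abs(ft(3.0)), 4))    # ~0.5 : the probed frequency is present
print(round(abs(ft(5.0)), 4))    # ~0.0 : almost no 5 Hz content
print(round(abs(ft(-3.0)), 4))   # ~0.5 : the mirror response of a real signal
```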
Applications
Linear operations performed in one domain (time or frequency) have corresponding operations in the other domain, which are sometimes easier to perform. The operation of differentiation in the time domain corresponds to multiplication by the frequency,[note 6] so some differential equations are easier to analyze in the frequency domain. Also, convolution in the time domain corresponds to ordinary multiplication in the frequency domain (see Convolution theorem). After the desired operations have been performed, the result can be transformed back to the time domain. Harmonic analysis is the systematic study of the relationship between the frequency and time domains, including the kinds of functions or operations that are "simpler" in one or the other, and has deep connections to many areas of modern mathematics.
Analysis of differential equations
Perhaps the most important use of the Fourier transformation is to solve partial differential equations. Many of the equations of the mathematical physics of the nineteenth century can be treated this way. Fourier studied the heat equation, which in one dimension and in dimensionless units is ∂²y/∂x² = ∂y/∂t. The example we will give, a slightly more difficult one, is the wave equation in one dimension, ∂²y/∂x² = ∂²y/∂t².
As usual, the problem is not to find a solution: there are infinitely many. The problem is that of the so-called "boundary problem": find a solution which satisfies the "boundary conditions" y(x, 0) = f(x) and ∂y/∂t (x, 0) = g(x).
Here, f and g are given functions. For the heat equation, only one boundary condition can be required (usually the first one). But for the wave equation, there are still infinitely many solutions y which satisfy the first boundary condition. But when one imposes both conditions, there is only one possible solution.
It is easier to find the Fourier transform ŷ of the solution than to find the solution directly. This is because the Fourier transformation takes differentiation into multiplication by the Fourier-dual variable, and so a partial differential equation applied to the original function is transformed into multiplication by polynomial functions of the dual variables applied to the transformed function. After ŷ is determined, we can apply the inverse Fourier transformation to find y.
Fourier's method is as follows. First, note that any function of the forms cos(2πξ(x ± t)) or sin(2πξ(x ± t)) satisfies the wave equation. These are called the elementary solutions.
Second, note that therefore any integral satisfies the wave equation for arbitrary a+, a−, b+, b−. This integral may be interpreted as a continuous linear combination of solutions for the linear equation.
Now this resembles the formula for the Fourier synthesis of a function. In fact, this is the real inverse Fourier transform of a± and b± in the variable x.
The third step is to examine how to find the specific unknown coefficient functions a± and b± that will lead to y satisfying the boundary conditions. We are interested in the values of these solutions at t = 0. So we will set t = 0. Assuming that the conditions needed for Fourier inversion are satisfied, we can then find the Fourier sine and cosine transforms (in the variable x) of both sides and obtain and
Similarly, taking the derivative of y with respect to t and then applying the Fourier sine and cosine transformations yields and
These are four linear equations for the four unknowns a± and b±, in terms of the Fourier sine and cosine transforms of the boundary conditions, which are easily solved by elementary algebra, provided that these transforms can be found.
In summary, we chose a set of elementary solutions, parametrized by ξ, of which the general solution would be a (continuous) linear combination in the form of an integral over the parameter ξ. But this integral was in the form of a Fourier integral. The next step was to express the boundary conditions in terms of these integrals, and set them equal to the given functions f and g. But these expressions also took the form of a Fourier integral because of the properties of the Fourier transform of a derivative. The last step was to exploit Fourier inversion by applying the Fourier transformation to both sides, thus obtaining expressions for the coefficient functions a± and b± in terms of the given boundary conditions f and g.
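The procedure can also be carried out numerically. The sketch below (a periodic spectral method on an arbitrary grid, valid only until the pulses reach the edge of the box, and using the ordinary-frequency convention of this article) transforms the initial data, solves the resulting ordinary differential equation in t exactly for each ξ, and transforms back:

```python
import numpy as np

N, L = 1024, 40.0
x = np.linspace(-L / 2, L / 2, N, endpoint=False)
xi = np.fft.fftfreq(N, d=L / N)                 # ordinary frequencies

f = np.exp(-np.pi * x ** 2)                     # boundary condition y(x, 0) = f(x)
g = np.zeros_like(x)                            # boundary condition dy/dt(x, 0) = g(x)

def y(t):
    fhat, ghat = np.fft.fft(f), np.fft.fft(g)
    w = 2 * np.pi * xi
    safe_w = np.where(w == 0, 1.0, w)
    # yhat(xi, t) = fhat cos(wt) + ghat sin(wt)/w, with the w = 0 mode handled separately.
    yhat = fhat * np.cos(w * t) + ghat * np.where(w == 0, t, np.sin(w * t) / safe_w)
    return np.real(np.fft.ifft(yhat))

# With g = 0 the initial pulse splits into two half-height pulses travelling left and right.
sol = y(5.0)
print(round(sol[np.argmin(np.abs(x - 5.0))], 3),
      round(sol[np.argmin(np.abs(x + 5.0))], 3))   # both ~0.5
```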
From a higher point of view, Fourier's procedure can be reformulated more conceptually. Since there are two variables, we will use the Fourier transformation in both x and t rather than operate as Fourier did, who only transformed in the spatial variables. Note that ŷ must be considered in the sense of a distribution since y(x, t) is not going to be L1: as a wave, it will persist through time and thus is not a transient phenomenon. But it will be bounded and so its Fourier transform can be defined as a distribution. The operational properties of the Fourier transformation that are relevant to this equation are that it takes differentiation in x to multiplication by i2πξ and differentiation with respect to t to multiplication by i2πf where f is the frequency. Then the wave equation becomes an algebraic equation in ŷ: This is equivalent to requiring ŷ(ξ, f) = 0 unless ξ = ±f. Right away, this explains why the choice of elementary solutions we made earlier worked so well: obviously ŷ = δ(ξ ± f) will be solutions. Applying Fourier inversion to these delta functions, we obtain the elementary solutions we picked earlier. But from the higher point of view, one does not pick elementary solutions, but rather considers the space of all distributions which are supported on the (degenerate) conic ξ² − f² = 0.
We may as well consider the distributions supported on the conic that are given by distributions of one variable on the line ξ = f plus distributions on the line ξ = −f as follows: if Φ is any test function, where s+ and s− are distributions of one variable.
Then Fourier inversion gives, for the boundary conditions, something very similar to what we had more concretely above (put Φ(ξ, f) = e^(i2π(xξ+tf)), which is clearly of polynomial growth): and
Now, as before, applying the one-variable Fourier transformation in the variable x to these functions of x yields two equations in the two unknown distributions s± (which can be taken to be ordinary functions if the boundary conditions are L1 or L2).
From a calculational point of view, the drawback of course is that one must first calculate the Fourier transforms of the boundary conditions, then assemble the solution from these, and then calculate an inverse Fourier transform. Closed form formulas are rare, except when there is some geometric symmetry that can be exploited, and the numerical calculations are difficult because of the oscillatory nature of the integrals, which makes convergence slow and hard to estimate. For practical calculations, other methods are often used.
The twentieth century has seen the extension of these methods to all linear partial differential equations with polynomial coefficients, and by extending the notion of Fourier transformation to include Fourier integral operators, some non-linear equations as well.
Fourier-transform spectroscopy
The Fourier transform is also used in nuclear magnetic resonance (NMR) and in other kinds of spectroscopy, e.g. infrared (FTIR). In NMR an exponentially shaped free induction decay (FID) signal is acquired in the time domain and Fourier-transformed to a Lorentzian line-shape in the frequency domain. The Fourier transform is also used in magnetic resonance imaging (MRI) and mass spectrometry.
Quantum mechanics
The Fourier transform is useful in quantum mechanics in at least two different ways. To begin with, the basic conceptual structure of quantum mechanics postulates the existence of pairs of complementary variables, connected by the Heisenberg uncertainty principle. For example, in one dimension, the spatial variable q of, say, a particle, can only be measured by the quantum mechanical "position operator" at the cost of losing information about the momentum p of the particle. Therefore, the physical state of the particle can either be described by a function, called "the wave function", of q or by a function of p but not by a function of both variables. The variable p is called the conjugate variable to q. In classical mechanics, the physical state of a particle (existing in one dimension, for simplicity of exposition) would be given by assigning definite values to both p and q simultaneously. Thus, the set of all possible physical states is the two-dimensional real vector space with a p-axis and a q-axis called the phase space.
In contrast, quantum mechanics chooses a polarisation of this space in the sense that it picks a subspace of one-half the dimension, for example, the q-axis alone, but instead of considering only points, takes the set of all complex-valued "wave functions" on this axis. Nevertheless, choosing the p-axis is an equally valid polarisation, yielding a different representation of the set of possible physical states of the particle. Both representations of the wavefunction are related by a Fourier transform, such that or, equivalently,
Physically realisable states are L2, and so by the Plancherel theorem, their Fourier transforms are also L2. (Note that since q is in units of distance and p is in units of momentum, the presence of the Planck constant in the exponent makes the exponent dimensionless, as it should be.)
Therefore, the Fourier transform can be used to pass from one way of representing the state of the particle, by a wave function of position, to another way of representing the state of the particle: by a wave function of momentum. Infinitely many different polarisations are possible, and all are equally valid. Being able to transform states from one representation to another by the Fourier transform is not only convenient but also the underlying reason of the Heisenberg uncertainty principle.
The other use of the Fourier transform in both quantum mechanics and quantum field theory is to solve the applicable wave equation. In non-relativistic quantum mechanics, Schrödinger's equation for a time-varying wave function in one dimension, not subject to external forces, is
This is the same as the heat equation except for the presence of the imaginary unit i. Fourier methods can be used to solve this equation.
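A minimal sketch of that (assuming units in which the free equation reads i ∂ψ/∂t = −(1/2) ∂²ψ/∂x², and using an arbitrary Gaussian wave packet on a periodic grid): each Fourier mode simply picks up a phase, so the FFT gives the exact time evolution.

```python
import numpy as np

N, L = 2048, 100.0
x = np.linspace(-L / 2, L / 2, N, endpoint=False)
k = 2 * np.pi * np.fft.fftfreq(N, d=L / N)             # angular wavenumbers

k0, sigma = 2.0, 1.0                                    # mean momentum and width of the packet
psi0 = np.exp(-x ** 2 / (2 * sigma ** 2)) * np.exp(1j * k0 * x)
psi0 /= np.sqrt(np.sum(np.abs(psi0) ** 2) * (L / N))    # normalise: integral of |psi|^2 dx = 1

def evolve(t):
    # Each mode k evolves by the phase factor exp(-i k^2 t / 2).
    return np.fft.ifft(np.fft.fft(psi0) * np.exp(-1j * k ** 2 * t / 2))

for t in (0.0, 5.0, 10.0):
    psi = evolve(t)
    mean_x = np.sum(x * np.abs(psi) ** 2) * (L / N)
    print(t, round(float(mean_x), 2))    # the packet centre drifts at the group velocity k0
```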
In the presence of a potential, given by the potential energy function V(x), the equation becomes
The "elementary solutions", as we referred to them above, are the so-called "stationary states" of the particle, and Fourier's algorithm, as described above, can still be used to solve the boundary value problem of the future evolution of ψ given its values for t = 0. Neither of these approaches is of much practical use in quantum mechanics. Boundary value problems and the time-evolution of the wave function is not of much practical interest: it is the stationary states that are most important.
In relativistic quantum mechanics, Schrödinger's equation becomes a wave equation as was usual in classical physics, except that complex-valued waves are considered. A simple example, in the absence of interactions with other particles or fields, is the free one-dimensional Klein–Gordon–Schrödinger–Fock equation, this time in dimensionless units,
This is, from the mathematical point of view, the same as the wave equation of classical physics solved above (but with a complex-valued wave, which makes no difference in the methods). This is of great use in quantum field theory: each separate Fourier component of a wave can be treated as a separate harmonic oscillator and then quantized, a procedure known as "second quantization". Fourier methods have been adapted to also deal with non-trivial interactions.
Finally, the number operator of the quantum harmonic oscillator can be interpreted, for example via the Mehler kernel, as the generator of the Fourier transform.[30]
Signal processing
The Fourier transform is used for the spectral analysis of time-series. The subject of statistical signal processing does not, however, usually apply the Fourier transformation to the signal itself. Even if a real signal is indeed transient, it has been found in practice advisable to model a signal by a function (or, alternatively, a stochastic process) which is stationary in the sense that its characteristic properties are constant over all time. The Fourier transform of such a function does not exist in the usual sense, and it has been found more useful for the analysis of signals to instead take the Fourier transform of its autocorrelation function.
The autocorrelation function R of a function f is defined by R(τ) = lim_(T→∞) (1/(2T)) ∫ from −T to T of f(t) f(t + τ) dt.
This function is a function of the time-lag τ elapsing between the values of f to be correlated.
For most functions f that occur in practice, R is a bounded even function of the time-lag τ and for typical noisy signals it turns out to be uniformly continuous with a maximum at τ = 0.
The autocorrelation function, more properly called the autocovariance function unless it is normalized in some appropriate fashion, measures the strength of the correlation between the values of f separated by a time lag. This is a way of searching for the correlation of f with its own past. It is useful even for other statistical tasks besides the analysis of signals. For example, if f(t) represents the temperature at time t, one expects a strong correlation with the temperature at a time lag of 24 hours.
It possesses a Fourier transform,
This Fourier transform is called the power spectral density function of f. (Unless all periodic components are first filtered out from f, this integral will diverge, but it is easy to filter out such periodicities.)
The power spectrum, as indicated by this density function P, measures the amount of variance contributed to the data by the frequency ξ. In electrical signals, the variance is proportional to the average power (energy per unit time), and so the power spectrum describes how much the different frequencies contribute to the average power of the signal. This process is called the spectral analysis of time-series and is analogous to the usual analysis of variance of data that is not a time-series (ANOVA).
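A discrete sketch of this relationship (the Wiener–Khinchin theorem in its simplest sampled form; the test signal, a 0.05 cycles-per-sample tone plus noise, is an arbitrary choice): the DFT of the circular autocorrelation of a sampled signal equals its periodogram.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 4096
n = np.arange(N)
f = np.sin(2 * np.pi * 0.05 * n) + rng.normal(scale=0.5, size=N)   # tone + noise

F = np.fft.fft(f)
periodogram = np.abs(F) ** 2 / N

# Circular autocorrelation R[tau] = (1/N) sum_t f[t] f[t + tau], computed via the FFT.
R = np.fft.ifft(np.abs(F) ** 2).real / N
psd_from_R = np.fft.fft(R).real

print(np.allclose(periodogram, psd_from_R))                  # True
print(np.fft.fftfreq(N)[np.argmax(periodogram[:N // 2])])    # ~0.05: the tone dominates the spectrum
```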
Knowledge of which frequencies are "important" in this sense is crucial for the proper design of filters and for the proper evaluation of measuring apparatuses. It can also be useful for the scientific analysis of the phenomena responsible for producing the data.
The power spectrum of a signal can also be approximately measured directly by measuring the average power that remains in a signal after all the frequencies outside a narrow band have been filtered out.
Spectral analysis is carried out for visual signals as well. The power spectrum ignores all phase relations, which is good enough for many purposes, but for video signals other types of spectral analysis must also be employed, still using the Fourier transform as a tool.
Other notations
Other common notations for f̂(ξ) include:
In the sciences and engineering it is also common to make substitutions like these:
So the transform pair can become
A disadvantage of the capital letter notation is when expressing a transform such as or which become the more awkward and
In some contexts such as particle physics, the same symbol may be used both for a function and for its Fourier transform, with the two distinguished only by their argument: would refer to the Fourier transform because of the momentum argument, while would refer to the original function because of the positional argument. Although tildes may be used as in to indicate Fourier transforms, tildes may also be used to indicate a modification of a quantity with a more Lorentz invariant form, such as , so care must be taken. Similarly, often denotes the Hilbert transform of .
The interpretation of the complex function f̂(ξ) may be aided by expressing it in polar coordinate form in terms of the two real functions A(ξ) and φ(ξ) where: is the amplitude and is the phase (see arg function).
Then the inverse transform can be written: which is a recombination of all the frequency components of f(x). Each component is a complex sinusoid of the form e2πixξ whose amplitude is A(ξ) and whose initial phase angle (at x = 0) is φ(ξ).
The Fourier transform may be thought of as a mapping on function spaces. This mapping is here denoted F and F(f) is used to denote the Fourier transform of the function f. This mapping is linear, which means that F can also be seen as a linear transformation on the function space and implies that the standard notation in linear algebra of applying a linear transformation to a vector (here the function f) can be used to write F f instead of F(f). Since the result of applying the Fourier transform is again a function, we can be interested in the value of this function evaluated at the value ξ for its variable, and this is denoted either as F f(ξ) or as (F f)(ξ). Notice that in the former case, it is implicitly understood that F is applied first to f and then the resulting function is evaluated at ξ, not the other way around.
In mathematics and various applied sciences, it is often necessary to distinguish between a function f and the value of f when its variable equals x, denoted f(x). This means that a notation like F(f(x)) formally can be interpreted as the Fourier transform of the values of f at x. Despite this flaw, the previous notation appears frequently, often when a particular function or a function of a particular variable is to be transformed. For example, is sometimes used to express that the Fourier transform of a rectangular function is a sinc function, or is used to express the shift property of the Fourier transform.
Notice that the last example is only correct under the assumption that the transformed function is a function of x, not of x0.
As discussed above, the characteristic function of a random variable is the same as the Fourier–Stieltjes transform of its distribution measure, but in this context it is typical to take a different convention for the constants. Typically the characteristic function is defined as E[e^(itX)].
As in the case of the "non-unitary angular frequency" convention above, the factor of 2π appears in neither the normalizing constant nor the exponent. Unlike any of the conventions appearing above, this convention takes the opposite sign in the exponent.
Computation methods
The appropriate computation method largely depends on how the original mathematical function is represented and the desired form of the output function. In this section we consider both functions of a continuous variable and functions of a discrete variable (i.e. ordered pairs of x and f(x) values). For discrete-valued x the transform integral becomes a summation of sinusoids, which is still a continuous function of frequency (ξ or ω). When the sinusoids are harmonically related (i.e. when the x-values are spaced at integer multiples of an interval), the transform is called the discrete-time Fourier transform (DTFT).
Discrete Fourier transforms and fast Fourier transforms
Sampling the DTFT at equally-spaced values of frequency is the most common modern method of computation. Efficient procedures, depending on the frequency resolution needed, are described at Discrete-time Fourier transform § Sampling the DTFT. The discrete Fourier transform (DFT), used there, is usually computed by a fast Fourier transform (FFT) algorithm.
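As a small illustration (a sketch; the sampled function is the modulated Gaussian from the example above, and the sampling interval is an arbitrary choice well above the Nyquist rate), the FFT of the samples, scaled by the sample spacing, approximates the continuous Fourier transform:

```python
import numpy as np

dt = 0.01                                  # sampling interval in seconds
t = np.arange(-8.0, 8.0, dt)
samples = np.cos(6 * np.pi * t) * np.exp(-np.pi * t ** 2)

spectrum = np.fft.fftshift(np.fft.fft(np.fft.ifftshift(samples))) * dt
freqs = np.fft.fftshift(np.fft.fftfreq(t.size, d=dt))

idx = np.argmin(np.abs(freqs - 3.0))
print(freqs[idx], round(abs(spectrum[idx]), 4))   # ~3.0 Hz, ~0.5: matches the continuous transform
```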
Analytic integration of closed-form functions
Tables of closed-form Fourier transforms, such as § Square-integrable functions, one-dimensional and § Table of discrete-time Fourier transforms, are created by mathematically evaluating the Fourier analysis integral (or summation) into another closed-form function of frequency (ξ or ω).[60] When mathematically possible, this provides a transform for a continuum of frequency values.
Many computer algebra systems, such as Matlab and Mathematica, that are capable of symbolic integration can compute Fourier transforms analytically. For example, to compute the Fourier transform of cos(6πt) e^(−πt²) one might enter the command integrate cos(6*pi*t) exp(-pi*t^2) exp(-i*2*pi*f*t) from -inf to inf
into Wolfram Alpha.[note 7]
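In Python, a similar symbolic computation can be sketched with SymPy, whose fourier_transform uses (by default) the same unitary, ordinary-frequency convention as this article; the Gaussian examples below are chosen because they have simple closed forms.

```python
import sympy as sp

t, xi = sp.symbols('t xi', real=True)

# The Gaussian e^{-pi t^2} is its own Fourier transform.
print(sp.fourier_transform(sp.exp(-sp.pi * t ** 2), t, xi))          # exp(-pi*xi**2)

# Scaling property: f(2t) transforms to (1/2) fhat(xi/2).
print(sp.fourier_transform(sp.exp(-sp.pi * (2 * t) ** 2), t, xi))    # exp(-pi*xi**2/4)/2
```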
Numerical integration of closed-form continuous functions
Discrete sampling of the Fourier transform can also be done by numerical integration of the definition at each value of frequency for which the transform is desired.[61][62][63] The numerical integration approach works on a much broader class of functions than the analytic approach.
Numerical integration of a series of ordered pairs
If the input function is a series of ordered pairs, numerical integration reduces to just a summation over the set of data pairs.[64] The DTFT is a common subcase of this more general situation.
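A minimal sketch of this situation (the irregular sample points and the underlying function are arbitrary choices): with only ordered pairs available, the transform integral is approximated by a weighted sum over the data, here via the trapezoidal rule.

```python
import numpy as np

rng = np.random.default_rng(1)
x = np.sort(rng.uniform(-6.0, 6.0, 400))     # irregularly spaced sample points
f = np.exp(-np.pi * x ** 2)                  # sampled values of an assumed underlying function

def ft_from_pairs(xi):
    return np.trapz(f * np.exp(-2j * np.pi * xi * x), x)

for xi in (0.0, 0.5, 1.0):
    print(xi, round(abs(ft_from_pairs(xi)), 3))   # compare with the exact transform exp(-pi*xi**2)
```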
Tables of important Fourier transforms
The following tables record some closed-form Fourier transforms. For functions f(x) and g(x), denote their Fourier transforms by f̂ and ĝ. Only the three most common conventions are included. It may be useful to notice that entry 105 gives a relationship between the Fourier transform of a function and the original function, which can be seen as relating the Fourier transform and its inverse.
Functional relationships, one-dimensional
The Fourier transforms in this table may be found in Erdélyi (1954) or Kammler (2000, appendix).
Function | Fourier transform unitary, ordinary frequency | Fourier transform unitary, angular frequency | Fourier transform non-unitary, angular frequency | Remarks | |
---|---|---|---|---|---|
Definitions | |||||
101 | Linearity | ||||
102 | Shift in time domain | ||||
103 | Shift in frequency domain, dual of 102 | ||||
104 | Scaling in the time domain. If |a| is large, then f(ax) is concentrated around 0 and spreads out and flattens. | ||||
105 | The same transform is applied twice, but x replaces the frequency variable (ξ or ω) after the first transform. | ||||
106 | nth-order derivative. As f is a Schwartz function | | | |
106.5 | Integration.[65] Note: is the Dirac delta function and is the average (DC) value of such that | ||||
107 | This is the dual of 106 | ||||
108 | The notation f ∗ g denotes the convolution of f and g — this rule is the convolution theorem | ||||
109 | This is the dual of 108 | ||||
110 | For f(x) purely real | Hermitian symmetry. z indicates the complex conjugate. | |||
113 | For f(x) purely imaginary | z indicates the complex conjugate. | |||
114 | Complex conjugation, generalization of 110 and 113 | ||||
115 | This follows from rules 101 and 103 using Euler's formula: | ||||
116 | This follows from 101 and 103 using Euler's formula: |
Square-integrable functions, one-dimensional
The Fourier transforms in this table may be found in Campbell & Foster (1948), Erdélyi (1954), or Kammler (2000, appendix).
Function | Fourier transform unitary, ordinary frequency | Fourier transform unitary, angular frequency | Fourier transform non-unitary, angular frequency | Remarks | |
---|---|---|---|---|---|
Definitions | |||||
201 | The rectangular pulse and the normalized sinc function, here defined as sinc(x) = sin(πx)/πx | ||||
202 | Dual of rule 201. The rectangular function is an ideal low-pass filter, and the sinc function is the non-causal impulse response of such a filter. The sinc function is defined here as sinc(x) = sin(πx)/πx | ||||
203 | The function tri(x) is the triangular function | ||||
204 | Dual of rule 203. | ||||
205 | The function u(x) is the Heaviside unit step function and a > 0. | ||||
206 | This shows that, for the unitary Fourier transforms, the Gaussian function e^(−αx²) is its own Fourier transform for some choice of α. For this to be integrable we must have Re(α) > 0. | | | |
208 | For Re(a) > 0. That is, the Fourier transform of a two-sided decaying exponential function is a Lorentzian function. | ||||
209 | Hyperbolic secant is its own Fourier transform | ||||
210 | Hn is the nth-order Hermite polynomial. If a = 1 then the Gauss–Hermite functions are eigenfunctions of the Fourier transform operator. For a derivation, see Hermite polynomial. The formula reduces to 206 for n = 0. |
Distributions, one-dimensional
The Fourier transforms in this table may be found in Erdélyi (1954) or Kammler (2000, appendix).
Function | Fourier transform unitary, ordinary frequency | Fourier transform unitary, angular frequency | Fourier transform non-unitary, angular frequency | Remarks | |
---|---|---|---|---|---|
Definitions | |||||
301 | The distribution δ(ξ) denotes the Dirac delta function. | ||||
302 | Dual of rule 301. | ||||
303 | This follows from 103 and 301. | ||||
304 | This follows from rules 101 and 303 using Euler's formula: | ||||
305 | This follows from 101 and 303 using | ||||
306 | This follows from 101 and 207 using | ||||
307 | This follows from 101 and 207 using | ||||
308 | Here it is assumed α is real. For the case that α is complex, see table entry 206 above. | | | |
309 | Here, n is a natural number and δ(n)(ξ) is the nth distribution derivative of the Dirac delta function. This rule follows from rules 107 and 301. Combining this rule with 101, we can transform all polynomials. | ||||
310 | Dual of rule 309. δ(n)(ξ) is the nth distribution derivative of the Dirac delta function. This rule follows from 106 and 302. | ||||
311 | Here sgn(ξ) is the sign function. Note that 1/x is not a distribution. It is necessary to use the Cauchy principal value when testing against Schwartz functions. This rule is useful in studying the Hilbert transform. | ||||
312 | 1/xn is the homogeneous distribution defined by the distributional derivative | ||||
313 | This formula is valid for 0 > α > −1. For α > 0 some singular terms arise at the origin that can be found by differentiating 320. If Re α > −1, then |x|α is a locally integrable function, and so a tempered distribution. The function α ↦ |x|α is a holomorphic function from the right half-plane to the space of tempered distributions. It admits a unique meromorphic extension to a tempered distribution, also denoted |x|α for α ≠ −1, −3, ... (See homogeneous distribution.) | ||||
Special case of 313. | |||||
314 | The dual of rule 311. This time the Fourier transforms need to be considered as a Cauchy principal value. | ||||
315 | The function u(x) is the Heaviside unit step function; this follows from rules 101, 301, and 314. | ||||
316 | This function is known as the Dirac comb function. This result can be derived from 302 and 102, together with the fact that as distributions. | ||||
317 | The function J0(x) is the zeroth order Bessel function of first kind. | ||||
318 | This is a generalization of 317. The function Jn(x) is the nth order Bessel function of first kind. The function Tn(x) is the Chebyshev polynomial of the first kind. | ||||
319 | γ is the Euler–Mascheroni constant. It is necessary to use a finite part integral when testing 1/|ξ| or 1/|ω|against Schwartz functions. The details of this might change the coefficient of the delta function. | ||||
320 | This formula is valid for 1 > α > 0. Use differentiation to derive formula for higher exponents. u is the Heaviside function. |
Two-dimensional functions
Function | Fourier transform unitary, ordinary frequency | Fourier transform unitary, angular frequency | Fourier transform non-unitary, angular frequency | Remarks | |
---|---|---|---|---|---|
400 | The variables ξx, ξy, ωx, ωy are real numbers. The integrals are taken over the entire plane. | ||||
401 | Both functions are Gaussians, which may not have unit volume. | ||||
402 | The function is defined by circ(r) = 1 for 0 ≤ r ≤ 1, and is 0 otherwise. The result is the amplitude distribution of the Airy disk, and is expressed using J1 (the order-1 Bessel function of the first kind).[66] | ||||
403 | This is the Hankel transform of r−1, a 2-D Fourier "self-transform".[67] | ||||
404 |
Formulas for general n-dimensional functions
Function | Fourier transform unitary, ordinary frequency | Fourier transform unitary, angular frequency | Fourier transform non-unitary, angular frequency | Remarks | |
---|---|---|---|---|---|
500 | |||||
501 | The function χ[0, 1] is the indicator function of the interval [0, 1]. The function Γ(x) is the gamma function. The function Jn/2 + δ is a Bessel function of the first kind, with order n/2 + δ. Taking n = 2 and δ = 0 produces 402.[68] | ||||
502 | See Riesz potential where the constant is given by The formula also holds for all α ≠ n, n + 2, ... by analytic continuation, but then the function and its Fourier transforms need to be understood as suitably regularized tempered distributions. See homogeneous distribution.[note 8] | ||||
503 | This is the formula for a multivariate normal distribution normalized to 1 with a mean of 0. Bold variables are vectors or matrices. Following the notation of the aforementioned page, Σ = σ σT and Σ−1 = σ−T σ−1 | ||||
504 | Here[69] Re(α) > 0 |
See also
- Analog signal processing
- Beevers–Lipson strip
- Constant-Q transform
- Discrete Fourier transform
- DFT matrix
- Fast Fourier transform
- Fourier integral operator
- Fourier inversion theorem
- Fourier multiplier
- Fourier series
- Fourier sine transform
- Fourier–Deligne transform
- Fourier–Mukai transform
- Fractional Fourier transform
- Indirect Fourier transform
- Integral transform
- Laplace transform
- Least-squares spectral analysis
- Linear canonical transform
- List of Fourier-related transforms
- Mellin transform
- Multidimensional transform
- NGC 4622, especially the image NGC 4622 Fourier transform m = 2.
- Nonlocal operator
- Quantum Fourier transform
- Quadratic Fourier transform
- Short-time Fourier transform
- Spectral density
- Symbolic integration
- Time stretch dispersive Fourier transform
- Transform (mathematics)
Notes
- ^ Depending on the application a Lebesgue integral, distributional, or other approach may be most appropriate.
- ^ Vretblad (2000) provides solid justification for these formal procedures without going too deeply into functional analysis or the theory of distributions.
- ^ In relativistic quantum mechanics one encounters vector-valued Fourier transforms of multi-component wave functions. In quantum field theory, operator-valued Fourier transforms of operator-valued functions of spacetime are in frequent use, see for example Greiner & Reinhardt (1996).
- ^ A possible source of confusion is the frequency-shifting property; i.e. the transform of function is The value of this function at is meaning that a frequency has been shifted to zero (also see Negative frequency).
- ^ The operator is defined by replacing by in the Taylor expansion of
- ^ Up to an imaginary constant factor whose magnitude depends on what Fourier transform convention is used.
- ^ The direct command
fourier transform of cos(6*pi*t) exp(−pi*t^2)
would also work for Wolfram Alpha, although the options for the convention (see Fourier transform § Other conventions) must be changed away from the default option, which is actually equivalent to integrate cos(6*pi*t) exp(-pi*t^2) exp(i*omega*t) /sqrt(2*pi) from -inf to inf.
- ^ In Gelfand & Shilov 1964, p. 363, with the non-unitary conventions of this table, the transform of is given to be
from which this follows, with .
Citations
- ^ Khare, Butola & Rajora 2023, pp. 13–14
- ^ Kaiser 1994, p. 29
- ^ Rahman 2011, p. 11
- ^ Dym & McKean 1985
- ^ Fourier 1822, p. 525
- ^ Fourier 1878, p. 408
- ^ Jordan (1883) proves on pp. 216–226 the Fourier integral theorem before studying Fourier series.
- ^ Titchmarsh 1986, p. 1
- ^ Rahman 2011, p. 10.
- ^ Oppenheim, Schafer & Buck 1999, p. 58
- ^ Stade 2005, pp. 298–299.
- ^ Howe 1980.
- ^ Folland 1989
- ^ Fourier 1822
- ^ Arfken 1985
- ^ a b c d e Pinsky 2002
- ^ Proakis, John G.; Manolakis, Dimitris G. (1996). Digital Signal Processing: Principles, Algorithms, and Applications (3rd ed.). Prentice Hall. p. 291. ISBN 978-0-13-373762-2.
- ^ a b c d e Katznelson 1976
- ^ a b c d e f Stein & Weiss 1971
- ^ Rudin 1987, p. 187
- ^ Rudin 1987, p. 186
- ^ Folland 1992, p. 216
- ^ Wolf 1979, p. 307ff
- ^ Folland 1989, p. 53
- ^ Celeghini, Gadella & del Olmo 2021
- ^ a b Duoandikoetxea 2001
- ^ a b Boashash 2003
- ^ Condon 1937
- ^ Wolf 1979, p. 320
- ^ a b Wolf 1979, p. 312
- ^ Folland 1989, p. 52
- ^ Howe 1980
- ^ Paley & Wiener 1934
- ^ Gelfand & Vilenkin 1964
- ^ Kirillov & Gvishiani 1982
- ^ Clozel & Delorme 1985, pp. 331–333
- ^ de Groot & Mazur 1984, p. 146
- ^ Champeney 1987, p. 80
- ^ a b c Kolmogorov & Fomin 1999
- ^ Wiener 1949
- ^ Champeney 1987, p. 63
- ^ Widder & Wiener 1938, p. 537
- ^ Pinsky 2002, p. 131
- ^ Stein & Shakarchi 2003
- ^ Stein & Shakarchi 2003, p. 158
- ^ Chatfield 2004, p. 113
- ^ Fourier 1822, p. 441
- ^ Poincaré 1895, p. 102
- ^ Whittaker & Watson 1927, p. 188
- ^ Grafakos 2004
- ^ Grafakos & Teschl 2013
- ^ More generally, one can take a sequence of functions that are in the intersection of L1 and L2 and that converges to f in the L2-norm, and define the Fourier transform of f as the L2 -limit of the Fourier transforms of these functions.
- ^ "Applied Fourier Analysis and Elements of Modern Signal Processing Lecture 3" (PDF). January 12, 2016. Retrieved 2019-10-11.
- ^ Stein & Weiss 1971, Thm. 2.3
- ^ Pinsky 2002, p. 256.
- ^ Edwards 1982, pp. 53, 67, 72–73.
- ^ Hewitt & Ross 1970, Chapter 8
- ^ Knapp 2001
- ^ Correia, L. B.; Justo, J. F.; Angélico, B. A. (2024). "Polynomial Adaptive Synchrosqueezing Fourier Transform: A method to optimize multiresolution". Digital Signal Processing. 150: 104526. Bibcode:2024DSPRJ.15004526C. doi:10.1016/j.dsp.2024.104526.
- ^ Gradshteyn et al. 2015
- ^ Press et al. 1992
- ^ Bailey & Swarztrauber 1994
- ^ Lado 1971
- ^ Simonen & Olkkonen 1985
- ^ "The Integration Property of the Fourier Transform". The Fourier Transform .com. 2015 [2010]. Archived from the original on 2022-01-26. Retrieved 2023-08-20.
- ^ Stein & Weiss 1971, Thm. IV.3.3
- ^ Easton 2010
- ^ Stein & Weiss 1971, Thm. 4.15
- ^ Stein & Weiss 1971, p. 6
References
- Arfken, George (1985), Mathematical Methods for Physicists (3rd ed.), Academic Press, ISBN 9780120598205
- Bailey, David H.; Swarztrauber, Paul N. (1994), "A fast method for the numerical evaluation of continuous Fourier and Laplace transforms" (PDF), SIAM Journal on Scientific Computing, 15 (5): 1105–1110, Bibcode:1994SJSC...15.1105B, CiteSeerX 10.1.1.127.1534, doi:10.1137/0915067, archived from the original (PDF) on 2008-07-20, retrieved 2017-11-01
- Boashash, B., ed. (2003), Time–Frequency Signal Analysis and Processing: A Comprehensive Reference, Oxford: Elsevier Science, ISBN 978-0-08-044335-5
- Bochner, S.; Chandrasekharan, K. (1949), Fourier Transforms, Princeton University Press
- Bracewell, R. N. (2000), The Fourier Transform and Its Applications (3rd ed.), Boston: McGraw-Hill, ISBN 978-0-07-116043-8
- Campbell, George; Foster, Ronald (1948), Fourier Integrals for Practical Applications, New York: D. Van Nostrand Company, Inc.
- Celeghini, Enrico; Gadella, Manuel; del Olmo, Mariano A. (2021), "Hermite Functions and Fourier Series", Symmetry, 13 (5): 853, arXiv:2007.10406, Bibcode:2021Symm...13..853C, doi:10.3390/sym13050853
- Champeney, D.C. (1987), A Handbook of Fourier Theorems, Cambridge University Press
- Chatfield, Chris (2004), The Analysis of Time Series: An Introduction, Texts in Statistical Science (6th ed.), London: Chapman & Hall/CRC, ISBN 9780203491683
- Clozel, Laurent; Delorme, Patrice (1985), "Sur le théorème de Paley-Wiener invariant pour les groupes de Lie réductifs réels", Comptes Rendus de l'Académie des Sciences, Série I, 300: 331–333
- Condon, E. U. (1937), "Immersion of the Fourier transform in a continuous group of functional transformations", Proc. Natl. Acad. Sci., 23 (3): 158–164, Bibcode:1937PNAS...23..158C, doi:10.1073/pnas.23.3.158, PMC 1076889, PMID 16588141
- de Groot, Sybren R.; Mazur, Peter (1984), Non-Equilibrium Thermodynamics (2nd ed.), New York: Dover
- Duoandikoetxea, Javier (2001), Fourier Analysis, American Mathematical Society, ISBN 978-0-8218-2172-5
- Dym, H.; McKean, H. (1985), Fourier Series and Integrals, Academic Press, ISBN 978-0-12-226451-1
- Easton, Roger L. Jr. (2010), Fourier Methods in Imaging, John Wiley & Sons, ISBN 978-0-470-68983-7, retrieved 26 May 2020
- Edwards, R. E. (1979). Fourier Series. Vol. 64. New York, NY: Springer New York. doi:10.1007/978-1-4612-6208-4. ISBN 978-1-4612-6210-7.
- Edwards, R. E. (1982). Fourier Series. Vol. 85. New York, NY: Springer New York. doi:10.1007/978-1-4613-8156-3. ISBN 978-1-4613-8158-7.
- Erdélyi, Arthur, ed. (1954), Tables of Integral Transforms, vol. 1, McGraw-Hill
- Feller, William (1971), An Introduction to Probability Theory and Its Applications, vol. II (2nd ed.), New York: Wiley, MR 0270403
- Folland, Gerald (1989), Harmonic analysis in phase space, Princeton University Press
- Folland, Gerald (1992), Fourier analysis and its applications, Wadsworth & Brooks/Cole
- Fourier, J.B. Joseph (1822), Théorie analytique de la chaleur (in French), Paris: Firmin Didot, père et fils, OCLC 2688081
- Fourier, J.B. Joseph (1878) [1822], The Analytical Theory of Heat, translated by Alexander Freeman, The University Press (translated from French)
- Gradshteyn, Izrail Solomonovich; Ryzhik, Iosif Moiseevich; Geronimus, Yuri Veniaminovich; Tseytlin, Michail Yulyevich; Jeffrey, Alan (2015), Zwillinger, Daniel; Moll, Victor Hugo (eds.), Table of Integrals, Series, and Products, translated by Scripta Technica, Inc. (8th ed.), Academic Press, ISBN 978-0-12-384933-5
- Grafakos, Loukas (2004), Classical and Modern Fourier Analysis, Prentice-Hall, ISBN 978-0-13-035399-3
- Grafakos, Loukas; Teschl, Gerald (2013), "On Fourier transforms of radial functions and distributions", J. Fourier Anal. Appl., 19 (1): 167–179, arXiv:1112.5469, Bibcode:2013JFAA...19..167G, doi:10.1007/s00041-012-9242-5, S2CID 1280745
- Greiner, W.; Reinhardt, J. (1996), Field Quantization, Springer, ISBN 978-3-540-59179-5
- Gelfand, I.M.; Shilov, G.E. (1964), Generalized Functions, vol. 1, New York: Academic Press (translated from Russian)
- Gelfand, I.M.; Vilenkin, N.Y. (1964), Generalized Functions, vol. 4, New York: Academic Press (translated from Russian)
- Hewitt, Edwin; Ross, Kenneth A. (1970), Abstract harmonic analysis, Die Grundlehren der mathematischen Wissenschaften, Band 152, vol. II: Structure and analysis for compact groups. Analysis on locally compact Abelian groups, Springer, MR 0262773
- Hörmander, L. (1976), Linear Partial Differential Operators, vol. 1, Springer, ISBN 978-3-540-00662-6
- Howe, Roger (1980), "On the role of the Heisenberg group in harmonic analysis", Bulletin of the American Mathematical Society, 3 (2): 821–844, doi:10.1090/S0273-0979-1980-14825-9, MR 0578375
- James, J.F. (2011), A Student's Guide to Fourier Transforms (3rd ed.), Cambridge University Press, ISBN 978-0-521-17683-5
- Jordan, Camille (1883), Cours d'Analyse de l'École Polytechnique, vol. II, Calcul Intégral: Intégrales définies et indéfinies (2nd ed.), Paris
- Kaiser, Gerald (1994), "A Friendly Guide to Wavelets", Physics Today, 48 (7): 57–58, Bibcode:1995PhT....48g..57K, doi:10.1063/1.2808105, ISBN 978-0-8176-3711-8
- Kammler, David (2000), A First Course in Fourier Analysis, Prentice Hall, ISBN 978-0-13-578782-3
- Katznelson, Yitzhak (1976), An Introduction to Harmonic Analysis, Dover, ISBN 978-0-486-63331-2
- Khare, Kedar; Butola, Mansi; Rajora, Sunaina (2023), "Chapter 2.3 Fourier Transform as a Limiting Case of Fourier Series", Fourier Optics and Computational Imaging (2nd ed.), Springer, doi:10.1007/978-3-031-18353-9, ISBN 978-3-031-18353-9, S2CID 255676773
- Kirillov, Alexandre; Gvishiani, Alexei D. (1982) [1979], Theorems and Problems in Functional Analysis, Springer (translated from Russian)
- Knapp, Anthony W. (2001), Representation Theory of Semisimple Groups: An Overview Based on Examples, Princeton University Press, ISBN 978-0-691-09089-4
- Kolmogorov, Andrey Nikolaevich; Fomin, Sergei Vasilyevich (1999) [1957], Elements of the Theory of Functions and Functional Analysis, Dover (translated from Russian)
- Lado, F. (1971), "Numerical Fourier transforms in one, two, and three dimensions for liquid state calculations", Journal of Computational Physics, 8 (3): 417–433, Bibcode:1971JCoPh...8..417L, doi:10.1016/0021-9991(71)90021-0
- Müller, Meinard (2015), The Fourier Transform in a Nutshell (PDF), Springer, doi:10.1007/978-3-319-21945-5, ISBN 978-3-319-21944-8, S2CID 8691186, archived from the original (PDF) on 2016-04-08, retrieved 2016-03-28; also available in Fundamentals of Music Processing, Section 2.1, pages 40–56
- Oppenheim, Alan V.; Schafer, Ronald W.; Buck, John R. (1999), Discrete-time signal processing (2nd ed.), Upper Saddle River, N.J.: Prentice Hall, ISBN 0-13-754920-2
- Paley, R.E.A.C.; Wiener, Norbert (1934), Fourier Transforms in the Complex Domain, American Mathematical Society Colloquium Publications, Providence, Rhode Island: American Mathematical Society
- Pinsky, Mark (2002), Introduction to Fourier Analysis and Wavelets, Brooks/Cole, ISBN 978-0-534-37660-4
- Poincaré, Henri (1895), Théorie analytique de la propagation de la chaleur, Paris: Carré
- Polyanin, A. D.; Manzhirov, A. V. (1998), Handbook of Integral Equations, Boca Raton: CRC Press, ISBN 978-0-8493-2876-3
- Press, William H.; Flannery, Brian P.; Teukolsky, Saul A.; Vetterling, William T. (1992), Numerical Recipes in C: The Art of Scientific Computing (2nd ed.), Cambridge University Press
- Proakis, John G.; Manolakis, Dimitri G. (1996), Digital Signal Processing: Principles, Algorithms and Applications (3rd ed.), New Jersey: Prentice-Hall International, Bibcode:1996dspp.book.....P, ISBN 9780133942897
- Rahman, Matiur (2011), Applications of Fourier Transforms to Generalized Functions, WIT Press, ISBN 978-1-84564-564-9
- Rudin, Walter (1987), Real and Complex Analysis (3rd ed.), Singapore: McGraw Hill, ISBN 978-0-07-100276-9
- Simonen, P.; Olkkonen, H. (1985), "Fast method for computing the Fourier integral transform via Simpson's numerical integration", Journal of Biomedical Engineering, 7 (4): 337–340, doi:10.1016/0141-5425(85)90067-6, PMID 4057997
- Smith, Julius O., Mathematics of the Discrete Fourier Transform (DFT), with Audio Applications (2nd ed.), ccrma.stanford.edu, retrieved 2022-12-29: "We may think of a real sinusoid as being the sum of a positive-frequency and a negative-frequency complex sinusoid."
- Stade, Eric (2005), Fourier Analysis, Wiley, doi:10.1002/9781118165508, ISBN 978-0-471-66984-5
- Stein, Elias; Shakarchi, Rami (2003), Fourier Analysis: An introduction, Princeton University Press, ISBN 978-0-691-11384-5
- Stein, Elias; Weiss, Guido (1971), Introduction to Fourier Analysis on Euclidean Spaces, Princeton, N.J.: Princeton University Press, ISBN 978-0-691-08078-9
- Taneja, H.C. (2008), "Chapter 18: Fourier integrals and Fourier transforms", Advanced Engineering Mathematics, vol. 2, New Delhi, India: I. K. International Pvt Ltd, ISBN 978-8189866563
- Titchmarsh, E. (1986) [1948], Introduction to the Theory of Fourier Integrals (2nd ed.), Oxford: Clarendon Press, ISBN 978-0-8284-0324-5
- Vretblad, Anders (2000), Fourier Analysis and its Applications, Graduate Texts in Mathematics, vol. 223, New York: Springer, ISBN 978-0-387-00836-3
- Whittaker, E. T.; Watson, G. N. (1927), A Course of Modern Analysis (4th ed.), Cambridge University Press
- Widder, David Vernon; Wiener, Norbert (August 1938), "Remarks on the Classical Inversion Formula for the Laplace Integral", Bulletin of the American Mathematical Society, 44 (8): 573–575, doi:10.1090/s0002-9904-1938-06812-7
- Wiener, Norbert (1949), Extrapolation, Interpolation, and Smoothing of Stationary Time Series With Engineering Applications, Cambridge, Mass.: Technology Press and John Wiley & Sons and Chapman & Hall
- Wilson, R. G. (1995), Fourier Series and Optical Transform Techniques in Contemporary Optics, New York: Wiley, ISBN 978-0-471-30357-2
- Wolf, Kurt B. (1979), Integral Transforms in Science and Engineering, Springer, doi:10.1007/978-1-4757-0872-1, ISBN 978-1-4757-0874-5
- Yosida, K. (1968), Functional Analysis, Springer, ISBN 978-3-540-58654-8
External links
- Media related to Fourier transformation at Wikimedia Commons
- "Fourier transform", Encyclopedia of Mathematics, EMS Press
- Weisstein, Eric W. "Fourier Transform". MathWorld.
- Fourier Transform in Crystallography