[[File:Image restoration (motion blur, Wiener filtering).png|thumb|350px|From left: Original image, blurred image, image [[deblurring|deblurred]] using Wiener deconvolution.]]
In [[mathematics]], '''Wiener deconvolution''' is an application of the [[Wiener filter]] to the [[noise]] problems inherent in [[deconvolution]]. It works in the [[frequency domain]], attempting to minimize the impact of deconvolved noise at frequencies which have a poor [[signal-to-noise ratio]].


The Wiener deconvolution method has widespread use in [[image]] deconvolution applications, as the frequency spectrum of most visual images is fairly well behaved and may be estimated easily.
Wiener deconvolution is named after [[Norbert Wiener]].

== Definition ==

Given a system:


:<math>\ y(t) = (h*x)(t) + n(t)</math>


where <math>*</math> denotes [[convolution]] and:


*<math>\ x(t)</math> is some original signal (unknown) at time <math>\ t </math>.
*<math>\ h(t)</math> is the known [[impulse response]] of a [[LTI system theory|linear time-invariant]] system
*<math>\ n(t)</math> is some unknown additive noise, [[statistical independence|independent]] of <math>\ x(t)</math>
*<math>\ y(t)</math> is our observed signal
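
As a concrete numerical illustration of this observation model (a sketch only; the sinusoidal signal, boxcar blur and noise level below are arbitrary choices, not part of the definition), the system can be simulated with NumPy:

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(0)

t = np.arange(256)
x = np.sin(2 * np.pi * t / 64)           # original signal x(t) (unknown in practice)
h = np.ones(9) / 9.0                     # known impulse response h(t): a 9-sample boxcar blur
n = 0.1 * rng.standard_normal(t.size)    # additive noise n(t), independent of x(t)

y = np.convolve(x, h, mode="same") + n   # observed signal y(t) = (h*x)(t) + n(t)
</syntaxhighlight>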



Our goal is to find some <math>\ g(t)</math> so that we can estimate <math>\ x(t)</math> as follows:


:<math>\ \hat{x}(t) = (g*y)(t)</math>



where <math>\ \hat{x}(t)</math> is an estimate of <math>\ x(t)</math> that minimizes the [[mean square error]]
: <math>\ \epsilon(t) = \mathbb{E} \left| x(t) - \hat{x}(t) \right|^2</math>,


with <math>\ \mathbb{E}</math> denoting the [[Expected value|expectation]].
The Wiener deconvolution filter provides such a <math>\ g(t)</math>. The filter is most easily described in the [[frequency domain]]:


:<math>\ G(f) = \frac{H^*(f)S(f)}{ |H(f)|^2 S(f) + N(f) }</math>


where:


* <math>\ G(f)</math> and <math>\ H(f)</math> are the [[Fourier transform]]s of <math>\ g(t)</math> and <math>\ h(t)</math>,
* <math>\ S(f) = \mathbb{E}|X(f)|^2 </math> is the mean [[power spectral density]] of the original signal <math>\ x(t)</math>,
* <math>\ N(f) = \mathbb{E}|V(f)|^2 </math> is the mean power spectral density of the noise <math>\ n(t)</math>,
* <math>X(f)</math>, <math>Y(f)</math>, and <math>V(f)</math> are the Fourier transforms of <math>x(t)</math>, <math>y(t)</math>, and <math>n(t)</math>, respectively,
* the superscript <math>{}^*</math> denotes [[complex conjugate|complex conjugation]].
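
As a minimal sketch of this definition, assuming <math>\ H(f)</math>, <math>\ S(f)</math> and <math>\ N(f)</math> are already available as sampled arrays (the flat spectra below are illustrative assumptions only):

<syntaxhighlight lang="python">
import numpy as np

def wiener_filter(H, S, N):
    """Wiener deconvolution filter G(f) = H*(f) S(f) / (|H(f)|^2 S(f) + N(f))."""
    return np.conj(H) * S / (np.abs(H) ** 2 * S + N)

# Illustrative inputs: a 9-sample boxcar blur and flat (white) spectra.
H = np.fft.fft(np.ones(9) / 9.0, n=256)  # frequency response of the blur
S = np.full(256, 1.0)                    # assumed signal power spectral density
N = np.full(256, 1e-2)                   # assumed noise power spectral density
G = wiener_filter(H, S, N)
</syntaxhighlight>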



The filtering operation may either be carried out in the time domain, as above, or in the frequency domain:
:<math>\ \hat{X}(f) = G(f)Y(f)</math>


(where <math>\ \hat{X}(f)</math> is the Fourier transform of <math>\hat{x}(t)</math>) and then performing an [[inverse Fourier transform]] on <math>\ \hat{X}(f)</math> to obtain <math>\ \hat{x}(t)</math>.


Note that in the case of images, the arguments <math>\ t </math> and <math>\ f </math> above become two-dimensional; however, the result is the same.
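
A one-dimensional sketch of this frequency-domain pipeline, using a circular-convolution model with NumPy FFTs (the signal, blur and noise level are illustrative, and the signal power spectrum is taken from the simulated signal only because it happens to be available here):

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(0)
M = 256
t = np.arange(M)
x = np.sin(2 * np.pi * t / 64)                    # original signal
h = np.zeros(M)
h[:9] = 1.0 / 9.0                                 # blur kernel, zero-padded to length M
n = 0.05 * rng.standard_normal(M)                 # white noise, standard deviation 0.05

H = np.fft.fft(h)
y = np.real(np.fft.ifft(H * np.fft.fft(x))) + n   # observed signal (circular convolution plus noise)

S = np.abs(np.fft.fft(x)) ** 2                    # signal power spectrum (periodogram of the simulated x)
N = np.full(M, (0.05 ** 2) * M)                   # flat noise power spectrum for this DFT convention
G = np.conj(H) * S / (np.abs(H) ** 2 * S + N)     # Wiener deconvolution filter

x_hat = np.real(np.fft.ifft(G * np.fft.fft(y)))   # estimate of x(t): inverse FFT of G(f) Y(f)
</syntaxhighlight>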

== Interpretation ==

The operation of the Wiener filter becomes apparent when the filter equation above is rewritten:
:<math>
\begin{align}
G(f) & = \frac{1}{H(f)} \left[ \frac{ 1 }{ 1 + 1/(|H(f)|^2 \mathrm{SNR}(f))} \right] \\
& = \frac{1}{H(f)} \left[ \frac{ |H(f)|^2 }{ |H(f)|^2 + \frac{1}{\mathrm{SNR}(f)}} \right]
\end{align}
</math>
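
This rewriting follows from the definition of <math>\ G(f)</math> above by dividing the numerator and the denominator by <math>\ S(f)</math> and using <math>\ H^*(f)/|H(f)|^2 = 1/H(f)</math>:

:<math>
G(f) = \frac{H^*(f)S(f)}{ |H(f)|^2 S(f) + N(f) }
= \frac{H^*(f)}{ |H(f)|^2 + N(f)/S(f) }
= \frac{1}{H(f)} \left[ \frac{ |H(f)|^2 }{ |H(f)|^2 + \frac{1}{\mathrm{SNR}(f)} } \right].
</math>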


Here, <math>\ 1/H(f)</math> is the inverse of the original system, <math>\ \mathrm{SNR}(f) = S(f)/N(f)</math> is the [[signal-to-noise ratio]], and <math>\ |H(f)|^2 \mathrm{SNR}(f)</math> is the ratio of the pure filtered signal to noise spectral density. When there is zero noise (i.e. infinite signal-to-noise), the term inside the square brackets equals 1, which means that the Wiener filter is simply the inverse of the system, as we might expect. However, as the noise at certain frequencies increases, the signal-to-noise ratio drops, so the term inside the square brackets also drops. This means that the Wiener filter attenuates frequencies according to their filtered signal-to-noise ratio.


The Wiener filter equation above requires us to know the spectral content of a typical image, and also that of the noise. Often, we do not have access to these exact quantities, but we may be in a situation where good estimates can be made. For instance, in the case of photographic images, the signal (the original image) typically has strong low frequencies and weak high frequencies, while in many cases the noise content will be relatively flat with frequency.
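
A common working simplification is therefore to replace the ratio <math>\ N(f)/S(f)</math> with a single constant chosen by hand. The following is a minimal two-dimensional sketch under that assumption; the synthetic test image, motion-blur kernel, noise level and constant <code>K</code> are all illustrative choices:

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(0)

# Synthetic test image: a bright square on a dark background (stands in for a photograph).
img = np.zeros((128, 128))
img[48:80, 48:80] = 1.0

# Horizontal motion-blur kernel, zero-padded to the image size.
psf = np.zeros_like(img)
psf[0, :15] = 1.0 / 15.0

H = np.fft.fft2(psf)
blurred = np.real(np.fft.ifft2(H * np.fft.fft2(img)))
blurred += 0.01 * rng.standard_normal(img.shape)       # additive noise

K = 1e-3                                                # assumed constant noise-to-signal ratio N(f)/S(f)
G = np.conj(H) / (np.abs(H) ** 2 + K)                   # Wiener filter with the constant-ratio simplification
deblurred = np.real(np.fft.ifft2(G * np.fft.fft2(blurred)))
</syntaxhighlight>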


== Derivation ==
As mentioned above, we want to produce an estimate of the original signal that minimizes the mean square error, which may be expressed:


:<math>\ \epsilon(f) = \mathbb{E} \left| X(f) - \hat{X}(f) \right|^2</math>.


The equivalence to the previous definition of <math>\epsilon</math> can be derived using the [[Plancherel theorem]] or [[Parseval's theorem]] for the [[Fourier transform]].


If we substitute in the expression for <math>\ \hat{X}(f)</math>, the above can be rearranged to


:<math>
\begin{align}
\epsilon(f) & = \mathbb{E} \left| X(f) - G(f)Y(f) \right|^2 \\
& = \mathbb{E} \left| X(f) - G(f) \Big[ H(f)X(f) + V(f) \Big] \right|^2 \\
& = \mathbb{E} \left| \Big[ 1 - G(f)H(f) \Big] X(f) - G(f)V(f) \right|^2
\end{align}
</math>

If we expand the quadratic, we get the following:

:<math>
\begin{align}
\epsilon(f) & = \Big[ 1 - G(f)H(f) \Big] \Big[ 1 - G(f)H(f) \Big]^* \mathbb{E}|X(f)|^2 \\
& {} - \Big[ 1 - G(f)H(f) \Big] G^*(f) \mathbb{E}\Big\{ X(f)V^*(f) \Big\} \\
& {} - G(f) \Big[ 1 - G(f)H(f) \Big]^* \mathbb{E}\Big\{ V(f)X^*(f) \Big\} \\
& {} + G(f)G^*(f) \mathbb{E}|V(f)|^2
\end{align}
</math>

However, we are assuming that the noise is independent of the signal, therefore:
:<math>\ \mathbb{E}\Big\{X(f)V^*(f)\Big\} = \mathbb{E}\Big\{V(f)X^*(f)\Big\} = 0</math>


Substituting the power spectral densities <math>\ S(f) </math> and <math>\ N(f) </math>, we have:



:<math>
\begin{align}
\epsilon(f) & = \Big[ 1-G(f)H(f) \Big]\Big[ 1-G(f)H(f) \Big]^* S(f) \\
& {} + G(f)G^*(f)N(f)
\end{align}
</math>


To find the minimum error value, we calculate the [[Wirtinger derivatives|Wirtinger derivative]] with respect to <math>\ G(f)</math> and set it equal to zero.


:<math>\
\frac{d\epsilon(f)}{dG(f)} = 0 \Rightarrow G^*(f)N(f) - H(f)\Big[1 - G(f)H(f)\Big]^* S(f) = 0
</math>


This final equality can be rearranged to give the Wiener filter.
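
Explicitly, taking the complex conjugate of this condition and solving for <math>\ G(f)</math> gives:

:<math>
G(f)N(f) = H^*(f)\Big[ 1 - G(f)H(f) \Big] S(f)
\quad\Longrightarrow\quad
G(f)\Big[ |H(f)|^2 S(f) + N(f) \Big] = H^*(f)S(f)
\quad\Longrightarrow\quad
G(f) = \frac{H^*(f)S(f)}{ |H(f)|^2 S(f) + N(f) }.
</math>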


== See also ==
* [[Information field theory]]
* [[Deconvolution]]
* [[Wiener filter]]
* [[Blind deconvolution]]
* [[Fourier transform]]
* {{Commons|File:Image restoration (motion blur, Wiener filtering).png|An example of Wiener deconvolution on a motion-blurred image (and source code in MATLAB/GNU Octave).}}


== References ==
* Rafael Gonzalez, Richard Woods, and Steven Eddins. ''Digital Image Processing Using Matlab''. Prentice Hall, 2003.


== External links ==

* [http://www.owlnet.rice.edu/~elec539/Projects99/BACH/proj2/blind/bd.html Comparison of different deconvolution methods.]
* [http://cnx.org/content/m13144/latest/ Deconvolution with a Wiener filter]

[[Category:Estimation theory]]
[[Category:Signal estimation]]
[[Category:Image noise reduction techniques]]
