
Talk:Convolution/Archive 1

From Wikipedia, the free encyclopedia

Introduction

I think the introduction to this article needs cleaning up, as 'A convolution is a kind of very general moving average' is vague and doesn't offer any helpful insights.

I agree, and changed the intro. Now it is more complicated, but much more objective. Also replaced the "fixed filter impulse response", which is a narrow use, with LTI system which is broader. --D1ma5ad (talk) 22:52, 2 March 2008 (UTC)
Not sure what you mean. What LTI system can not be described as a filter? Dicklyon (talk) 03:20, 3 March 2008 (UTC)
I was thinking about a control system. I guess you could classify it as a filter, but it is not written so in any other article I could find. If you intend to switch back to filter, at least wikify it to point to Filter#Signal_processing.
--D1ma5ad (talk) 12:57, 7 March 2008 (UTC)

I originally wrote:

  • In electrical engineering, the output of a linear system is the convolution of the input with the system's response to an impulse.

Akella changed it to:

  • In Systems Science, the output of a linear system is the convolution of the input with the system's response to an impulse.

I have a problem with that, because I know of no such discipline as Systems Science, and neither, at this point, does Wikipedia. I know of "system analysis," but as a rather imprecisely defined term that once may have had an engineering meaning and then came to mean something like a high-ranking computer programmer...

I realize that the scope is much broader than electrical engineering, but I was trying to list some contexts, that might be familiar to readers, in which convolution makes an appearance.

Also, in electrical engineering, the phrase "linear system" is often used to refer to precisely the kind of system described by a one-dimensional convolution integral, whereas in other fields, such as, say, Linear algebra, the scope of the term "linear system" is much broader, so you couldn't just say "linear system" and leave it at that.

So, I've changed it to:

  • In electrical engineering and other disciplines, the output of a linear system is the convolution of the input with the system's response to an impulse

Dpbsmith 23:58, 26 Feb 2004 (UTC)

It might be worth noting that the university I'm at has a School of Systems Engineering, which includes departments of Electrical Engineering and Cybernetics (as well as Computer Science). Perhaps the term we're after is somewhere amongst that lot. - IMSoP 00:34, 27 Feb 2004 (UTC)
I'm certainly not going to get into an edit war over it. I do think electrical engineering should be listed first, for the very personal POV reason that it's where I first encountered convolutions, and that any other disciplines that are mentioned should be ones that are a) readily recognizable to a reader, and b) ones in which any graduate with a degree in that discipline would instantly recognize the word "convolution" and know what it meant.... Dpbsmith 11:53, 27 Feb 2004 (UTC)
How about "In disciplines such as Electrical Engineering and Cybernetics...", just to be a bit less vague than "and other disciplines"? - IMSoP 18:11, 27 Feb 2004 (UTC)
Your call. Your edit. "Be bold." Since I think it's OK as is, I won't change it myself, but I certainly wouldn't revert it if you change it. In what course did you encounter the convolution integral, and with what discipline do you associate it? Dpbsmith 00:44, 28 Feb 2004 (UTC)
I believe that the current version ("In mathematics and, in particular, functional analysis, convolution is a mathematical operator ...") gives a much broader sense of what convolution is. I also came to know convolution in an electrical engineering class in which it is used a lot, but it is, essentially, a mathematical operation! It is like saying that the Laplace transform belongs to electronics or control engineering because it is used there extensively. Just as there was no electronics when Laplace lived, I believe that convolution predates electrical engineering[citation needed]. --D1ma5ad (talk) 22:00, 2 March 2008 (UTC)

Constant Factors

Does the factor in the convolution theorem apply where Laplace transforms are concerned? I can see where it comes in for Fourier transforms, but not Laplace transforms. --Glengarry 01:34, 15 Jul 2004 (UTC)

Constant factors like this only depend upon the normalization convention used for the transform in question. (There are normalizations of the Fourier transform, for example, where the constant is unity.) —Steven G. Johnson 04:56, Jul 15, 2004 (UTC)


Etymology

Anybody know where the term "convolution" comes from? It'd be nice to add this bit of historical trivia.

It is a translation of the German word "Faltung," which may also be translated as "wrinkle." I'm not certain, but it is possible that the concept originated in the German-speaking world. Lovibond 15:24, 6 June 2007 (UTC)

Merging from PlanetMath

PlanetMath has a nice GFDL article on convolution, see http://planetmath.org/?op=getobj&from=objects&id=2790 Anybody willing to merge that stuff? Oleg Alexandrov (talk) 02:17, 24 November 2005 (UTC)

Hi - The link to 'Convolution' on Planet Math in the external links section seems to be broken. --anon

Now it works. I guess the PlanetMath website was down or something. Oleg Alexandrov (talk) 22:42, 6 January 2006 (UTC)

In risk theory, the distribution of the sum of n i.i.d. random variables is found by convolution.

multi-dimensional convolution

For things like image processing we have 2-D convolution. Could someone who knows about this stuff explain how this works, and the rules for separating it into lower orders (I think it's the same as the theorem that allows a 2-D FT to be composed using two 1-D FTs)? --njh 05:25, 19 April 2006 (UTC)
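A minimal MATLAB/Octave sketch of the separability rule in question (my own illustration; the kernel and array sizes are arbitrary): a 2-D kernel that factors as an outer product can be applied as two 1-D convolutions.

% Separable 2-D convolution as two 1-D convolutions (illustration only).
A = rand(64, 64);                % stand-in for an image
v = [1 2 1]' / 4;                % 1-D smoothing kernel, as a column
h = [1 2 1]  / 4;                % the same kernel, as a row
K = v * h;                       % separable 2-D kernel (outer product)

full2d = conv2(A, K);            % direct 2-D convolution
passes = conv2(conv2(A, v), h);  % convolve down the columns, then along the rows

max(abs(full2d(:) - passes(:)))  % agrees up to round-off error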

Convolution kernel

Convolution kernel points here. This article should define it. - 72.58.19.66 05:55, 23 May 2006 (UTC)

Shouldn't that rather point to / be defined in integral transform? — MFH:Talk 13:49, 2 June 2006 (UTC)

Convolution of measures

The notation μ×ν (used in section Convolution of measures) is not explained here, nor in Borel measure or any other (more or less "directly") linked page. I am not used to working on this subject, but from the background I have, I suspect it should rather be denoted as a tensor product. (The only definitions I know for a "cross product" are the vector cross product and the Cartesian product of sets, but not of maps.) — MFH:Talk 13:49, 2 June 2006 (UTC)

Convolution Matrices needed

Please add a page listing different image processing convolution kernel matrices since there seems to be no good general reference for them on the web. Include the matrix values as well as a description of what the convolution kernel does.
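In the meantime, a sketch of the kind of listing being requested, using a few standard textbook kernels (the values are the usual ones, not taken from this article):

% Commonly cited 3x3 kernels, applied with 2-D convolution (illustration only).
box_blur = ones(3) / 9;                  % mean (box) filter: uniform blur
gauss3   = [1 2 1; 2 4 2; 1 2 1] / 16;   % small Gaussian approximation: soft blur
sharpen  = [0 -1 0; -1 5 -1; 0 -1 0];    % sharpening: boosts local contrast
sobel_x  = [-1 0 1; -2 0 2; -1 0 1];     % Sobel: horizontal gradient / edges

A = rand(100, 100);                      % stand-in for a grayscale image
blurred   = conv2(A, gauss3,  'same');
sharpened = conv2(A, sharpen, 'same');
edges_x   = conv2(A, sobel_x, 'same');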

It might be nice to see a discussion of convergence issues. E.g. the convolution operator F(f) := f*g is a linear operator on L^1 if g is in L^1. What other spaces does that hold for?

Integration range

You wrote: "The integration range depends on the domain on which the functions are defined." Could you please be more specific about the endpoints of the integration interval?

For example, if f has a normal distribution and g has a uniform distribution on the interval (a,b), what exactly are the endpoints of the integration interval?

Or if f has a normal distribution and g(x) has the distribution a*(b+1)*(1-a*x)**b on the interval (0,1/a), what exactly are the endpoints of the integration interval?

Or if f has a normal distribution and g(x) has the exponential distribution λ*e**(-λ*x) on the interval (0,infinity), what exactly are the endpoints of the integration interval?

Can you give a general rule?

Fast Convolution

I added some bare-bones information about fast convolution. If someone who knows more about it would add some juice, that'd be nice. -Mojodaddy

Linear Convolution Versus Circular Convolution

There's a very thin article on circular convolution; I think there should be a section here comparing linear and circular convolution. -Mojodaddy
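For what it's worth, a small MATLAB/Octave sketch of the difference (sequence values are arbitrary): circular convolution computed with the DFT matches linear convolution only after zero-padding to at least length(a)+length(b)-1.

% Linear vs. circular convolution (illustrative values).
a = [1 2 3 4];
b = [1 1 1];

linear = conv(a, b);                         % linear convolution, length 6
circ4  = real(ifft(fft(a) .* fft(b, 4)));    % circular, length 4: time-aliased
circ6  = real(ifft(fft(a, 6) .* fft(b, 6))); % zero-padded to length 6

max(abs(circ6 - linear))                     % ~ 0: padding removes the aliasing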

Definition

The definition is not satisfying. What exactly are f and g? Something like ? --Bfrey 14:58, 10 May 2006 (UTC)

As of now, the introduction defines f and g as functions. I don't believe that their set needs to be defined, but a more formal definition could be included if you think it's needed. As for me, that notation is not very understandable; regular English is better.
--D1ma5ad (talk) 21:43, 2 March 2008 (UTC)


As far as I can see, the limits in the integral after the change of variable are wrong. Shouldn't the second integral be:

Or am I going mad? Oli Filth 22:38, 1 May 2007 (UTC)

I believe you are indeed going mad =]. I also believe that the definition with as limits is in fact less generic than with . Let me explain:
If and then
and since
--D1ma5ad (talk) 21:04, 2 March 2008 (UTC)
Perhaps we're talking at cross purposes here? When I wrote this, I was referring to the article in its then-current state. The change of variables the article was then referring to would also require an accompanying change in the integral limits. Oli Filth(talk) 21:36, 2 March 2008 (UTC)

Simple practical application

I'm moving this off the article because it is licensed under creative commons attribution noncommercial share-alike 2.5. Noncommercial material cannot be used in articles to allow reuse of Wikipedia on commercial sites like answers.com (see copyright problems). --h2g2bob (talk) 11:28, 20 May 2007 (UTC)

Simple practical application

(transcribed from prof. Arthur Mattuck video lecture available @ MIT's OCW: 18.03 Differential Equations, Spring 2006 - Lecture 21: http://ocw.mit.edu/OcwWeb/Mathematics/18-03Spring-2006/CourseHome/index.htm)

Problem:

A nuclear power plant dumps radioactive waste at a rate of f(t) (kg/year).

The approximate amount of radioactive material dumped in the interval [ti ; ti+1] is f(ti)·Δt, where Δt = ti+1 − ti.

Starting at t = 0, what is the amount of radioactive material present in the pile at time t?

(As more radioactive material is getting dumped, the existing material decays. The problem is focused only on the material that is still radioactive at time t.)

Solution:

We model the radioactive decay with a simple exponential: if the initial amount is A0, then at time t there is A0·e^(−kt) material left.

(k depends on the type of radioactive material used, and for simplicity it is assumed there is only one type of material being dumped - thus k is constant).

Replace A0 in the expression above with f(ti)·Δt and we have:

f(ti)·Δt · e^(−kt)

So f(ti)·Δt is the amount dumped at time ti. How much of this material is still left at time t?

Amount left at time t from that contribution is f(ti)·Δt · e^(−k(t − ti)).

Total amount left at time t (starting at t = 0) is approximately the sum over i of f(ti)·e^(−k(t − ti))·Δt.

Let Δt → 0; then the sum becomes an integral: the amount at time t is ∫ from 0 to t of f(τ)·e^(−k(t−τ)) dτ, i.e. the convolution of f with the decay kernel e^(−kt).

Radioactive decay of actual nuclear waste is more complicated in several ways. A radioactive element decays into a different element that typically is itself radioactive. The daughter element may emit a different kind of radiation -- e.g. alpha particles (helium nuclei), beta particles (electrons), gamma rays (photons), or neutrons. If you concentrate on radioactivity of just one kind, you could calculate the radioactivity at some future time with a convolution integral, but with a more complicated decay function. In addition, the original nuclear waste probably included more than one kind of radioactive element.
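A numerical sketch of the same model (my own illustration; the dumping rate, decay constant, and time grid are made-up values), showing that the Riemann sum above is just a discrete convolution of the dumping rate with the decay kernel:

% Amount still radioactive at each time, as a discrete convolution (sketch).
dt = 0.01;                        % years
t  = 0:dt:30;
f  = 1 + 0.5*sin(t);              % dumping rate in kg/year (made up)
k  = 0.1;                         % decay constant in 1/year (made up)

kernel = exp(-k*t);               % fraction surviving after a delay t
pile   = conv(f, kernel) * dt;    % Riemann sum of f(tau)*exp(-k*(t-tau))
pile   = pile(1:length(t));       % keep the part on the same time grid as t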

After discussion with Mitrut (talk · contribs), it probably is ok to use. --h2g2bob (talk) 09:22, 26 May 2007 (UTC)

Relationship to Laplace transform

Should this be in the article?

--h2g2bob (talk) 21:28, 3 June 2007 (UTC)

This is just the convolution theorem, which is already mentioned: Convolution#Convolution_theorem. Oli Filth 21:30, 3 June 2007 (UTC)

"Convolution power" stuff

I've removed the convolution power stuff again, because it's still not defined adequately, and even if it was, I don't believe it belongs in this article. See my comments below:



The meaning of hasn't been defined. Or is this supposed to be the definition?


The meaning of hasn't been defined. Or is this supposed to be the definition?



The meaning of hasn't been defined. Or is this supposed to be the definition?

Besides which, the notation is now inconsistent. In , the superscript term indicates how many times to convolve by itself. uses the same notation (i.e. ), but means something completely different. Do you have a reference that uses this notation?

More importantly, I think that "convolution power" as portrayed here is actually defined via the convolution theorem, i.e.

F{f^(*n)} = (F{f})^n, so that f^(*n) = F^(-1){ (F{f})^n }.
In which case, all of these aren't basic properties of convolution at all, merely a set of definitions. Therefore, it should belong in its own article, or perhaps in the convolution theorem article. Oli Filth 17:47, 4 July 2007 (UTC)
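A quick numerical check of that reading (my own sketch, not from the thread): the n-fold self-convolution of a sequence agrees with the inverse transform of the n-th power of its transform, provided everything is zero-padded to the full output length.

% Convolution power via the convolution theorem (illustrative sequence).
f = [1 2 0 1 3];
n = 3;

direct = f;
for i = 2:n
    direct = conv(direct, f);          % n-fold linear self-convolution
end

N = n*length(f) - (n-1);               % length of the n-fold convolution
via_fft = real(ifft(fft(f, N).^n));    % inverse DFT of F(f)^n

max(abs(direct - via_fft))             % ~ 0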


I think the Taylor series property is an important property of the convolution. You may write a new article about the convolution power if you would like to, but for now I have put it back. I surely think the Taylor series property should be here.

Bombshell 18:05, 4 July 2007 (UTC)

This isn't a "property" of convolution. It's merely a set of definitions, and so shouldn't be in the "Properties" section. Can you point to any of them which is an actual property of convolution?

Your binomial expansion isn't correct; it should be .
But let's look at this. As an example, take , the rect function (an function) and . We get:
where that's the triangular function.
On the other hand, we have:
Note that you haven't defined , but I'm assuming that it follows from the convolution theorem definition.
Clearly these aren't equal. You could argue that of course it's not going to work, because . But even if we set to solve this problem, the right-hand side goes to infinity!
Off the top of my head, I can't think of an which satisfies both sides.

Only because you've defined it as such above! This isn't a property.

What does this even mean?
Can you cite a reference which actually uses these "equalities"? Oli Filth 19:06, 4 July 2007 (UTC)

I'm sorry, but I made some mistakes in the equations; of course, one needs to replace 1 with the Dirac delta

Now the equations are exact: for your example:

we have on the one hand:

and on the other:

The last equation becomes:


I've used these properties in the study of renewal theory, where they are used to make the solution of the renewal equation "easier":

E.g.: renewal equation: G = g + G * F

Thus: and therefore

However, I admit I made a lot of mistakes the first time, but I think the Taylor series property is very useful

Bombshell 22:14, 4 July 2007 (UTC)

Ok, but now your corrected implications (1st and 3rd in the Convolution#Taylor series section) are just direct applications of the convolution theorem. So they're not particular to Taylor series; this would work with any basic identity. The 2nd implication is still pointless, because it's still just a repetition of the definition given earlier. I'm too tired to think about the 4th implication, but as far as I can see, it still doesn't say anything useful, because it's entirely dependent on the definition of .
There's also still the problem of inconsistent notation, and a lack of any references for this stuff.
Therefore, I'm going to remove this once again, because it's completely out of place in this article. If you want to continue with it, please create the Convolution power article. That way, it can be edited and fixed without affecting the convolution article.
Oli Filth 22:59, 4 July 2007 (UTC)
I may be wrong here, but I seem to remember that the convolution of two functions f and g is as differentiable as the more differentiable of f and g, which seems to preclude f ∗ g ever being the Dirac distribution δ. If so, this kills the proposed definition of negative convolution power f∗(−n). Sullivan.t.j 23:49, 4 July 2007 (UTC)
I think you're wrong, because as I just demonstrated, in renewal theory, one computes:

You can prove it as follows:

(properties of convolution)
(Taylor series)

which proves it

I'll start a new article Convolution power.

You're welcome to enhance it

Bombshell 16:23, 5 July 2007 (UTC)

Proof of Associativity

Is there maybe a way that a proof of commutativity could be added, using the simple substitution z = x − y? I'm sorry, but I have no experience with LaTeX. Ameeya 19:07, 21 July 2007 (UTC)

Derivative of convolution

The Convolution#Differentiation rule section was recently updated from:

to

I'm pretty sure it was correct the first time.

We know (using Laplace transform#Proof of the Laplace transform of a function's derivative) that:

and that:

Therefore, I've changed it back for now. Oli Filth 15:36, 29 August 2007 (UTC)

Sorry, my mistake, thank you for correcting me :) Crisófilax 16:16, 30 August 2007 (UTC)

Mathworld lists it as the sum of the two terms: http://mathworld.wolfram.com/Convolution.html Can someone look it up in a textbook or verify numerically in Matlab? I'm changing it back to a sum. AhmedFasih (talk) 18:45, 25 February 2008 (UTC)

Update: I tried a simple test (convolving a triangle with a sinusoid, then differentiating) in Matlab, the Mathworld version, D(f*g)=Df*g+f*Dg, is numerically equivalent to the sum of the two expressions previously given here. I am inclined to believe the Mathworld version. AhmedFasih (talk) 18:56, 25 February 2008 (UTC)
A few points:
  • I'm aware that Mathworld differs, but I'd stake my life on the fact that it's incorrect on this one.
  • See the derivation above for why I think Mathworld is wrong.
  • Numerical evaluations of discrete-time convolution can't prove anything about the continuous-time convolution. (The most they can do is indicate what may be the case.)
  • However, it would seem you've messed up your experiment; try the code below:
t = [-4*pi:0.01:4*pi];                                % time grid
f = sin(t);                                           % smooth test signal
g = zeros(size(t));                                   % triangle pulse, built next
g(length(t)/2 - 1 - (0:200)) = linspace(1,0,201);     % rising edge of the triangle
g(length(t)/2 + (0:200)) = linspace(1,0,201);         % falling edge of the triangle
Df = f(2:end) - f(1:end-1);                           % first difference ~ derivative of f
Dg = g(2:end) - g(1:end-1);                           % first difference ~ derivative of g
Df_g = conv(Df, g);
f_Dg = conv(f, Dg);
fg = conv(f, g);
Dfg = fg(2:end) - fg(1:end-1);                        % derivative of the convolution
figure
cla, hold on, plot(Dfg, 'b'), plot(f_Dg, 'r'), plot(Df_g, 'k'), plot(Df_g + f_Dg, 'm')
Obviously, if D(f*g) = D(f)*g = f*D(g), then clearly D(f)*g + f*D(g) = 2.D(f)*g, which is what the example above shows.
  • Either way, you and I playing around with Matlab is original research; this can't be the basis of anything in the article.
Based on all of this, I'm going to remove the statement of the "convolution rule" until we can get this straightened out. Oli Filth(talk) 20:04, 25 February 2008 (UTC)
Actually, I'm not. See the ref that Michael Slone cited below, or [1], or p.582 of "Digital Image Processing", Gonzalez + Woods, 2nd. ed. I think we can safely assume that Mathworld is wrong on this one. Oli Filth(talk) 20:14, 25 February 2008 (UTC)
Yes, MathWorld just flubbed it. The derivative is just another impulse response convolution, and these commute. There's no add involved; someone who was editing that page probably got confused, thinking the * was a multiply. Dicklyon (talk) 03:56, 26 February 2008 (UTC)
FWIW, Mathworld is now corrected. Oli Filth(talk) 22:05, 2 April 2008 (UTC)

In the discrete case (if one sums over all of Z), one can directly compute that D(f * g) = (Df * g). Theorem 9.3 in Wheeden and Zygmund asserts (omitting some details) that if f is in Lp and K is a sufficiently smooth function with compact support, then D(f*K) = f*(DK). The proof appears on pp. 146 – 147. I am no analyst, but this appears to support the claim that convolution does not respect the Leibniz rule. Michael Slone (talk) 20:03, 25 February 2008 (UTC)

I feel you, I just looked it up in my Kamen/Heck "Fundamentals of signals and systems," 2nd ed., p. 125 and you are 100% right. Whew, a research problem just got a little bit easier, thanks much. AhmedFasih (talk) 13:11, 26 February 2008 (UTC)

Comments

  • I would like to see some discussion of the convolution in several variables, since this is important in areas of mathematics outside signal processing, such as partial differential equations and harmonic analysis. For instance, solutions to a linear pde such as the heat equation can be obtained on suitable domains by taking a convolution with a fundamental solution. I also note that some of the applications listed in the article clearly require convolutions of several variables. However, the current article focuses exclusively on the one-variable case. I think this is a rather serious limitation.
  • Nothing is said here about the domain of definition, merely that ƒ and g are two "functions". I would like to see some discussion of the fact that the convolution is well-defined (by the formula given) if ƒ and g are two Lebesgue integrable functions (i.e., L1 functions). Moreover, if just ƒ is L1 and g is Lp, then ƒ*g is Lp.
  • Furthermore, continuing the above comment, some of the basic analytic properties of convolution should also be covered. Among these are the estimate that if ƒ ∈ L1(Rd) and g ∈ Lp(Rd), then ‖ƒ∗g‖p ≤ ‖ƒ‖1‖g‖p.
From this estimate, a great many important results follow on the convergence in the mean of convolutions of functions. It can be used, for instance, to show that smooth functions with compact support are dense in the Lp spaces. The process of smoothing a function by taking a convolution with a mollifier also deserves to be included, as this has applications not only in proving the aforementioned result but also as a ubiquitous principle used in applications (such as Gaussian blur); a small numerical sketch of this smoothing effect is given after this list.
  • I also feel that a section should be created which at least mentions convolution of a function with a distribution (and related definitions), since these are significant for applications to PDE where one needs to be able to make sense of expressions such as ƒ*δ where δ is the delta function. The article itself seems to treat convolution with a distribution as well-defined implicitly. I would prefer to have this made explicit, as well as the precise conditions under which the convolution may be defined.
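A minimal MATLAB/Octave illustration of the smoothing (mollification) point above (my own sketch; the step function and bump width are arbitrary choices):

% Convolving a discontinuous step with a narrow, unit-area smooth bump
% produces a smooth approximation of the step (a crude mollifier).
dx  = 0.001;
x   = -2:dx:2;
f   = double(x > 0);                    % step function (discontinuous)
w   = 0.1;                              % bump width (arbitrary)
phi = exp(-(x/w).^2);                   % smooth bump
phi = phi / (sum(phi)*dx);              % normalise to unit integral

smoothed = conv(f, phi, 'same') * dx;   % smooth, close to f away from the jump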

I'm not sure how to reorganize the article. I am leaning towards an expansion of the "Definition" section to include the case of several variables, and moving some of the discussion particular to signal processing somewhere else (I'm not sure where yet). The definition section should, I think, be followed by a "Domain of definition" section containing the details of what sort of functions (and distributions) are allowed. This should probably be followed by "Properties" (I think the circular and discrete convolutions should be grouped together with the other generalizations given towards the end of the article). I would then like to expand the "Properties" section. siℓℓy rabbit (talk) 14:03, 16 July 2008 (UTC)

I would prefer the article to build up to generalizations, like several variables. Many important points can be made first, with just the one-variable case. Fourier transforms can be defined in multiple dimensions, but we don't begin the article with that.
--Bob K (talk) 21:29, 17 July 2008 (UTC)
It is my intention to build up to generalizations (such as the convolution of a function with a distribution). However, I don't think the convolution on Rd is a very substantial generalization. The discrete version immediately following the "Definition" section is much more substantial, and probably less often used. siℓℓy rabbit (talk) 22:53, 17 July 2008 (UTC)
Not that it matters, but I would have guessed that the discrete version is the most commonly used form in this "digital age" of FIR filtering.
--Bob K (talk) 01:07, 18 July 2008 (UTC)

References

Some additional references I plan to use in my improvements of the article are:

  • Sobolev, V.I. (2001) [1994], "Convolution of functions", Encyclopedia of Mathematics, EMS Press.
  • Hörmander, L. (1983), The analysis of linear partial differential operators I, Grundl. Math. Wissenschaft., vol. 256, Springer, ISBN 3-540-12104-8, MR 0717035.
  • Stein, Elias; Weiss, Guido (1971), Introduction to Fourier Analysis on Euclidean Spaces, Princeton University Press, ISBN 0-691-08078-X.
  • Titchmarsh, E (1948), Introduction to the theory of Fourier integrals (2nd ed.) (published 1986), ISBN 978-0828403245.

I plan to add more to this list as I progress. siℓℓy rabbit (talk) 15:07, 16 July 2008 (UTC)

Cyclic discrete convolution

I think this notation is too cryptic:

Please keep in mind that this is an encyclopedia, and we have plenty of space.

--Bob K (talk) 15:46, 18 July 2008 (UTC)

I think that notation is extremely clarifying; we should probably add it alongside the more traditional notation, since we have room. Dicklyon (talk) 16:00, 18 July 2008 (UTC)


The "ugly" one, i.e.:

is understandable to me. It's what you get if you periodically extend the b sequence with period n, reverse it in time, delay it by k, and then sum the product of the sequences over the extent of a (or equivalently to infinity).

So a more elegant way of making the point without losing clarity is:

      for k=0,1,2,...,n-1

where bn is the periodic extension of b:

Those who already understand normal convolution will be able to leverage that insight to help them understand this. Those who don't understand normal convolution either shouldn't be here at all, or they should be inspired to go back and brush up on it.

--Bob K (talk) 20:44, 18 July 2008 (UTC)

Why not just leverage the circularity of the mod function at least:
Dicklyon (talk) 21:26, 18 July 2008 (UTC)

That works too. But since this whole article is about convolution, it's better to reinforce that relationship, IMHO. Since they are not mutually exclusive we can, and probably should, mention both explanations. Let the reader pick his own poison.

--Bob K (talk) 21:34, 18 July 2008 (UTC)
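For concreteness, a short MATLAB/Octave sketch of the two equivalent readings discussed above (sequence values are arbitrary): direct mod-n indexing of b, checked against the DFT form.

% Cyclic (circular) convolution via mod-n indexing, compared with the DFT.
a = [1 2 3 4 5];
b = [2 0 1 0 0];
n = length(a);

c = zeros(1, n);
for k = 0:n-1
    for m = 0:n-1
        c(k+1) = c(k+1) + a(m+1) * b(mod(k - m, n) + 1);   % b indexed modulo n
    end
end

c_fft = real(ifft(fft(a) .* fft(b)));   % same result via the DFT
max(abs(c - c_fft))                     % ~ 0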

I think the section looks pretty good now. One thing I would like is a good reference for the periodization of the discrete functions. A text on Fourier series seems a natural place to look. I'll see what I can dig up. siℓℓy rabbit (talk) 03:44, 19 July 2008 (UTC)

Translation invariance

I have added a short and rough section on translation invariance. Obviously more should be said, since this is "the" most important property of convolutions. Hopefully we can iterate to get this new addition of mine into a more reasonable shape. siℓℓy rabbit (talk) 16:12, 24 July 2008 (UTC)

Note on associativity

(H * δ') * 1 = (H' * δ) * 1 = (δ * δ) * 1 = δ * 1 = 1

H * (δ' * 1) = H * (δ * 1') = H * (δ * 0) = H * 0 = 0

where H represents Heaviside's step function, whose derivative is Dirac's delta function

    • Sorry for the form in which I am presenting this, but I am not familiar with entering math equations.
  • Hello, this article seems to be about convolution of functions. If you want to consider distributions, then you must impose another assumption, for example: one of the distributions must have compact support for associativity to hold. In your example, certainly only Dirac has compact support. 78.128.195.197 (talk) 21:37, 10 March 2009 (UTC)
  • * Sorry for the errors; I was writing about commutativity. For associativity to hold, all but one of the distributions must have compact support. At least I think so. 78.128.195.197 (talk) 23:23, 10 March 2009 (UTC)

Intuition

I am planning on adding a section to this article with some intuitive conceptions of the convolution (I am using Professor David Jerison's "bank account" analogy) with regard to its usage in solving ordinary differential equations with rest initial conditions. This intuition is primarily useful for introductory undergraduate differential equations, and as far as I can tell, not so much for advanced mathematics. This is possibly more relevant to the article superposition principle or Green's function and I would not have any objections if it moved over there, but it would be nice to see a more layman-friendly explanation here. — Edward Z. Yang(Talk) 22:10, 23 April 2009 (UTC)

Ed, I reverted the long intuition example section. If this kind of example can be sourced, maybe we can use it; but I didn't have the impression that it was particularly helpful or intuitive, being basically a bunch of integral math and such, just like the rest of the article. Convolution is not so tricky that we need to resort to bank deposits to explain it, I think. Dicklyon (talk) 05:18, 24 April 2009 (UTC)
Source is here, albeit for a different example. This also works, although I am not sure if the link will always be available there. While the convolution may not be tricky (it is a relatively simple mathematical operation), many people will find the existence of a concrete example and/or usage of the term to help them understand what a convolution is, and why they might care. I have not restored the section. — Edward Z. Yang(Talk) 13:54, 24 April 2009 (UTC)
Those don't support your novel approach of making a continuous compound interest problem out of it, which is itself an unfamiliar concept to anyone who is likely to have trouble with the math. And they're not exactly WP:RS. Dicklyon (talk) 15:44, 24 April 2009 (UTC)
Sure. We can remove the analogy from it, and then reframe the section as a proof of the fundamental solution theorem. I would call them reputable sources under the guidelines for self-published sources, but if that's not sufficient I can also find alternative sources. — Edward Z. Yang(Talk) 16:45, 24 April 2009 (UTC)
In an area with so much published, there's probably no reason to resort to self-published material. Dicklyon (talk) 19:37, 24 April 2009 (UTC)
It seems to me that because it's so elementary, you're not going to find peer-reviewed material appropriate for this article. Would a textbook be acceptable, in your eyes? — Edward Z. Yang(Talk) 22:12, 24 April 2009 (UTC)

Special characters template

I don't think this article needs the special characters template at all. The template makes sense if an article would unexpectedly have special characters (for example, if they appeared in a biography). But so many math articles have "special" characters that it's not helpful to put a template on every math article. — Carl (CBM · talk) 10:38, 16 May 2009 (UTC)

Good point, but that doesn't help the guy who's looking at his first Wikipedia article and seeing weird empty boxes. In fact I put up with those empty boxes long after my first article, because I simply didn't know what to do about them. I thought everyone was seeing the same thing I was seeing and that someday Wikipedia would fix "their problem".
--Bob K (talk) 13:12, 16 May 2009 (UTC)
If we wanted a note on every article with special characters, we would do that directly in the software. I don't think we should put a template on the vast majority of mathematics articles; in practice we don't seem to use it very much. — Carl (CBM · talk) 13:17, 16 May 2009 (UTC)

intro section

This section is so incredibly abstract! Get a grip, you lofty mathematicians! I much prefer Wolfram's opening sentence: "A convolution is an integral that expresses the amount of overlap of one function g as it is shifted over another function f." -Reddaly (talk) 22:05, 4 August 2008 (UTC)

Good point. I prefer starting out very basic and working up to the "heights". Then readers can drop out when they find themselves outside their own comfort zone.
--Bob K (talk) 01:12, 6 August 2008 (UTC)
Could some application be mentioned in the intro? (A more detailed entry about the application can be put in the applications section.) This introduction is very mathematical, which may not be appropriate because the audience is not strictly mathematicians. I propose the following blurb be added to the intro: "In physical systems, the convolution operation models the interaction of signals with systems. That is, the output y(t) of an electronic filter may be predicted by convolving the input signal x(t) with a characterization of the system h(t): y(t) = x(t) * h(t)." neffk (talk) 22:12, 22 June 2009 (UTC)
Something like this should be added to the lead. 173.75.157.179 (talk) 01:03, 9 October 2009 (UTC)

Explanation of revert

Translation invariant operators on Lp spaces are by definition operators that commute with all translations. This is completely standard terminology: see any good book on Fourier analysis (E.g. Stein and Weiss, Duoandikotxea, Grafakos, etc.) 71.182.236.76 (talk) 12:44, 31 October 2009 (UTC)

differentiation and convolution

Two positive, integrable, and infinitely differentiable functions may have a nowhere continuous convolution. That is, there are infinitely differentiable (even entire) probability densities with a nowhere continuous convolution. If this happens, then the functions are badly unbounded.

I have constructed examples in the paper below and wrote a small correction to Wikipedia some time ago, which has been "corrected". So I have added a line under the differentiation subsection and a reference to my paper. I hope this will not appear too selfish.

MR1654158 (99m:60026)

Uludağ, A. Muhammed(F-GREN-FM) On possible deterioration of smoothness under the operation of convolution. (English summary) J. Math. Anal. Appl. 227 (1998), no. 2, 335--358.

As the author states in the introduction: "It is known that, as a rule, the operation of convolution improves smoothness." In this paper, the above statement is looked at critically. An example is constructed of two probability densities which are restrictions to $\bold R$ of entire functions, though their convolution possesses an infinite essential supremum on each interval. —Preceding unsigned comment added by 134.206.80.237 (talk) 17:52, 4 March 2010 (UTC)

Photos, audio?

This article would benefit from some photographic examples. Photos would be much more accessible to the general public than abstract signals. For example, a photo, a Gaussian-blurred version, and a version convolved with a round tophat would show how the kernel affects the quality of the output. Also, the effect of a sharpening filter would show that convolution isn't just about blurring.

Similarly, an audio example playing a clean signal, an echo kernel (the echo of a Dirac delta), and then the same signal convolved by the echo kernel. —Ben FrantzDale (talk) 21:49, 26 September 2010 (UTC)

Just added a sharpening example (gold spider). Hope it's alright!
Tinos (talk) 13:17, 25 February 2011 (UTC)

transforms

The first reference given (Bracewell) may be a great book, but it seems to assume from the very beginning that the reader is acquainted with the general idea of a transform. The WP entry transform (mathematics) redirects to Integral transform. But what is a transform in general? Nobody seems to care to give a definition. --84.177.81.254 (talk) 17:35, 25 May 2011 (UTC)

Think of it as a function whose input and output values are not numbers but functions. Sometimes the phrase is used for other things as well, such as when the input and output values are vectors (as with linear transforms in linear algebra). Thenub314 (talk) 04:55, 26 May 2011 (UTC)
Thanks! So the elements of a lambda algebra must be transforms, too, I think. --84.177.86.232 (talk) 17:37, 26 May 2011 (UTC)

Does it really have to be a spider?

{{reqdiagram}}

Every time I refer to this article that thing catches me by surprise and I jump ten feet in the air. Why can't we convolve a nice butterfly? — Preceding unsigned comment added by 69.242.232.143 (talk) 03:18, 10 July 2011 (UTC)

I also find the image not quite to my taste. It needs to be too large in order for the effects of the convolution to be visible. A tiny part of the image (showing the hairs on the legs) could be much smaller and yet illustrate the sharpening effect more clearly. Sławomir Biały (talk) 14:33, 10 July 2011 (UTC)
Yes, it is too big. I think someone should crop off a leg; ideally the original artist. Tomgg (talk) 09:56, 9 August 2011 (UTC)
I agree. The same was done on Depth of Field (see Talk:Depth_of_field#Image_of_wolf_spider) for Wikipedia:Accessibility reasons (citing arachnophobia). —Ben FrantzDale (talk) 16:30, 9 August 2011 (UTC)
Another request for this - I can't finish the article because of that damn thing! — Preceding unsigned comment added by 141.163.156.59 (talk) 12:21, 24 November 2011 (UTC)
Done! Sławomir Biały (talk) 12:26, 24 November 2011 (UTC)
No one has requested that the image be removed. Rather, people would like it to be replaced. I think we should keep it in the article until a better one comes along. Yaris678 (talk) 12:51, 24 November 2011 (UTC)
I disagree. Many readers have already complained that the image is a significant impediment to reading the article. It should therefore be removed until someone produces a satisfactory replacement. It has been here for over four months already, despite objections. We shouldn't wait indefinitely for someone to make an alternative. Sławomir Biały (talk) 13:05, 24 November 2011 (UTC)