Talk:Nyquist–Shannon sampling theorem: Difference between revisions

This would point to reconstructable not being a real word, but reconstructible is. Reconstructiveness and reconstructibility might be.
--[[Special:Contributions/209.113.148.82|209.113.148.82]] ([[User talk:209.113.148.82|talk]]) 13:16, 5 April 2010 (UTC)

== max data rate = (2H)(log_2_(V)) bps ==

Quoting from a lecture slide:

:In 1924, Henry Nyquist derived an equation expressing the maximum rate for a finite-bandwidth noiseless channel.
::H is the maximum frequency
::V is the number of levels used in each sample
::max data rate = (2H)(log_2_(V)) bps
:Example
::A noiseless 3000Hz channel cannot transmit binary signals at a rate exceeding 6000bps <i>(this would mean there are 2 "levels")</i>

I can't relate that very well to this article. I recognize the 2H parameter, but I'm not sure where the "levels" referred to here come from.

Then it says Shannon extended Nyquist's work:
:The amount of thermal noise ( in a noisy channel) can be measured by a ratio of the signal power to the noise power ( aka signal-to-noise ratio). The quantity (10)log_10_(S/N) is called decibels.
::H is the bandwidth of the channel
::max data rate = (H)log_2_(1+S/N) bps
:Example
::A channel of 3000Hz bandwidth and a signal-to-noise ratio of 30dB cannot transmit binary signals at a rate exceeding 30,000bps.

Just bringing this up because people looking for clarification from computer communication lectures might find the presentation a bit odd, take it or leave it. [[User:Kestasjk|kestasjk]] ([[User talk:Kestasjk|talk]]) 06:47, 26 April 2010 (UTC)
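
For readers who want to check the arithmetic, here is a minimal sketch (mine, not from the lecture) that reproduces both worked examples above:

<syntaxhighlight lang="python">
import math

def nyquist_max_rate(bandwidth_hz, levels):
    """Nyquist's noiseless-channel limit: 2H * log2(V) bits per second."""
    return 2 * bandwidth_hz * math.log2(levels)

def shannon_capacity(bandwidth_hz, snr_db):
    """Shannon's noisy-channel capacity: H * log2(1 + S/N) bits per second."""
    snr_linear = 10 ** (snr_db / 10)   # undo the 10*log10(S/N) decibel scaling
    return bandwidth_hz * math.log2(1 + snr_linear)

print(nyquist_max_rate(3000, 2))    # 6000.0  -> the "6000 bps" binary example
print(shannon_capacity(3000, 30))   # ~29902  -> the "roughly 30,000 bps" example
</syntaxhighlight>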

Revision as of 06:47, 26 April 2010


this T factor issue is coming up again.

remember that "Note about scaling" that was taken out here ?

well, the difference between this article and the common (and flawed, from some of our perspectives) convention of sampling with the unnormalized Dirac comb and including a passband gain of T in the reconstruction filter is starting to have a consequence. i still think we should continue to do things the way we are (why repeat the mistake of convention?) but people have begun to object to this scaling (because it is "not in the textbooks", even though it is in at least one).

anyway, Dick, BobK, anyone else want to mosey on over to Talk:Zero-order hold and take a look and perchance offer some comment? r b-j 21:01, 26 January 2007 (UTC)[reply]

OK, I gave it my best shot. Dicklyon 23:06, 26 January 2007 (UTC)[reply]
Hello again. I certainly can't match Doug's passion for this subject. And I can't improve on Rbj's arguments. I haven't given this as much thought as you guys, but at first glance, it seems to me that the root of the problem is our insistence that "sampling" is correctly modelled by the product of a signal with a Dirac comb. We only do that to "prove" the sampling theorem in a cool way that appeals to newbies. (It certainly sucked me in about 40 years ago.) But there is a reason why Shannon did it his way.
Where the comb really comes from is not the sampling process, but rather it is an artifact of the following bit of illogic: Suppose we have a bandlimited spectrum on interval -B < f < B, and we do a Fourier series expansion of it, as per Shannon. That produces a function, S(f), that only represents the original spectrum in the interval -B < f < B. Outside that interval, S(f) is periodic, which is physically meaningless. But if we ignore that detail, and perform an inverse Fourier transform of S(f), voilà... the Dirac comb emerges for the first time.
Then we compound our mistake by defining sampling to be the product of a signal with a Dirac comb that we created out of very thin air.  I'd say that puts us on very thin ice.
--Bob K 23:14, 26 January 2007 (UTC)[reply]
Thin ice is right. Taking transforms of things that aren't square integrable is asking for trouble. Doing anything with "signals" that aren't square integrable is asking for trouble. But as long as we're doing it, might as well not make matters worse by screwing it up with funny time units. There's good reason for this approach in analysing a ZOH, of course, but one still does want to remain cognizant of the thin ice. Dicklyon 23:40, 26 January 2007 (UTC)[reply]
I totally agree about the units. I'd just like to reiterate that even without the "square-integrable" issue, what justification do we have for treating S(f) as non-compact (if that's the right terminology)? I.e., what right do we have to assign any importance to its values outside the (-B, B) domain? Similarly, when we window a time-series of samples and do a DFT, the inverse of the DFT is magically periodic. But that is just an artifact of inverting the DFT instead of the DTFT. It means nothing. It is the time-domain manifestation of a frequency-domain approximation.
If this issue seems irrelevant to the discussion, I apologize. But my first reaction to the ZOH article was "the Dirac comb is not necessary here". One should be able to have a perfectly good article without it. But I need to go and really read what everybody has said there. Maybe I will be able to squeeze that in later today.
--Bob K 16:16, 27 January 2007 (UTC)[reply]
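
A tiny numerical illustration of the point about the inverse DFT being "magically periodic" (a sketch using numpy; the particular segment is arbitrary): evaluating the IDFT synthesis sum outside the original window just repeats the windowed segment, so the periodicity is an artifact of the finite sum, not a property of the underlying signal.

<syntaxhighlight lang="python">
import numpy as np

N = 8
x = np.random.randn(N)              # a windowed segment of some longer signal
X = np.fft.fft(x)                   # its DFT
k = np.arange(N)

# Evaluate the IDFT synthesis sum at indices beyond the original window.
n = np.arange(2 * N)
x_ext = np.array([(X * np.exp(2j * np.pi * k * ni / N)).sum() / N for ni in n]).real

print(np.allclose(x_ext[:N], x))    # True: the first period reproduces the segment
print(np.allclose(x_ext[N:], x))    # True: the "extension" is just a repeat
</syntaxhighlight>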
the likelihood of crashing through the ice is no greater than that of crashing in Richard Hamming's airplane designed using Riemann instead of Lebesgue integration. why would nearly all of these texts including O&S (which i have always considered kind of a formal reference book, not so much for describing cool DSP tricks, but more as a rigorous description of simply what is going on) have no problem with using the Dirac comb? They instead like to convolve with the F.T. of the Dirac comb (which is, itself, a Dirac comb), which is more complicated than just using the shifting theorem with the sinusoids in the Fourier series of the Dirac comb. wouldn't that have to be even thinner ice? yet these textbooks do it anyway. their only problem is the misplaced T factor.
BTW, Dick, i agree with you that
is more compact and nicer than
but also less recognizable. it's just like
instead of
except it is harder to see the scaling of time in the infinitely thin delta. r b-j 08:02, 27 January 2007 (UTC)[reply]
I'm not up on the history of this thread, but FWIW I like     better than   .   And I like   best,   because it's easiest to see that its integral is T.
--Bob K 16:31, 27 January 2007 (UTC)[reply]
Bob, good point, and that's why we stuck with that form. Dicklyon 17:11, 27 January 2007 (UTC)[reply]
I just read your response at ZOH, and the point about scaling the width instead of the amplitude is compelling. That elevates     up a notch in my estimation. --Bob K 16:44, 29 January 2007 (UTC)[reply]
Robert, re the thin ice in textbooks like O&S, it's OK, but it's too bad they don't put the necessary disclaimers, references, or whatever to allow a mathematician to come in and understand the conditions under which the things they derive make sense. It's about enough for engineers, because they're all too willing to let the mathematical niceties slide, but then that makes it tricky when people try to use and extend the ideas or try to make them rigorous. So we end up arguing... not that we have any real disagreement at this point, but so often I see things where a Fourier transform is assumed to exist even when there is no way it can, even with delta functions and such. Dicklyon 17:11, 27 January 2007 (UTC)[reply]

I find the traditional textbook discussion of using a Dirac comb to represent discrete sampling confusing. I am also not sure that I agree with the assertions made here that it is all wrong ('mistake'). As I understand it, delta functions only have meaning with multiplication AND integration over an infinite range. So, simply multiplying a Dirac comb by a function does not, on its own, represent discrete sampling. One must also perform the integration. Doesn't this correct the dimensional issues ('T factor')? —Preceding unsigned comment added by 168.103.74.126 (talk) 17:43, 27 March 2010 (UTC)[reply]

"As we have seen" statement

The article says: As we have seen, Borel also used around that time what became known as the cardinal series... This looks like it was copied and pasted from somewhere. Borel is not referred to earlier in the article. The "As we have seen" mention should be removed. —Preceding unsigned comment added by 80.168.247.52 (talkcontribs)

Is it not clear that that's part of an attributed block quote? Dicklyon 20:16, 7 June 2007 (UTC)[reply]

== Citation for theorem statement ==

Finell has requested a citation for the statement of the theorem. I agree that's a good idea, but the one we have stated now was not intended to be a quote, just a good statement of it. It may take a while to find a great quotable statement of the theorem, but I'll look for some. Here's one that's not too bad. Sometimes you also find incorrect ones, which say that a sampling frequency above twice the highest frequency is necessary for exact reconstruction; that's true for the particular reconstruction formula normally used, but it is not part of what the sampling theorem says. That's why I'm trying to be careful about wording that says necessary and/or sufficient in various places. Dicklyon 22:18, 29 October 2007 (UTC)[reply]

== Nyquist-Shannon sampling theorem and quantum physics? ==

When I browsed through the article, I felt that there might be a connection to what is known as the "duality" of time and energy in quantum physics. Partly because the interrelation of limiting frequency and time spacing of signals seems to originate in the properties of the Fourier transform, partly because from physics it is known that the longer you look, the more precise your measurement can be. Does anyone feel competent to comment on this (maybe even in the article)? Peeceepeh (talk) 10:50, 22 May 2008 (UTC)[reply]

The Fourier transform pair (time and frequency) are indeed a Heisenberg dual, i.e. they satisfy the Heisenberg uncertainty relationship. I'm not sure if this is what you were alluding to.
I'm not sure I see a direct connection to the sampling theorem, though. Oli Filth(talk) 11:38, 22 May 2008 (UTC)[reply]

== Sampling and Noisy Channels ==

At Bell Labs, I was given the impression that "Shannon's Theorem" was about more than just the "Nyquist rate". It was also about how much information per sample was available, for an imperfect communication channel with a given signal-to-noise ratio. Kotelnikov should be mentioned here, because he anticipated this result. The primary aim of Kotelnikov and Shannon was to understand "transmission capacity".

The Nyquist rate was an old engineering rule of thumb, known long before Nyquist. The problem of sampling first occurred in the realm of facsimile transmission of images over telegraph wire, which began in the 19th century. By the 1910s, people understood the theory of scanning -- scanning is "analog" in the horizontal direction, but it "samples" in the vertical direction. People designed shaped apertures, for example raised cosine, which years later was discovered again as a filter window by Hamming (the head of division 1135 where I worked at Bell Labs, but he left shortly before I arrived).

And of course mathematicians also knew about the sampling rate of functions built up from bandlimited Fourier series. But again, I do not believe Whittaker or Cauchy or Nyquist discovered what one would call the "sampling theorem", because they did not consider the issue of channel noise or signals or messages.

Also, it seems folks have invented the term "Nyquist-Shannon" for this article. It is sometimes called "Shannon-Kotelnikov" theorem. You could argue for "Kotelnikov-Shannon", but I believe Shannon developed the idea of digital information further than the esteemed Vladimir Alexandrovich. I hesitate to comment here, after seeing the pages of argument above, but I hope you will consider consulting a professional electrical engineer about this, because I believe the article has some problems. DonPMitchell (talk) 22:29, 9 September 2008 (UTC)[reply]

See channel capacity, Shannon–Hartley theorem, and noisy channel coding theorem to connect with what you're thinking of. As for the invention of the name Nyquist–Shannon, that and Shannon–Nyquist are not nearly as common as simply Nyquist sampling theorem, but somewhat more sensible, seems to me; check these books and others; let us know if you find another more common or more appropriate term. Dicklyon (talk) 01:53, 10 September 2008 (UTC)[reply]

== Simplifications? ==

Bob K, can you explain your major rewrite of the "Mathematical basis for the theorem" section? I'm not a huge fan of how this section was done before, but I think we had it at least correct. Now I think I have to start over and check your version, some of which I'm not so sure I understand. Dicklyon (talk) 21:10, 12 September 2008 (UTC)[reply]

Hi Dick,
I thought it was obvious (I'm assuming you noticed Nyquist–Shannon_sampling_theorem#math_Eq.1), but I'm happy to explain. I wasn't aware of the elegant Poisson summation formula back when we were creating the "bloated proof" that I just replaced. Without bothering with Dirac comb functions and their transforms, it simply says that a uniformly sampled function in one domain can be used to construct a periodically extended version of the continuous function's transform in the other domain. The proof is quite easy and does not involve continuous Fourier transforms of periodic functions (frowned on by the mathematicians). And best of all, it's an internal link... no need to repeat it here. Or I could put it in a footnote, if you like that better.
Given that starting point, it is obvious that X(f) (and hence x(t)) can be recovered from the periodic summation under the conditions assumed by Shannon. All that's left is the math to derive the reconstruction formula.
Is that what you wanted to know?
--Bob K (talk) 22:28, 12 September 2008 (UTC)[reply]
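
For readers following along, the relation being invoked here (in one common normalization, with sample interval T = 1/f<sub>s</sub>) is the periodic-summation identity

:<math>\sum_{k=-\infty}^{\infty} X(f - k f_s) \;=\; \sum_{n=-\infty}^{\infty} T\, x(nT)\, e^{-i 2\pi n T f},</math>

so the right-hand side, computable from the samples alone, reproduces X(f) on (−f<sub>s</sub>/2, f<sub>s</sub>/2) whenever x(t) is bandlimited to less than f<sub>s</sub>/2.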

As I see it, the main problem with this new version of the proof is that it doesn't appeal to most people's way of thinking about sampling ... many will think about picking off measurements in the time domain. Furthermore, there really should be two versions of the proof, one that works in the time domain and one that works in the frequency domain. Although Bob K might disagree, I think the time domain proof (that was once on this page) is fine, and it should use the Dirac comb. But the application of the Dirac comb involves more than just multiplication of the Dirac comb by the function being sampled. Also needed is integration over the entire time domain. Oddly, I don't see this seemingly important step in textbooks. —Preceding unsigned comment added by 136.177.20.13 (talk) 18:50, 27 March 2010 (UTC)[reply]

I'm in agreement that the proof that existed earlier was far more clear than what we see now, but 136, could you be more specific about what you mean by your last three sentences? How is multiplication of the function being sampled by a Dirac comb inadequate for fully performing the sampling operation? What goes in is the function being sampled, and what comes out is a sequence of Dirac impulses weighted by the sample values; the sample values fully define the Dirac-comb-sampled signal in the time domain. 70.109.175.221 (talk) 20:15, 27 March 2010 (UTC)[reply]

This is '136' again. I'm still working this out myself, and I'm probably wrong on a few details (I'm not a mathematician, but a scientist!), but think first of just one delta function d(t-t0) and how it is applied to a function f(t): we multiply the function by the delta function and then integrate over all space (t in this case). So, Int[f(t) . delta(t-t0)] dt = f(t0). This, in effect, samples the time series f(t) at the point t0. And, if you like, the 'units' on the delta function are the inverse of its argument, so integrating over all space doesn't change the dimensional value of the sample. Now, the comb function is the sum of delta functions. To sample the time series with the comb function we have a sum of integrated applications of each individual delta function. So, Sum_k Int[f(t) . delta(t - k.t0)] dt, and this will equal a bunch of discrete samples. What I'm still figuring out is how this is normalized. Recall that int[d(t)]dt = 1. For the comb function this normalizing integral is infinite, but I think you can get around this by first considering n delta functions, then taking the limit as n goes to infinity. You'd need to multiply some of the results by 1/n. —Preceding unsigned comment added by 168.103.74.126 (talk) 20:36, 27 March 2010 (UTC)

A related issue is how we should treat convolution with the comb function. Following on from my discussion of how discrete sampling might be better expressed (right above this paragraph), it appears to me that convolution will involve an integral for the convolution itself, an infinite sum over all delta functions in the comb, and another infinite integration to handle the actual delta-function sampling of the time series. —Preceding unsigned comment added by 168.103.74.126 (talk) 22:00, 27 March 2010 (UTC)[reply]

You might want to take this up on USENET at comp.dsp. Essentially, the sampling operation, the multiplication by the Dirac deltas, is what samples f(t). To rigorously determine the weight of each impulse, mathematically, we don't need to integrate from -inf to +inf, but only need to integrate from sometime before t0 to sometime after t0. But multiplication by the Dirac comb keeps the f(t) information at the discrete sample times and throws away all of the other information about f(t). You don't integrate over all t for the whole comb. For any sample instant, you integrate from, say, 1/2 sample time before to 1/2 sample time after the sample instant. 70.109.175.221 (talk) 05:59, 28 March 2010 (UTC)[reply]

This is '136' again. I'd also like to say that formula 3, which is supposed to show the time domain version of the sampling theorem results, kind of makes a mess of the needed, obvious symmetry between multiplication and convolution in the time and frequency domains. So, multiplication by the rectangle function in the frequency domain (to band-limit the result) should obviously be seen as convolution with the sinc function in the time domain (which amounts to interpolation). What we have right now does not make any of this clear (and, at least at first glance, seems wrong). Compare the mathematical development with the main formula under 'interpolation as convolution' on the Whittaker-Shannon page. This formula should be popping out here on the sampling page as well. So, I'm afraid what we have on this page is not really a 'simplification'. Instead, it is really just a mess. —Preceding unsigned comment added by 75.149.43.78 (talk) 18:04, 4 April 2010 (UTC)[reply]
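
For reference, the interpolation-as-convolution formula on the Whittaker–Shannon interpolation formula page (written here in the convention where the reconstruction filter is an ideal lowpass of gain T and cutoff f<sub>s</sub>/2) is

:<math>x(t) \;=\; \sum_{n=-\infty}^{\infty} x(nT)\, \operatorname{sinc}\!\left(\frac{t-nT}{T}\right), \qquad \operatorname{sinc}(u) \;=\; \frac{\sin(\pi u)}{\pi u},</math>

i.e. the impulse-sampled signal convolved with sinc(t/T) in the time domain, which is multiplication by T·rect(f/f<sub>s</sub>) in the frequency domain.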

== Nyquist–Shannon sampling theorem is not correct? ==

Dear Sir/Madam,

Sorry, but I think that the Nyquist–Shannon sampling theorem about the sampling rate is not correct.

Could you please be so kind to see the papers below?

http://www.ieindia.org/pdf/88/88ET104.pdf

http://www.ieindia.org/pdf/89/89CP109.pdf

http://www.pueron.org/pueron/nauchnakritika/Th_Re.pdf

Also I believe the following rule could be applied:

"If everything else is neglected you could divide the sampling rate Fd at factor of four (4) in order to find the guaranteed bandwidth (-3dB) from your ADC in the worst case sampling of a sine wave without direct current component (DC= 0)."

I hope that this is useful to clarify the subject.

The feedback is welcomed. Best and kind regards

Petre Petrov ppetre@caramail.com —Preceding unsigned comment added by 78.90.230.235 (talk) 21:30, 24 December 2008 (UTC)[reply]

I think most mathematicians are satisfied that the proof of the sampling theorem is sound. At any rate, article talk pages are for discussing the article itself, not the subject in general... Oli Filth(talk|contribs) 22:00, 24 December 2008 (UTC)[reply]
Incidentally, I've had a brief look at those papers. They are pretty incoherent, and seem mostly concerned with inventing new terminology, and getting confused in the process. Oli Filth(talk|contribs) 22:24, 24 December 2008 (UTC)[reply]
I believe that Mr. Petrov is very confused, yet does have a point. He's confused firstly by thinking that the sampling theorem is somehow associated with the converse, which is that if you sample at a rate less than twice the highest frequency, information about the signal will necessarily be lost. As we said on this talk page before, that converse is not what the sampling theorem says and is not generally true. I think what Petrov has shown (confusingly) is a counter-example, disproving that converse. In particular, that if you know your signal is a sinusoid, you can reconstruct it with many fewer samples. This is not really a very interesting result and is not related to the sampling theorem, which, by the way, is true. Dicklyon (talk) 05:38, 25 December 2008 (UTC)[reply]
On second look, I think I misinterpreted. It seems to me now that Petrov is saying you need 4 samples per cycle (as opposed to 1/4, which I thought at first), and that the sampling theorem itself is not true. Very bogus. Dicklyon (talk) 03:12, 26 December 2008 (UTC)[reply]

Dear All, Many thanks for your attention. Maybe I am confused, but I would like to say that perhaps you did not pay enough attention to the “Nyquist theorem” and the publications stated above. I’m really sorry if my English is not comprehensible enough. I would like to ask the following questions:

  1. Do you think that H. Nyquist really clearly formulated a “sampling theorem” applicable to real analog signal conversion and reconstruction?
  2. What is the mathematical equation of the simplest real band-limited signal (SBLS)?
  3. Do you know particular cases when the SBLS can be reconstructed with signal sampling factor (SSF) N = Fd/Fs < 2?
  4. Do you know particular cases when the SBLS cannot be reconstructed with SSF N = 2?
  5. Do you know something written by Nyquist, Shannon, Kotelnikov, etc. which gives you the possibility to evaluate the maximal amplitude errors when sampling the SBLS, SS or CS with N > 2? (Emax, etc.; please see the formulas and the tables in the papers.)
  6. What is the primary effect of sampling SS, CS and SBLS with SF N = 2?
  7. Don't you think that clarifying the terminology is one possible way to clarify the subject and to advance in the right direction?
  8. If the “classical sampling theorem” is not applicable to signal conversion and cannot pass the test of SBLS, SS and CS, to what is it applicable and true?

I hope that you will help me to clarify the subject. BR P Petrov —Preceding unsigned comment added by 78.90.230.235 (talk) 09:22, 25 December 2008 (UTC)[reply]

Petrov, I don't think anyone ever claimed that Nyquist either stated or proved the sampling theorem. Shannon did, as did some of the other guys mentioned, however. I'm most familiar with Shannon's proof, and with decades of successful engineering applications of the principle. Using the constructive reconstruction technique mentioned, amplitude errors are always zero when the conditions of the theorem are satisfied. If you can rephrase some of your questions in more normal terms, I might attempt answers. Dicklyon (talk) 03:12, 26 December 2008 (UTC)[reply]
He should take it to comp.dsp. They'll set him straight. 71.254.7.35 (talk) 04:02, 26 December 2008 (UTC)[reply]

== Rephrasing ==

Hello! Merry Christmas to all! If I understand correctly:

  1. Nyquist never formulated or proved a “sampling theorem”, but there are “Nyquist theorem/zone/frequency/criteria” etc.? (PP: Usually things are named after the author? Or is this a joke?)
  2. Shannon has proved a “sampling theorem” applicable to real-world signal conversion and reconstruction? (PP: It is strange because I have read the papers of the “guys” (Kotelnikov included) and I have found nothing applicable to the real world! Just writings of theoreticians who do not understand the sampling and conversion processes?)
  3. Yes, the engineering applications have done a lot to mask the failure of the theoreticians to explain and evaluate the signal conversion!
  4. The amplitude errors are zero?? (PP: This is false! The errors are not zero and the signal cannot be reconstructed “exactly” or “completely”! Try and you will see them!)
  5. Starting the rephrasing:
    • N< 2 is “under sampling”.
    • N=2 is “Shannon (?) sampling” or just “sampling”.
    • N>2 is “over sampling”.
    • SBLS is “the simplest band limited signal” or according to me “analog signal with only two lines into its spectrum which are a DC component and a sine or co-sine wave”.
  6. comp.dsp will set me straight? (PP: OK).

I hope the situation now is clearer. P.Petrov —Preceding unsigned comment added by 78.90.230.235 (talk) 06:40, 26 December 2008 (UTC)[reply]

A proof of the sampling theorem is included in one of (I don't remember which) "A Mathematical Theory of Communication" or "Communication in the presence of noise", both by Shannon.
The "amplitude errors" are zero, assuming we're using ideal converters (i.e. no quantisation errors, which the sampling theorem doesn't attempt to deal with), and ideal filters. In other words, the signal can be reconstructed perfectly; the mathematical proof is very simple.
I'm not sure you're going to get very far by introducing your own terminology and concepts ("SBLS", "sampling factor", etc.), because no-one will understand what you're talking about! Oli Filth(talk|contribs) 13:10, 26 December 2008 (UTC)[reply]
  1. ??? "A Mathematical Theory of Communication" or "Communication in the presence of noise", both by Shannon?? I have read them carefully. Nothing is applicable to sampling and ADC. Please specify the page and line number. Please specify how these publications are related to the real conversion of an analog signal.
  2. Perhaps I will not advance with my terminology, but at least I will not be repeating "proven" theory unrelated to signal conversion.
  3. Errors are inevitable. You will never reconstruct "exactly" an analog signal converted into digital form. Try it and you will see!
  4. About the amplitude error. Could you please pay attention to Figure 5 on page 55 at http://www.ieindia.org/pdf/89/89CP109.pdf. You will clearly see the difference between the amplitude of the signal and the maximal sample. OK?
BR P. Petrov —Preceding unsigned comment added by 78.90.230.235 (talk) 15:21, 26 December 2008 (UTC)[reply]
The sampling theorem doesn't attempt to deal with implementation limitations such as quantisation, non-linearities and non-ideal filters. No-one has claimed that it does.
You can reconstruct a bandlimited analogue signal to an arbitrary degree of accuracy. Just use tighter filters and higher-resolution converters.
What you've drawn there is the result of a "stair-case" reconstruction filter, i.e. a filter whose impulse response is a rectangular pulse of one sample period (a zero-order hold). This is not the ideal reconstruction filter; it doesn't fully eliminate the images. In practice, a combination of oversampling and compensation filters can reduce the image power to a negligible level (for any definition of "negligible") and hence eliminate the "amplitude errors". None of this affects the sampling theorem!
In summary, no-one is disputing the fact that if you use sub-optimal/non-ideal converters and filters, you won't get the same result as the sampling theorem predicts. Oli Filth(talk|contribs) 15:33, 26 December 2008 (UTC)[reply]
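
To make the distinction concrete, here is a small sketch (parameter values are mine, chosen only for illustration) comparing the stair-case (zero-order-hold) reconstruction with ideal sinc reconstruction of the same samples:

<syntaxhighlight lang="python">
import numpy as np

fs = 8.0                          # sample rate (Hz)
f0 = 1.0                          # tone frequency, well below fs/2
T = 1.0 / fs
n = np.arange(64)                 # sample indices
samples = np.sin(2 * np.pi * f0 * n * T + 0.4)

t = np.linspace(2.0, 6.0, 2000)   # evaluate away from the ends to limit truncation effects
ideal = np.sin(2 * np.pi * f0 * t + 0.4)

# "Stair-case" (zero-order hold): hold each sample for one sample period T.
zoh = samples[np.floor(t / T).astype(int)]

# Ideal reconstruction per the Whittaker-Shannon interpolation formula (truncated sum).
sinc_rec = np.array([np.sum(samples * np.sinc((ti - n * T) / T)) for ti in t])

print(np.max(np.abs(zoh - ideal)))       # substantial: the "amplitude error" of the ZOH
print(np.max(np.abs(sinc_rec - ideal)))  # small, and it shrinks as more samples are kept
</syntaxhighlight>

The error in the first print comes from the sub-optimal reconstruction filter; the second print shows that the samples themselves carry enough information to rebuild the sinusoid.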


Hello again!

I am really sorry but we are talking about different things.

I am not sure that you understand my questions and answers.

I am not disputing any filters at the moment.

Only the differences between the amplitude of the samples and the amplitude of the converted signal.

Also I am not sure that you have read "the classics" in the sampling theory.

Also, please note that there is a difference between the "analog multiplexing" (analog telephony discussed by the "classics" during 1900-1950) and analog to digital conversion and reconstruction.

I wish you good luck with the "classics" in the sampling theory! BR P Petrov —Preceding unsigned comment added by 78.90.230.235 (talk) 15:46, 26 December 2008 (UTC)[reply]

You started this conversation with "I think that Nyquist–Shannon sampling theorem about the sampling rate is not correct", with links to papers that discussed "amplitude errors" as if there was some mistake in the sampling theorem. That is what I have been talking about! If you believe we're talking about different things, then yes, I must be misunderstanding your questions! Perhaps you'd like to re-state exactly what you see as the problem with the sampling theorem.
As for filters, as far as your paper is concerned, it's entirely about filters, although you may not realise it. In your diagram, you're using a sub-optimal filter, and that is the cause of your "amplitude errors". Oli Filth(talk|contribs) 15:59, 26 December 2008 (UTC)[reply]
== Joke? ==

Petrov, you ask "Usually the things are named after the author? Or this is a joke?" This is clear evidence that you have not bothered to read the article that you are criticizing. Please consider doing so, or keeping quiet. Dicklyon (talk) 00:44, 27 December 2008 (UTC)[reply]


Hello!

Ok.

I will repeat some of the questions again in a simpler and clearer form:

  • Where has H. Nyquist formulated or proved a clearly stated “sampling theorem” applicable in signal conversion theory? (paper, page, line number?)
  • Where is the original clear definition of the Nyquist theorem mentioned in Wikipedia? (paper, page, line number?)
  • Where has Shannon formulated or proved a “sampling theorem” applicable in signal conversion theory with ADC? (paper, page, line number?)
  • What will we lose if we remove the papers of Nyquist and Shannon from signal conversion theory and practice with ADC?
  • What is your definition of the “band limited” signal discussed by Shannon and Kotelnikov?
  • Is it possible to reconstruct an analog signal, which in fact has infinite accuracy, if you cut it into a finite number of bits and put it into circuitry with finite precision and unpredictable accuracy (as you know, there are no exact values in electronics)?
  • The numbers e = 2.7... and pi = 3.14... are included in most real signals. How will you reconstruct them “exactly” or “completely”?

I am waiting for the answers

Br

P.Petrov —Preceding unsigned comment added by 78.90.230.235 (talk) 10:40, 27 December 2008 (UTC)[reply]

I don't know why you keep requesting where Nyquist proved it; the article already summarises the history of the theorem. As we've already stated, Shannon presents a proof in "Communication in the presence of noise"; it is quoted directly in the article. As we've already stated, this is an idealised model. Just as in all aspects of engineering, practical considerations impose compromises; in this case it's bandwidth and non-linearities. As we've already stated, no-one is claiming that the original theorem attempts to deal with these imperfections. I don't know why you keep talking about practical imperfections as if they invalidate the theorem; they don't, because the theorem is based on an idealised model.
By your logic, we might as well say that, for instance, LTI theory and small-signal transistor models are invalid, because the real world isn't ideal! Oli Filth(talk|contribs) 11:57, 27 December 2008 (UTC)[reply]


"If a function x(t) contains no frequencies higher than B cps, it is completely determined by giving its ordinates at a series of points spaced 1/(2B) seconds apart."

PP: Imagine that you have a sum of a DC signal and a SS signal.

How will you completely determine them by giving only 2 or even 3 points?

OK? —Preceding unsigned comment added by 78.90.230.235 (talk) 10:49, 27 December 2008 (UTC)[reply]

The theorem and the article aren't talking about 2 or 3 points. They're talking about an infinite sequence of points.
However, as it happens, in the absence of noise, one can theoretically determine all the parameters of a sinusoid with just three samples (up to aliases). I imagine that if one had four samples, one could determine the DC offset as well. However, this is not what the theorem is talking about. Oli Filth(talk|contribs) 11:57, 27 December 2008 (UTC)[reply]


PP: "They're talking about an infinite sequence of points." Where did you find that? paper, page, line number?

"I imagine that if one had four samples, one could determine the DC offset as well" This is my paper. Normally should be covered by the "classical theorem". OK? —Preceding unsigned comment added by 78.90.230.235 (talk) 12:17, 27 December 2008 (UTC)[reply]

It's pretty clear that you haven't read the original papers very carefully (or have misunderstood them)! In "Communication in the presence of noise", Theorem #1 states it. Yes, it's true that the word "infinite" is not used in the prose, but then look at the limits of the summation in Eq. 7.
As for your paper, it's already a known fact (in fact, it's obvious; four equations in four unknowns), and is not in the scope of the sampling theorem (although you can probably derive the same result from the theorem). Oli Filth(talk|contribs) 12:25, 27 December 2008 (UTC)[reply]

PP: H. Nyquist, "Certain topics in telegraph transmission theory", Trans. AIEE, vol. 47, pp. 617-644, Apr. 1928 Reprint as classic paper in: Proc. IEEE, Vol. 90, No. 2, Feb 2002.

Question: Where in that publication is the "Sampling theorem"?

"I don't know why you keep requesting where Nyquist proved it..." You are stating that there is "Nyquit theorem?" (please see the article in Wikipedia). There should be a statement and a proof. OK? Where they are? —Preceding unsigned comment added by 78.90.230.235 (talk) 12:43, 27 December 2008 (UTC)[reply]

There is no article on "Nyquist theorem", only a redirect to this article. Please stop asking the same question over and over again; both Dick and I have already answered it, and the article already explains it. Oli Filth(talk|contribs) 12:47, 27 December 2008 (UTC)[reply]


PP: http://www.stanford.edu/class/ee104/shannonpaper.pdf page 448, Theorem I:

1. First failure for SS, CS or SBLS sampled and zero crossings. (One failure is enough!)

What failure? The only point of contention is in the nature of the inequality (i.e. an open or closed bound). It is generally accepted today that it is true only for an open bound. The article discusses this in the introduction and in the section "Critical frequency". Again, it is clear that you haven't actually read the article.

2. Second failure: "completely" is wrong word.

Please don't tell me you're talking about your "amplitude errors" again...

3. Third failure: It is about "function" not about " a signal". Every "signal" is a "function", but not every "function" is a "signal". OK?

How is this a failure?

4. Fourth failure: "common knowledge"??? Is that a proof?

No. What follows is a proof.

5. Fifth failure: No phase in the Fourier series! The phase is inherent part of the signal!

F(ω) isn't constrained to be real, and neither is f(t) (and hence neither is the Fourier series). Oli Filth(talk|contribs) 13:11, 27 December 2008 (UTC)[reply]

Imagine the same number of failures for another theorem, e.g. the Pythagorean theorem! Would you defend it in that case? —Preceding unsigned comment added by 78.90.230.235 (talk) 12:56, 27 December 2008 (UTC)[reply]

PP: "...F(ω) isn't constrained to be real, and neither is f(t)...".

You could write any equation, but cannot produce any signal. OK?

Sorry, I am talking about real signals with real functions and I am forced to evaluate the errors. You can produce the signals and test the equipment. Please excuse me. Maybe it was my mistake to start this talk. —Preceding unsigned comment added by PetrePetrov (talkcontribs) 13:19, 27 December 2008 (UTC)[reply]

"Real" as opposed to "complex"... i.e. phase is included. Oli Filth(talk|contribs) 13:21, 27 December 2008 (UTC)[reply]


PP: Hello! Again, I have looked at the papers of the "classics" in the field. Maybe the following chronology of the events in the field of the “sampling” theorem is OK:

1. Before V. Kotelnikov: H. Nyquist did not formulate any “sampling theorem”. His analysis (?) even of the DC (!) is really strange for an engineer. (Please see the referenced papers.) There is no sense in mentioning him in sampling, SH, ADC and DAC systems. In "analog multiplexing telephony" it is OK.

2. V. Kotelnikov (1933): for the first time formulated theorems, but unfortunately incomplete, because he did not include the necessary definitions and calculations. No ideas on errors! Maybe he should be mentioned just to see the difference between the theory and the practice.

3. C. Shannon (1949): in fact a repetition of part of that given by V. Kotelnikov. There is not even a clearly formulated proof of something utilizable in ADC. No excuse for 1949! Digital computers had been created!

No understanding of the signals (even theoretical understanding) to test his “theorems”, nor the necessary definitions and calculations. No ideas of errors! No idea of the application of an oscilloscope and multimeter!


4. Situation now: No full theory describing completely the conversion of the signals from analog to digital form and reconstruction.

But there are several good definitions and theorems, verifiable in practice, to evaluate the errors of not sampling the SS and CS at their maximums. Verifiable even with an analog oscilloscope and multimeter!


I hope that is good and acceptable. BR

P Petrov —Preceding unsigned comment added by 78.90.230.235 (talk) 08:41, 28 December 2008 (UTC)[reply]

I'm going to say this one last time. The sampling theorem doesn't attempt to deal with "errors", such as those caused by non-ideal filters. Please stop stating the same thing time and time again; everyone already knows that the theorem is based on an ideal case. It has nothing to do with "multimeters and oscilloscopes". The only theoretical difference between "analog multiplexing" and A-D conversion is the quantisation. To say that there is "no understanding of the signals..." is total nonsense. Please stop posting the same mis-informed points!
Incidentally, Nyquist uses the term "D.C." in the context of "DC-wave", as the opposite of "Carrier wave"; we would call these "baseband" and "passband" signalling today.
If you have something on "the conversion of the signals from analog to digital form and reconstruction" from a Reliable source, then please post it here, and we'll take a look. Your own papers aren't going to do it, I'm afraid. However, even if you do find something, it's unlikely to make it into the article, because the article is about the original theorem. Oli Filth(talk|contribs) 11:37, 28 December 2008 (UTC)[reply]

Hello!

1. No need to repeat it more times. From my point of view the "Nyquist-Shannon theorem" does not exist, and what exists is not fully (or even largely) applicable in practice. You are free to think that it exists and that people use it.

  • And you are free to not to accept it! (although saying "it doesn't exist" is meaningless...) Yes, of course people use it. It's been the basis of a large part of information theory, comms theory and signal-processing theory for the last 60 years or so.

2. Please note that there are "representative" (simplified but still utilizable) and "non-representative" ("oversimplified" and not usable) models. The "original theorem" is based on the "oversimplified" model and is not representative.

  • You still haven't said why. Remember, one can approximate the ideal as closely as one desires.

3. I have seen the "DC" of Nyquist before your note and I am not accepting it.

  • I have no idea what you mean, I'm afraid.

4. Because I am not a "reliable source", I will not spam the talk page here any more.

  • You're free to write what you like on the talk page (within reason - see WP:TALK). However, we can only put reliable material into the article itself.

5. If you insist on the "original theorem", please copy and paste "exactly" the texts of Nyquist, Shannon, Kotelnikov, etc. which you think are relevant to the subject, and let the readers put their own remarks and conclusions outside the "original" texts. You could put your own, of course. OK?

  • The article already has the exact text from Shannon's paper. I'm not sure what more you expect?

6. I have put here a lot of questions and texts without individual answers. If Wikipedia keeps them, someone will answer and comment on them (maybe).

  • I believe I've answered all the meaningful questions. But yes, this text will be kept.

7. I do not believe that my own papers will change something in the better direction, but someone will change it because the theory (with “representative” models) and the practice should go in the same direction and the errors (“differences”) should be evaluated.

  • The cause of your "errors" is already well understood. For instance, CD players since the late 1980s onwards use oversampling DACs and sinc-compensation filters to eliminate these "errors". That's not due to a limitation in the theory, it's due to hardware limitations. The solution can be explained with the sampling theorem. Oli Filth(talk|contribs) 15:09, 28 December 2008 (UTC)[reply]

Good luck again. I am not sure that I will answer promptly to any comment (if any) posted here.

BR Petre Petrov

== Rapidly oscillating edits ==

I noticed some oscillation between 65.60.217.105 and Oli Filth about what to say about the conditions on x(t). I would suggest we remove the parenthetical comment

"(which exists if is square-integrable)"

For the following two reasons. First, it also exists in many other situations; granted, this is practically the most common. Second, it is not entirely clear that the integral we then follow this statement with exists if x(t) is square integrable. I do not think it detracts at all from the article to simply say that X(f) is the continuous Fourier transform of x(t). How do other people feel about this? Thenub314 (talk) 19:03, 3 January 2009 (UTC)[reply]

PS I think 65.60.217.105 thinks the phrase continuous Fourier transform is about the Fourier transform of x(t) being continuous, instead of being a synonym for "the Fourier transform on the real line." Thenub314 (talk) 19:14, 3 January 2009 (UTC)[reply]

I realise that I'm dangerously close to 3RR, so I won't touch this again today! The reason I've been reverting is that replacing "square-integrable" with "integrable" is incorrect (however, square-integrability is a sufficient condition for the existence of the FT; I can find refs if necessary). I'm not averse to removing the condition entirely; I'm not sure whether there was a reason for its inclusion earlier in the article's history. Oli Filth(talk|contribs) 19:10, 3 January 2009 (UTC)[reply]
I agree with your guess as to how 65.60.217.105 is interpreting "continuous"; see his comments on my talk page. Oli Filth(talk|contribs) 19:36, 3 January 2009 (UTC)[reply]
Yes, thanks for pointing me there. Hopefully my removal of "continuous" will satisfy him. I suppose I should put back "or square integrable". Dicklyon (talk) 20:08, 3 January 2009 (UTC)[reply]
Not a problem. I agree with you Oli that the Fourier transform exists, but the integral may diverge. I think it follows from Carleson's theorem about almost everywhere convergence of Fourier series that this can happen at worst on a set of measure zero, but I don't off hand know of a reference that goes into this level of detail (and this would apply only to the 1-d transform).
Anyways I am definitely digressing. The conditions are discussed in some detail in the Fourier transform article, which we link to. So overall I would be slightly in favor of removing the condition entirely. But I think (Dicklyon)'s version works also. (Dicklyon), how do you feel about removing the parenthetical comment?
I wouldn't mind removing the parenthetical conditions. Dicklyon (talk) 22:06, 3 January 2009 (UTC)[reply]

== Geometric interpretation of critical frequency ==

I'm not sure the new addition is correct. Specifically:

  • the parallel implied by "Just as the angles on a circle are parametrized by the half-open interval [0,2π) – the point 2π being omitted because it is already counted by 0 – the Nyquist frequency must be omitted from reconstruction" is invalid, not least because the Nyquist frequency is at π, not 2π.
  • the discussion of "half a point" is handwaving, which is only amplified by the use of scare quotes. And it's not clear how it makes sense in continuous frequency.
  • it's not made clear why the asymmetry disappears for complex signals.

Oli Filth(talk|contribs) 19:24, 14 April 2009 (UTC)[reply]

== Critical frequency ==

This section is unnecessarily verbose. It is sufficient to point out that the samples of:

:<math>\cos\left(2\pi \tfrac{f_s}{2} t + \theta\right)</math>

are identical to the samples of:

:<math>\cos\left(2\pi \tfrac{f_s}{2} t - \theta\right)</math>

and yet the continuous functions are different (for sin(θ) ≠ 0).

--Bob K (talk) 19:29, 14 April 2009 (UTC)[reply]
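
Spelling this out: taking the pair above at t = n/f<sub>s</sub>,

:<math>\cos\!\left(2\pi\tfrac{f_s}{2}\tfrac{n}{f_s} + \theta\right) = \cos(\pi n + \theta) = (-1)^n \cos\theta = \cos(\pi n - \theta) = \cos\!\left(2\pi\tfrac{f_s}{2}\tfrac{n}{f_s} - \theta\right),</math>

while the difference of the continuous functions is −2 sin(2π(f<sub>s</sub>/2)t) sin(θ), which vanishes identically only when sin(θ) = 0.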

== Higher dimensional Nyquist theorem equivalent? ==

The Nyquist theorem applies to more than just time-series signals. The theorem also applies in 2-D (and higher) cases, such as in sampling terrain (for example), in defining the maximum reconstructable wavenumbers in the terrain. However, there is some debate as to whether the theorem applies directly, or whether it has subtle differences. Does anyone care to comment on that or derive it? I will attempt to do so following the derivations here, but I will probably lose interest before then.

It seems that it should apply directly given that the Fourier transform is a linear transform, but the debate has been presented so I thought it should go in discussion before the main page. Thanks.

Andykass (talk) 17:45, 12 August 2009 (UTC)[reply]
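
For what it's worth, the separable rectangular-lattice case is usually stated as follows (a sketch of the standard statement; non-rectangular lattices need the more general machinery pointed to in the replies below). If the 2-D spectrum X(k<sub>x</sub>, k<sub>y</sub>) vanishes whenever |k<sub>x</sub>| ≥ B<sub>x</sub> or |k<sub>y</sub>| ≥ B<sub>y</sub>, then samples on the grid (mΔx, nΔy) with Δx ≤ 1/(2B<sub>x</sub>) and Δy ≤ 1/(2B<sub>y</sub>) determine the function, and

:<math>x(u,v) \;=\; \sum_{m=-\infty}^{\infty} \sum_{n=-\infty}^{\infty} x(m\Delta x, n\Delta y)\, \operatorname{sinc}\!\left(\frac{u - m\Delta x}{\Delta x}\right) \operatorname{sinc}\!\left(\frac{v - n\Delta y}{\Delta y}\right).</math>

So on a rectangular grid the theorem applies dimension-by-dimension; the subtleties arise for non-rectangular lattices, where tighter packings of the spectral replicas are possible.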

You need to ask for sources, not derivations. Dicklyon (talk) 02:03, 13 August 2009 (UTC)[reply]
Check the article on Poisson summation formula, and especially the cited paper Higgins: Five short stories... There is the foundation for sampling on rectangular and other lattices and on locally compact abelian groups, connected with the name Kluvanek.--LutzL (talk) 08:24, 13 August 2009 (UTC)[reply]

== Misinterpretation? ==

I've reverted this edit, because I believe you're misinterpreting what Shannon was saying. He was not saying that the time window T was some integral part of the sampling theorem, merely that in a bandwidth W and a time window T, there are a total of 2TW dimensions. To start talking about longest wavelengths and so on without an explicit source is original research, I'm afraid. Oli Filth(talk|contribs) 09:15, 19 August 2009 (UTC)[reply]

Following this edit, I'll add the following points:
  • T doesn't imply a "lower frequency bound". If you believe it does, then please provide a source that says this explicitly.
  • Your second quote isn't from Shannon. Again, please provide a source.

Oli Filth(talk|contribs) 14:59, 19 August 2009 (UTC)[reply]

Are you reading the same Shannon quote that I am? Shannon wrote exactly (bold emphasis is mine):

...and that we are allowed to use this channel for a certain period of time T. Without any further restrictions this would mean that we can use as signal functions any functions of time whose spectra lie entirely within the band W, and whose time functions lie within the interval T.

Proper reading of the quote relative to the lower bound is:

...we can use as signal functions any functions of time ... whose time functions lie within the interval T.

He is saying "we can use as signals" any "spectra" (i.e. input signals), "whose time functions lie within the interval T". In other words, the input signals have to have time functions which lie within the sampling duration. Shannon tells us that time functions are signals, "we can use as signal functions any functions of time". So Shannon is saying that the signals must lie within the sampling duration T. And that is common sense. How can you sample a signal which does not lie within the sampling duration? I can't imagine why you've wasted so much time on something so obvious and fully specified by the historic Shannon quote cite. The sampling duration interval T dictates the longest wave period that can be sampled. How can you sample a signal with a longer period than sampling duration T? So if the duration of T bounds the period of the wave of input signals, then it thus means T is a low frequency bound. That is not interpretation, rather is a direct mainstream accepted mathematical relationship. T is period, which dictates frequency by the relationship 1/T. When you bound largest period on the upper end to T, then frequency is bounded by 1/T on the lower value. That is a simple mathematical identity. There is no research nor interpretation involved. I am just amazed at the slowness of your mind. You are apparently not qualified to be an editor of this page, especially if you still do not understand after this multiple redundant explanation. You are apparently confusing wavelength (space domain) with wave period (time domain).
As for the 2nd blockquote in the edit I provided (i.e. the 3rd blockquote in the introduction section), it was my attempt to show what Shannon's original quote would be, if the emphasis was shared with the obvious sampling duration lower frequency bound-- feel free to edit it and make that more clear or delete that 2nd blockquote. Shannon's focus is obviously on the upper bound, because obviously most communication work is focused on high frequency issues, but Shannon did fully qualify the lower bound in his opening paragraph as I have explained above. I admire Shannon for his completeness, even he had no reason to be interested in Fat tail signals. Thus, I don't need to cite any other source but Shannon, as I am not interpreting anything, merely quoting what Shannon wrote. Feel free to edit my contribution to remove any portion that you can justify is "interpretation" or original research, but please keep the quote of what Shannon wrote on the matter and keep a coherent reason for including the quote. The lower bound is becoming more important lately, as the world is seeing the error of ignoring fat tail signals (e.g. fiat money systems, systemic risk, etc). I was predicting this 20 years ago. As for your threat to ban me for undoing your incorrect reverts, if ignorance is power, then go ahead. --Shelbymoore3 (talk) 22:49, 19 August 2009 (UTC)[reply]
If Shannon had meant to state what you have written (the second "quote" in your edits to the article), then he would have written it that way himself. But he didn't. It's not up to us to extrapolate from the sources beyond what they support. Again, as I said, please provide a source that explicitly supports your interpretation or I will remove it again, as everything you've currently written is original research.
A couple of points in response to your arguments above:
  • Of course you can sample a sinusoid whose period is longer than your observation window; what matters is the bandwidth of the signal, not its frequency.
  • It's obvious that any signal that has finite support in the frequency domain has infinite support in the time domain. What Shannon was getting at was in the context of signals of infinite time-domain support (the context was communication signals), where the dimensionality is exactly 2TW in any interval of length T. There is no lower bound implied by the sampling theorem; if there were such a bound, then obviously 2TW would no longer hold.
  • Furthermore, Shannon goes on to explain what he means by "a time function with the interval T".
  • I'm well aware that you meant "wave period" and not "wavelength" (although actually the difference is irrelevant in the context of sampling). Don't use my quoting of your original mistake as an excuse to guess at what I am "qualified" to do.
Incidentally, I have no power to ban you (I'm not an administrator), but I am able to report you for WP:incivility, disruptive editing and not providing sources. Before you make any further edits (whether to the article or to the talk page), I suggest you read the guideline on incivility very carefully. Oli Filth(talk|contribs) 23:49, 19 August 2009 (UTC)[reply]
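
As an aside, the dimension count behind the 2TW figure is just a sample count (a sketch of the standard reasoning, not a quote from either editor): samples taken at the Nyquist interval 1/(2W) across a window of length T number

:<math>\frac{T}{1/(2W)} \;=\; 2TW,</math>

which is Shannon's count of the numbers needed to specify a function limited (in his sense) to bandwidth W and the interval T.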
The error in your understanding is based on your two statements, the first being correct in isolation and the second being false for "any" but true if you wrote "any sinusoid":
  • Of course you can sample a sinusoid whose period is longer than your observation window; what matters is the bandwidth of the signal, not its frequency.

  • It's obvious that any signal that has finite support in the frequency domain has infinite support in the time domain.
You are assuming that the composite signal being sampled is a sinusoid (sine wave), so that you can model the un-sampled portions of the wave in the time domain. Shannon did not make that less general assumption (you won't find the word sinusoid in the cited section of his paper). Rather he made a more general statement about "time function". For example with a Fat tail event, e.g. the collapse of the fiat economy or an earthquake or the observation of black swans, the event itself may be a very high frequency impulse wave at it's start and trailing ends, but have long period between the impulses. This is why the general way that Shannon stated it is so important to understand. Bandwidth alone won't capture the non-sinusoid composite signals, and that is why Shannon added the requirement "and whose time functions lie within the interval T". Shannon refers to the composite nature of the signals where he wrote "spectra".
Yes in fact Shannon does further define "time functions lie within the interval T" and it states exactly what I have stated in prior paragraph:

To be more precise, we can define a function to be limited to the time interval T if, and only if, all the samples outside this interval are exactly zero. Then we can say that any function limited to the bandwidth W and the time interval T can be specified by giving 2TW numbers.

So Shannon has clearly stated that bandwidth alone is not sufficient, and that you also have to bound the period T. The sinusoid function can be completely sampled by 2 times the maximum number of cycles in the time interval T, but other composite signal functions cannot. Thus Shannon stated the theorem in a more general mathematical way, where he elaborates in "III. GEOMETRICAL REPRESENTATION OF THE SIGNALS" that the signals occupy an N = 2TW dimensional space.
Incidentally this is why bandwidth (e.g. more bits of storage) alone is not going to increase information, and can actually hide information (the disordered/rare signal).
Regarding your veiled threat to report me to an administrator (the impulsive attacks on my talk page have not been conducive to fostering an amicable debate), I really don't care what you do, because I just wanted to have this debate which I have already archived, will link to, and publish widely on the internet, to show how Wikipedia and knowledge in general is declining due to centralization of control, i.e. an increase in Entropy. If experts don't even understand the most basic theorem of sampling theory, then all statistics in world is flawed. I went through this debate over at NIST working group on anti-spam several years ago and got the same type of still unresolved misunderstanding. It amazes me that people can't read clearly what Shannon wrote. Besides I can link to the historical page, when ever I want to refer to the correct introduction of the Shannon-Nyquist sampling theorem. Isn't it in-civil of you to revert my edit three times before this debate on discussion page has been resolved? You accuse me on my talk page of reverting your reverts on my talk page-- circular straw-man. Why keep building your political case to ban me, and just focus on resolving the content debate with less threats on my talk page and thus more efficiency? You can not force me to defend myself here in this limited political jurisdiction, when I can simply supercede your power on the wider internet.
Note that the discussion of the sparse signals above is related to Nonuniform Sampling (see that section), which Shannon's paper states uniquely identifies each signal in the 2-dimensional TW space, and which is employed by the compressed sensing discussed in the Beyond Nyquist section. —Preceding unsigned comment added by Shelbymoore3 (talkcontribs) 01:30, 20 August 2009 (UTC)[reply]
You can complain all you like, but Wikipedia articles require sources for contestable claims, that's non-negotiable! I'm happy to be proven wrong (anything that enhances or corrects my understanding of signal theory is a good thing), but you're not going to be able to do that unless you provide a source that corroborates your interpretation. If this really is such a fundamental misunderstanding (as you say) then it should be easy to point at an authoritative source that discusses the idea of a lower frequency bound. Oli Filth(talk|contribs) 08:08, 20 August 2009 (UTC)[reply]
I am not complaining, I am explaining. See my reply below to Dicklyon about sources. We can simply quote Shannon; there is no need to find another source. We can remove all my words from the edit. Even if I do find a source which supports my understanding, you are still not likely to understand it, if you can't understand what Shannon wrote about T, the "time function" and the 2-dimensional-space part of his theorem. Others will be adding complexity on top of that foundation, and the issue I raised won't come up except for those doing work on non-sinusoid (non-deterministic) signals. Do you have any expertise in that direction? Can you explain to me how your interpretation of Shannon's theory applies to non-deterministic signals? Surely you understand your own interpretation well enough to explain it.
Shannon is saying that the "time functions lie within the interval T". Oli, you are correct that for a fully constrained, periodic deterministic "time function" (e.g. sinusoid), the number of samples is infinite support in the T domain. And thus there is no low frequency bound. These signals sit on a single line in the TW 2D space. But Shannon is also allowing in his theorem for signals which are not deterministically bound to that W domain and can be a 2D point anywhere in that space. He is saying that the "time function" must be deterministic within the sampling period. That is what he means when he says the 0 outside. --Shelbymoore3 (talk) 13:20, 20 August 2009 (UTC)[reply]
I don't know whether the sampling theorem fundamentally changes in the case of random processes; my intuition would be "no"! Again, I'm always happy to learn when I'm mistaken, so if you have a source that describes otherwise, then please present it.
I'm afraid you're not making yourself very clear in your second paragraph:
  • What do you mean by "fully constrained", and how do you relate it to the concepts of "periodic" and "deterministic"?
  • The "number of samples is infinite support" doesn't make sense; Shannon's definition of "a signal that lies in the interval T" is one whose samples outside T will be zero, that's not the same as saying it has finite support.
  • "deterministically bound to that W domain" again doesn't make a lot of sense; a signal can be anywhere in that 2TW space and still be deterministic.
  • I'm not sure Shannon was saying anything regarding deterministic vs. random, so I'm pretty sure that's not what is meant by "0 outside". Oli Filth(talk|contribs) 14:06, 20 August 2009 (UTC)[reply]
See the reply I gave to LutzL below. Note that a 1-day periodic signal with a bandwidth of 0.1Hz will appear to be a random signal if sampled over an interval T of less than 1 day. Thus even if sampled at 0.05Hz for 1 hour, the sufficient finite support for the 0.1 Hz bandwidth would not provide infinite support for the sampling period interval. This is because the signal's time function is not continuous, so it is not deterministic a priori for any interval less than 1 day (rather, it will appear to be random due to aliasing until T is 1 day or greater). By "fully constrained", I mean a sampling rate of 0.5Hz and an interval of 1 day, per the example I just provided. Obviously if we know the signal is continuous (how do we ever know that in the real world?), then 2TW samples fully constrain the signal. Discontinuous is the more salient term here.

I agree with Oli Filth on the inappropriateness of this new interpretation and edits in the lead of this article. Statements like "this is ignored by most texts," without a cited source, are WP:OR and therefore inappropriate. There may be something to Shelbymoore3's point, though I must say I don't see it, but whatever it is, it's not part of the sampling theorem. Furthermore, he's clearly not hearing what Oli said above (e.g. saying above "You are assuming that the composite signal being sampled is a sinusoid" is an absurd reading of what Oli actually wrote); and he's ignoring the polite warnings and attempts to counsel, coming back with personal attacks against a long-time constructive editor. Shelbymoore3, it would be better to slow down, listen better, and learn how Wikipedia works, than to bang your head against what is likely to be a pretty hard wall, since I'm not going to tolerate such edits any more than Oli did; the only reason he reverted you was that he got there first. Here's a suggestion: start on the talk page, pointing out what sources say; you've done a bit of that with Shannon, but going beyond what he said and putting in your own nonstandard interpretation of it is not going to work; find a source that supports your interpretation, or give it up. Dicklyon (talk) 06:45, 20 August 2009 (UTC)[reply]

  • Your personal opinion of what is appropriate is irrelevant, because you have not quoted Shannon's theorem to support your opinion. What I care about is what Shannon wrote, since it is his theorem we are supposed to be documenting. I have already suggested on this discussion page that I would accept an edit of my contribution, to discard what I wrote and retain only the exact quotes of what Shannon wrote regarding the period T and the specific definition of "time function", as I quoted Shannon above. That would remove all interpretation and leave it to the reader to decide what Shannon meant. Sorry to disappoint your failed attempt to be condescending, but there is no banging against a wall; I have already published a link to, and an archive capture of (in case you delete my comments), this discussion page on websites with a few million visitors, and your censorship will soon be widely known and also widely subverted. You can be proud of having your names on it and be exposed for it.
  • The issue of whether the sampled signal has a deterministic periodic "time function" equation such as a sinusoid is critical, because as Shannon has stated (as quoted below) in his definition of the "time function", we must be able to know the behavior of the signal outside the sampling window T:

To be more precise, we can define a function to be limited to the time interval T if, and only if, all the samples outside this interval are exactly zero. Then we can say that any function limited to the bandwidth W and the time interval T can be specified by giving 2TW numbers.

  • Your demand that I find sources who explain what Shannon meant is really absurd, as we can simply quote Shannon. Shannon is the source for his theorem. I doubt that people who are intelligent enough to fully understand what Shannon wrote about the time period T and the definition of the "time function" have bothered to re-explain it, since it is blatantly obvious that Shannon has already explained it. Why would such very busy experts waste their time publishing redundant information? They simply cite Shannon. Perhaps we can find a source with experts who are into sampling Fat tail signals, as maybe they have had to explain their sampling window T in terms of basic sampling theory. Maybe some other reader of this page will be able to help us in that regard. If you are sincere about finding the truth, I suggest that you take the quotes I cited from Shannon and explain to me an alternative meaning other than what I have explained. And why don't you quote some sources for your alternative interpretation? Your non-compliance with this request is an admission of defeat.
  • I am agreeable to simply quoting Shannon about the time period requirement and the definition of the "time function". So what do you claim is the standard interpretation of those quotes? Oli gave his interpretation at the very top, which was meaningless. Yes, Shannon has stated that T and W form an N-dimensional space. So what? Why did Shannon mention this? I have told you why. What is your reason for Shannon devoting a whole section III to the geometry of signals in that N-dimensional space? Is it because he is defining the limitations of the theory with respect to "time functions" that are not deterministic, employing a very general construct of an N-dimensional space? If the "time function" is deterministic, then Oli is correct that finite frequency support provides infinite support in the time domain. But my point is that Shannon was more general and is accommodating "time functions" that could be chaotic, e.g. Fat tail. I hope you understand that a "time function" is not required to be periodic and deterministic-- maybe that is the little epiphany you are missing? --Shelbymoore3 (talk) 13:05, 20 August 2009 (UTC)[reply]
Why should the time period T be mentioned in this article? This article is about the first three pages of the Shannon paper. The later ones, especially section III, are about what is today known as the Shannon-Hartley theorem on signal transmission rates over noisy channels. And functions that have zero samples outside some bounded interval don't have anything like fat tails; see the Whittaker-Shannon interpolation formula for how such a function looks. Since the Fourier transform of such a function inside the frequency band is given by a trigonometric polynomial, there is no gap in the spectrum around zero.
The only point of yours that is remotely sensible is that from the samples inside a time interval of length T alone, one cannot conclude anything about the function outside this interval, even under the assumption that the function is bandlimited. See again the interpolation formula. Especially, there is nothing certain to be said about the Fourier spectrum, not below wavelength T (however, the band limit imposes a lower bound on the wavelengths) and more so above. (Which, by the way, is the reason that the so-called Huang-Hilbert transform is absurd.) If one includes additional assumptions, such as fast decay outside the interval, or zero samples, etc., then statements about the spectrum can be more securely quantified.--LutzL (talk) 14:00, 20 August 2009 (UTC)[reply]
T is on page 2, where the theorem is. The Shannon-Hartley theorem applies to a continuous-time channel, so it is not applicable to sampling over a finite period T; besides, it has nothing to do with Fat tail signals. I do not understand the remainder of your point about zero samples.
Yes, that is my entire point: "..and whose time functions lie within interval T". Suppose you are sampling an event (e.g. a garage door opening) that occurs once a day (you don't know that a priori); if your time interval is less than a day, then you cannot be assured of capturing one period of that signal. Also note that the signal has a bandwidth W which is perhaps the reciprocal of the 10 seconds to open the garage door. So the "time function" of the signal does not lie in an interval less than 1 day. There is a lower bound and an upper bound on the frequency. For intervals less than 1 day, the signal appears to be random, but this is aliasing. Can you deny this simple example of the dual bound? Shannon mentions both of these bounds in the first paragraph of the theorem:

...and that we are allowed to use this channel for a certain period of time T. Without any further restrictions this would mean that we can use as signal functions any functions of time whose spectra lie entirely within the band W, and whose time functions lie within the interval T.

--Shelbymoore3 (talk) 16:05, 20 August 2009 (UTC)[reply]
It's not clear what your example has to do with the sampling theorem, which presumes an infinite time and infinite set of samples. If there's a refinement of the theorem to finite T, where can we find that in a source? Shannon is talking about something completely different at that point (channels, not signal sampling). Dicklyon (talk) 16:44, 20 August 2009 (UTC)[reply]


(edit conflict) But you should be aware that any signal with limited support cannot have its spectrum confined within a frequency band W. Shannon knew this; do you? You should, since this is the next sentence after your quote. And this article, as well as the interpolation formula article, is concerned with the functions that Shannon goes on to describe: functions that are exactly contained in the frequency band and are small outside the interval, that is, falling off reciprocally with the distance to the interval. No fat tails, and no gaps in the spectrum. As I said, this article is not concerned with section III; that is Shannon-Hartley or the noisy channel coding theorem. Here we deal with section II, which is complicated enough, since some people still think that sine functions are admissible as signals in this context.--LutzL (talk) 16:52, 20 August 2009 (UTC)[reply]
The aliases occur due to high-frequency components, not low ones. Your garage door is a square wave (or something similar), with harmonics to infinity, and is therefore not properly bandlimited. If your "signal" were correctly bandlimited, then there wouldn't be a problem (theoretically).
Incidentally, you seem to be blurring the distinction between "random" and "deterministic, but unknown". If the signal to be sampled is random, then the samples will always be random. Similarly, if the signal is deterministic, the samples will always be deterministic. (I'm sure you're aware of that, it's just that you seem to be conflating the two in your recent posts.) Oli Filth(talk|contribs) 18:44, 20 August 2009 (UTC)[reply]
Both of you are going off on irrelevant, circular-logic straw men. It is quite simple to understand. First, note that for the example signal I provided above (period 1 day and perfect sine-wave pulse of width 10 seconds), if T is 1 day and W is 0.1 Hz, then the signal can be reconstructed with no aliasing if the number of equally spaced samples is 2TW (i.e. 2 x 1 day x 0.1 Hz = 17,280 samples). Do you disagree with the prior sentence?
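(For concreteness, a minimal Python check of that sample count, using only the hypothetical numbers from the example above:)

T = 24 * 60 * 60      # sampling interval T in seconds (1 day)
W = 0.1               # assumed bandwidth W in Hz, as in the example
print(2 * T * W)      # 17280.0 samples, i.e. one sample every 1/(2W) = 5 seconds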
I assume you agree, thus we need only quote Shannon to show that his theorem applies to the above example:

If the function is limited to the time interval T and the samples are spaced 1/2W seconds apart, there will be a total of 2TW samples in the interval. All samples outside will be substantially zero. To be more precise, we can define a function to be limited to the time interval T if, and only if, all the samples outside this interval are exactly zero. Then we can say that any function limited to the bandwidth W and the time interval T can be specified by giving 2TW numbers.

For the sake of conceptual understanding, ignore any complication of the real-world impossibility of sampling a sine-wave pulse with bandwidth W; we can simply increase W, and that does not affect my conceptual point about T. Shannon mentions that we should ignore this aliasing issue (we can smooth when we reconstruct with a low-pass filter):

Although it is not possible to fulfill both of these conditions exactly, it is possible to keep the spectrum within the band W, and to have the time function very small outside the interval T.

The above is slam-dunk logic. There cannot be any possible retort, except to introduce irrelevant straw men. I have provided an example and quoted the best possible source, Shannon, which clearly shows that the sampling interval T has a lower bound for discontinuous signals. For continuous signals, the lower bound of T is 1/W, which applies only if the samples are equally spaced. For discontinuous signals, the lower bound of T is the period of the maximum discontinuity, i.e. of the Fat tail. Will you continue to withhold this important aspect of the theorem from the main Wiki page?
I propose that the following be added to the introduction of the main Wiki page, with a citation to Shannon's quote above:

For continuous-time signals, the sampling interval T must be at least 1/W for equally spaced samples; otherwise the lower bound is unlimited except by factors such as the resolution of the sampling device. For discontinuous pulse signals, e.g. Fat tail signals, the lower bound of the sampling interval T is the period between pulses, and the number of equally spaced samples is 2TW.

However, I will acknowledge that perhaps Shannon was only concerned with the continuous portions of a discontinuous signal, because of what he wrote near the end of section II, but it is my understanding that this was mentioned last because his prior discussion of the theorem is fully generalized to discontinuous signals when he mentioned 2TW equally spaced samples ("...spaced 1/2W seconds apart"...), and the following is obviously only applicable to continuous signals:

The numbers used to specify the function need not be the equally spaced samples used above. For example, the samples can be unevenly spaced, although, if there is considerable bunching, the samples must be known very accurately to give a good reconstruction of the function. The reconstruction process is also more involved with unequal spacing. One can further show that the value of the function and its derivative at every other sample point are sufficient. The value and first and second derivatives at every third sample point give a still different set of parameters which uniquely determine the function. Generally speaking, any set of independent numbers associated with the function can be used to describe it.

The above Shannon quote actually states implicitly that equally spaced samples must be used for discontinuous signal functions, because obviously a discontinuous function has dependent values (= 0) in the discontinuity:

...Generally speaking, any set of independent numbers associated with the function can be used to describe it.

—Preceding unsigned comment added by Shelbymoore3 (talkcontribs) 03:24, 21 August 2009 (UTC)[reply]
Further evidence that Shannon was aware that his sampling theorem is applicable to discontinuous signals is contained in section XII, CONTINUOUS SOURCES, of his paper. He explained that continuous signals may be broken into discontinuous ones and that the aliasing error (due to high frequencies at the discontinuity) could be quantified and tolerated (e.g. checksums for digital data sent over an analog channel):

If the source is producing a continuous function of time, then without further data we must ascribe it an infinite rate of generating information. In fact, merely to specify exactly one quantity which has a continuous range of possibilities requires an infinite number of binary digits. We cannot send continuous information exactly over a channel of finite capacity. Fortunately, we do not need to send continuous messages exactly. A certain amount of discrepancy between the original and the recovered messages can always be tolerated. If a certain tolerance is allowed, then a definite finite rate in binary digits per second can be assigned to a continuous source. It must be remembered that this rate depends on the nature and magnitude of the allowed error between original and final messages.

Dicklyon, your point about the signal being infinite in time and T only applying to the channel is irrelevant. Oli, your point that my example signal is not contained within the parameters given by Shannon is not true, as my explanation above shows. I think the problem you are having is that you have been accustomed to applying what Shannon wrote only to continuous time-domain signals, and as I said in my original edit, discontinuous signals such as Fat tail are also handled by Shannon's theorem-- the proof is explained above. You don't need to go introducing other theorems about channel noise, as that is irrelevant. Shannon's sampling theorem is applicable to any idealized signal, whether it be continuous or discontinuous during its period. Shannon is obviously aware of that, by the very general way he wrote his theorem. If you can't bring yourself to understand such a simple concept, then there will be some limit to how many of your misunderstandings I can continue to reply to. I do not say this to be disrespectful, but rather because my time for this is limited. Thank you.
--Shelbymoore3 (talk) 02:25, 21 August 2009 (UTC)[reply]
Oli, you erroneously claimed that the aliasing was due to not meeting the bandwidth requirement of my example signal, but I can prove you are wrong in 2 ways. First, it is obvious that if the signal is not sampled for at least 1 day in duration, then the pulses will sometimes not even appear in the samples. That is not high-frequency aliasing, but aliasing due to an insufficient sampling interval. Second, if the bandwidth of the pulse is taken to be W, then even if I sample at 2TW, if T is less than 1 day I will still get the type of aliasing where the pulse sometimes never shows up at all in my samples-- that is clearly not aliasing due to insufficient W support. --Shelbymoore3 (talk) 04:40, 21 August 2009 (UTC)[reply]
If you believe that the pulses not appearing in the sample stream is not due to high-frequency content aliasing, then you sorely misunderstand basic Fourier analysis and the sampling theorem. This is trivially discounted by first running your pulse train through a low-pass filter (which acts as an anti-aliasing filter in this case), and then sampling. It may also be discounted by changing your sample interval to either extreme, e.g. 1.1 minutes or 1 year + 1 minute; still, some of your pulses will not appear in the sample output. Oli Filth(talk|contribs) 10:06, 21 August 2009 (UTC)[reply]
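(As a rough Python sketch of the observation-window effect being argued over here, with purely hypothetical numbers and no pre-filtering, so it illustrates the windowing point only, not either side's position on aliasing:)

import numpy as np

def pulse_train(t, period=86400.0, width=10.0):
    # 1 inside a pulse, 0 elsewhere; an idealised sparse signal, deliberately NOT bandlimited
    return ((t % period) < width).astype(float)

fs = 0.2  # assumed sample rate in Hz, i.e. 2W for an assumed W = 0.1 Hz

for t0, window in ((100.0, 3600.0), (100.0, 2 * 86400.0)):
    t = t0 + np.arange(0.0, window, 1.0 / fs)
    hits = int(pulse_train(t).sum())
    print(f"start {t0:.0f} s, window {window:.0f} s, nonzero samples: {hits}")
# A 1-hour window that starts just after a pulse catches nothing; a 2-day window catches later pulses.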
Incorrect. The low-pass pre-filter has the same problem as the sampling device would, in that it can't see any amplitude from the coming pulse until up to T <= 1 day has elapsed. --121.97.54.2 (talk) 17:23, 21 August 2009 (UTC)[reply]
What you are describing is time windowing --> filtering --> sampling, which is not the same as filtering --> windowing --> sampling. Oli Filth(talk|contribs) 17:12, 22 August 2009 (UTC)[reply]
Transposing "windowing --> filtering" does not change the duration of time your system is going to need to wait before it can output any reconstructed signal. The filter is still going to need 1 day of input on that sparse signal example I had provided. --Shelbymoore3 (talk) 11:14, 23 August 2009 (UTC)[reply]
By definition, the filter will have the input from + to - infinity. Oli Filth(talk|contribs) 11:24, 23 August 2009 (UTC)[reply]
Infinity is a looong time to wait in the real world. That is why my ENTIRE point of the debate still stands: in the real world, the maximum period between sparse events determines our sampling interval (i.e. I think of this as a low frequency bound). And the larger implication of this is that by definition we do not know the maximum period of Fat tail events. So this means that for Fat tail phenomena, sampling theory tells us that we cannot get a clue about the future from sampling an independent Fat tail channel-- possibly only Mutual information can help us. My goal was to get across to the student of Nyquist-Shannon that assuming infinite models apply to the real world can be very dangerous, which we will soon all see in our lives:
http://www.professorfekete.com/articles.asp --Shelbymoore3 (talk) 01:33, 24 August 2009 (UTC)[reply]
Infinity is fine in my original hypothetical counterexample! You can think of it as a lower frequency bound if you like, but you'd be doing your understanding a disservice, as it's really nothing to do with frequency (at least in the Fourier-analysis sense). Oli Filth(talk|contribs) 19:25, 24 August 2009 (UTC)[reply]
(friendly tone again) My point remains that for a sparse event signal where the time limit is known a priori, your infinite-time pre-filter cannot convert the time-limit bound into a bandlimited bound. Instead we must consider the time-limit-bound aliasing in the non-infinite-time pre-filter and in the thus non-perfectly-bandlimited sampling. Thus the time-limit bound is a low frequency bound in the broadest definition of the word "frequency"-- agreed, not in the Fourier sense, but that is irrelevant to my point. Additionally, my point immediately below is that for Fat tail signals, by definition the time limit is not known a priori; thus sampling/measuring itself can be entirely counterproductive, unless you have Mutual information.
"Sigh"! That's why I stated that the filter should be placed before the time windowing. Then its output will be truly bandlimited. Doing it in this order will give you a totally different result to doing it the other way round (or not at all). No events will be "lost".
I'm glad we agree that it's not frequency in the Fourier sense; a lot of this discussion thread could've been avoided if you'd stated that in the first place! Regards, Oli Filth(talk|contribs) 08:39, 25 August 2009 (UTC)[reply]
(friendly tone) Some think that measuring is better than not measuring at all; but they may be under the illusion, from an infinite sampling model, that they will just get less precision, when in fact the result can be the complete opposite of the target signal, i.e. the Fat tail in the prior paragraph. Students of science are trained to develop a blind faith that infinity, in elegant closed analytical form, can hide the Second law of thermodynamics trend towards maximum disorder, i.e. maximum information or capacity to do work. In short, science is a faith in the stability of a shared order-- one that cannot be permanent. This is why any universal theory that is not founded upon a world of maximum disorder will never be the final one. This is IMHO why space-time is not the totality of the universe, and why the Big Bang and infinite time are nonsense, as neither describes what is at the "infinite" edge that can never be reached by our perception of order-- disorder. --Shelbymoore3 (talk) 04:01, 24 August 2009 (UTC)[reply]
Go and learn some math. Bandlimited signals, the only signals the theorem speaks about, are infinitely smooth, even analytic, have finite energy (L2 norm), and never have finite support. No band-limited signal in that sense is periodic (apart from the zero signal). There is no discontinuity. The aspect of discontinuity in real-world signals should be discussed in the more general sampling article. And of course, since the sinc system is orthogonal, the error in the reconstructed signal is (proportional to) the sum of squares of the samples you leave out. So if anything interesting happens outside the sampling interval, the error will be big. However, if something not too drastic happens far away from the interval, then the error inside the interval will still be small. It's an interpolation formula, after all. And where Shannon speaks of samples outside the interval, he means the points of the equally spaced sequence. Up to today, there is very little that is certain about unequally spaced samples (that is, samples without a periodic pattern). In that regard, Shannon's sentence is like Fermat's last theorem. (See Higgins: "Five stories...")--LutzL (talk) 06:27, 21 August 2009 (UTC)[reply]
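(A minimal Python sketch of the sinc interpolation being described, with made-up numbers: reconstruct an assumed bandlimited test signal from samples taken only inside a finite block, and compare the error inside vs. outside that block:)

import numpy as np

W = 0.1                      # assumed band limit in Hz
Ts = 1.0 / (2 * W)           # Nyquist-rate spacing: 5 s between samples

def x(t):
    # bandlimited test signal: components at 0.03 Hz and 0.08 Hz, both below W
    return np.sin(2 * np.pi * 0.03 * t) + 0.5 * np.cos(2 * np.pi * 0.08 * t)

t_n = np.arange(200) * Ts    # 200 samples covering roughly a 1000 s block
x_n = x(t_n)

def reconstruct(t):
    # truncated Whittaker-Shannon sum: sum_n x[n] * sinc((t - n*Ts) / Ts)
    return np.sum(x_n * np.sinc((t[:, None] - t_n[None, :]) / Ts), axis=1)

t_in = np.linspace(300.0, 700.0, 400)     # deep inside the sampled block
t_out = np.linspace(1200.0, 1600.0, 400)  # entirely outside it
print("max error inside block :", float(np.max(np.abs(reconstruct(t_in) - x(t_in)))))
print("max error outside block:", float(np.max(np.abs(reconstruct(t_out) - x(t_out)))))
# The truncated sum is accurate well inside the sampled block and useless outside it,
# consistent with the error being governed by the samples that were left out.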
Correct, there is no such thing as a perfectly band-limited signal in the real world, and Shannon admits that, as I already quoted and will quote again:

Although it is not possible to fulfill both of these conditions exactly, it is possible to keep the spectrum within the band W, and to have the time function very small outside the interval T.

What we do is trade a hopefully small amount of aliasing, smoothed in the reconstruction, for the fact that all signals in the real world are somewhat discontinuous. So in my example, just ignore the higher harmonics from the edge of the pulse, as we just smooth those away in the reconstruction. And in fact, we do this for every real signal in the world. So please stop the nonsense about Shannon's theorem not applying to signals that have a weird shape, such as the Fat tail example I provided. I do understand your point that if we modeled my example signal analytically, such that it had a period of 1 day and a sine-wave pulse of 10 seconds with some extremely steep falloff, then it would most definitely be subject to Shannon's theorem and no one here would deny that; you would then have a high band W in order to capture that falloff, but T would still need to be 1 day, or else you would need a sampling device with near-infinite accuracy. In other words, it may be that we could measure an earthquake ahead of time with an infinitely accurate (noise-free) sampling device, but in practice we can't do it. For Fat tail you will need to lengthen the sampling interval instead. I am sympathetic to your point as to whether this discussion applies more generally to sampling and not specifically to the Shannon-Nyquist theorem, but let me ask you to explain how we eliminate Shannon's statement "and whose time functions lie within interval T"? That statement is going to need finite support in the real world, and I don't think we want to say that the most fundamental theorem doesn't apply to real-world signals. And I want to point out that Shannon-Nyquist is telling us what the bounds of our sampling criteria are. It is important that people understand that in the real world the bound is both frequency and some tradeoff of interval, power, and accuracy (did I miss any factors?). If people were more aware of this, there would be a lot less nonsense statistics out there. I am pretty confident quantum theory is a mess because the measuring devices are aliasing. Yeah, we never will know what to do about randomly unequal samples on real-world signals, unless we have initial data (mutual information). The problem I have in general with your line of argument is that we live in the real world and Shannon's paper was about a real-world system. In this world we live in, nothing is an absolute. Everything (mass, energy, space-time, thoughts, etc.) is just perception (some shared through resonance). I know what our space-time is contained in-- disorder, but that is off topic except to make you think a little bit about the pity of the absolute and your pushing of Shannon's theorem away into a perfect world that does not exist. --Shelbymoore3 (talk) 08:21, 21 August 2009 (UTC)[reply]
In case I wasn't sufficiently clear, I think you are wrong, Lutz, to claim inapplicability of Shannon-Nyquist to the problem of choosing a suitable interval T. The theorem is all about that. Shannon-Nyquist gives us the initial fundamental understanding of the relationship between W and T. Specifically, the theorem explains that for an idealized signal (perfectly continuous, aka analytical, aka deterministic, aka fully constrained) the choice of interval T is irrelevant, and that the only requirement for zero aliasing is that we need 2TW samples, where W is the idealized band for our signal. Shannon also explains that real-world signals will require us to choose a suitable trade-off between W and T such that the aliasing is minimized:

Although it is not possible to fulfill both of these conditions exactly, it is possible to keep the spectrum within the band W, and to have the time function very small outside the interval T.

So stop telling me that Shannon-Nyquist does not apply to real-world signals, and stop telling me that it doesn't give us the initial concept of the relationship between W and T. All I am asking is that we make this clear to the readers, so they understand that Shannon-Nyquist sets up the initial framework for all the rest of the work on those tradeoffs, e.g. the other theorems you all mentioned about noise, etc. I do admit that I goofed on my proposed edits, because I tried to frame this tradeoff as a dual bound, whereas it is really a tradeoff. The quantitative tradeoff choice will be affected by factors outside the scope of Shannon-Nyquist, but the initial concept of the existence of such a tradeoff is laid out by Shannon-Nyquist. Again, my case is that if we only mention the 2TW and do not mention this tradeoff, then we are leaving out a huge conceptual part of the theorem, because the theorem is for real-world signals, as the quote from Shannon above attests. I should hopefully be able to rest my case now, unless you retort with something significantly new or false. --Shelbymoore3 (talk) 09:04, 21 August 2009 (UTC)[reply]
I'm not sure if we are getting somewhere with this discussion. In some way I feel that this discussion misses the target Shannon was aiming at. Shannon was not concerned with sampling for reconstruction. His concern was: given an ideal channel with bandwidth W (that is, any signal that gets into this channel is cut off at this bandwidth by an ideal bandpass filter, so if one wants an unperturbed signal, one has to put a bandwidth-constrained signal function in), how many different data points can one pass through this channel inside a time interval T, so that they are exactly recoverable at the other end of the channel? His proposed and proven answer is 2WT data points. And this follows nicely from the properties of the sinc interpolation formula, which can be proven in different ways. Since one would have to start a bandlimited signal not only at the Big Bang, but at time minus infinity, this idealized model is not true in practice. So practically one gets fewer than those 2WT data points. But this is not the concern in section II. The tradeoff mentioned in the quote is to restrict oneself to exactly bandlimited functions, which are then necessarily not exactly zero outside the interval, but can be assumed to be zero at the sampling points outside the interval. You see, there is no connection to straw men like "garage doors" or "sine functions" or the new one, "earthquakes", because they have nothing to do with signal transmission.--LutzL (talk) 10:17, 21 August 2009 (UTC)[reply]
The choice of a low-pass pre-filter governs the tradeoff between W and T, so refer to what I wrote previously; thus you did not refute any of my points. Sigh. Btw, the Big Bang is nonsense, as is the concept of infinite time, because order can't be infinite without violating the 2nd law of thermodynamics, which states that the universe trends to maximum disorder; but that is (not entirely) off topic and I have a paper coming out on that this month. --121.97.54.2 (talk) 17:06, 21 August 2009 (UTC)[reply]
Well, perhaps we are getting somewhere if I then ask you: what will be the sampling interval for the low-pass pre-filter? You see, it is still the same problem: in practice you need a longer sampling interval for sparse signals. Sampling theory for the real world will not allow you to sample sparse signals with arbitrarily small T. Agree or disagree? Then we have to debate whether this is off topic for this theorem of sampling theory.
The point from the very start is that 2TW is not a sufficient constraint for sparse or Fat tail signals. There is a practical constraint of a minimum sampling interval, i.e. a minimum value for T. Shannon mentions this constraint, because the time function for the signal must lie within T. If the band W of our sparse signal is 0.1 Hz, but the sparseness ranges up to 1 day, then the sparse time signal is not going to lie within T = 1/W. Thus Shannon's theorem is saying we will need to choose a larger duration for T. If we pre-filter the sparse signals to a band of 1/(60*60*24) Hz, then we will lose our signal to aliasing. Excuse me, but I am so sleepy my eyes won't stay open, so I am not sure if this will make sense when I read it next. I reserve the right to edit this response in the future. --121.97.54.2 (talk) 18:01, 21 August 2009 (UTC)[reply]
PROOF: If, as Lutz and Oli both suggest, we low-pass pre-filter at 1/(60*60*24) Hz (i.e. 1/day), then 2TW samples means T must be no less than one day, otherwise we get fewer than 2 samples! A signal cannot be reconstructed with fewer than 2 samples! So there is the UNDENIABLE proof right there! Thus T >= 1/W. If we low-pass pre-filter at some higher band W, then T decreases, but the larger sampling interval was just passed to the low-pass pre-filter, which is simply another sampling device; i.e. you kicked the can down the road but didn't avoid the requirement that "the time function lies within the sampling interval T", as the theorem states. So I think we can now conclude that I was correct, that for sparse signals (Fat tail) the theorem requires a T which is no less than the largest period between sparse events. I realize that for an idealized signal (or after you have passed it through your ideal output pre-filter), 2TW just tells us how many samples we need, not whether they need to be spaced evenly over the entire T, but in the real world we can't kick the can down the road, because the pre-filter is subject to sampling theory also, so no matter how we slice it, we need evenly spaced samples over T (somewhere in our pre-filter chain). --Shelbymoore3 (talk) 12:28, 22 August 2009 (UTC)[reply]
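(The arithmetic in that paragraph, as a tiny Python sketch using the same hypothetical pre-filter band:)

W = 1.0 / 86400.0                      # assumed pre-filter band of 1 cycle per day, in Hz
for T in (3600.0, 43200.0, 86400.0):   # 1 hour, half a day, 1 day
    print(T, 2 * T * W)                # 2TW = 0.083..., 1.0, 2.0; only T >= 1/W yields at least 2 samples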

(outdent) I must say that I've lost track of what your argument is any more (maybe you could re-state it concisely?). No one is suggesting that one is able to reconstruct a signal outside the observation window; that's obviously impossible (in the general case). This is still not the same as saying that there's a "lower frequency bound", which was your original argument. Incidentally, a filter is not a "sampling device". Oli Filth(talk|contribs) 17:19, 22 August 2009 (UTC)[reply]

Shelbymoore3, I strongly advise not restating the argument. What's needed is a secondary source from which we can work. It is not OK to restate the sampling theorem based on an idiosyncratic interpretation of Shannon's original work. We quote what he said about sampling, which is the bit that hundreds of secondary sources quote, prove, and expound upon. That's what this article is about. If you want to go beyond that, bring a source that goes that way. Otherwise, please give it up, because more talking here is not going to be productive in the way you like. Dicklyon (talk) 19:47, 22 August 2009 (UTC)[reply]
Dicklyon, how can you know what I like? I am very, very happy with the result of this discussion, regardless of whether you censor and refuse to quote what Shannon wrote. I note that you guys removed the change to the title that I made from "Misinterpretation?" to "Arbitrarily small T?". You removed the more accurate title, which summarized the specific misinterpretation that is discussed in this section. What good does it do to pack every possible misinterpretation into one section? Please change the title to something mutually agreeable and more specific. Your efforts to top-down censor (even on the discussion page) are obvious for all readers to see. Dicklyon, as you read below, I simply do not understand the benefit of withholding the full specification of the requirements on the signal, as per an exact quote of what Shannon wrote. Just because the implications for sparse signals do not interest you, withholding the full meaning of the theorem from the reader is a disservice to humanity. I must say you seem to be a grumpy person.
Indeed, the fact that Wikipedia arguments make me grumpy is one of my many failings. I'm looking at Shannon's paper here (it took me a while to find the sentence, since you stuck an inappropriate extra word "sampling" into the quote), and I don't think that sentence has anything to do with the theorem that follows it, which is self-contained and is proved without reference to any interval T. He later counts the samples in the interval T as 2*T*W. I don't see how you get from that to the stuff you've been proposing to add. Dicklyon (talk) 00:37, 23 August 2009 (UTC)[reply]
As I explained below, the Shannon quote on the main page about "x(t)" fully specifies the theorem, because the T requirement is implicit if there exists an x(t) that can be sampled. However, as I point out below, the 1st sentence of the 3rd paragraph of the main article has an erroneous summary of the "x(t)" Shannon quote; in essence, it removes the requirement of T. You see, that requirement on T and the time function is implicit in the quote containing "x(t)". If you want to explain that "x(t)" quote, then you must mention that the time function exists within the sampling interval T. --Shelbymoore3 (talk) 01:40, 23 August 2009 (UTC)[reply]
I don't get that interpretation, nor have I seen anything like it in a source. In the Shannon paper itself, as well as in sources that talk about this theorem, the reconstruction is generally shown as a sum over an infinite number of samples, which implies the complete lack of the concept of a sampling interval T. See for example [1]. If the interval is finite, perfect reconstruction is not possible; that's not what the theorem is about. Dicklyon (talk) 01:49, 23 August 2009 (UTC)[reply]
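(For reference, the infinite reconstruction sum being referred to is the standard Whittaker-Shannon form for a signal bandlimited to B, written here in LaTeX:)

x(t) \;=\; \sum_{n=-\infty}^{\infty} x\!\left(\frac{n}{2B}\right)\,
           \operatorname{sinc}\!\left(2Bt - n\right),
\qquad
\operatorname{sinc}(u) \;=\; \frac{\sin(\pi u)}{\pi u}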
See next (outdent) below...
As for Dicklyon's repeated assertion that I have not quoted sources, I will repeat again that I have only asked that we quote Shannon explicitly. We only need to quote his statement that, in addition to the 2TW, Shannon also requires that "the time function lie within the sampling interval T". I do not need another source to quote Shannon. I am saying we don't have to make any interpretation at all; we can simply add this to the bottom of the Introduction section:

In addition to the sampling rate requirement 2TW, Shannon also stipulated that "the time function must lie within the sampling interval T".

Oli, I appreciate your tone and your good-faith request for me to re-summarize the issues. To keep it simple, I think we could just leave it as the above change to the main page.
Oli, notwithstanding that we keep it simple and just quote Shannon in a short one-sentence addition to the end of the Introduction section, I will re-summarize for you. Yes, I only propose to say that we cannot reconstruct a signal outside the sampling interval; well, that is also not exactly correct, so more precisely let's just quote Shannon as per above. There is some misunderstanding between you and me on syntax, but we apparently agree on the semantics. The point is that the signal's time function must be deterministic within the sampling interval. The way Shannon stated it is best. My use of other words, e.g. "deterministic", "fully constrained", "continuous", etc., leads to misunderstandings on syntax. Let's stick with Shannon's exact quote instead. What I originally meant by "low frequency bound" is that if the signal has a long period between events (e.g. sparse signals, Fat tail), then I view that as a low frequency requirement, i.e. the signal lies outside the sampling interval. I think we got ourselves into disagreement based on syntax, not based on semantics, where we apparently agree. —Preceding unsigned comment added by Shelbymoore3 (talkcontribs) 23:50, 22 August 2009 (UTC)[reply]
Oli, an analog filter (e.g. capacitance, impedance, and inductance, or their analogs in mechanics) is apparently not a sampling device, but in the case of filtering a sparse signal, the analog filter cannot sense the sparse event until it has occurred. So in that respect it has the same sampling interval requirement. And I say "apparently" because actually in the real world a filter does not have infinite resolution, so therefore it is a sampling device-- it's just that the aliasing is usually not significant enough to consider. But for sparse signals, the sampling theory may apply to the filter. My point is made already. You can stop reading here. Now let me ramble off the topic of this page a bit, just for entertainment value (well, actually it is relevant, but only as original research). See, one of the problems people have is that they think in terms of absoluteness of perception, but perception is only relative to the universe's trend to maximum disorder (2nd law of thermodynamics). Time-space is contained within disorder; it is not an absolute; just look at the complex plane of the Lorentz equations. —Preceding unsigned comment added by Shelbymoore3 (talkcontribs) 00:07, 23 August 2009 (UTC)[reply]
I believe Dicklyon's concern was that article talk pages are for discussing improvements to the article, whereas this thread has now diverged into arguing about the subject matter, which is not really the purpose of the talk page. Providing a source would bring a swift end to the matter.
You're right, there is certainly some confusion on the terminology you're using. I think the notion that we all apparently agree is: sampling a time-windowed signal preserves no information about the signal outside the window. Whilst we could add your suggested prose to the article, I believe it would be superfluous, as the article doesn't attempt to address the notion of signals with finite time-domain support, nor even the notion of the dimensionality of the signal space. Oli Filth(talk|contribs) 00:10, 23 August 2009 (UTC)[reply]
And no, an LTI filter is unequivocally not a sampling device, has infinite resolution, and never introduces any aliases! Oli Filth(talk|contribs) 00:14, 23 August 2009 (UTC)[reply]
Incorrect; an LTI filter only has infinite resolution given an infinite-time (continuous-time) sampling interval, and the resolution degrades as the sampling interval approaches the lower limit of the interval within which the time function of the signal lies. Wait, I will read the main page again to see if your assertion that quoting Shannon is superfluous is correct. --Shelbymoore3 (talk) 00:37, 23 August 2009 (UTC)[reply]
What do you understand "resolution" to mean? A hypothetical ideal LTI filter, by definition, operates from + to - infinity. I assumed such a filter when I originally brought it up. Oli Filth(talk|contribs) 00:45, 23 August 2009 (UTC)[reply]
I just want to understand how these infinite-time models quantitatively interact with the real world; otherwise they are useless to me except as thought experiments towards the useful goal. See below (my prior edit) for the band-limited vs. time-limited quantitative loss of infinite support (resolution) in the time domain. Infinite time for me is a fairytale that doesn't exist, because the only thing that is infinite in my model of the world is the trend to maximum disorder. The models that try to hide disorder in infinite time are straw men that have to be broken down over time by new science. --Shelbymoore3 (talk) 06:33, 23 August 2009 (UTC)[reply]
I have re-read the main page, and I agree that the first paragraph and the Shannon quote fully specify the theorem, because by definition x(t) cannot be defined if it doesn't lie within the sampling interval. I agree there is a need to explain that Shannon quote for the reader. But the problem is that the third paragraph removes the requirement that the "time function lie within the sampling interval T", so we need to fix this sentence on the main page:

In essence the theorem shows that an analog signal that has been sampled can be perfectly reconstructed from the samples if the sampling rate exceeds 2B samples per second, where B is the highest frequency in the original signal.

--Shelbymoore3 (talk) 01:06, 23 August 2009 (UTC)[reply]
See next (outdent) below...
Exactly so, because in all sources we know of, that's not what the sampling theorem is about. I'm open to improving the article by the addition of such stuff, but only if we find sources that connect it to the topic of the sampling theorem. Here are places to look. Dicklyon (talk) 00:22, 23 August 2009 (UTC)[reply]
Incorrect; according to Wikipedia's policy, we do not need more than one source if that one source is canonical. Shannon is the source for what he wrote. Everyone using his theorem is implicitly using the requirements Shannon gave in the theorem. If everyone were incorrectly using his theorem (i.e. ignoring the requirement that "the time function must lie within the sampling interval T"), then we would still have an obligation to point out the part of Shannon's theorem that the mainstream does not use. We are documenting the theorem itself-- try to remember that. The theorem is an orthogonal topic here on Wikipedia. --Shelbymoore3 (talk) 00:37, 23 August 2009 (UTC)[reply]
We already quote the theorem in its entirety, and its proof does not require this extra condition, nor is there anything in the theorem or its proof about a finite interval T. The exact reconstruction depends on the interval being infinite. With respect to the T and W limitations, Shannon says that "it is not possible to fulfill both of these conditions exactly" and then goes on to write a theorem involving only the W condition. Live with it. Dicklyon (talk) 01:52, 23 August 2009 (UTC)[reply]
See next (outdent) below... --Shelbymoore3 (talk) 02:12, 23 August 2009 (UTC)[reply]

(outdent) Dicklyon, in reply to your claim above that the Shannon quote in the first paragraph of the theorem, "time function must lie within interval T", does not apply to the theorem and is not used by anyone, I want you to note that the concise statement of the theorem involves a time function "x(t)". It is obvious to everyone that you cannot sample your time function if it does not exist inside your sampling interval, which is what Shannon wrote: "time function must lie within interval T". Nobody on this earth is sampling in infinite time (and besides, Shannon's paper is not about sampling in infinite time; it is about a communications system in the real world). So it is incredibly obvious why Shannon mentioned the requirement "time function must lie within interval T". In his concise statement of the theorem, "x(t)", this requirement is implicit. The 1st sentence of the 3rd paragraph of the main article does not say "sampled for infinite time"; therefore that sentence is in error and in disagreement with the theorem:

In essence the theorem shows that an analog signal that has been sampled can be perfectly reconstructed from the samples if the sampling rate exceeds 2B samples per second, where B is the highest frequency in the original signal.

--Shelbymoore3 (talk) 02:08, 23 August 2009 (UTC)[reply]

The above sentence from the main article is in error, because it states that merely sampling at a 2W rate will reconstruct a signal of bandwidth W, which is false if the signal lies outside the sampling interval. So either the sentence has to state that the sampling interval is infinite, or it has to qualify that the signal lies within the sampling interval. --Shelbymoore3 (talk) 03:20, 23 August 2009 (UTC)[reply]

Here's what he said:
This "one answer" is a theorem that punts on the finite time interval, since that requirement "is not possible to fulfill", in order to get a "more useful way" of describing what's possible. Of course I agree that nobody in the world samples the infinite past and future. That's no barrier to to a mathematical theorem, though. On the other hand, if you have a source that interprets it differently, I'm all ears. Dicklyon (talk) 02:25, 23 August 2009 (UTC)[reply]
I am not disagreeing with the quote of that theorem-- it is complete because it not only works in the infinite case but is also general enough to imply that if f(t) exists in your sampling window, then the requirement of T has been implicitly fulfilled. Thus Shannon never punted; he took the opening paragraph and put that T requirement into the more concise statement of the theorem. I am disagreeing with the 1st sentence of the 3rd paragraph of the main article, which is in error (see my reasons above for why). --Shelbymoore3 (talk) 03:20, 23 August 2009 (UTC)[reply]
Let me expound on my reason why that sentence in the main article is inconsistent with the theorem as quoted. I wrote above, "So either the sentence has to state that the sampling interval is infinite, or it has to qualify that the signal lies within the sampling interval". The problem is one of syntax. "Signal" can mean many different things in the context of the 1st sentence of the 3rd paragraph. Shannon was clear (both in his concise statement, and in the paragraph that precedes it) that we must be sampling a signal that has a time function that lies within the interval. In other words, we must have infinite (or near-infinite) support in the time domain, i.e. the time function must be deterministic given 2W samples taken anywhere within the interval. The 1st sentence of the 3rd paragraph removes that requirement, and it is thus erroneous. We can fix that sentence very easily as I have suggested above, and then we are done here. How hard can that be for you? --Shelbymoore3 (talk) 03:37, 23 August 2009 (UTC)[reply]
OK, good point. I just added "(infinite sequence of)". Feel free to remove the parens if you think that's not clear enough or strong enough. Dicklyon (talk) 03:33, 23 August 2009 (UTC)[reply]
Thank you very much. I express my humble appreciation. IMHO, your edit is quite sufficient for consistency with the infinite case.
However, we still have the problem that Shannon spoke about how the theorem applies to the real world in his opening paragraph of section II of his paper (as you quoted in the above discussion). What do you think about adding another sentence after the one you edited, as follows? "If the sequence of samples is not infinite, it is possible (by band-pass pre-filtering) to have a bandwidth B for a chosen sampling interval T such that the time function x(t) of the signal is very small outside of T; then 2BT samples suffice." The point is that in the real world we can choose a suitable sampling interval and apply a band-pass filter to constrict the signal to the finite sampling interval; then the theorem applies because x(t) becomes very small outside the sampling interval. Although this is implied by the infinite case, I think we can be more explicit so the reader does not have to be a genius to understand how this is applied to the real world. And our canonical source is Shannon. He told us how to apply the theorem to the real world. Let's tell our readers what he said. --Shelbymoore3 (talk) 04:10, 23 August 2009 (UTC)[reply]
Also note that Lutz was correct before to write "bandlimited", and I was incorrect to write "low-pass" pre-filter; I have now written "band-pass" above. We have to remove the low frequencies outside of the interval T also. So in the end, I was correct: there is a low-frequency requirement in Shannon's theorem when applied to non-infinite signals. Also, I admit I have learned the theorem better from this discussion, as is evident from what I wrote just above, and can now see clearly (analytically) how the opening paragraph is Shannon explaining the theorem for non-infinite intervals. Shannon merely states that we can use a finite T by band-pass pre-filtering-- that does not require changing the concise statement of the theorem; it is just a little extra point to help the reader apply the theorem in the real world. I hope it is more clear to you now also. Glad we can help each other. That is the spirit of Wikipedia. --Shelbymoore3 (talk) 04:25, 23 August 2009 (UTC)[reply]
Yes, that's all good, but let's also stick to the letter of Wikipedia, as in WP:V, WP:RS, WP:NOR. Find a source that tells it the way you see it, and then we can consider it. Dicklyon (talk) 04:43, 23 August 2009 (UTC)[reply]
Here is a source explaining that the granularity (spacing of samples) limit to infinite support in the time domain for time-limited, band-limited real waveforms (at least in the quantum realm) is 2TW >= 1/pi, thus T >= 1/(2*pi*W):
http://en.wikipedia.org/wiki/Bandlimited#Bandlimited_versus_timelimited
I was incorrect to write "band-pass" pre-filter a few minutes ago above. When sampling in limited time, we need only a low-pass filter to remove the discontinuity (high frequencies) at our sampling interval ends. Thus Oli is correct: there is no low frequency bound. The bound is only that the signal must appear in the time-limited observation window, which for sparse signals means the sampling interval must be greater than the longest period between sparse events (which is what was originally in my mind when I was thinking of a low-frequency requirement). My point was to make sure the reader understands that simply sampling at 2W without regard to the time interval is not sufficient for real-world signals, because first, they cannot be ideally band-limited, and second, their periodic nature may be more Fat tail than the sampling interval chosen. The sampling interval T has to be chosen as a tradeoff to minimize pre-filter aliasing and to contain the largest period of the signals of interest.
The current article already mentions that time-limited signals cannot be ideally band-limited, and I would like to suggest we add a link there to the aforementioned Bandlimited#Bandlimited_versus_timelimited section:
http://en.wikipedia.org/wiki/Nyquist%E2%80%93Shannon_sampling_theorem#Practical_considerations

a "time-limited" signal can never be bandlimited. This means that even if an ideal reconstruction could be made, the reconstructed signal would not be exactly the original signal. The error that corresponds to the failure of bandlimitation is referred to as aliasing

The weakness of the current page appears to be the lack of a coherent connection between the introduction and the single sentence about time-limited signals buried in the Practical Considerations subsection. Also, the word "band-limited" isn't mentioned in the intro, so the reader may not make the connection from "highest frequency of B" to band-limited. I think this can be fixed by further improving the first sentence of the third paragraph of the introduction to make it more consistent with the theorem: specifically, to make it clear that the analog signal is infinite (not just the sequence of samples), so that a reader sampling a real-world time-limited signal is pushed on to the Practical Considerations section, and to insert the word "bandlimited", as follows (the two proposed additions are the parenthetical "infinite sequence of" and the term "band-limit"). This will be my last proposed edit if we can agree:

In essence the theorem shows that a continuous-time analog signal that has been sampled can be perfectly reconstructed from the (infinite sequence of) samples if the sampling rate exceeds 2B samples per second, where the band-limit B is the highest frequency in the original signal.

--Shelbymoore3 (talk) 06:19, 23 August 2009 (UTC)
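Below is a minimal numerical sketch (my own illustration in Python, not anything taken from Shannon's paper or the article) of the pre-filter-then-sample idea discussed above: a toy signal is low-pass pre-filtered to an approximate band limit B, sampled over a finite window T at a rate a bit above 2B (so roughly 2BT samples), and then reconstructed by sinc interpolation. All parameter values are invented for the example, and NumPy/SciPy are assumed to be available.

 import numpy as np
 from scipy.signal import butter, filtfilt

 fs_dense = 100_000.0              # dense grid standing in for "continuous" time, Hz
 T = 0.05                          # finite observation window, seconds
 t = np.arange(0.0, T, 1.0 / fs_dense)

 # Toy wideband signal: an in-band tone, an out-of-band tone, and some noise.
 rng = np.random.default_rng(0)
 x = (np.sin(2 * np.pi * 300 * t)
      + 0.5 * np.sin(2 * np.pi * 4500 * t)
      + 0.1 * rng.standard_normal(t.size))

 # Low-pass (anti-alias) pre-filter to an approximate band limit B.
 B = 1000.0                                    # chosen band limit, Hz
 b, a = butter(8, B / (fs_dense / 2))          # 8th-order Butterworth low-pass
 x_bl = filtfilt(b, a, x)                      # approximately band-limited version

 # Sample the pre-filtered signal at fs > 2B over the window T.
 fs = 2.5 * B
 step = int(round(fs_dense / fs))
 t_s, x_s = t[::step], x_bl[::step]
 print(f"samples in window: {x_s.size} (Nyquist minimum 2*B*T = {2 * B * T:.0f})")

 # Reconstruct on the dense grid by sinc interpolation (Whittaker-Shannon formula).
 x_rec = np.array([np.sum(x_s * np.sinc(fs * (ti - t_s))) for ti in t])

 # The error is small but not zero: a time-limited, Butterworth-filtered signal is not
 # ideally band-limited, so some aliasing (and sinc-series truncation) remains.
 interior = slice(t.size // 10, -t.size // 10)
 err = np.max(np.abs(x_rec[interior] - x_bl[interior]))
 print(f"max interior reconstruction error: {err:.3e}")

The printed sample count (125 here) is above the 2BT = 100 minimum because the sketch samples at 2.5·B rather than exactly 2·B, and the nonzero reconstruction error is the point made above: a time-limited, realistically filtered signal is never exactly band-limited.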
As I re-read the introduction, it could be rewritten, perhaps even made more concise, to explain that the theorem applies to ideal bandlimited signals, which can never be time-limited and thus don't exist (except in the mind of a mathematician). This very fundamental point is just not clear in the way the intro is currently written. Imagine a high school student coming to read this most fundamental sampling theorem and getting extremely confused. I think from the very start you need to make it clear that this theorem is for an imaginary world. Then the Practical Considerations section explains that the theorem can be applied to the real world by accounting for the aliasing that occurs when approximating a bandlimited signal in a time-limited interval. The introduction says "signal (for example, a function of continuous...", but that is really vague for a newbie. It would be much better to be explicit. --Shelbymoore3 (talk) 07:24, 23 August 2009 (UTC)
I provided a suggested edit. Feel free to revert it, but I think you will find it is more concise and coherent. I think the section is now shorter and more explicit:
http://en.wikipedia.org/w/index.php?title=Nyquist%E2%80%93Shannon_sampling_theorem&oldid=309564659 —Preceding unsigned comment added by Shelbymoore3 (talkcontribs) 07:51, 23 August 2009 (UTC)

(outdent) Thanks Dicklyon, Oli Filth, and Lutz. I am done; some edits have apparently stuck (for now) with Oli's refinement. I didn't entirely achieve my objective of making it clear on the main page that we cannot predict fat-tail events from time-limited sampling intervals (out of scope, I guess), but making "infinite" explicitly clear on the main page should make the reader think carefully about how time-limiting a signal changes the conclusion of the theorem. I wish we could further qualify "albeit in practice often a very good one" on the main page, but perhaps that is outside the scope of a discussion of the theorem? Just give it some thought; remember a student needs to start on this theorem first, so we don't want them to have any false notion that time-limited signals are nicely wrapped up as a close approximation in all cases by this theorem. I realize it says "some", but you know, once the camel gets his nose under the tent... --Shelbymoore3 (talk) 10:32, 23 August 2009 (UTC)

Reconstructability not a real word?

I can't find reconstructability in any dictionary. What I do find are the following terms:

  1. Reconstruction (noun)
  2. Reconstructible (adjective)
  3. Reconstruct (verb)
  4. Reconstructive (adjective)
  5. Reconstructively (adverb)
  6. Constructiveness (noun)

This would point to reconstructable not being a real word, but reconstructible is. Reconstructiveness and reconstructibility might be. --209.113.148.82 (talk) 13:16, 5 April 2010 (UTC)

max data rate = (2H)(log_2_(V)) bps

Quoting from a lecture slide:

 In 1924, Henry Nyquist derived an equation expressing the maximum rate for a finite-bandwidth noiseless channel.
   H is the maximum frequency
   V is the number of levels used in each sample
   max data rate = 2H·log2(V) bps
 Example
   A noiseless 3000 Hz channel cannot transmit binary signals at a rate exceeding 6000 bps (this would mean there are 2 "levels")

I can't relate that very well to this article. I recognize the 2H parameter, but I'm not sure where the "levels" referred to here come from.
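For what it's worth, the usual reading is that V counts the distinguishable amplitude levels per pulse, and a channel of bandwidth H supports at most 2H independent pulses per second; a small worked example (my own, in Python, with a made-up function name) follows:

 from math import log2

 def nyquist_max_rate_bps(H_hz: float, levels: int) -> float:
     """Noiseless-channel limit: 2H pulses per second, log2(V) bits per pulse."""
     return 2 * H_hz * log2(levels)

 print(nyquist_max_rate_bps(3000, 2))   # 6000.0 bps, the binary example from the slide
 print(nyquist_max_rate_bps(3000, 4))   # 12000.0 bps with four levels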

Then it says Shannon extended Nyquist's work:

 The amount of thermal noise (in a noisy channel) can be measured by the ratio of the signal power to the noise power (aka the signal-to-noise ratio). The quantity 10·log10(S/N) is called decibels.
   H is the bandwidth of the channel
   max data rate = H·log2(1 + S/N) bps
 Example
   A channel of 3000 Hz bandwidth and a signal-to-noise ratio of 30 dB cannot transmit binary signals at a rate exceeding 30,000 bps.
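For comparison, the Shannon–Hartley figure from the second slide can be checked the same way (again my own sketch; 30 dB corresponds to S/N = 1000, so the exact value comes out a little under the quoted 30,000 bps):

 from math import log2

 def shannon_capacity_bps(bandwidth_hz: float, snr_db: float) -> float:
     """Capacity of a noisy channel: H * log2(1 + S/N), with S/N as a linear ratio."""
     snr_linear = 10 ** (snr_db / 10)   # 30 dB -> 1000
     return bandwidth_hz * log2(1 + snr_linear)

 print(shannon_capacity_bps(3000, 30))  # ~29901.7 bps, i.e. roughly 30,000 bps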

Just bringing this up because people looking for clarification from computer communication lectures might find the presentation a bit odd; take it or leave it. kestasjk (talk) 06:47, 26 April 2010 (UTC)