Talk:Channel capacity


deleted information -- worth rewriting?


The following information was deleted from this article (see [1] diff):

Channel capacity, shown often as "C" in communication formulas, is the amount of discrete information bits that a defined area or segment in a communications medium can hold. Thus, a telephone wire may be considered a channel in this sense. Breaking up the frequency bandwidth into smaller sub-segments, and using each of them to carry communications results in a reduction in the number of bits of information that each segment can carry. The total number of bits of information that the entire wire may carry is not expanded by breaking it into smaller sub-segments.
In reality, this sub-segmentation reduces the total amount of information that the wire can carry due to the additional overhead of information that is required to distinguish the sub-segments from each other.

However, no reason for this deletion was given. Is this information faulty? Should it be rewritten?

WpZurp 16:46, 29 July 2005 (UTC)[reply]

The above information is not very relevant to the article. However, the article could definitely use some rewriting, as I have added some information to it in rather rough form. -- 130.94.162.61 02:34, 22 February 2006 (UTC)[reply]
This comment presumed that the framing information separating the sub-segments is not part of the channel capacity, but it must be included in that, so commenting on framing reducing channel capacity is incorrect/misleading.
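
(As a rough numerical check of the claim in the quoted passage that breaking a channel into sub-segments does not expand its total capacity: the sketch below assumes an ideal band-limited AWGN channel, the Shannon-Hartley formula, an equal split of signal power across sub-bands, and no framing overhead; the numbers are made up.)

from math import log2

def shannon_capacity(bandwidth_hz, signal_power, noise_density):
    """Shannon-Hartley capacity of an ideal band-limited AWGN channel, in bit/s."""
    noise_power = noise_density * bandwidth_hz
    return bandwidth_hz * log2(1 + signal_power / noise_power)

W, P, N0 = 3000.0, 1e-3, 1e-9   # hypothetical telephone-like channel parameters

whole_wire = shannon_capacity(W, P, N0)

n_sub = 10                      # break the band into 10 equal sub-segments
sub_segments = n_sub * shannon_capacity(W / n_sub, P / n_sub, N0)

# Both totals come out identical (about 25 kbit/s here); any loss in practice
# comes from the framing overhead mentioned in the quoted text, not from the split itself.
print(whole_wire, sub_segments)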

For formatting -- the mention is that S/N is in power or volts (with exponent 2). Unfortunately, that exponent looks like a footnote reference, so it misleads one into believing that the ratio of voltages provides S/N (which it doesn't). Perhaps someone can think of a better way to format that, so one doesn't go to reference #2 to see how the voltage ratio can be the same as the power ratio. —Preceding unsigned comment added by 76.14.50.174 (talk) 09:29, 31 December 2008 (UTC)[reply]
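
(For reference, the relationship the comment alludes to, assuming signal and noise are measured across the same resistance R, is

S/N = P_signal / P_noise = (V_signal^2 / R) / (V_noise^2 / R) = (V_signal / V_noise)^2,

so the squared voltage ratio and the power ratio are the same number, while the unsquared voltage ratio is not.)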

Figure-text agreement


The statement of the noisy-channel coding theorem does not agree well with the figure. I will try to fix it. 130.94.162.64 19:10, 22 May 2006 (UTC)[reply]


X given Y or Y given X


The article currently reads "Let p(y | x) be the conditional probability distribution function of X given Y". Should this not be "Let p(y | x) be the conditional probability distribution function of Y given X"?

Yes, you are right. Bob.v.R 14:24, 17 August 2006 (UTC)[reply]

Prepositions


I'd like to make a few comments on the following wording. "Here X represents the space of messages transmitted, and Y the space of messages received during a unit time over our channel."

First, I wonder who "we" are, and whether that includes or excludes me.

It is evident from the illustration that the channel receives messages from the transmitter and transmits them to the receiver. One might suspect that "messages received...over our channel" are those which go around it somehow, or those that are received by it. The usual prepositions used with transmission and reception, namely "from," "to," "through," and "by" don't appear in the sentence. "Over" is certainly used in this way, but is less precise than "through." I find it troubling. Also, the concept of "space" enters the wording in a way which further complicates comprehension. Is it "the space of messages" which is transmitted?

I'm not familiar enough with the material to edit the sentence myself, but would like to suggest amending it to take advantage of the more universal prepositions and avoid ambiguity.

Perhaps one of the following, which avoid the hazards, may express the intended message.

"X represents the space of messages entering, and Y the space of messages leaving, the channel in one unit of time."

"X is the number of messages per unit time entering the channel from the transmitter, and Y, those it sends to the receiver."

"X is the flow of messages into, and Y out of, the channel." D021317c 02:18, 24 March 2007 (UTC)[reply]

Mange01's new lead that I reverted


In digital communication, channel capacity is a theoretical upper bound for the amount of non-redundant information, in bit per second, that can be transmitted without bit errors over a point-to-point channel.

When calculating the channel capacity of a noisy channel, an ideal channel coding is assumed, i.e. an optimal combination of forward error correction, modulation and filtering. In practice such an ideal code does not exist, meaning the channel capacity should be considered as a theoretical upper bound for the information entropy per unit time, i.e the maximum possible net bit rate exclusive of redundant forward error correction that can be achieved. The Shannon-Hartley noisy-channel coding theorem defines the channel capacity of a given channel characterized by additive white gaussian noise and a certain bandwidth and signal-to-noise ratio.

In case of a noiseless channel, forward error correction is not required, meaning that the channel capacity is equal to the maximum data signalling rate, i.e. the maximum gross bit rate. The maximum data signalling rate for a base band communication system using a line coding scheme, i.e. pulse amplitude modulation, with a certain number of alternative signal levels, is given by Hartley's law, which could be described as an application of the Nyquist sampling theorem to data transmission.

The channel capacity can be considered as the maximum throughput of a point-to-point physical communication link.

I reverted that because pretty much every sentence of it is either imprecise or not true. Please let me know if I need to enumerate the errors. Dicklyon 00:14, 21 June 2007 (UTC)[reply]
Dear Dicklyon, I would be grateful if you did that. I would be even happier if you tried to improve my text instead of just reverting it. I have written similar formulations on other Wikipedia pages and in my own course material, and I don't understand what is wrong. Mange01 10:41, 21 June 2007 (UTC)[reply]
OK, here are some comments
  • "non-redundant information" is redundant, and therefore misleading, as it seems to rely on a different definition of information than the rest of this topic does.
  • "in bit per second" is unnecessarily narrow, therefore misleading the reader about what channel capacity really is
  • "transmitted without bit errors" is misleading; the probility of error can be made low, but the concept of channel capacity does not allow "without error" in general; and bit is too narrow again
  • "over a point-to-point channel" is perhaps too narrow, too; what about broadcast and multi-user channel capacity?
  • "When calculating the channel capacity of a noisy channel, an ideal channel coding is assumed" is simply not true. The calculation of capacity is independent of any code. And even the proof of the channel-code theorem assumes a random code, not an ideal code.
  • "information entropy per unit time, i.e the maximum possible net bit rate exclusive of redundant forward error correction that can be achieved" is a mish-mash of terminological confusion, and the sentence structure obscures the important point that the part after the "i.e." is part of what capacity is an upper bound on; it tempts the reader to think you're saying capacity can be achieved.
  • The stuff about line coding and Hartley's law is a bit off topic, or should be brought in later as a relationship, not mixed into the lead. And "Hartley's law, which could be described as an application of the Nyquist sampling theorem to data transmission" implies that Nyquist said something about sampling, which he did not; in fact he wrote about line coding, and the sampling result can be considered an offshoot of that.
  • And the final sentence, equating capacity to "maximum throughput" is just nuts.
Dicklyon 03:31, 22 June 2007 (UTC)[reply]
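
(To illustrate the distinction drawn above between Hartley's noiseless signalling-rate bound and the capacity of a noisy channel, here is a small sketch with made-up numbers; it is not taken from the article.)

from math import log2

B = 3000.0     # bandwidth in Hz (hypothetical)
M = 16         # number of distinct signal levels of the line code (hypothetical)
snr = 1000.0   # linear signal-to-noise ratio (hypothetical)

# Hartley's law: maximum gross bit rate of a noiseless M-level line code.
hartley_rate = 2 * B * log2(M)

# Shannon-Hartley: capacity of the band-limited AWGN channel.
shannon_capacity = B * log2(1 + snr)

# The two bounds answer different questions: one counts distinguishable symbols
# per second without noise, the other limits the reliable information rate in
# the presence of noise.
print(hartley_rate, shannon_capacity)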

I understand most of your comments. Now I am happier. Actually I have made thousands of revisions to Wikipedia, and they have been reverted only twice, both times by you. It is about to become a bad habit of yours. ;)

Anyway, I would like to invite you to join the WP:TEL WikiProject. Mange01 11:18, 26 June 2007 (UTC)[reply]

2 Channel capacity pages


There is a subsection of information theory that talks about the same topic. The two should be merged. —Preceding unsigned comment added by Besap (talkcontribs) 10:22, 7 November 2007 (UTC)[reply]

I don't think so; it's a part of Information Theory. Kavas (talk) 16:47, 22 August 2010 (UTC)[reply]

Capacity of images


I added this link, which was then removed. That page is what originally brought me to this article. It seems like a particularly intuitive application of channel capacity that could make for some interesting examples. Particularly, the application of channel capacity to image quality seems like a good way to cut through shortcomings of other quality metrics such as megapixels, bit depth, sensor noise, and lens MTF. Can anyone comment on this application? —Ben FrantzDale (talk) 14:39, 9 January 2008 (UTC)[reply]

That's one of many approximate hypothetical applications of the capacity concept, but otherwise doesn't contribute to understanding the concept or where it is known to really apply. Dicklyon (talk) 19:28, 9 January 2008 (UTC)[reply]
Fair enough. It's all interesting stuff. —Ben FrantzDale (talk) 16:24, 15 January 2008 (UTC)[reply]

Merge request


There is a merge request concerning Information_theory#Capacity_of_particular_channel_models. Can someone look at this and address the issue here? Otr500 (talk) 13:49, 10 March 2011 (UTC)[reply]

After second thoughts I removed the merge request. Isheden (talk) 08:43, 11 April 2011 (UTC)[reply]


Bandwidth/Power Limited Regions Mislabeled in Figure


I believe the bandwidth and power limited regions in the figure are swapped. An example with the regions correctly labeled can be found here. --Nerdenceman (talk) 20:15, 8 March 2012 (UTC)[reply]

You have overlooked the labeling of the x-axis of the figure, which is "Bandwidth W". This figure is obtained by fixing the SNR value and varying the bandwidth. The figure you linked is obtained the other way round. Anarkigr (talk) 21:12, 13 November 2012 (UTC)[reply]

bandwidth-limited regime


The formula for the bandwidth-limited regime is incorrect. It should be: W*log2(1 + Eb/(N0*W)). Ofir michael (talk) 13:03, 28 April 2012 (UTC)[reply]

It's an approximation. When SNR ≫ 1, log2(1 + SNR) ≈ log2(SNR), which gives the stated formula. However, your comment leads me to another thing. The usage of SNR is very inconsistent in this section. It is first defined as a ratio which is not in dB (let's call this the "linear" SNR). Then, when talking about "large" and "small" SNR, a dB value is used. The approximate formulas then use linear SNR again. I think that this can be confusing. In terms of linear SNR, "small" could be defined as SNR ≪ 1 and "large" could remain unchanged. Anarkigr (talk) 21:09, 13 November 2012 (UTC)[reply]
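
(A quick numerical check of the approximation defended above, using linear SNR throughout; the bandwidth value is arbitrary.)

from math import log2

W = 1.0e6                            # bandwidth in Hz (arbitrary)
for snr in (10.0, 100.0, 1000.0):    # "large" linear SNR values
    exact = W * log2(1 + snr)        # Shannon-Hartley capacity
    approx = W * log2(snr)           # bandwidth-limited approximation
    print(snr, exact, approx, (exact - approx) / exact)
# The relative error is about 4% at SNR = 10 and shrinks rapidly as SNR grows.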

Redirect from Shannon capacity


Shannon capacity currently redirects here, but there is also the somewhat related Shannon capacity of a graph. What do you think about turning Shannon capacity into a disamb page? - Saibod (talk) 19:13, 3 October 2013 (UTC)[reply]

A hatnote would be an alternative. See WP:2DABS. ~Kvng (talk) 14:52, 21 December 2018 (UTC)[reply]

Too technical for a general user to understand


The written article is too technical for a common person to understand. — Preceding unsigned comment added by 202.12.103.120 (talk) 17:53, 11 January 2014 (UTC)[reply]

This is certainly true, and worse -- even a professional mathematician can't understand it, because the mathematics itself is handled in a spaghetti-like way. It is clearly written by an expert, but at a detail level, the flow of logic in the proofs is not presented crisply. I have made some inching improvements, but it is like slowly chewing granite: the result is good loam, but it takes a long time and there are some hard calculi to digest along the way. 178.38.152.228 (talk) 20:11, 20 November 2014 (UTC)[reply]

Definition versus theorem handled in confusing, inconsistent way


(1) channel capacity is the maximum rate at which information can be reliably transmitted over a communications channel.

(2) By the noisy-channel coding theorem, the channel capacity of a given channel is the limiting information rate (in units of information per unit time) that can be achieved with arbitrarily small error probability.

(3) The key result states that the capacity of the channel, as defined above, is given by the maximum of the mutual information between the input and output of the channel, where the maximization is with respect to the input distribution.

(4) The channel capacity is defined as

C = sup_{p_X} I(X; Y),

where the supremum is taken over all possible choices of p_X.

What is the definition and what is the theorem?

(3) and (4) appear to be making the same assertion, namely that channel capacity is the maximum of the mutual information. But in (3) it is a theorem ("key result") and in (4) it is a definition.

(1) appears to define the channel capacity in a natural way that is different from the definition given in (4). I would be happier to learn that (1) is the true definition and the characterization in (4) is established by a theorem.

The characterization in (2) appears to be a mere rephrasing of the definition in (1), rather than the result of a deep theorem of information theory as is claimed.

My recommendation is to change (2) and (4) roughly as follows. The bolding is there to highlight the change; it is not intended as part of the change.

(2) More precisely, the channel capacity of a given channel is the limiting information rate (in units of information per unit time) that can be achieved with arbitrarily small error probability.

(4) The maximum of the mutual information with respect to the input distribution refers to the quantity

C = sup_{p_X} I(X; Y),

where the supremum is taken over all possible choices of p_X.

That looks good to me. Go for it. Dicklyon (talk) 20:33, 20 November 2014 (UTC)[reply]
My fear is that actually, the technical statement (4) is the definition, and the "maximum rate of information" of (1) and the "limiting information rate" of (2) are related to (4) via the noisy-channel coding theorem. In fact, this is the way it is presented in that article.
This would make (3) a misstatement, and would also require rewriting (1) so that the relationship between the "channel capacity" (defined later in (4)) and the "maximum information rate" is expressed as an effective conclusion of the theory, rather than a definition of "channel capacity". (2) and (4) would then be okay. I will have to research this more, or an expert can straighten it out. I understand the math but I don't know the field.
By the way, I do understand that for people familiar with Shannon's noisy-channel coding theorem, the two concepts (mutual information I(X;Y) maximized over input probabilities, and the maximum attainable information flow-through rate) become effectively synonymous, so I understand how the conflation occurs, but in my view the two should not be treated as identical already in the definition, because then it becomes impossible to express what Shannon's theorem actually says. 178.38.60.255 (talk) 21:06, 28 November 2014 (UTC)[reply]
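
(To make definition (4) concrete, here is a small sketch for a binary symmetric channel, where the supremum over input distributions can be brute-forced on a grid and compared with the well-known closed form 1 - H(p); the crossover probability is made up.)

from math import log2

def binary_entropy(p):
    """H(p) in bits, with the 0*log(0) = 0 convention."""
    return 0.0 if p in (0.0, 1.0) else -p * log2(p) - (1 - p) * log2(1 - p)

def mutual_information(p_x1, crossover):
    """I(X;Y) for a binary symmetric channel with P(X=1) = p_x1."""
    p_y1 = p_x1 * (1 - crossover) + (1 - p_x1) * crossover
    return binary_entropy(p_y1) - binary_entropy(crossover)   # H(Y) - H(Y|X)

p = 0.1    # crossover probability (hypothetical)

# Definition (4): capacity is the supremum of I(X;Y) over input distributions.
capacity = max(mutual_information(q / 1000, p) for q in range(1001))

# The noisy-channel coding theorem is what relates this quantity to the maximum
# rate achievable with arbitrarily small error probability.
print(capacity, 1 - binary_entropy(p))   # both are about 0.531 bit per channel use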

"reliably"?


The statement "channel capacity is the tight upper bound on the rate at which information can be reliably transmitted over a communications channel" seems to be using the word "reliably" inappropriately. It is, quite simply, a limit on the amount of information that can be transmitted, and it applies whether the information is reliable (certain) or not. I'll delete the word. —Quondum 00:04, 21 November 2014 (UTC)[reply]

@Dicklyon: Would you like to clarify the definition of information you are working with when you made this revert? The only rigorous and appropriate one in this context is that of Shannon, which can essentially be thought of as the reduction of entropy of the transmitter's state upon receiving the message. Only noiseless real systems can transmit "information" reliably, if by that you mean that knowledge of some state at the transmitter is to be known with certainty. In real systems, reliability (in the sense of certainty) is only approached on a channel with infinite total capacity (infinite time). IMO, adding the word only confuses things. Information is transmitted across finite noisy channels too. By including the word "reliably", you are confining the statement to highly reliable communication, which is unnecessary. —Quondum 05:44, 21 November 2014 (UTC)[reply]
In Shannon's approach, "reliably" means that the error probability can be reduced below any positive bound, not that it is zero with certainty. Take a look in books to see how common an expression this is in this context, and what it means. Dicklyon (talk) 05:54, 21 November 2014 (UTC)[reply]
How would you feel about "arbitrarily reliably"? I guess from looking at the texts that the general focus is on the reliable limit (of an infinite total energy channel), though I intuitively feel that the theorem would cover (or at least generalize to) finite-energy channels, for example to answer the question: how much information can be transmitted via an AWGN channel of finite duration with finite energy? It should be clear that a bound on the quantity of information that can be transmitted (i.e. the mutual information) can be calculated, but that this will not be with arbitrary reliability. —Quondum 16:34, 21 November 2014 (UTC)[reply]
I think it's not necessary to move away from the usual terminology used in this area. Capacity arguments generally need to allow arbitrarily large code sizes, so they don't apply to channels with finite duration; it's more applicable to finite power but ongoing time, as bits per second. Dicklyon (talk) 20:52, 21 November 2014 (UTC)[reply]
I agree that this is the usual application. In this context (an encyclopaedia), it should be made clear that this is how a channel is being defined; this is currently left implicit. —Quondum 22:03, 28 November 2014 (UTC)[reply]
External links modified

Hello fellow Wikipedians,

I have just modified one external link on Channel capacity. Please take a moment to review my edit. If you have any questions, or need the bot to ignore the links, or the page altogether, please visit this simple FaQ for additional information. I made the following changes:

When you have finished reviewing my changes, please set the checked parameter below to true or failed to let others know (documentation at {{Sourcecheck}}).

An editor has reviewed this edit and fixed any errors that were found.

  • If you have discovered URLs which were erroneously considered dead by the bot, you can report them with this tool.
  • If you found an error with any archives or the URLs themselves, you can fix them with this tool.

Cheers.—InternetArchiveBot (Report bug) 13:54, 19 November 2016 (UTC)[reply]

Incorrect approximation for Power-limited regime?


My calculations show that the approximation should have the natural log of 2 in the denominator rather than log base 2 of e. I believe this is consistent with the approximation found in the Shannon-Hartley theorem article.
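
(A quick numerical check, assuming the power-limited approximation under discussion is the wideband limit of the Shannon-Hartley formula; the power and noise-density values are arbitrary.)

from math import log2, log, e

P, N0 = 1.0, 1.0e-3    # signal power and noise spectral density (arbitrary units)
W = 1.0e9              # very large bandwidth, to approach the power-limited limit

exact = W * log2(1 + P / (N0 * W))     # Shannon-Hartley capacity
with_ln2 = P / (N0 * log(2))           # candidate: natural log of 2 in the denominator
with_log2e = P / (N0 * log2(e))        # candidate: log base 2 of e in the denominator

# The exact value converges to the ln(2) form (about 1443 here), not the log2(e)
# form (about 693), consistent with the limit W*log2(1 + P/(N0*W)) -> P/(N0*ln 2).
print(exact, with_ln2, with_log2e)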