Shannon coding: Difference between revisions
In the field of [[data compression]], Shannon coding, named after its creator, Claude Shannon, is a [[lossless data compression]] technique for constructing a [[prefix code]] based on a set of symbols and their probabilities (estimated or measured). It is suboptimal in the sense that it does not achieve the lowest possible expected codeword length, as Huffman coding does, and it is never better than, though sometimes equal to, [[Shannon–Fano coding]]. The technique was used to prove [[Shannon's noiseless coding theorem]] in his 1948 article "A Mathematical Theory of Communication", and is therefore a centerpiece of the information age.
This coding method gave rise to the field of information theory, and without it we would not have its many successors, such as Shannon–Fano coding, Huffman coding, or [[arithmetic coding]]. Much of our day-to-day life is significantly influenced by [[digital data]], which would not be possible without Shannon coding and the ongoing evolution of its successor coding methods.
In Shannon coding, the symbols are arranged in order from most probable to least probable, and each is assigned a codeword consisting of the first <math>l_i = \left\lceil -\log_2 p_i \right\rceil</math> digits of the binary expansion of its cumulative probability <math>F_i = \sum_{k < i} p_k</math>, where <math>p_i</math> is the probability of the <math>i</math>-th symbol. Here <math>\left\lceil x \right\rceil</math> denotes the ceiling function, which rounds <math>x</math> up to the nearest integer.
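The procedure above can be sketched in a few lines of Python. This is a minimal illustration, not a reference implementation; the function name <code>shannon_code</code> and the dictionary-based interface are choices made for this example.

```python
import math

def shannon_code(probs):
    """Assign Shannon codewords to symbols given their probabilities.

    Symbols are sorted from most to least probable. Each symbol with
    probability p receives the first ceil(-log2(p)) bits of the binary
    expansion of its cumulative probability (the sum of the probabilities
    of all more probable symbols).
    """
    # Sort (symbol, probability) pairs by decreasing probability.
    items = sorted(probs.items(), key=lambda kv: -kv[1])
    codes = {}
    cumulative = 0.0
    for symbol, p in items:
        length = math.ceil(-math.log2(p))
        # Extract the first `length` bits of the binary expansion
        # of the cumulative probability.
        bits = []
        frac = cumulative
        for _ in range(length):
            frac *= 2
            bit = int(frac)
            bits.append(str(bit))
            frac -= bit
        codes[symbol] = "".join(bits)
        cumulative += p
    return codes
```

For a source with probabilities 1/2, 1/4, 1/8, 1/8 the codeword lengths are 1, 2, 3, 3, and the resulting codewords form a prefix code, since each cumulative probability differs from the previous one by at least 2^(-length).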
Revision as of 19:05, 4 January 2015