Shannon coding
In the field of [[data compression]], '''Shannon coding''', named after its creator, [[Claude Shannon]], is a [[lossless data compression]] technique for constructing a [[prefix code]] based on a set of symbols and their probabilities (estimated or measured). It is suboptimal in the sense that it does not achieve the lowest possible expected codeword length, as [[Huffman coding]] does, and its codeword lengths are never better than, though sometimes equal to, those of [[Shannon-Fano coding]].

The method was the first of its type. The technique was used to prove Shannon's noiseless coding theorem in his 1948 article "A Mathematical Theory of Communication",[1] and is therefore a centerpiece of the information age.

This coding method gave rise to the field of [[information theory]]; without its contribution, many of its successors, such as Shannon-Fano coding, Huffman coding, and arithmetic coding, would not exist. Much of our day-to-day life is significantly influenced by digital data, which would not be possible without Shannon coding and the coding methods that evolved from it.

In Shannon coding, the symbols are arranged in order from most probable to least probable, and each symbol <math>a_i</math> is assigned a codeword consisting of the first <math>l_i = \left\lceil -\log_2 p_i \right\rceil</math> digits of the binary expansion of the cumulative probability <math>\sum_{k=1}^{i-1} p_k</math>. Here <math>\left\lceil x \right\rceil</math> denotes the ceiling function, which rounds <math>x</math> up to the next integer value.
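
Truncating to <math>l_i</math> digits is what guarantees the prefix property: consecutive cumulative probabilities differ by <math>p_i \ge 2^{-l_i}</math>, so no codeword can be the prefix of a later one. The following is a minimal illustrative sketch of the procedure in Python; the function name <code>shannon_code</code> and its dictionary interface are choices made here for illustration, not part of the original method.

<syntaxhighlight lang="python">
from math import ceil, log2

def shannon_code(probs):
    """Assign Shannon codewords, given a dict mapping symbols to probabilities."""
    # Arrange the symbols from most probable to least probable.
    items = sorted(probs.items(), key=lambda kv: kv[1], reverse=True)
    codes = {}
    cumulative = 0.0  # sum of the probabilities of all more probable symbols
    for symbol, p in items:
        length = ceil(-log2(p))  # codeword length l_i
        # Take the first l_i digits of the binary expansion of the
        # cumulative probability.
        frac, bits = cumulative, []
        for _ in range(length):
            frac *= 2
            bits.append("1" if frac >= 1 else "0")
            frac -= int(frac)  # keep only the fractional part
        codes[symbol] = "".join(bits)
        cumulative += p
    return codes
</syntaxhighlight>
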
== Example ==
The table below shows the construction of a code scheme for the symbols a<sub>1</sub> through a<sub>6</sub>. Here l<sub>i</sub> is the smallest integer such that <math>2^{-l_i} \le p(a_i)</math>. The fifth column shows the binary expansion of the cumulative sum, and the last column gives the resulting bit code for each symbol.

{| class="wikitable"
!a<sub>i</sub>
!p(a<sub>i</sub>)
!l<sub>i</sub>
!<math>\sum_{k=1}^{i-1} p_k</math>
!Cumulative sum in binary
!Result (truncated [[significand]])
|-
!a<sub>1</sub>
|0.36
|2
|0.00
|0.0000
|00
|-
!a<sub>2</sub>
|0.18
|3
|0.36
|0.0101
|010
|-
!a<sub>3</sub>
|0.18
|3
|0.54
|0.1000
|100
|-
!a<sub>4</sub>
|0.12
|4
|0.72
|0.1011
|1011
|-
!a<sub>5</sub>
|0.09
|4
|0.84
|0.1101
|1101
|-
!a<sub>6</sub>
|0.07
|4
|0.93
|0.1110
|1110
|}
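
As a check, running the sketch from the previous section on the probabilities in this table (using the illustrative <code>shannon_code</code> helper defined there) reproduces the codewords in the last column:

<syntaxhighlight lang="python">
probs = {"a1": 0.36, "a2": 0.18, "a3": 0.18,
         "a4": 0.12, "a5": 0.09, "a6": 0.07}
print(shannon_code(probs))
# {'a1': '00', 'a2': '010', 'a3': '100',
#  'a4': '1011', 'a5': '1101', 'a6': '1110'}
</syntaxhighlight>

Note that the result is indeed a prefix code: no codeword is the beginning of any other.
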
==References== |
# Shannon, Claude E. (1948). "A Mathematical Theory of Communication". http://cm.bell-labs.com/cm/ms/what/shannonday/shannon1948.pdf