Adaptive Huffman coding
{{short description|Data compression technique}}
'''Adaptive Huffman coding''' (also called '''dynamic Huffman coding''') is an [[adaptive coding]] technique based on [[Huffman coding]]. It builds the code as the symbols are being transmitted, with no initial knowledge of the source distribution, which allows one-pass encoding and adaptation to changing conditions in the data.<ref name="LiDrew2014">{{cite book|author1=Ze-Nian Li|author2=Mark S. Drew|author3=Jiangchuan Liu|title=Fundamentals of Multimedia|url=https://books.google.com/books?id=R6vBBAAAQBAJ|date=9 April 2014|publisher=Springer Science & Business Media|isbn=978-3-319-05290-8}}</ref>
The benefit of the one-pass procedure is that the source can be encoded in real time, though it becomes more sensitive to transmission errors, since a single loss ruins the whole code, requiring [[error detection and correction]].
==Algorithms==
There are a number of implementations of this method, the most notable being the '''FGK''' ([[Newton Faller|Faller]]–[[Robert G. Gallager|Gallager]]–[[Donald Knuth|Knuth]]) and '''[[Jeffrey Vitter|Vitter]]''' algorithms.
===FGK algorithm===
FGK is an online coding technique based on Huffman coding. Having no initial knowledge of occurrence frequencies, it dynamically adjusts the Huffman tree as data are being transmitted. In an FGK Huffman tree, a special external node, called the ''0-node'', is used to identify newly arriving characters: whenever new data is encountered, the encoder outputs the path to the 0-node followed by the data itself. For a character seen before, it simply outputs the code of that character's leaf in the current Huffman tree. After each symbol, the tree must be adjusted if necessary, and the frequencies of the related nodes updated. As the frequency of a datum increases, the ''sibling property'' of the Huffman tree may be violated; this triggers the adjustment, which is accomplished by consecutive swaps of nodes, subtrees, or both. The data node is swapped with the highest-ordered node of the same frequency (or with the subtree rooted at that node), and all of its ancestor nodes are processed in the same manner.
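The adaptive idea — escape for a new symbol, current code for a known one — can be sketched in Python. This is a naive variant, assumed for illustration only: it rebuilds the whole code table after every symbol instead of performing FGK's incremental swaps, and it stands in for the 0-node escape with a raw 8-bit ASCII code. The function names (`huffman_codes`, `adaptive_encode`) are not from the source.

```python
import heapq
from collections import Counter

def huffman_codes(freqs):
    """Build a static Huffman code table from symbol frequencies."""
    if len(freqs) == 1:
        return {next(iter(freqs)): "0"}
    # Heap entries: (weight, tie-breaker, partial code table for the subtree).
    heap = [(w, i, {s: ""}) for i, (s, w) in enumerate(sorted(freqs.items()))]
    heapq.heapify(heap)
    tie = len(heap)
    while len(heap) > 1:
        w1, _, t1 = heapq.heappop(heap)
        w2, _, t2 = heapq.heappop(heap)
        # Merge the two lightest subtrees; prefix "0" going left, "1" going right.
        merged = {s: "0" + c for s, c in t1.items()}
        merged.update({s: "1" + c for s, c in t2.items()})
        tie += 1
        heapq.heappush(heap, (w1 + w2, tie, merged))
    return heap[0][2]

def adaptive_encode(text):
    """One-pass encoding: a new symbol is sent as raw 8-bit ASCII (standing in
    for the 0-node escape); a known symbol is sent with its current Huffman
    code. Counts are updated after each symbol, so encoder and decoder stay in
    sync without any pre-transmitted frequency table."""
    seen = Counter()
    out = []
    for ch in text:
        if ch in seen:
            out.append(huffman_codes(seen)[ch])
        else:
            out.append(format(ord(ch), "08b"))
        seen[ch] += 1
    return " ".join(out)
```

Rebuilding the table costs O(n log n) per symbol; the point of FGK and Vitter is to achieve the same adaptation with only local swaps along one root-to-leaf path.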
Since the FGK algorithm has some drawbacks concerning node-or-subtree swapping, Vitter proposed another algorithm that improves on it.
The code is represented as a tree structure in which every node has a corresponding weight and a unique number.
===Vitter algorithm=== |
Node numbers decrease from the upper levels of the tree toward the bottom, and from right to left within a level.

Some important terminology and constraints:
* '''Implicit numbering''': nodes are numbered in increasing order by level and from left to right, i.e. nodes at the bottom level have lower implicit numbers than nodes at upper levels, and nodes on the same level are numbered in increasing order from left to right. Equivalently: when merging two nodes into a parent while building the Huffman tree, the node with the lower value becomes the left child and the one with the higher value the right child.
* '''Invariant''': for each weight w, all leaves of weight w precede (in the implicit numbering) all internal nodes of weight w. Equivalently: when several nodes have the same weight while building the Huffman tree, leaves are merged before internal nodes.
* '''Block''': nodes of the same weight and the same type (i.e. either all leaf nodes or all internal nodes) form a block.
* '''Leader''': the highest-numbered node in a block.
Blocks are interlinked in increasing order of their weights.
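The block and leader definitions above can be sketched directly. The `Node` record and the flat node list are an assumed toy representation (not Vitter's linked tree); the sample numbers below come from step 4 of the worked example later in this article, where nodes 253 and 254 are weight-1 leaves and 255 is a weight-1 internal node.

```python
from typing import NamedTuple

class Node(NamedTuple):
    number: int    # implicit number
    weight: int
    is_leaf: bool  # leaf vs. internal: blocks never mix the two types

def block_leader(nodes, target):
    """Return the leader of target's block: the highest-numbered node among
    all nodes with the same weight and the same type as target."""
    block = [n for n in nodes
             if n.weight == target.weight and n.is_leaf == target.is_leaf]
    return max(block, key=lambda n: n.number)
```

For the step-4 configuration, the leader of node 253's block is node 254, which is exactly the node it gets swapped with.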
Weights must satisfy the sibling property: nodes can be listed in order of nonincreasing weight, with each node adjacent to its sibling. Thus if A is the parent node of B and node C is a child of B, then <math>W(A) \ge W(B) \ge W(C)</math>.
A leaf block always precedes an internal block of the same weight, thus maintaining the invariant.
The weight of a node is simply the count of transmitted symbols whose codes are associated with the leaves below that node.
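That weight definition can be checked with a short recursive sketch, assuming a toy tree representation of nested (left, right) pairs whose leaves are symbols (not Vitter's numbered nodes); `node_weight` is a name introduced here.

```python
from collections import Counter

def node_weight(tree, counts):
    """Weight of a node: the total count of transmitted symbols whose leaves
    lie in the subtree rooted at that node. A leaf's weight is its own count;
    an internal node's weight is the sum of its children's weights."""
    if not isinstance(tree, tuple):
        return counts[tree]
    left, right = tree
    return node_weight(left, counts) + node_weight(right, counts)
```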
'''NYT (Not Yet Transferred)''' is a special node used to represent symbols which are ''<nowiki>'not yet transferred'</nowiki>''. |
[[File:Leaf step one.png|thumb|Slide_And_Increment(leaf node) sliding starts. P is a leaf node.]] |
To get the code for a node of the binary tree, we simply traverse the path from the root to that node, writing down (for example) "1" when we go to the right child and "0" when we go to the left child.
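That root-to-leaf walk is easy to express in code. The nested-pair tree representation and the `codes` name are assumptions for illustration.

```python
def codes(tree, prefix=""):
    """Read off each symbol's code by walking from the root: append "0" when
    descending to the left child and "1" when descending to the right child.
    Trees are nested (left, right) pairs; leaves are bare symbols."""
    if not isinstance(tree, tuple):
        return {tree: prefix or "0"}  # a lone root still needs a 1-bit code
    left, right = tree
    table = codes(left, prefix + "0")
    table.update(codes(right, prefix + "1"))
    return table
```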
[[File:Leaf step two.png|thumb|Slide_And_Increment(leaf node) sliding step 2. As P is leaf node, it slides in front of next block nodes of equal weight.]] |
[[File:Leaf step three.png|thumb|Slide_And_Increment(leaf node) sliding step 3. Here we increase the current weight by 1.]] |
[[File:Leaf step four.png|thumb|Slide_And_Increment(leaf node) sliding step 4. Method comes to an end. P is the new parent.]] |
[[File:Internal one.png|thumb|Slide_And_Increment(internal node) sliding starts. P is an internal node.]] |
[[File:Internal two.png|thumb|Slide_And_Increment(internal node) sliding step 2. Node P slides in front of the next block of leaf nodes, with weight wt+1.]]
[[File:Internal three.png|thumb|Slide_And_Increment(internal node) sliding step 3. Now we increase the weight to 9. Thus the '''''invariant is maintained''''', as the current node is an internal node and must occur in front of leaf nodes of equal weight now that its weight has increased.]]
[[File:Internal four.png|thumb|Slide_And_Increment(internal node) sliding step 4. Now P points to the former parent (as in the internal-node case of the algorithm).]]
We need some general and straightforward method to transmit symbols that are "not yet transmitted" (NYT). We could use, for example, the transmission of binary numbers for every symbol in the alphabet.

 '''algorithm''' for adding a symbol '''is'''
     leaf_to_increment := NULL
     p := pointer to the leaf node containing the next symbol
     '''if''' (p is NYT) '''then'''
         Extend p by adding two children
         Left child becomes new NYT and right child is the new symbol leaf node
         p := parent of new symbol leaf node
         leaf_to_increment := Right Child of p
     '''else'''
         Swap p with leader of its block
         '''if''' (new p is sibling to NYT) '''then'''
             leaf_to_increment := p
             p := parent of p
     '''while''' (p ≠ NULL) '''do'''
         Slide_And_Increment(p)
     '''if''' (leaf_to_increment ≠ NULL) '''then'''
         Slide_And_Increment(leaf_to_increment)

 '''function''' Slide_And_Increment(p) '''is'''
     previous_p := parent of p
     '''if''' (p is an internal node) '''then'''
         Slide p in the tree higher than the leaf nodes of weight wt + 1
         Increase weight of p by 1
         p := previous_p
     '''else'''
         Slide p in the tree higher than the internal nodes of weight wt
         Increase weight of p by 1
         p := new parent of p
Encoder and decoder start with only the root node, which has the maximum number. In the beginning it is our initial NYT node. |
For every symbol that is already in the tree, we only have to transmit code for its leaf node. |
When we transmit an NYT symbol, we have to transmit the code for the NYT node, then its generic code.
For every symbol transmitted both the transmitter and receiver execute the update procedure: |
[[File:Adaptive Huffman Vitter.jpg|800px]] |
# If the current symbol is NYT, add two child nodes to the NYT node: one becomes the new NYT node, the other a leaf node for our symbol. Increase the weights of the new leaf node and the old NYT node, and go to step 4. Otherwise, go to the symbol's leaf node.
# If this node does not have the highest number in its block, swap it with the node having the highest number.
# Increase the weight of the current node.
# If this is not the root node, go to the parent node and return to step 2. If this is the root, end.
==Example==
Encoding "abb" gives 01100001 001100010 11. |
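The bit string decomposes as the walkthrough below derives it; here is the arithmetic spelled out in a short sketch (`ascii_bits` is a helper name introduced here, and the NYT codes "0" and "11" are taken from the steps that follow):

```python
def ascii_bits(ch):
    """8-bit ASCII code, used as the 'generic code' for a first occurrence."""
    return format(ord(ch), "08b")

# First "a": the tree is only the NYT root, whose code is empty, so just ASCII.
first_a = ascii_bits("a")        # "01100001"
# First "b": the code for the NYT node (now "0"), then the generic code of "b".
first_b = "0" + ascii_bits("b")  # "001100010"
# Second "b": "b" is already in the tree; after the updates its code is "11".
second_b = "11"
message = " ".join([first_a, first_b, second_b])
```

In total 8 + 9 + 2 = 19 bits; the adaptive code already pays off on the third symbol.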
Note: swapping nodes means swapping their weights and corresponding symbols, but not their numbers.
Step 1: |
[[Image:adaptive_huffman.png|Developing adaptive Huffman tree]] |
Start with an empty tree. |
For "a" transmit its binary code.
Step 2: |
NYT spawns two child nodes: 254 and 255, both with weight 0. Increase weight for root and 255. Code for "a", associated with node 255, is 1. |
For "b" transmit 0 (for NYT node) then its binary code. |
Step 3: |
NYT spawns two child nodes: 252 for NYT and 253 for leaf node, both with weight 0. Increase weights for 253, 254, and root. To maintain Vitter's invariant that all leaves of weight w precede (in the implicit numbering) all internal nodes of weight w, the branch starting with node 254 should be swapped (in terms of symbols and weights, but not number ordering) with node 255. Code for "b" is 11. |
For the second "b" transmit 11. |
For the convenience of explanation this step doesn't exactly follow Vitter's algorithm,<ref name=":0">{{cite web|url=http://www.cs.duke.edu/csed/curious/compression/adaptivehuff.html#tree |title=Adaptive Huffman Coding |publisher=Cs.duke.edu |access-date=2012-02-26}}</ref> but the effects are equivalent. |
Step 4: |
Go to leaf node 253. Notice that we have two blocks with weight 1: nodes 253 and 254 form one block (consisting of leaves), and node 255 forms another (consisting of internal nodes). For node 253, the highest number in its block is 254, so swap the weights and symbols of nodes 253 and 254. Now node 254 and the branch starting from node 255 satisfy the SlideAndIncrement condition<ref name=":0"/> and hence must be swapped. Finally, increase the weights of nodes 255 and 256.
Future code for "b" is 1, and for "a" is now 01, which reflects their frequency. |
== References == |
{{reflist}} |
* Vitter's original paper: J. S. Vitter, "Design and Analysis of Dynamic Huffman Codes", Journal of the ACM, 34(4), October 1987, pp 825–845.
* J. S. Vitter, "ALGORITHM 673 Dynamic Huffman Coding", ACM Transactions on Mathematical Software, 15(2), June 1989, pp 158–167. Also appears in Collected Algorithms of ACM.
* Donald E. Knuth, "Dynamic Huffman Coding", Journal of Algorithms, 6(2), 1985, pp 163–180.
== External links == |
* Paul E. Black, "adaptive Huffman coding", ''Dictionary of Algorithms and Data Structures'', NIST (this article incorporates public domain material from this source).
* [http://www.ics.uci.edu/~dan/pubs/DC-Sec4.html University of California Dan Hirschberg site] |
* [http://www.cs.cf.ac.uk/Dave/Multimedia/node212.html Cardiff University Dr. David Marshall site] |
* [https://code.google.com/p/compression-code/downloads/list C implementation of Vitter algorithm]
* [http://www.cs.duke.edu/csed/curious/compression/adaptivehuff.html Excellent description from Duke University] |
{{Compression Methods}} |
[[Category:Lossless compression algorithms]] |
[[Category:Data compression]] |