Data compression
In computer science, data compression or source coding is the process of encoding information using fewer bits (or other information-bearing units) than an unencoded representation would use, through the use of specific encoding schemes. For example, this article could be encoded with fewer bits if we accept the convention that the word "compression" is encoded as "CP!".
As is the case with any form of communication, compressed data communication only works when both the sender and receiver of the information understand the encoding scheme. For example, this text makes sense only if the receiver understands that it is intended to be interpreted as characters representing the English language. Similarly, compressed data can only be understood if the decoding method is known by the receiver.
One popular encoding scheme that many computer users are familiar with is the ZIP file format. It can be used to reduce the size of an e-mail attachment, making it easier to transmit or store.
Compression is possible because most real-world data is highly statistically redundant. When represented in its human-interpretable form (or, in the case of text displayed on a computer screen, a simple machine-interpretable form such as ASCII), the data is represented in a non-concise way. For example, the letter 'e' is much more common in English text than the letter 'z', and the likelihood of the letter 'q' being followed by the letter 'z' is rather remote. Analysis of these statistical regularities allows the same information to be represented much more concisely.
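As a rough illustration of this point, the following minimal Python sketch (the sample string and function name are chosen purely for the example) estimates how many bits per character the character frequencies of a piece of English text actually require. The result is typically around 4 bits per character, well below the 8 bits per character of a byte-oriented ASCII encoding, which is the redundancy a compressor can exploit.

```python
# Illustrative sketch: estimate the statistical redundancy of English text
# from its character frequencies (zeroth-order Shannon entropy).
from collections import Counter
from math import log2

def entropy_bits_per_char(text: str) -> float:
    """Average bits per character implied by the observed character frequencies alone."""
    counts = Counter(text)
    total = len(text)
    return -sum((c / total) * log2(c / total) for c in counts.values())

sample = "the letter e is much more common in english text than the letter z"
print(entropy_bits_per_char(sample))  # roughly 4 bits/char, far less than the 8 bits used to store each byte
```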
Further compression is possible if some loss of fidelity is allowable. For example, a person viewing a picture or television video scene might not notice if some of its finest details are removed or not represented perfectly. Similarly, two strings of samples representing an audio recording may sound the same to a listener yet differ under detailed computer analysis. Specialized signal processing techniques can exploit the acceptability of such relatively minor differences to represent the picture, video, or audio using fewer bits.
Compression is important because it helps reduce the consumption of expensive resources, such as disk space or connection bandwidth. However, compression itself requires processing power, which can also be expensive. The design of data compression schemes therefore involves trade-offs among various factors, including the degree of compression achievable, the amount of distortion introduced (for lossy schemes), the computational resources required to compress and decompress the data, and often other considerations as well.
Some schemes are reversible so that the original data can be reconstructed (lossless data compression), while others accept some loss of data in order to achieve higher compression (lossy data compression).
Applications
One very simple means of compression is run-length encoding, wherein large runs of consecutive identical data values are replaced by a simple code giving the data value and the length of the run. This is an example of lossless data compression. It is often used to make better use of disk space on office computers, or of connection bandwidth in a computer network. For symbolic data such as spreadsheets, text, executable programs, etc., losslessness is essential because changing even a single bit cannot be tolerated (except in some limited cases).
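As a concrete sketch of this idea, a minimal run-length encoder and decoder in Python might look like the following; the (value, run length) output format and function names are chosen only for illustration, and real RLE formats such as PackBits use more compact byte-level encodings.

```python
# Minimal run-length encoding sketch: each run of identical values is
# replaced by a (value, run_length) pair. The round trip is exact, so the
# scheme is lossless.
from itertools import groupby

def rle_encode(data):
    return [(value, len(list(run))) for value, run in groupby(data)]

def rle_decode(pairs):
    return [value for value, count in pairs for _ in range(count)]

original = "AAAAABBBCCCCCCCCD"
encoded = rle_encode(original)                    # [('A', 5), ('B', 3), ('C', 8), ('D', 1)]
assert "".join(rle_decode(encoded)) == original   # lossless: the original is reconstructed exactly
```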
For visual and audio data, some loss of quality can be tolerated without losing the essential nature of the data. By taking advantage of limitations of the human sensory system, a great deal of space can be saved while producing output which is nearly indistinguishable from the original. These lossy data compression methods typically offer a three-way tradeoff between compression speed, compressed data size and quality loss.
Lossy image compression is used in digital cameras, greatly reducing their storage requirements while hardly degrading picture quality at all. Similarly, DVDs use the lossy MPEG-2 codec for video compression.
In lossy audio compression, methods of psychoacoustics are used to remove inaudible (or less audible) components of the signal. Compression of human speech is often performed with even more specialized techniques, so that "speech compression" or "voice coding" is sometimes distinguished as a discipline separate from "audio compression". Different audio and speech compression standards are listed under audio codecs. Voice compression is used in Internet telephony, for example, while audio compression is used for CD ripping and is decoded by MP3 players.
Theory
The theoretical background of compression is provided by information theory (which is closely related to algorithmic information theory) and by rate-distortion theory. These fields of study were essentially created by Claude Shannon, who published fundamental papers on the topic in the late 1940s and early 1950s. Cryptography and coding theory are also closely related. The idea of data compression is deeply connected with statistical inference and particularly with the maximum likelihood principle.
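In particular, Shannon's source coding theorem ties lossless compression directly to the entropy of the source. In standard information-theoretic notation (not specific to this article), it can be summarized as follows.

```latex
% Entropy of a discrete source X with symbol probabilities p(x), in bits per symbol:
H(X) = -\sum_{x} p(x) \log_2 p(x)

% Source coding theorem (lossless case): the expected length L of any uniquely
% decodable code for X is bounded below by the entropy,
L \ge H(X),
% and codes (e.g. Huffman codes) exist with L < H(X) + 1.
```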
Many lossless data compression systems can be viewed in terms of a four-stage model. Lossy data compression systems typically include even more stages, including, for example, prediction, frequency transformation, and quantization.
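The quantization stage is where information is actually discarded. As a hypothetical illustration (not a description of any particular codec), uniform scalar quantization maps each sample to the nearest multiple of a step size: the larger the step, the fewer distinct values need to be coded, at the cost of rounding error that cannot be undone.

```python
# Uniform scalar quantization sketch: the core lossy step in many codecs.
# Larger step sizes mean fewer distinct levels (better compression) but
# larger, irreversible rounding error. The step size of 0.5 is arbitrary.
def quantize(samples, step):
    return [round(s / step) for s in samples]      # small integers, cheap to entropy-code

def dequantize(indices, step):
    return [i * step for i in indices]             # approximate reconstruction

signal = [0.12, 0.49, 0.51, 1.03, -0.98]
indices = quantize(signal, step=0.5)               # [0, 1, 1, 2, -2]
reconstructed = dequantize(indices, step=0.5)      # [0.0, 0.5, 0.5, 1.0, -1.0] — close, but not exact
```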
The Lempel-Ziv (LZ) compression methods are among the most popular algorithms for lossless storage. DEFLATE is a variation on LZ optimized for decompression speed and compression ratio, although compression can be slow. DEFLATE is used in PKZIP, gzip and PNG. LZW (Lempel-Ziv-Welch) was covered by patents held by Unisys until June 2003, and is used in GIF images. Also noteworthy are the LZR (LZ-Renau) methods, which serve as the basis of the Zip method. LZ methods use a table-based compression model in which table entries are substituted for repeated strings of data. For most LZ methods, this table is generated dynamically from earlier data in the input. The table itself is often Huffman encoded (e.g. SHRI, LZX). A current LZ-based coding scheme that performs well is LZX, used in Microsoft's CAB format.
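The table-driven idea behind LZ methods can be sketched in a few lines of Python. The sketch below follows the LZW variant in a simplified form, building the string table on the fly and emitting a single code for each previously seen string; it is an illustration of the general technique, not a description of any particular format such as DEFLATE or GIF's LZW encoding.

```python
# Simplified LZW-style sketch of table-based LZ compression: the table
# (dictionary) of previously seen strings is built dynamically from the
# input, and repeated strings are replaced by their table indices.
def lzw_encode(text: str):
    table = {chr(i): i for i in range(256)}        # start with all single-byte strings
    current, output = "", []
    for ch in text:
        candidate = current + ch
        if candidate in table:
            current = candidate                    # keep extending the current match
        else:
            output.append(table[current])          # emit the code for the longest known match
            table[candidate] = len(table)          # add the new, longer string to the table
            current = ch
    if current:
        output.append(table[current])
    return output

print(lzw_encode("TOBEORNOTTOBEORTOBEORNOT"))      # repeated substrings collapse to single codes
```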
See also
Data compression topics
- algorithmic complexity theory
- information entropy
- self-extracting archive
- image compression
- multimedia compression
- minimum description length
- minimum message length (two-part lossless compression designed for inference)
- universal codes
Compression algorithms
- run-length encoding – used by PCX images and PackBits
- dictionary coders
- Burrows-Wheeler transform
- bzip2 (a combination of the Burrows-Wheeler transform and Huffman coding)
- prediction by partial matching
- context mixing
- PAQ (very high compression ratios, but extremely slow; consistently ranks at the top of compression benchmarks and competitions)
- entropy encoding
- Huffman coding (simple entropy coding; commonly used as the final stage of compression)
- arithmetic coding (more advanced)
- range encoding (simple, intended to approach the performance of arithmetic coding without being patent-encumbered)
- linear predictive coding
- discrete cosine transform
- JPEG (image compression using a discrete cosine transform, then quantization, then Huffman coding)
- MPEG (a widely used family of audio and video compression standards, using the DCT and motion-compensated prediction for video)
- Ogg Vorbis (an audio codec comparable to AAC, designed with a focus on avoiding patent encumbrance)
- fractal compression
- wavelet compression
- JPEG 2000 (image compression using wavelets, then quantization, then entropy coding)