μ-law algorithm

[Figure: Graph of the μ-law and A-law algorithms]

The μ-law algorithm (pronounced mu-law) is a companding algorithm, used primarily in the digital telecommunication systems of North America, Japan and Australia. As with other companding algorithms, its purpose is to reduce the dynamic range of an audio signal. In the analog domain, this can increase the signal-to-noise ratio (SNR) achieved during transmission; in the digital domain, it can reduce the quantization error (hence increasing the signal-to-quantization-noise ratio). These SNR increases can instead be traded for reduced bandwidth at an equivalent SNR.

It is similar to the A-law algorithm used in Europe.

Algorithm Types

There are two forms of this algorithm: a continuous (analog) version and a quantized digital version.

Continuous

For a given input x, the equation for μ-law encoding is[1]

F(x) = \operatorname{sgn}(x) \frac{\ln(1 + \mu |x|)}{\ln(1 + \mu)}, \qquad -1 \le x \le 1,

where μ = 255 (8 bits) in the North American and Japanese standards.

μ-law expansion is then given by the inverse equation:

F^{-1}(y) = \operatorname{sgn}(y) \frac{(1 + \mu)^{|y|} - 1}{\mu}, \qquad -1 \le y \le 1.
The equations above are taken from Cisco's Waveform Coding Techniques.
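
As a concrete illustration, the two equations can be transcribed almost directly into code. The following is a minimal Java sketch, not taken from the cited sources; the class and method names (MuLawContinuous, compress, expand) are chosen here, and the input sample is assumed to be normalized to the range -1 to 1, with μ = 255 as above.

    // Continuous mu-law companding: a direct transcription of the two equations above.
    public final class MuLawContinuous {
        private static final double MU = 255.0; // North American / Japanese standard

        // F(x) = sgn(x) * ln(1 + mu*|x|) / ln(1 + mu), for -1 <= x <= 1
        static double compress(double x) {
            return Math.signum(x) * Math.log(1.0 + MU * Math.abs(x)) / Math.log(1.0 + MU);
        }

        // F^-1(y) = sgn(y) * ((1 + mu)^|y| - 1) / mu, for -1 <= y <= 1
        static double expand(double y) {
            return Math.signum(y) * (Math.pow(1.0 + MU, Math.abs(y)) - 1.0) / MU;
        }

        public static void main(String[] args) {
            double x = 0.1;
            double y = compress(x);
            // expand(compress(x)) recovers x up to floating-point rounding
            System.out.println(x + " -> " + y + " -> " + expand(y));
        }
    }

Small inputs are boosted toward larger code values while large inputs are compressed, which is the companding behaviour that the rest of this article quantizes.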

Discrete

The discrete form is defined in ITU-T Recommendation G.711.[2]

G.711 is rather unclear about how values at the limit of a range are encoded (e.g. whether +31 encodes to 0xEF or 0xF0). However, G.191 provides example C code for a μ-law encoder which gives the following encoding. Note the difference between the positive and negative ranges: for example, the negative range corresponding to +30 to +1 is -31 to -2. This is accounted for by the use of one's complement (simple bit inversion), rather than two's complement, to convert a negative value to a positive value during encoding.
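
As a small illustration of that sign handling (this snippet is not from G.711 or G.191), bit inversion maps -1 to 0, -2 to 1, and so on, which is exactly why the negative range matching +30 to +1 is -31 to -2:

    // Illustration only: a negative sample is bit-inverted (one's complement)
    // rather than negated during encoding, so -1 -> 0, -2 -> 1, ..., -31 -> 30.
    for (int sample = -1; sample >= -31; sample--) {
        System.out.println(sample + " has encoding magnitude " + (~sample));
    }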

Quantized μ-law algorithm

14-bit binary linear input code           8-bit compressed code
+8158 to +4063 in 16 intervals of 256     0x80 + interval number
+4062 to +2015 in 16 intervals of 128     0x90 + interval number
+2014 to +991 in 16 intervals of 64       0xA0 + interval number
+990 to +479 in 16 intervals of 32        0xB0 + interval number
+478 to +223 in 16 intervals of 16        0xC0 + interval number
+222 to +95 in 16 intervals of 8          0xD0 + interval number
+94 to +31 in 16 intervals of 4           0xE0 + interval number
+30 to +1 in 15 intervals of 2            0xF0 + interval number
0                                         0xFF
-1                                        0x7F
-31 to -2 in 15 intervals of 2            0x70 + interval number
-95 to -32 in 16 intervals of 4           0x60 + interval number
-223 to -96 in 16 intervals of 8          0x50 + interval number
-479 to -224 in 16 intervals of 16        0x40 + interval number
-991 to -480 in 16 intervals of 32        0x30 + interval number
-2015 to -992 in 16 intervals of 64       0x20 + interval number
-4063 to -2016 in 16 intervals of 128     0x10 + interval number
-8159 to -4064 in 16 intervals of 256     0x00 + interval number

The above table is for the much more obscure 14-bit μ-law encoding. The transfer curve is identical to that of the standard 16-bit version; it is just scaled differently. The 14-bit numbers above can be generated by the following Java snippet.

       // Negative half: mu-law codes 0x00-0x7F decode to -8159 .. -1.
       int j = 512;        // step size; halved at the first code of each 16-code segment
       int linear = -8159; // decoded value for mu-law code 0x00
       for (int ulaw = 0; ulaw <= 127; ulaw++) {
           System.out.println("ulaw " + Integer.toHexString(ulaw) + " becomes " + linear);
           if ((ulaw & 0xf) == 0) j >>= 1; // new segment: halve the step size
           linear += j;
       }
       // Positive half: mu-law codes 0x80-0xFE decode to 7903 .. 1 (0xFF decodes to 0).
       j = -256;           // step size for the positive half (values decrease)
       linear = 7903;      // decoded value for mu-law code 0x80
       for (int ulaw = 128; ulaw < 255; ulaw++) {
           System.out.println("ulaw " + Integer.toHexString(ulaw) + " becomes " + linear);
           if ((ulaw & 0xf) == 0xf) j >>= 1; // last code of a segment: halve the step size
           linear += j;
       }
       System.out.println("ulaw ff becomes 0");


This output can be used to build a simple Java lookup array for converting a μ-law byte to a 14-bit linear value.

   private static final int[] ULAW_TO_LINEAR_14_BIT = new int[]{
           -8159, -7903, -7647, -7391, -7135, -6879, -6623, -6367, -6111, -5855, -5599, -5343, -5087, -4831, -4575, -4319,
           -4063, -3935, -3807, -3679, -3551, -3423, -3295, -3167, -3039, -2911, -2783, -2655, -2527, -2399, -2271, -2143,
           -2015, -1951, -1887, -1823, -1759, -1695, -1631, -1567, -1503, -1439, -1375, -1311, -1247, -1183, -1119, -1055,
           -991, -959, -927, -895, -863, -831, -799, -767, -735, -703, -671, -639, -607, -575, -543, -511, -479, -463, -447,
           -431, -415, -399, -383, -367, -351, -335, -319, -303, -287, -271, -255, -239, -223, -215, -207, -199, -191, -183,
           -175, -167, -159, -151, -143, -135, -127, -119, -111, -103, -95, -91, -87, -83, -79, -75, -71, -67, -63, -59, -55,
           -51, -47, -43, -39, -35, -31, -29, -27, -25, -23, -21, -19, -17, -15, -13, -11, -9, -7, -5, -3, -1, 7903, 7647,
           7391, 7135, 6879, 6623, 6367, 6111, 5855, 5599, 5343, 5087, 4831, 4575, 4319, 4063, 3935, 3807, 3679, 3551, 3423,
           3295, 3167, 3039, 2911, 2783, 2655, 2527, 2399, 2271, 2143, 2015, 1951, 1887, 1823, 1759, 1695, 1631, 1567, 1503,
           1439, 1375, 1311, 1247, 1183, 1119, 1055, 991, 959, 927, 895, 863, 831, 799, 767, 735, 703, 671, 639, 607, 575,
           543, 511, 479, 463, 447, 431, 415, 399, 383, 367, 351, 335, 319, 303, 287, 271, 255, 239, 223, 215, 207, 199, 191,
           183, 175, 167, 159, 151, 143, 135, 127, 119, 111, 103, 95, 91, 87, 83, 79, 75, 71, 67, 63, 59, 55, 51, 47, 43, 39,
           35, 31, 29, 27, 25, 23, 21, 19, 17, 15, 13, 11, 9, 7, 5, 3, 1, 0};

The Sun Microsystems C routine g711.c, commonly available on the Internet, instead generates the much more common 16-bit series:

   private static final int[] ULAW_TO_LINEAR_16_BIT = new int[]{
           -32124, -31100, -30076, -29052, -28028, -27004, -25980, -24956, -23932, -22908, -21884, -20860, -19836, -18812,
           -17788, -16764, -15996, -15484, -14972, -14460, -13948, -13436, -12924, -12412, -11900, -11388, -10876, -10364,
           -9852, -9340, -8828, -8316, -7932, -7676, -7420, -7164, -6908, -6652, -6396, -6140, -5884, -5628, -5372, -5116,
           -4860, -4604, -4348, -4092, -3900, -3772, -3644, -3516, -3388, -3260, -3132, -3004, -2876, -2748, -2620, -2492,
           -2364, -2236, -2108, -1980, -1884, -1820, -1756, -1692, -1628, -1564, -1500, -1436, -1372, -1308, -1244, -1180,
           -1116, -1052, -988, -924, -876, -844, -812, -780, -748, -716, -684, -652, -620, -588, -556, -524, -492, -460,
           -428, -396, -372, -356, -340, -324, -308, -292, -276, -260, -244, -228, -212, -196, -180, -164, -148, -132, -120,
           -112, -104, -96, -88, -80, -72, -64, -56, -48, -40, -32, -24, -16, -8, 0, 32124, 31100, 30076, 29052, 28028,
           27004, 25980, 24956, 23932, 22908, 21884, 20860, 19836, 18812, 17788, 16764, 15996, 15484, 14972, 14460, 13948,
           13436, 12924, 12412, 11900, 11388, 10876, 10364, 9852, 9340, 8828, 8316, 7932, 7676, 7420, 7164, 6908, 6652, 6396,
           6140, 5884, 5628, 5372, 5116, 4860, 4604, 4348, 4092, 3900, 3772, 3644, 3516, 3388, 3260, 3132, 3004, 2876, 2748,
           2620, 2492, 2364, 2236, 2108, 1980, 1884, 1820, 1756, 1692, 1628, 1564, 1500, 1436, 1372, 1308, 1244, 1180, 1116,
           1052, 988, 924, 876, 844, 812, 780, 748, 716, 684, 652, 620, 588, 556, 524, 492, 460, 428, 396, 372, 356, 340,
           324, 308, 292, 276, 260, 244, 228, 212, 196, 180, 164, 148, 132, 120, 112, 104, 96, 88, 80, 72, 64, 56, 48, 40,
           32, 24, 16, 8, 0};
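
As a usage note (the helper below is not part of g711.c), decoding a single μ-law byte with either table is a plain array lookup; the byte just has to be masked so that it indexes the array as an unsigned value:

    // Hypothetical helper: decode one mu-law byte to a 16-bit linear sample
    // using the ULAW_TO_LINEAR_16_BIT table above.
    static int ulawToLinear16(byte ulaw) {
        return ULAW_TO_LINEAR_16_BIT[ulaw & 0xFF]; // mask to get an unsigned index 0..255
    }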

Searching the Internet for a short subset of the 16-bit sequence, such as "32124, 31100, 30076", and comparing the results with a search for the obscure 14-bit sequence "8159, 7903, 7647", quickly demonstrates the industry dominance of the 16-bit format.

Implementation

There are three ways of implementing a μ-law algorithm:

Analog
Use an amplifier with non-linear gain to achieve companding entirely in the analog domain.
Non-linear ADC
Use an Analog to Digital Converter with quantization levels which are unequally spaced to match the μ-law algorithm.
Digital
Use the quantized digital version of the μ-law algorithm to convert data once it is in the digital domain (a sketch of such an encoder follows below).
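
To make the digital option concrete, the following is a minimal Java sketch of a quantized encoder that reproduces the 14-bit table given earlier, including the one's-complement sign handling described in the Discrete section. The method name linearToUlaw14, the bias constant and the loop structure are choices made here for illustration; this is not the G.191 reference code.

    // Quantized mu-law encoder for 14-bit linear samples in the range -8159 .. +8158,
    // matching the table in the Discrete section.
    static int linearToUlaw14(int sample) {
        int sign = 0x80;                  // MSB is set for non-negative samples
        if (sample < 0) {
            sample = ~sample;             // one's complement: -1 -> 0, -31 -> 30, ...
            sign = 0x00;
        }
        if (sample > 8158) sample = 8158; // clamp to the table's range
        int biased = sample + 33;         // bias so each segment spans a power of two

        // Find the segment: segment s covers biased values in [2^(s+5), 2^(s+6)).
        int segment = 7;
        for (int mask = 0x1000; (biased & mask) == 0 && segment > 0; mask >>= 1) {
            segment--;
        }

        int lowNibble = 0xF - ((biased >> (segment + 1)) & 0xF);
        return sign | ((7 - segment) << 4) | lowNibble;
    }

With this sketch, linearToUlaw14(0) returns 0xFF, linearToUlaw14(-1) returns 0x7F, and linearToUlaw14(31) returns 0xEF, matching both the table and the G.191 behaviour noted earlier.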

Usage Justification

This encoding is used because speech has a wide dynamic range. In the analog world, when the signal is mixed with a relatively constant background noise source, its finer detail is lost. Given that the precision of the detail is compromised anyway, and assuming that the signal is to be perceived as audio by a human, one can take advantage of the fact that perceived intensity (loudness) is logarithmic[3] by compressing the signal with a logarithmic-response op-amp. In telco circuits, most of the noise is injected on the lines, so after the compressor the intended signal is perceived as significantly louder than the static, compared with an uncompressed source. This became a common telco solution, and thus, before digital transmission was common, the μ-law specification was developed to define an inter-compatible standard.

As the digital age dawned, it was noted that this pre-existing algorithm significantly reduced the number of bits needed to encode recognizable human speech. Using μ-law, a sample could be effectively encoded in as few as 8 bits, a sample size that conveniently matched the symbol size of most standard computers.

μ-law encoding effectively reduces the dynamic range of the signal, thereby increasing coding efficiency while biasing the signal in a way that yields a signal-to-distortion ratio greater than that obtained by linear encoding for a given number of bits. This makes it an early form of perceptual audio encoding.

The μ-law algorithm is also used in the .au format, which dates back at least to the SPARCstation 1 as the native method used by Sun's /dev/audio interface, widely used as a de facto standard for Unix sound. The .au format is also used in various common audio APIs, such as the classes in the sun.audio Java package in Java 1.1 and in some C# methods.

This graph illustrates how μ-law concentrates quantization levels at the smaller (softer) values. The horizontal axis shows the μ-law byte values 0-255; the vertical axis is the decoded 16-bit linear value. The image was generated with the Sun Microsystems C routine g711.c, commonly available on the Internet.

Comparison with A-law

The A-law algorithm provides a slightly larger dynamic range than μ-law, at the cost of worse proportional distortion for small signals. By convention, A-law is used for an international connection if at least one country uses it.

This article incorporates public domain material from Federal Standard 1037C. General Services Administration. Archived from the original on 2022-01-22.

See also

References