Bit array
A bit array (also known as a bitmap, a bitset, or a bitstring) is an array data structure that compactly stores individual bits (boolean values). It implements a simple set data structure storing a subset of {1,2,...,n} and is effective at exploiting bit-level parallelism in hardware to perform operations quickly. A typical bit array stores kw bits, where w is the number of bits in the unit of storage, such as a byte or word, and k is some nonnegative integer. If w does not divide the number of bits to be stored, some space is wasted due to internal fragmentation.
Basic operations
Although most machines are not able to address individual bits in memory, nor have instructions to manipulate single bits, each bit in a word can be singled out and manipulated using bitwise operations. In particular:
- OR can be used to set a bit to one: 11101010 OR 00000100 = 11101110
- AND can be used to set a bit to zero: 11101010 AND 11111101 = 11101000
- AND together with zero-testing can be used to determine if a bit is set:
- 11101010 AND 00000001 = 00000000 = 0
- 11101010 AND 00000010 = 00000010 ≠ 0
- XOR can be used to invert or toggle a bit:
- 11101010 XOR 00000100 = 11101110
- 11101110 XOR 00000100 = 11101010
To obtain the bit mask needed for these operations, we can use a bit shift operator to shift the number 1 to the left by the appropriate number of places, as well as bitwise negation if necessary.
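For instance, in C these operations might be wrapped in small helper functions. The following is a minimal sketch; the names set_bit, clear_bit, toggle_bit, and test_bit are illustrative, and it assumes bit k of the array is stored in bit k mod 32 of the 32-bit word a[k / 32].

#include <stdint.h>
#include <stddef.h>

/* Illustrative helpers: bit k of the array lives in bit (k % 32) of a[k / 32]. */
static inline void set_bit(uint32_t a[], size_t k)    { a[k / 32] |=  (UINT32_C(1) << (k % 32)); }
static inline void clear_bit(uint32_t a[], size_t k)  { a[k / 32] &= ~(UINT32_C(1) << (k % 32)); }
static inline void toggle_bit(uint32_t a[], size_t k) { a[k / 32] ^=  (UINT32_C(1) << (k % 32)); }
static inline int  test_bit(const uint32_t a[], size_t k)
{
    return (a[k / 32] >> (k % 32)) & 1;   /* 1 if the bit is set, 0 otherwise */
}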
We can view a bit array as a subset of {1,2,...,n}, where a 1 bit indicates a number in the set and a 0 bit a number not in the set. This set data structure uses about n/w words of space, where w is the number of bits in each machine word. Whether the least significant bit or the most significant bit indicates the smallest-index number is largely irrelevant, but the former tends to be preferred.
Given two bit arrays of the same size representing sets, we can compute their union, intersection, and set-theoretic difference using n/w simple bit operations each (2n/w for difference), as well as the complement of either:
'''for''' i '''from''' 0 '''to''' n/w-1
    complement_a[i] := '''not''' a[i]
    union[i] := a[i] '''or''' b[i]
    intersection[i] := a[i] '''and''' b[i]
    difference[i] := a[i] '''and''' ('''not''' b[i])
If we wish to iterate through the bits of a bit array, we can do this efficiently using a doubly nested loop that loops through each word, one at a time. Only n/w memory accesses are required:
index := 0 ''// if needed''
'''for''' i '''from''' 0 '''to''' n/w-1
    word := a[i]
    '''for''' b '''from''' 0 '''to''' w-1
        value := word '''and''' 1 ≠ 0
        word := word shift right 1
        ''// do something with value''
        index := index + 1 ''// if needed''
Both of these code samples exhibit ideal locality of reference, and so get a large performance boost from a data cache. If a cache line is k words, only about n/(wk) cache misses will occur.
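A C rendering of the word-at-a-time iteration above might look as follows; this is a sketch under the same layout assumption as before, and the callback interface (the visit function pointer) and the name for_each_bit are purely illustrative.

#include <stdint.h>
#include <stddef.h>

/* Visit every bit of an n-bit array, reading one word at a time. */
void for_each_bit(const uint32_t a[], size_t n, void (*visit)(size_t index, int value))
{
    for (size_t i = 0; i < (n + 31) / 32; i++) {
        uint32_t word = a[i];
        for (size_t b = 0; b < 32 && i * 32 + b < n; b++) {
            visit(i * 32 + b, (int)(word & 1));  /* the low bit of word is the current value */
            word >>= 1;
        }
    }
}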
More complex operations
Population / Hamming weight
If we wish to find the number of 1 bits in a bit array, sometimes called the population count or Hamming weight, there are efficient branch-free algorithms that can compute the number of bits in a word using a series of simple bit operations. We simply run such an algorithm on each word and keep a running total. Counting zeros is similar. See the Hamming weight article for examples of an efficient implementation.
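As an illustration, the following C sketch combines a classic branch-free per-word count (a well-known SWAR routine) with a running total. The function names are illustrative; many compilers also expose a hardware population-count instruction through an intrinsic such as GCC's __builtin_popcount.

#include <stdint.h>
#include <stddef.h>

/* Branch-free population count of a single 32-bit word. */
static uint32_t popcount32(uint32_t x)
{
    x = x - ((x >> 1) & 0x55555555);                 /* sums of adjacent bit pairs */
    x = (x & 0x33333333) + ((x >> 2) & 0x33333333);  /* sums within each nibble    */
    x = (x + (x >> 4)) & 0x0F0F0F0F;                 /* sums within each byte      */
    return (x * 0x01010101) >> 24;                   /* add the four byte sums     */
}

/* Running total over all the words of the bit array. */
size_t count_ones(const uint32_t a[], size_t nwords)
{
    size_t total = 0;
    for (size_t i = 0; i < nwords; i++)
        total += popcount32(a[i]);
    return total;
}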
Sorting
Similarly, sorting a bit array is trivial to do in O(n) time using counting sort — we count the number of ones k, fill the last k/w words with ones, set only the low k mod w bits of the next word, and set the rest to zero.
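A C sketch of this idea follows, reusing the count_ones helper from the population-count sketch above; it places all the 1 bits first (the opposite order is symmetric), and n is assumed to be a multiple of the word size to keep the code short.

#include <stdint.h>
#include <stddef.h>

size_t count_ones(const uint32_t a[], size_t nwords);   /* from the population-count sketch above */

/* Sort the bits so that all 1 bits precede all 0 bits (layout as before). */
void sort_bits(uint32_t a[], size_t n)
{
    size_t nwords = n / 32;
    size_t k = count_ones(a, nwords);                            /* total number of 1 bits */
    for (size_t i = 0; i < nwords; i++) {
        if (k >= 32)    { a[i] = UINT32_MAX; k -= 32; }           /* a whole word of ones  */
        else if (k > 0) { a[i] = (UINT32_C(1) << k) - 1; k = 0; } /* low k mod 32 bits set */
        else            { a[i] = 0; }                             /* the rest is zero      */
    }
}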
Inversion
Vertical flipping of a one-bit-per-pixel image, and some FFT algorithms, require reversing the bits of individual words (so b31 b30 ... b1 b0 becomes b0 b1 ... b30 b31).
When this operation is not available on the processor, it is still possible to proceed by successive passes, in this example on 32 bits:
- exchange the two 16-bit halfwords
- exchange bytes by pairs (0xddccbbaa -> 0xccddaabb)
- ...
- swap bits by pairs
- swap bits (b31 b30 ... b1 b0 -> b30 b31 ... b0 b1)
The last operation can be written as ((x & 0x55555555) << 1) | ((x & 0xAAAAAAAA) >> 1).
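A C sketch spelling out all five passes for a 32-bit word (this is the standard mask-and-shift bit-reversal routine; the function name is illustrative):

#include <stdint.h>

uint32_t reverse32(uint32_t x)
{
    x = (x << 16) | (x >> 16);                               /* exchange the two 16-bit halfwords */
    x = ((x & 0x00FF00FF) << 8) | ((x & 0xFF00FF00) >> 8);   /* exchange bytes by pairs           */
    x = ((x & 0x0F0F0F0F) << 4) | ((x & 0xF0F0F0F0) >> 4);   /* exchange nibbles by pairs         */
    x = ((x & 0x33333333) << 2) | ((x & 0xCCCCCCCC) >> 2);   /* swap bits by pairs                */
    x = ((x & 0x55555555) << 1) | ((x & 0xAAAAAAAA) >> 1);   /* swap adjacent bits                */
    return x;
}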
Find first one
The find first one operation identifies the 1 bit with the smallest index, that is, the least significant bit that is set. The find first zero operation similarly identifies the first zero bit. Each operation can be used instead of the other by complementing the input first.
Doing this operation quickly is useful in contexts such as priority queues, where it identifies the highest-priority queue that is not empty. The analogous operation starting from the most significant bit finds the position of the highest set bit, which is the integer part of the base-2 logarithm.
Many machines can quickly perform the operation on a single word using a single instruction. For example, the x86 instruction bsr (bit scan reverse) finds the most significant one bit, and the ffs (find first set) function in POSIX operating systems finds the least significant one.[1] To extend such an instruction or function to longer arrays, one can find the first nonzero word and then run find first one on that word.
On machines that use two's complement arithmetic, which includes all conventional CPUs, find first one can be performed quickly by ANDing a word with its two's-complement negation, that is, computing w AND −w. This yields a word in which only the least significant 1 bit of w remains set. For instance, if the original value is 6 (binary 110), the result is 2 (binary 010). See Gosper's Hack for an example of this technique in use.
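Putting these pieces together, a C sketch for a whole bit array can scan for the first nonzero word and then locate its lowest set bit. The function name and the sentinel return value are illustrative; on compilers such as GCC or Clang the inner loop could be replaced by the __builtin_ctz intrinsic.

#include <stdint.h>
#include <stddef.h>

/* Index of the first 1 bit, or (size_t)-1 if every bit is zero (layout as before). */
size_t find_first_one(const uint32_t a[], size_t nwords)
{
    for (size_t i = 0; i < nwords; i++) {
        uint32_t w = a[i];
        if (w != 0) {
            uint32_t lowest = w & -w;                 /* w AND -w keeps only the lowest 1 bit */
            size_t b = 0;
            while (lowest > 1) { lowest >>= 1; b++; } /* position of that bit within the word */
            return i * 32 + b;
        }
    }
    return (size_t)-1;
}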
Compression
Large bit arrays tend to have long streams of zeroes or ones. This phenomenon wastes storage and processing time. Run-length encoding is commonly used to compress these long streams. However, by compressing bit arrays too aggressively we run the risk of losing the benefits due to bit-level parallelism (vectorization). Thus, instead of compressing bit arrays as streams of bits, we might compress them as streams of bytes or words (see Bitmap index (compression)).
Examples:
- compressedbitset: WAH Compressed BitSet for Java
- javaewah: A compressed alternative to the Java BitSet class (using Enhanced WAH)
- CONCISE: COmpressed 'N' Composable Integer Set, another bitmap compression scheme for Java
- EWAHBoolArray: A compressed bitmap/bitset class in C++
Advantages and disadvantages
Bit arrays, despite their simplicity, have a number of marked advantages over other data structures for the same problems:
- They are extremely compact; few other data structures can store n independent pieces of data in n/w words.
- They allow small arrays of bits to be stored and manipulated in the register set for long periods of time with no memory accesses.
- Because of their ability to exploit bit-level parallelism, limit memory access, and maximally use the data cache, they often outperform many other data structures on practical data sets, even those that are more asymptotically efficient.
However, bit arrays aren't the solution to everything. In particular:
- Without compression, they are wasteful set data structures for sparse sets (those with few elements compared to their range) in both time and space. For such applications, compressed bit arrays, Judy arrays, tries, or even Bloom filters should be considered instead.
- Accessing individual elements can be expensive and difficult to express in some languages. If random access is more common than sequential and the array is relatively small, a byte array may be preferable on a machine with byte addressing. A word array, however, is probably not justified due to the huge space overhead and additional cache misses it causes, unless the machine only has word addressing.
Applications
Because of their compactness, bit arrays have a number of applications in areas where space or efficiency is at a premium. Most commonly, they are used to represent a simple group of boolean flags or an ordered sequence of boolean values.
Bit arrays are used for priority queues, where the bit at index k is set if and only if k is in the queue; this data structure is used, for example, by the Linux kernel, and benefits strongly from a find-first-zero operation in hardware.
Bit arrays can be used for the allocation of memory pages, inodes, disk sectors, etc. In such cases, the term bitmap may be used. However, this term is frequently used to refer to raster images, which may use multiple bits per pixel.
Another application of bit arrays is the Bloom filter, a probabilistic set data structure that can store large sets in a small space in exchange for a small probability of error. It is also possible to build probabilistic hash tables based on bit arrays that accept either false positives or false negatives.
Bit arrays and the operations on them are also important for constructing succinct data structures, which use close to the minimum possible space. In this context, operations like finding the nth 1 bit (select) or counting the number of 1 bits up to a certain position (rank) become important.
Bit arrays are also a useful abstraction for examining streams of compressed data, which often contain elements that occupy portions of bytes or are not byte-aligned. For example, the compressed Huffman coding representation of a single 8-bit character can be anywhere from 1 to 255 bits long.
In information retrieval, bit arrays are a good representation for the posting lists of very frequent terms. If we compute the gaps between adjacent values in a list of strictly increasing integers and encode them using unary coding (gap − 1 zeros followed by a one), the result is a bit array with a 1 bit in the nth position if and only if n is in the list; for example, the list 2, 3, 5 has gaps 2, 1, 2, which encode to 01, 1, 01, giving the bit array 01101. The implied probability of a gap of n is 1/2^n. This is also the special case of Golomb coding where the parameter M is 1; this parameter is only normally selected when −log(2 − p)/log(1 − p) ≤ 1, or roughly when the term occurs in at least 38% of documents.
Language support
The C programming language's bitfields, pseudo-objects found in structs with size equal to some number of bits, are in fact small bit arrays; they are limited in that they cannot span words. Although they give a convenient syntax, the bits are still accessed using bitwise operators on most machines, and they can only be defined statically (like C's static arrays, their sizes are fixed at compile time). It is also a common idiom for C programmers to use words as small bit arrays and access bits of them using bit operators. A widely available header file included in the X11 system, xtrapbits.h, is "a portable way for systems to define bit field manipulation of arrays of bits". A more explanatory description of this approach can be found in the comp.lang.c FAQ.
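As an illustration, the following C sketch shows both a struct with bitfields and the word-as-bit-array idiom described above; the field and variable names are purely illustrative.

#include <stdio.h>

/* Three small fields packed into (part of) a single word using C bitfields. */
struct flags {
    unsigned int is_open    : 1;
    unsigned int is_dirty   : 1;
    unsigned int error_code : 4;   /* a 4-bit field, holding values 0..15 */
};

int main(void)
{
    struct flags f = { 0 };
    f.is_dirty = 1;
    f.error_code = 7;

    /* The common idiom of using a plain word as a small bit array. */
    unsigned int set = 0;
    set |= 1u << 3;                            /* add element 3             */
    set &= ~(1u << 3);                         /* remove element 3 again    */
    printf("%u\n", (set >> 5) & 1u);           /* test element 5 (prints 0) */
    printf("%u\n", (unsigned)f.error_code);    /* prints 7                  */
    return 0;
}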
In C++, although individual bools typically occupy the same space as a byte or an integer, the STL type vector<bool> is a partial template specialization in which bits are packed as a space efficiency optimization. Since bytes (and not bits) are the smallest addressable unit in C++, the [] operator does not return a reference to an element, but instead returns a proxy reference. This might seem a minor point, but it means that vector<bool> is not a standard STL container, which is why the use of vector<bool> is generally discouraged. Another unique STL class, bitset,[2] creates a vector of bits fixed at a particular size at compile time, and in its interface and syntax more resembles the idiomatic use of words as bit sets by C programmers. It also has some additional power, such as the ability to efficiently count the number of bits that are set. The Boost C++ Libraries provide a dynamic_bitset class[3] whose size is specified at run time.
The D programming language provides bit arrays in both of its competing standard libraries. In Phobos, they are provided in std.bitmanip, and in Tango, they are provided in tango.core.BitArray. As in C++, the [] operator does not return a reference, since individual bits are not directly addressable on most hardware, but instead returns a bool.
In Java, the class BitSet creates a bit array that is then manipulated with functions named after bitwise operators familiar to C programmers. Unlike the bitset in C++, the Java BitSet does not have a "size" state (it has an effectively infinite size, initialized with 0 bits); a bit can be set or tested at any index. In addition, there is a class EnumSet, which represents a Set of values of an enumerated type internally as a bit vector, as a safer alternative to bitfields.
The .NET Framework supplies a BitArray collection class. It stores boolean values, supports random access and bitwise operators, can be iterated over, and its Length property can be changed to grow or truncate it.
Although Standard ML has no support for bit arrays, Standard ML of New Jersey has an extension, the BitArray structure, in its SML/NJ Library. It is not fixed in size and supports set operations and bit operations, including, unusually, shift operations.
Haskell likewise currently lacks standard support for bitwise operations, but both GHC and Hugs provide a Data.Bits module with assorted bitwise functions and operators, including shift and rotate operations. An "unboxed" array over boolean values may be used to model a bit array, although this lacks support from the former module.
In Perl, strings can be used as expandable bit arrays. They can be manipulated using the usual bitwise operators (~ | & ^),[4] and individual bits can be tested and set using the vec function.[5]
Apple's Core Foundation library contains CFBitVector and CFMutableBitVector structures.
See also
- Bit field
- Bitboard, used in chess and similar games
- Bitmap index
- Binary numeral system
- Bitstream