64-bit computing

From Wikipedia, the free encyclopedia

This is an old revision of this page, as edited by 68.168.167.54 (talk) at 15:40, 29 September 2003. The present address (URL) is a permanent link to this revision, which may differ significantly from the current revision.

64-bit means using computer words containing sixty-four bits. The adjective most often refers to the number of bits used internally by a computer's CPU, as in "The Sun UltraSPARC is a 64-bit processor". A CPU's external data bus or address bus may be narrower than its word size, and the term is also often used to describe the width of these buses; for instance, many current 32-bit machines use 64-bit data buses. The term may also refer to the size of an instruction in the computer's instruction set or to any other item of data.

In most modern 64-bit architectures, both 32-bit and 64-bit computing are supported. A program running in a 32-bit process is said to use the ILP32 model, referring to integers, longs, and pointers. A 64-bit process is said to use the LP64 model. In ILP32, integers, longs, and pointers are all 32 bits wide, capable of holding values up to 2^32 − 1 (unsigned) or 2^31 − 1 (signed). In LP64, integers remain 32 bits wide, but longs and pointers are 64 bits wide, yielding values up to 2^64 − 1 (unsigned) or 2^63 − 1 (signed).

Each memory address in a program's virtual memory address space is numbered, starting with zero. An ILP32 program, therefore, can access as much as 2^32 bytes of virtual memory, or four gigabytes. That was an awful lot of memory not too long ago, but today many users find it downright cramped. An LP64 program, on the other hand, can address up to 2^64 bytes of virtual memory, or eighteen billion gigabytes. Which, as the old saying goes, ought to be enough for anybody.

A noteworthy exception to the ILP32/LP64 model is Intel's IA-64 architecture. This family of processors, comprising the Itanium and Itanium 2, does not support 32-bit computing natively. Instead, 32-bit programs are executed by IA-64 processors in a special emulation mode, which adversely affects their performance. In contrast, other 64-bit architectures can run either 32-bit or 64-bit code with no inherent speed penalty.

All other things being equal, 64-bit code tends to be slower than 32-bit code. Microprocessor caches are fixed in size, and because pointers and longs double in width under LP64, 64-bit computing eats up cache faster than 32-bit computing. The 32-bit version of a pointer-heavy program can therefore be measurably faster than its 64-bit counterpart, because it makes more efficient use of processor caches.


64-bit is a term used to describe a computer architecture based around an arithmetic and logic unit (ALU), registers, and data bus which are 64 bits wide.

64-bit processors are quite common; examples include the Digital Alpha, versions of the Sun SPARC, and the IBM AS/400. The PowerPC and Intel lines are expected to move to 64 bits at their next generation, with the PPC 620 and Intel's IA-64 respectively.

A 64-bit address bus allows the processor to address 18 billion gigabytes, as opposed to the mere 4 gigabytes allowed with 32 bits. Floating-point calculations can also be more accurate.

Mixed architectures are often used, with 32-bit integers and addresses but 64-bit floating-point numbers.

Taking full advantage of a 64-bit CPU requires a 64-bit operating system, but backward-compatible architectures can also run a 32-bit OS. For example, processors based on the AMD Hammer architecture can run Intel x86-compatible software natively, whereas processors based on the IA-64 architecture need to use software emulation.


Partly based on material from FOLDOC, used with permission.