64-bit computing
Revision as of 23:16, 19 February 2004
64-bit means using computer words containing sixty-four bits. The adjective most often refers to the number of bits used internally by a computer's CPU, e.g. "The Sun UltraSPARC is a 64-bit processor". A processor's external data bus or address bus may be narrower, and the term is often used to describe the width of these buses as well; many current 32-bit machines use 64-bit buses, for instance. The term may also refer to the size of an instruction in the computer's instruction set, or to any other item of data.
In most modern 64-bit architectures, both 32-bit and 64-bit computing are supported. A program running in a 32-bit process is said to use the ILP32 model, the name referring to integers, longs, and pointers. A 64-bit process is said to use the LP64 model. In ILP32, integers, longs, and pointers are all 32 bits wide, capable of holding values up to 2^32 - 1 (unsigned) or 2^31 - 1 (signed). In LP64, integers remain 32 bits wide, but longs and pointers are 64 bits wide, yielding values up to 2^64 - 1 (unsigned) or 2^63 - 1 (signed).
Each memory address in a program's virtual memory address space is numbered, starting with zero. An ILP32 program can therefore access as much as 2^32 bytes of virtual memory, or four gigabytes. That was an awful lot of memory not too long ago, but today many users find it downright cramped. An LP64 program, on the other hand, can address up to 2^64 bytes of virtual memory, or sixteen exabytes, which, as the old saying goes, ought to be enough for anybody.
A noteworthy exception to the ILP32/LP64 model is Intel's IA-64 architecture. This family of processors, comprising the Itanium and Itanium 2, does not support 32-bit computing natively at all. Instead, 32-bit programs are executed by IA-64 processors in a special emulation mode, which adversely affects their performance. In contrast, other 64-bit architectures can run either 32-bit or 64-bit code with no inherent speed penalty.
All other things being equal, 64-bit code is slower than 32-bit code: microprocessor caches are fixed in size, and 64-bit pointers and longs consume cache twice as fast as their 32-bit counterparts. A 32-bit version of a program will therefore often be measurably faster than its 64-bit counterpart, because it makes more efficient use of the processor caches. However, numerical applications in technical computing will normally benefit from 64-bit arithmetic, since frequently used IEEE 754 64-bit ("double" precision) floating-point operations can be handled natively instead of being emulated in software, which is generally slower.
64-bit processor architectures include:
- The DEC Alpha architecture
- Intel's IA-64 architecture
- AMD's AMD64 architecture
- Sun's UltraSPARC architecture
- IBM's POWER architecture
- MIPS Technologies' MIPS IV, MIPS V, and MIPS64 architectures
- IBM/Motorola's PowerPC architecture (starting with the PowerPC 970 processor)
64-bit is a computer architecture based around an arithmetic and logic unit (ALU), registers, and data bus that are 64 bits wide.
64-bit processors are quite common, e.g. the Digital Alpha, some versions of Sun's SPARC, and the IBM AS/400. The PowerPC and Intel lines are expected to move to 64 bits with their next generations, the PPC 620 and Intel's IA-64.
A 64-bit address bus allows the processor to address roughly 16 million terabytes (16 exabytes), as opposed to the mere 4 gigabytes allowed with 32 bits. Floating-point calculations can also be more accurate.
Mixed architectures are also often used, with 32-bit integers and addresses but 64-bit floating-point values.
Taking full advantage of a 64-bit CPU requires a 64-bit operating system, though backward-compatible architectures can also run a 32-bit OS. For example, processors based on the AMD Hammer architecture can run Intel x86-compatible software natively, whereas processors based on the IA-64 architecture must use software emulation.
Partly based on material from FOLDOC, used with permission.