
64-bit computing


In computing, a 64-bit component is one in which data is stored in chunks (computer words) of sixty-four bits (binary digits) each. The term most often describes a computer's CPU: e.g. "The Sun UltraSPARC is a 64-bit processor". The CPU's external data bus or address bus may be narrower (i.e. pass data in chunks of fewer bits), and the term is often used to describe the width of these buses as well; many current machines with 32-bit processors use 64-bit buses, for instance. The term may also refer to the size of an instruction in the computer's instruction set, or to any other item of data. Without further qualification, however, a computer architecture described as "64-bit" generally has registers that are 64 bits wide and thus directly supports dealing both internally and externally with 64-bit "chunks" of data.

Most CPUs are designed so that the contents of a single register can point to the address (location) of data in the computer's virtual memory. Therefore, the total number of addresses in the virtual memory - the total amount of data the computer can keep in its working area - is determined by the width of these registers. Beginning in the 1960s with the IBM System/360, then (amongst many others) the DEC VAX minicomputer in the 1970s, and then with the Intel 80386 in the mid-1980s, a de facto consensus developed that 32 bits was a convenient register size. A 32-bit register meant that 2^32 addresses - 4 gigabytes of memory - could be referenced. At the time these architectures were settled on, 4 gigabytes of RAM was so far beyond the typical quantities available in installations that this was considered to be enough "headroom" for addressing, as well as being an appropriate size to work with for other reasons: 4 billion integers are enough to assign unique references to most physically countable things, in databases for example.
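As a rough illustration (plain arithmetic, not part of the original text), the following C fragment computes the limit that a 32-bit register places on a byte-addressed memory space, and reports the pointer width of whatever machine compiles it:

    #include <stdio.h>

    int main(void)
    {
        /* An n-bit register can distinguish 2^n byte addresses.
           For n = 32 that is 4,294,967,296 addresses, i.e. 4 gigabytes
           of byte-addressable memory. */
        unsigned long long addresses = 1ULL << 32;

        printf("32-bit register: %llu addresses (%llu GiB)\n",
               addresses, addresses >> 30);

        /* The pointer width of the machine compiling this implies the
           corresponding limit on its own address space. */
        printf("pointers here are %zu bits wide\n", sizeof(void *) * 8);
        return 0;
    }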

However, with the march of time and the continual reductions in the cost of memory (see Moore's Law), by the early 1990s installations with quantities of RAM approaching 4 gigabytes began to appear, and the use of virtual memory spaces greater than the 4-gigabyte limit became desirable for handling certain types of problems. In response, a number of companies began releasing new families of chips with 64-bit architectures, initially for supercomputers and high-end server machines. 64-bit computing has gradually drifted down to the personal computer desktop, with Apple's desktop line using a 64-bit processor as of 2004, AMD's x86-64 architecture becoming common in high-end Windows PCs, and Intel adopting the same architecture for its future desktop CPUs.

While 64-bit architectures indisputably make working with huge data sets - in applications such as digital video, much scientific computing, and large databases - easier for system programmers and more efficient for system users, there has been considerable debate as to whether they will be faster than comparably priced 32-bit systems for other tasks, or, where the 64-bit processors support running programs in 32-bit compatibility modes, whether the 32-bit modes will be faster.

Theoretically, some programs could well be faster in 32-bit mode. Instructions for 64-bit computing take up more storage space than the earlier 32-bit ones, so it is possible that some 32-bit programs will fit into the CPU's high-speed cache where the 64-bit versions will not. However, programs whose data naturally fits in 64-bit chunks, as in much scientific computing, will be faster because the CPU is designed to process such information directly rather than requiring the program to perform multiple steps. Such assessments are complicated by the fact that, in the process of designing the new 64-bit architectures, the instruction set designers have taken the opportunity to make other changes that address some of the deficiencies of the older instruction sets by adding new performance-enhancing facilities (such as the extra registers in the x86-64 design).

Converting application software written in a high-level language from a 32-bit architecture to a 64-bit architecture varies in difficulty. One common problem is that in the C programming language, and its descendant C++, on 32-bit machines pointers (variables that store memory addresses) and two of the built-in numeric "data types" - "int" and "long" - all refer to a 32-bit-wide chunk of data. Some (usually poorly written) programs transfer quantities between these data types on the assumption that no information will be lost. In many programming environments on 64-bit machines, however, "int" variables are still 32 bits wide while "long"s and pointers are 64 bits wide. In most cases the modifications required are relatively minor and straightforward, and many well-written programs can simply be recompiled for the new environment without any changes at all.
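As a minimal sketch of the assumption described above (illustrative C, not taken from any particular program), the following stores a pointer in an "int" and converts it back. In an environment where "int" is 32 bits but "long" and pointers are 64 bits, the round trip silently discards the upper half of the address; on a 32-bit environment it behaves as the programmer expected.

    #include <stdio.h>
    #include <stdint.h>

    int main(void)
    {
        long value = 42;
        long *p = &value;

        /* The 32-bit-era assumption: an int is wide enough to hold a
           pointer.  Where int is 32 bits and pointers are 64 bits,
           the cast below discards the upper 32 bits of the address. */
        int as_int = (int)(intptr_t)p;
        long *back = (long *)(intptr_t)as_int;

        printf("sizeof(int) = %zu, sizeof(long) = %zu, sizeof(void *) = %zu\n",
               sizeof(int), sizeof(long), sizeof(void *));
        printf("round trip through int %s the pointer\n",
               back == p ? "preserved" : "lost part of");
        return 0;
    }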


64-bit processor architectures include:
DEC Alpha
Sun UltraSPARC
Intel IA-64
AMD x86-64
64-bit PowerPC

See also:


A 64-bit architecture is a computer architecture based around an arithmetic and logic unit (ALU), registers, and a data bus which are 64 bits wide.

64-bit processors are quite common, e.g. the Digital Alpha, versions of Sun SPARC, and the IBM AS/400. PowerPC and Intel are expected to move to 64 bits at their next generation - the PPC 620 and Intel's IA-64.

A 64-bit address bus allows the processor to address about 18 million terabytes (18 exabytes), as opposed to the mere 4 gigabytes allowed with 32 bits. Floating-point calculations can also be more accurate.
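For scale, a short calculation (illustrative only, using the same arithmetic as above) compares the two address-space sizes:

    #include <stdio.h>

    int main(void)
    {
        /* 2^64 bytes, written as a floating-point constant because the
           value does not fit in a 64-bit unsigned integer. */
        double bytes64 = 18446744073709551616.0;  /* 2^64 */
        double bytes32 = 4294967296.0;            /* 2^32 */

        printf("64-bit addressing: about %.1f million terabytes (%.1f exabytes)\n",
               bytes64 / 1e12 / 1e6, bytes64 / 1e18);
        printf("32-bit addressing: about %.1f gigabytes\n", bytes32 / 1e9);
        return 0;
    }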

Mixed architectures are also often used, with 32-bit integers and addresses but 64-bit floating-point numbers.

Taking full advantage of a 64-bit CPU requires a 64-bit operating system, but backward-compatible architectures can also run a 32-bit OS. For example, processors based on the AMD Hammer architecture can run Intel x86-compatible software directly, whereas processors based on the IA-64 architecture need to use software emulation.

This article is based on material taken from the Free On-line Dictionary of Computing prior to 1 November 2008 and incorporated under the "relicensing" terms of the GFDL, version 1.3 or later.