Supercomputer

"A supercomputer is a device for turning compute-bound problems into I/O-bound problems." —Ken Batcher
The Cray-2: the world's fastest computer, 1985–1990.

A supercomputer is a computer that leads the world in processing capacity, particularly speed of calculation, at the time of its introduction. The first supercomputers were introduced in the 1960s and were designed primarily by Seymour Cray at Control Data Corporation (CDC), which led the market into the 1970s until Cray left to form his own company, Cray Research. With his new designs he then took over the supercomputer market, holding the top spot in supercomputing for 25 years (1965–1990). In the 1980s a large number of smaller competitors entered the market, in a parallel to the creation of the minicomputer market a decade earlier, but many of these disappeared in the mid-1990s "supercomputer market crash". Today, supercomputers are typically one-off custom designs produced by "traditional" companies such as IBM and HP, which purchased many of the 1980s companies to gain their experience, although Cray Inc. still specializes in building supercomputers.

The term supercomputer itself is rather fluid, and today's supercomputer tends to become tomorrow's also-ran, as can be seen from the world's first (non-solid-state) digital programmable electronic computer, Colossus, used to break some German ciphers in World War II. CDC's early machines were simply very fast single processors, some ten times the speed of the fastest machines offered by other companies. In the 1970s most supercomputers were built around a vector processor, and many of the newer players developed their own such processors at lower price points to enter the market. In the late 1980s and 1990s, attention turned from vector processors to massively parallel processing systems with thousands of simple CPUs, some of them off-the-shelf units and others custom designs. Today, parallel designs are based on "off the shelf" RISC microprocessors, such as the PowerPC or PA-RISC.

Software tools

Software tools for distributed processing include standard APIs such as MPI and PVM, and open-source software solutions such as Beowulf and openMosix, which facilitate the creation of a sort of "virtual supercomputer" from a collection of ordinary workstations or servers. Technologies like Rendezvous pave the way for the creation of ad hoc computer clusters. An example of this is the distributed rendering function in Apple's Shake compositing application: computers running the Shake software merely need to be in proximity to each other, in networking terms, to automatically discover and use each other's resources. While no one has yet built an ad hoc computer cluster that rivals even yesteryear's supercomputers, the line between desktop, or even laptop, and supercomputer is beginning to blur, and is likely to continue to blur as built-in support for parallelism and distributed processing increases in mainstream desktop operating systems. An easy programming language for supercomputers remains an open research topic in computer science.
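
As an illustration of the message-passing style these APIs support, the following is a minimal MPI sketch in C, not tied to any particular system described above; the problem (a partial harmonic sum) and the constants are purely illustrative. Each process computes part of the work and the results are combined on one node:

  /* Minimal MPI sketch: each process sums part of a range, then the
     partial sums are combined with a reduction. Compile with an MPI
     wrapper compiler (e.g. mpicc) and launch with mpirun. */
  #include <stdio.h>
  #include <mpi.h>

  int main(int argc, char **argv)
  {
      int rank, size;
      long i, n = 1000000;            /* illustrative problem size */
      double local = 0.0, total = 0.0;

      MPI_Init(&argc, &argv);
      MPI_Comm_rank(MPI_COMM_WORLD, &rank);
      MPI_Comm_size(MPI_COMM_WORLD, &size);

      /* Each process handles an interleaved slice of the index range. */
      for (i = rank; i < n; i += size)
          local += 1.0 / (double)(i + 1);

      /* Combine the partial sums on process 0. */
      MPI_Reduce(&local, &total, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);

      if (rank == 0)
          printf("harmonic sum of %ld terms = %f\n", n, total);

      MPI_Finalize();
      return 0;
  }

The same program runs unchanged on a Beowulf cluster of workstations or on a dedicated machine; only the launcher and the number of processes differ.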

Uses

Supercomputers are used for highly calculation-intensive tasks such as weather forecasting, climate research (including research into global warming), molecular modeling (computing the structures and properties of chemical compounds, biological macromolecules, polymers, and crystals), physical simulations (such as simulation of airplanes in wind tunnels, simulation of the detonation of nuclear weapons, and research into nuclear fusion), cryptanalysis, and the like. Military and scientific agencies are heavy users.

Design

Supercomputers traditionally gained their speed over conventional computers through the use of innovative designs that allow them to perform many tasks in parallel, as well as complex detail engineering. They tend to be specialised for certain types of computation, usually numerical calculations, and perform poorly at more general computing tasks. Their memory hierarchy is very carefully designed to ensure the processor is kept fed with data and instructions at all times—in fact, much of the performance difference between slower computers and supercomputers is due to the memory hierarchy design and componentry. Their I/O systems tend to be designed to support high bandwidth, with latency less of an issue, because supercomputers are not used for transaction processing.

As with all highly parallel systems, Amdahl's law applies, and supercomputer designs devote great effort to eliminating software serialization and to accelerating the remaining bottlenecks in hardware.
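
Stated explicitly, with s denoting the fraction of a program's work that must run serially and N the number of processors, Amdahl's law bounds the attainable speedup:

  S(N) = \frac{1}{s + (1 - s)/N}, \qquad S(N) \le \frac{1}{s}

so a program with even a 5% serial fraction can never be sped up by more than a factor of 20, no matter how many processors are added.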

Supercomputer challenges and technologies

  • A supercomputer generates heat and must be cooled. Cooling most supercomputers is a major HVAC problem.
  • Information cannot move faster than the speed of light between two parts of a supercomputer. For this reason, a supercomputer that is many meters across must have latencies between its components measured at least in the tens of nanoseconds. Seymour Cray's Cray supercomputer designs attempted to keep cable runs as short as possible for this reason.
  • Supercomputers consume and produce massive amounts of data in a very short period of time. Much work is needed to ensure that this information can be transferred quickly and stored/retrieved correctly.

Technologies developed for supercomputers include:

Processing techniques

Vector processing techniques were first developed for supercomputers and continue to be used in specialist high-performance applications. They have also trickled down to the mass market in DSP architectures and SIMD processing instructions for general-purpose computers.
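
As a present-day echo of this idea, the sketch below is a minimal example of SIMD processing on a general-purpose computer, assuming a C compiler targeting an x86 machine with SSE available; the array names and sizes are illustrative only. Two arrays of floats are added four elements at a time:

  #include <stdio.h>
  #include <xmmintrin.h>   /* SSE intrinsics */

  int main(void)
  {
      /* Sizes are a multiple of 4 so each iteration fills a 4-wide vector. */
      float a[8] = {1, 2, 3, 4, 5, 6, 7, 8};
      float b[8] = {8, 7, 6, 5, 4, 3, 2, 1};
      float c[8];

      for (int i = 0; i < 8; i += 4) {
          __m128 va = _mm_loadu_ps(&a[i]);  /* load 4 floats */
          __m128 vb = _mm_loadu_ps(&b[i]);
          __m128 vc = _mm_add_ps(va, vb);   /* 4 additions in one instruction */
          _mm_storeu_ps(&c[i], vc);
      }

      for (int i = 0; i < 8; i++)
          printf("%g ", c[i]);
      printf("\n");
      return 0;
  }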

Operating systems

Their operating systems, often variants of UNIX, are every bit as complex as those for smaller machines, if not more so. Their user interfaces tend to be less developed, however, as the OS developers have limited programming resources: because these computers, often priced at millions of dollars, are sold to a very small market, their R&D budgets are often limited. Interestingly, this has been a continuing trend throughout the supercomputer industry, with former technology leaders such as Silicon Graphics taking a back seat to companies such as Nvidia, which have been able to produce cheap, feature-rich, high-performance, and innovative products thanks to the vast number of consumers driving their R&D.

Historically, until the period from 1982 to 1985, supercomputers usually sacrificed instruction-set compatibility and code portability for performance (speed and memory). For the most part, supercomputers up to this time (unlike high-end mainframes) had vastly different operating systems. The Cray-1 alone had at least six different and controversial operating systems, largely unknown to the general computing community. Similarly, different and mutually incompatible vectorizing and parallelizing Fortran compilers existed. This trend would have continued with the ETA-10 were it not for the initial instruction-set compatibility between the Cray-1 and the Cray X-MP and the adoption of the UNIX operating system.

For this reason, the highest-performance systems of the future are likely to have a UNIX flavor, but with incompatible, system-unique features (especially for the highest-end systems at secure facilities).

Programming

The parallel architectures of supercomputers often dictate the use of special programming techniques to exploit their speed. Special-purpose Fortran compilers can often generate faster code than C or C++ compilers, so Fortran remains the language of choice for scientific programming, and hence for most programs run on supercomputers. To exploit the parallelism of supercomputers, programming environments such as PVM and MPI for loosely connected clusters and OpenMP for tightly coordinated shared-memory machines are used.
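
As a small illustration of the shared-memory style, the following sketch (in C rather than Fortran, purely for brevity; the loop and constants are illustrative) parallelizes a reduction with OpenMP. Compilers that support OpenMP typically enable it with a flag such as -fopenmp:

  /* OpenMP sketch: the loop iterations are divided among the threads of
     a shared-memory machine; each thread accumulates into a private copy
     of the reduction variable, and the partial results are combined. */
  #include <stdio.h>
  #include <omp.h>

  int main(void)
  {
      const int n = 1000000;
      double sum = 0.0;

      #pragma omp parallel for reduction(+:sum)
      for (int i = 0; i < n; i++)
          sum += 1.0 / (i + 1.0);

      printf("threads available: %d, harmonic sum: %f\n",
             omp_get_max_threads(), sum);
      return 0;
  }

The contrast with the earlier MPI sketch is the design point: OpenMP assumes all threads share one address space, while MPI assumes nothing is shared and everything moves by explicit messages.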

Types of general-purpose supercomputers

There are three main classes of general-purpose supercomputers:

  • Vector processing machines allow the same (arithmetical) operation to be carried out on a large amount of data simultaneously.
  • Tightly connected cluster computers use specially developed interconnects to have many processors and their memory communicate with each other, typically in a NUMA architecture. Processors and networking components are engineered from the ground up for the supercomputer. The fastest general-purpose supercomputers in the world today use this technology.
  • Commodity clusters use a large number of commodity PCs, interconnected by high-bandwidth low-latency local area networks.

As of 2002, Moore's Law and economies of scale are the dominant factors in supercomputer design: a single modern desktop PC is now more powerful than a 15-year-old supercomputer, and at least some of the design tricks that allowed past supercomputers to outperform contemporary desktop machines have now been incorporated into commodity PCs. Furthermore, the costs of chip development and production make it uneconomical to design custom chips for a small run, and favor mass-produced chips that have enough demand to recoup the cost of production.

Additionally, many problems carried out by supercomputers are particularly suitable for parallelization (in essence, splitting up into smaller parts to be worked on simultaneously) and, particularly, fairly coarse-grained parallelization that limits the amount of information that needs to be transferred between independent processing units. For this reason, traditional supercomputers can be replaced, for many applications, by "clusters" of computers of standard design which can be programmed to act as one large computer.

Special-purpose supercomputers

Special-purpose supercomputers are high-performance computing devices with a hardware architecture dedicated to a single problem. This allows the use of specially programmed FPGA chips or even custom VLSI chips, allowing higher price/performance ratios by sacrificing generality. They are used for applications such as astrophysics computation and brute-force codebreaking.

Examples of special-purpose supercomputers:

The fastest supercomputers today

Blue Gene/L

The speed of a supercomputer is generally measured in "Linpack flops" (floating point operations per second); this measurement is based on a particular benchmark, which mimics a class of real-world problems, but is significantly easier to compute than a majority of real-world problems.
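
To make the unit concrete, the toy sketch below is not the LINPACK benchmark itself (which times the solution of a dense system of linear equations); it is just an illustrative loop with a known operation count, divided by the elapsed CPU time to report a flops figure:

  #include <stdio.h>
  #include <time.h>

  int main(void)
  {
      const long n = 100000000L;        /* illustrative iteration count */
      double x = 0.0, t;

      clock_t start = clock();
      for (long i = 0; i < n; i++)
          x = x * 1.0000001 + 1.0;      /* one multiply and one add per iteration */
      t = (double)(clock() - start) / CLOCKS_PER_SEC;

      /* 2 floating-point operations per iteration */
      printf("%ld iterations in %.3f s: about %.0f flops (result %g)\n",
             n, t, 2.0 * n / t, x);
      return 0;
  }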

Current fastest

As of November 9, 2004, the fastest supercomputer in a single installation is IBM's Blue Gene/L prototype, with 32,768 processors. It is capable of 70.72 teraflops. The Blue Gene/L prototype is a customized version of IBM's PowerPC architecture. The prototype currently sits at IBM's Rochester, Minnesota facility, but production versions will be installed at various sites, including Lawrence Livermore National Laboratory (LLNL). The LLNL system is expected to achieve at least 360 teraflops, and a future update will take it to 1.5 petaflops.

Past record holders

Prior to Blue Gene/L, the fastest supercomputer was the Earth Simulator at the Yokohama Institute for Earth Sciences, Japan. It is a cluster of 640 custom-designed 8-processor vector computers based on the NEC SX-6 architecture (a total of 5120 processors). It uses a customized version of the UNIX operating system.

At the time of its introduction, the Earth Simulator's performance was over five times that of the previous fastest supercomputer, the cluster computer ASCI White at Lawrence Livermore National Laboratory. The Earth Simulator held the #1 position for two and a half years.

A list of the 500 fastest supercomputers, the TOP500, is maintained at http://www.top500.org/ .

Quasi-supercomputing

Some types of large-scale computing take the clustered supercomputing concept to an extreme. One such example, the SETI@home distributed computing project, has an average processing power of 72.53 teraflops [1], making it the fastest aggregate "supercomputer" in the world as of November 15, 2004.

Google's search engine system may be faster, with an estimated total processing power of between 126 and 316 teraflops.[2]

Timeline of supercomputers

Period | Supercomputer | Peak speed | Location
1943–1944 | Colossus | 5000 characters per second | Bletchley Park, England
1944–1950 | ENIAC | 5000 add/sub per second per accumulator pair (20 accumulators) | Aberdeen Proving Ground, Maryland
1945–1950 | Manchester Mark I | 500 instructions per second | University of Manchester, England
1950–1955 | MIT Whirlwind | 20 KIPS (CRT memory), 40 KIPS (core) | Massachusetts Institute of Technology, Cambridge, MA
1956–1958 | IBM 704 | 40 KIPS, 12 kiloflops |
1958–1959 | IBM 709 | 40 KIPS, 12 kiloflops |
1959–1960 | IBM 7090 | 210 kiloflops | U.S. Air Force BMEWS (RADC), Rome, NY
1960–1961 | LARC | 500 kiloflops (2 CPUs) | Lawrence Livermore National Laboratory, California
1961–1964 | IBM 7030 "Stretch" | 1.2 MIPS, ~600 kiloflops | Los Alamos National Laboratory, New Mexico
1965–1969 | CDC 6600 | 10 MIPS, 3 megaflops | Lawrence Livermore National Laboratory, California
1969–1975 | CDC 7600 | 36 megaflops | Lawrence Livermore National Laboratory, California
1974–1975 | CDC Star-100 | 100 megaflops (vector), ~2 megaflops (scalar) | Lawrence Livermore National Laboratory, California
1975–1983 | Cray-1 | 80 megaflops (vector), 72 megaflops (scalar) | Los Alamos National Laboratory, New Mexico (1976)
1975–1982 | ILLIAC IV | 150 megaflops, <100 megaflops (average); had serious reliability problems | NASA Ames Research Center, California
1981–1983 | CDC Cyber-205 | 400 megaflops (vector), average much lower |
1983–1985 | Cray X-MP | 500 megaflops (4 CPUs) | Los Alamos National Laboratory, New Mexico
1985–1990 | Cray-2 | 1.95 gigaflops (4 CPUs), 3.9 gigaflops (8 CPUs) | Lawrence Livermore National Laboratory and NASA; Lawrence Berkeley National Laboratory (the only 8-CPU system)
1989–1990 | ETA-10G | 10.3 gigaflops (vector) (8 CPUs), average much lower |
1990–1995 | Fujitsu Numerical Wind Tunnel | 236 gigaflops | National Aerospace Lab
1995–2000 | Intel ASCI Red | 2.15 teraflops | Sandia National Laboratories, New Mexico
2000–2002 | IBM ASCI White, SP Power3 375 MHz | 7.226 teraflops | Lawrence Livermore National Laboratory, California
2002–2004 | Earth Simulator | 35.86 teraflops | Yokohama Institute for Earth Sciences, Japan
October 2004 | SGI Altix | 51.87 teraflops | NASA Ames Research Center
2004 | Blue Gene/L prototype | 70.72 teraflops | IBM, Rochester, Minnesota
Q1 2005 (planned) | Blue Gene/L | 280–360 teraflops (est.) | Lawrence Livermore National Laboratory, California

Forthcoming supercomputers:

See also

General concepts, history:

Companies, computers:

Other classes of computer: