Commodity computing

Commodity computing (also known as commodity cluster computing) involves the use of large numbers of already-available computing components for parallel computing, to get the greatest amount of useful computation at low cost.[1] It is computing done on commodity computers, as opposed to on high-cost superminicomputers or boutique computers. Commodity computers are computer systems, manufactured by multiple vendors, that incorporate components based on open standards.[citation needed] Such systems are said[by whom?] to be based on commodity components, since the standardization process promotes lower costs and less differentiation among vendors' products. Standardization and decreased differentiation lower the cost of switching or exiting from any given vendor, increasing purchasers' leverage and preventing lock-in.

A governing principle of commodity computing is that it is preferable to have more low-performance, low-cost hardware working in parallel (scalar computing), such as AMD x86 CISC processors,[2] than fewer high-performance, high-cost hardware items,[3] such as IBM POWER7 or Sun-Oracle's SPARC[4] RISC systems. At some point, a cluster contains so many discrete systems that hardware failures become routine no matter how reliable each individual machine is: because the failure rates of the nodes add up, the cluster-wide mean time between failures (MTBF) shrinks in proportion to the cluster's size, so fault tolerance must be built into the controlling software.[5][6] Purchases should be optimized for cost per unit of performance, not just for absolute performance per CPU at any cost.[citation needed]
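
A minimal sketch in Python of the two quantities this reasoning turns on. The numbers are illustrative assumptions, not figures from the article: cluster-wide MTBF under the standard assumption of independent, constant-rate node failures, and price divided by a benchmark performance score.

    # A minimal sketch (illustrative numbers only, not from the article).
    # Two quantities behind the reasoning above: cluster-wide MTBF and
    # cost per unit of performance.

    def cluster_mtbf(node_mtbf_hours, node_count):
        # Assuming independent node failures at a constant rate, failure
        # rates add, so cluster MTBF is node MTBF divided by node count.
        return node_mtbf_hours / node_count

    def cost_per_performance(price, performance):
        # Price divided by a benchmark performance score; lower is better.
        return price / performance

    # 10,000 nodes, each with a 5-year (~43,800 hour) MTBF: the cluster
    # as a whole sees a failure roughly every 4.4 hours.
    print(cluster_mtbf(43_800, 10_000))          # -> 4.38

    # Hypothetical comparison: many cheap boxes can beat one expensive
    # machine on cost/performance even though each box is slower.
    print(cost_per_performance(2_000, 100))      # commodity server: 20.0
    print(cost_per_performance(200_000, 5_000))  # high-end server:  40.0

This is why, past a certain cluster size, software-level fault tolerance is unavoidable: the cluster as a whole is effectively never failure-free.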

History

The mid-1960s to early 1980s

The first computers were large, expensive and proprietary. The move towards commodity computing began when DEC introduced the PDP-8 in 1965. This computer was small and inexpensive enough that a department could purchase one without convening a meeting of the board of directors. An entire minicomputer industry sprang up to supply the demand for 'small' computers like the PDP-8. Unfortunately, each of the many brands of minicomputers had to stand on its own, because there was no software compatibility and very little hardware compatibility between the brands.

When the first general-purpose microprocessor was introduced in 1974, it immediately began chipping away at the low end of the computer market, replacing embedded minicomputers in many industrial devices.

This process accelerated in 1977 with the introduction of the first commodity-like microcomputer, the Apple II. With the development of the VisiCalc application in 1979, microcomputers broke out of the factory and began entering office suites in large quantities, but still through the back door.

The 1980s to mid-1990s

The IBM PC was introduced in 1981 and immediately began displacing Apple IIs in the corporate world, but commodity computing as we know it today began in earnest when Compaq developed the first true IBM PC compatible. More and more PC-compatible microcomputers came into big companies through the front door, and commodity computing was well established.

During the 1980s, microcomputers began displacing larger computers in a serious way. At first, price was the key justification, but by the late 1980s and early 1990s, VLSI semiconductor technology had evolved to the point where microprocessor performance began to eclipse that of discrete logic designs. Those traditional designs were limited by speed-of-light delay issues inherent in any CPU larger than a single chip, and performance alone began driving the success of microprocessor-based systems.
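
A back-of-the-envelope sketch of that speed-of-light argument, using illustrative numbers that are assumptions rather than figures from the article: signal propagation delay across a cabinet-sized, multi-board CPU versus a single die.

    # Illustrative numbers (assumptions, not from the article): signal
    # propagation delay across a multi-board CPU vs. a single chip.

    C = 3.0e8                 # speed of light in a vacuum, m/s
    SIGNAL_SPEED = 0.6 * C    # rough signal speed in copper traces, m/s

    def propagation_delay_ns(distance_m):
        # Time for a signal to cross the given distance, in nanoseconds.
        return distance_m / SIGNAL_SPEED * 1e9

    print(propagation_delay_ns(0.5))   # 0.5 m backplane: ~2.8 ns
    print(propagation_delay_ns(0.01))  # 1 cm die:        ~0.06 ns

    # At a 50 MHz clock (20 ns period), 2.8 ns of wire delay already eats
    # a sizable slice of every cycle; as clocks rise, a CPU spread across
    # boards cannot keep pace with one implemented on a single chip.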

By the mid-1990s, nearly all computers made were based on microprocessors, and the majority of general-purpose microprocessors were implementations of the x86 instruction set architecture. Although there was a time when every traditional computer manufacturer had its own proprietary microprocessor-based designs, there are only a few manufacturers of non-commodity computer systems today.

Today

Today, there are fewer and fewer general business computing requirements that cannot be met with off-the-shelf commodity computers. It is likely that the low end of the supermicrocomputer genre will continue to be pushed upward by increasingly powerful commodity microcomputers.

Deployment

Notable deployments of commodity computing include:

* ImageShack
* LinkedIn
* The New York Times
* Twitter
* Yahoo!

See also

References

  1. ^ John E. Dorband; Josephine Palencia Raytheon; Udaya Ranawake. "Commodity Computing Clusters at Goddard Space Flight Center" (PDF). Goddard Space Flight Center. http://spacejournal.ohio.edu/pdf/Dorband.pdf. Retrieved 2010-03-07. "The purpose of commodity cluster computing is to utilize large numbers of readily available computing components for parallel computing to obtaining the greatest amount of useful computations for the least cost. The issue of the cost of a computational resource is key to computational science and data processing at GSFC as it is at most other places, the difference being that the need at GSFC far exceeds any expectation of meeting that need."
  2. ^ http://www.computerworld.com/s/article/9154518/IBM_HP_servers_won_t_stop_x86_onslaught_on_Unix
  3. ^ http://research.google.com/pubs/DistributedSystemsandParallelComputing.html
  4. ^ ftp://ftp.software.ibm.com/common/ssi/pm/rg/n/poo03017usen/POO03017USEN.PDF
  5. ^ http://www.morganclaypool.com/doi/abs/10.2200/S00193ED1V01Y200905CAC006
  6. ^ http://insidehpc.com/2008/06/02/google-fellow-sheds-some-light-on-infrastructure-robustness-in-face-of-failure