Larrabee (microarchitecture)
Larrabee is the codename for a cancelled GPGPU chip that Intel was developing separately from its current line of integrated graphics accelerators. It is named after either Mount Larrabee or Larrabee State Park in Whatcom County, Washington, near the town of Bellingham.[1][2] The chip was to be released in 2010 as the core of a consumer 3D graphics card, but these plans were cancelled due to delays and disappointing early performance figures.[3][4] The project to produce a GPU retail product directly from the Larrabee research project was terminated in May 2010[5] and its technology was passed on to the Xeon Phi. The Intel MIC multiprocessor architecture announced in 2010 inherited many design elements from the Larrabee project, but does not function as a graphics processing unit; the product is intended as a co-processor for high performance computing.
Almost a decade later, on June 12, 2018, the idea of an Intel dedicated GPU was revived (as Intel Xe) with Intel's plan to create a discrete GPU set to launch by 2020.[6] Whether this development is connected to Larrabee remains uncertain, however.
Project status
On December 4, 2009, Intel officially announced that the first-generation Larrabee would not be released as a consumer GPU product.[7] Instead, it was to be released as a development platform for graphics and high-performance computing. The official reason for the strategic reset was attributed to delays in hardware and software development.[8] On May 25, 2010, the Technology@Intel blog announced that Larrabee would not be released as a GPU, but instead would be released as a product for high-performance computing competing with the Nvidia Tesla.[9]
The project to produce a GPU retail product directly from the Larrabee research project was terminated in May 2010.[5] The Intel MIC multiprocessor architecture announced in 2010 inherited many design elements from the Larrabee project, but does not function as a graphics processing unit; the product is intended as a co-processor for high-performance computing. The prototype card was named Knights Ferry; a production card named Knights Corner, built on a 22 nm process, was planned for production in 2012 or later.[citation needed]
Comparison with competing products
Larrabee can be considered a hybrid between a multi-core CPU and a GPU, and has similarities to both. Its coherent cache hierarchy and x86 architecture compatibility are CPU-like, while its wide SIMD vector units and texture sampling hardware are GPU-like.
As a GPU, Larrabee would have supported traditional rasterized 3D graphics (Direct3D & OpenGL) for games. However, its hybridization of CPU and GPU features should also have been suitable for general purpose GPU (GPGPU) or stream processing tasks. For example, it might have performed ray tracing or physics processing,[10] in real time for games or offline for scientific research as a component of a supercomputer.[11]
Larrabee's early presentation drew some criticism from GPU competitors. At NVISION 08, an Nvidia employee called Intel's SIGGRAPH paper about Larrabee "marketing puff" and quoted an industry analyst (Peter Glaskowsky) who speculated that the Larrabee architecture was "like a GPU from 2006".[12] By June 2009, Intel claimed that prototypes of Larrabee were on par with the Nvidia GeForce GTX 285.[13] Justin Rattner, Intel CTO, delivered a keynote at the Supercomputing 2009 conference on November 17, 2009, during which he demonstrated an overclocked Larrabee processor topping one teraFLOPS in performance. He claimed this was the first public demonstration of a single-chip system exceeding one teraFLOPS, and pointed out that this was early silicon, leaving open the question of the architecture's eventual performance. Because this figure was only about one fifth that of available competing graphics boards, Larrabee was cancelled "as a standalone discrete graphics product" on December 4, 2009.[3]
Differences with contemporary GPUs
Larrabee was intended to differ from older discrete GPUs such as the GeForce 200 Series and the Radeon 4000 series in three major ways:
- It was to use the x86 instruction set with Larrabee-specific extensions.[14]
- It was to feature cache coherency across all its cores.[14]
- It was to include very little specialized graphics hardware, instead performing tasks like z-buffering, clipping, and blending in software, using a tile-based rendering approach[14] (a sketch of the tile-binning idea follows this list).
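Intel never published Larrabee's renderer, but the binning step at the heart of any tile-based software renderer is straightforward to sketch. The following is a minimal, hypothetical illustration in C++; the function name, the parameters, and the bounding-box-only tile test are assumptions for clarity, not Larrabee's actual code:

```cpp
#include <algorithm>
#include <cstdint>
#include <vector>

// A screen-space triangle; coordinates are in pixels.
struct Triangle { float x[3], y[3]; };

// Bin each triangle into every fixed-size screen tile its bounding box
// overlaps. Each bin can then be rasterized independently, one tile per
// core, which is the essence of a tile-based software renderer.
std::vector<std::vector<uint32_t>> binTriangles(
    const std::vector<Triangle>& tris, int screenW, int screenH, int tile)
{
    const int tilesX = (screenW + tile - 1) / tile;
    const int tilesY = (screenH + tile - 1) / tile;
    std::vector<std::vector<uint32_t>> bins(tilesX * tilesY);

    for (uint32_t i = 0; i < tris.size(); ++i) {
        const Triangle& t = tris[i];
        // Clamp the triangle's bounding box to the screen.
        const float minX = std::max(0.0f, std::min({t.x[0], t.x[1], t.x[2]}));
        const float maxX = std::min(float(screenW - 1), std::max({t.x[0], t.x[1], t.x[2]}));
        const float minY = std::max(0.0f, std::min({t.y[0], t.y[1], t.y[2]}));
        const float maxY = std::min(float(screenH - 1), std::max({t.y[0], t.y[1], t.y[2]}));
        if (minX > maxX || minY > maxY) continue; // entirely off-screen

        for (int ty = int(minY) / tile; ty <= int(maxY) / tile; ++ty)
            for (int tx = int(minX) / tile; tx <= int(maxX) / tile; ++tx)
                bins[ty * tilesX + tx].push_back(i);
    }
    return bins;
}
```

Binning by bounding box conservatively over-assigns triangles to tiles; a production renderer would refine the test with per-tile edge equations before shading each bin.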
This was expected to make Larrabee more flexible than contemporary GPUs, allowing more differentiation in appearance between games or other 3D applications. Intel's SIGGRAPH 2008 paper mentioned several rendering features that were difficult to achieve on contemporary GPUs: render target read, order-independent transparency, irregular shadow mapping, and real-time raytracing.[14]
More recent GPUs such as ATI's Radeon HD 5xxx and Nvidia's GeForce 400 Series feature increasingly broad general-purpose computing capabilities via DirectX 11 DirectCompute and OpenCL, as well as Nvidia's proprietary CUDA technology, giving them many of the capabilities of Larrabee.
Differences with CPUs
The x86 processor cores in Larrabee differed in several ways from the cores in contemporary Intel CPUs such as the Core 2 Duo or Core i7:
- Its x86 cores were based on the much simpler P54C Pentium design, which was still being maintained for use in embedded applications.[15] The P54C-derived core is superscalar but does not include out-of-order execution, though it was updated with modern features such as x86-64 support,[14] similar to the Bonnell microarchitecture used in Atom. In-order execution means lower performance for individual cores, but since they are smaller, more can fit on a single chip, increasing overall throughput. Execution is also more deterministic, so instruction and task scheduling can be done by the compiler.
- Each core contained a 512-bit vector processing unit, able to process 16 single-precision floating-point numbers at a time. This is similar to, but four times larger than, the SSE units on most x86 processors, with additional features like scatter/gather instructions and a mask register designed to make using the vector unit easier and more efficient. Larrabee was to derive most of its number-crunching power from these vector units[14] (a sketch using comparable AVX-512 intrinsics appears at the end of this section).
- It included one major fixed-function graphics hardware feature: texture sampling units. These perform trilinear and anisotropic filtering and texture decompression.[14]
- It had a 1024-bit (512-bit each way) ring bus for communication between cores and to memory.[14] This bus could be configured in two modes, to support Larrabee products with 16 or more cores, or with fewer than 16 cores.[16]
- It included explicit cache control instructions to reduce cache thrashing during streaming operations that read/write data only once.[14] Explicit prefetching into the L2 or L1 cache was also supported.
- Each core supported four-way interleaved multithreading, with four copies of each processor register.[14]
Theoretically, Larrabee's x86 processor cores would have been able to run existing PC software, or even operating systems. A different version of the processor might have sat in motherboard CPU sockets using QuickPath,[17] but Intel never announced any plans for this. Though Larrabee's native C/C++ compiler included auto-vectorization and many applications were able to execute correctly after being recompiled, maximum efficiency was expected to require code optimized using C++ vector intrinsics or inline Larrabee assembly code.[14] However, as in all GPGPUs, not all software would have benefited from the vector processing unit. One tech journalism site claims that Larrabee's graphics capabilities were planned to be integrated into CPUs based on the Haswell microarchitecture.[18]
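LRBni itself never shipped in mainstream compilers, but its 512-bit, 16-lane, masked SIMD model survives in AVX-512. As a rough illustration of the intrinsics-based style of optimization described above, a masked fused multiply-add might look like the following; AVX-512 is a stand-in here, not Larrabee's actual instruction set, and the function name and parameters are illustrative:

```cpp
#include <immintrin.h>
#include <cstddef>

// Compute out[i] = a[i] * b[i] + c[i] across 16 single-precision lanes
// at a time, under a per-lane write mask. AVX-512, used here, descends
// from LRBni and exposes a very similar 512-bit masked SIMD model.
void maskedFma(const float* a, const float* b, const float* c,
               float* out, std::size_t n, __mmask16 mask)
{
    for (std::size_t i = 0; i + 16 <= n; i += 16) {
        __m512 va = _mm512_loadu_ps(a + i);
        __m512 vb = _mm512_loadu_ps(b + i);
        __m512 vc = _mm512_loadu_ps(c + i);
        // Lanes whose mask bit is clear pass va through unchanged.
        __m512 r = _mm512_mask_fmadd_ps(va, mask, vb, vc);
        _mm512_storeu_ps(out + i, r);
    }
}
```

The mask register is what lets a 16-wide unit execute per-pixel branching in a shader: both sides of a branch run, with the mask selecting which lanes each side may write.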
Comparison with the Cell Broadband Engine
Larrabee's philosophy of using many small, simple cores was similar to the ideas behind the Cell processor. There are some further commonalities, such as the use of a high-bandwidth ring bus to communicate between cores.[14] However, there were many significant differences in implementation which were expected to make programming Larrabee simpler.
- The Cell processor includes one main processor which controls many smaller processors. Additionally, the main processor can run an operating system. In contrast, all of Larrabee's cores were the same, and Larrabee was not expected to run an OS.
- Each compute core in the Cell (SPE) has a local store, for which explicit (DMA) operations are used for all accesses to DRAM; ordinary reads and writes to DRAM are not allowed. In Larrabee, all on-chip and off-chip memory was under an automatically managed coherent cache hierarchy, so that its cores virtually shared a uniform memory space through standard copy (MOV) instructions. Larrabee cores each had 256 KB of local L2 cache, and an access that hit another core's L2 segment took longer.[14]
- Because of the cache coherency noted above, each program running on Larrabee had virtually a large linear memory, just as on a traditional general-purpose CPU, whereas an application for Cell must be programmed with the limited memory footprint of the local store associated with each SPE in mind, in exchange for theoretically higher bandwidth. However, since the local L2 is faster to access, an advantage could still be gained by using Cell-style programming methods.[citation needed]
- Cell uses DMA for data transfer to and from on-chip local memories, which enables explicit maintenance of overlays stored in local memory (bringing data closer to the core and reducing access latencies) but requires additional effort to maintain coherency with main memory. Larrabee, in contrast, used a coherent cache with special instructions for cache manipulation (notably cache eviction hints and prefetch instructions), which mitigated miss and eviction penalties and reduced cache pollution (e.g. for rendering pipelines and other stream-like computation) at the cost of additional traffic and overhead to maintain cache coherency[14] (see the sketch after this list).
- Each compute core in the Cell runs only one thread at a time, in order. A core in Larrabee ran up to four threads, but only one at a time; Larrabee's hyperthreading helped hide the latencies inherent to in-order execution.[citation needed]
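Larrabee's specific cache-hint instructions were part of LRBni and never appeared in mainstream toolchains, but standard x86 offers analogous primitives: software prefetch and non-temporal (cache-bypassing) stores. A minimal sketch of the streaming pattern described above, using those standard intrinsics as a loose analogy only (the function name and prefetch distance are illustrative):

```cpp
#include <immintrin.h>
#include <cstddef>

// Streaming scale: each element is read once and written once.
// _mm_prefetch pulls upcoming source data toward the core ahead of use;
// _mm_stream_ps writes around the cache (non-temporal store) so the
// single-use output does not evict lines that are still needed. These
// standard x86 primitives stand in for Larrabee's LRBni-specific
// cache-control instructions, which were never publicly released.
void streamScale(const float* src, float* dst, std::size_t n, float k)
{
    const __m128 vk = _mm_set1_ps(k);
    for (std::size_t i = 0; i + 4 <= n; i += 4) {
        // Prefetch ~16 iterations ahead (a tunable, illustrative distance).
        _mm_prefetch(reinterpret_cast<const char*>(src + i + 64), _MM_HINT_T0);
        _mm_stream_ps(dst + i, _mm_mul_ps(_mm_loadu_ps(src + i), vk)); // dst must be 16-byte aligned
    }
    _mm_sfence(); // order the non-temporal stores before later reads
}
```

The point of the non-temporal store is the one made in the list above: data written exactly once and never reread should bypass the cache rather than evict lines a rendering pipeline still needs.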
Comparison with Intel GMA
Intel began integrating a line of GPUs onto motherboards under the Intel GMA brand in 2004. Being integrated onto motherboards (newer versions, such as those released with Sandy Bridge, are incorporated onto the same die as the CPU), these chips were not sold separately. Though the low cost and power consumption of Intel GMA chips made them suitable for small laptops and less demanding tasks, they lacked the 3D graphics processing power to compete with contemporary Nvidia and AMD/ATI GPUs for a share of the high-end gaming computer market, the HPC market, or a place in popular video game consoles. In contrast, Larrabee was to be sold as a discrete GPU, separate from motherboards, and was expected to perform well enough for consideration in the next generation of video game consoles.[19][20]
The team working on Larrabee was separate from the Intel GMA team. The hardware was designed by a newly formed team at Intel's Hillsboro, Oregon, site, separate from the one that designed Nehalem. The software and drivers were written by another newly formed team; the 3D stack specifically was written by developers at RAD Game Tools (including Michael Abrash).[21]
The Intel Visual Computing Institute was to research basic and applied technologies that could be applied to Larrabee-based products.[22]
Projected performance data
Intel's SIGGRAPH 2008 paper describes cycle-accurate simulations (limitations of memory, caches, and texture units were included) of Larrabee's projected performance.[14] Graphs show how many 1 GHz Larrabee cores are required to maintain 60 frame/s at 1600×1200 resolution in several popular games: roughly 25 cores for Gears of War with no antialiasing, 25 cores for F.E.A.R. with 4× antialiasing, and 10 cores for Half-Life 2: Episode Two with 4× antialiasing. Intel claimed that Larrabee would likely run faster than 1 GHz, so these numbers do not represent physical cores but rather virtual timeslices of them. Another graph shows that performance on these games scales nearly linearly with the number of cores up to 32 cores; at 48 cores the performance drops to 90% of what would be expected if the linear relationship continued.[23]
A June 2007 PC Watch article suggested that the first Larrabee chips would feature 32 x86 processor cores and come out in late 2009, fabricated on a 45 nanometer process. Chips with a few defective cores due to yield issues would be sold as a 24-core version. Later in 2010, Larrabee would be shrunk for a 32 nanometer fabrication process to enable a 48-core version.[24]
From those projections, a theoretical maximum performance can be calculated as follows: 32 cores × 16 single-precision floats per SIMD unit × 2 FLOPs (fused multiply-add) per cycle × 2 GHz = 2 TFLOPS.
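The same arithmetic expressed as a short program for clarity (the 2 GHz clock and the 32- and 48-core configurations are the projections quoted above, not specifications of shipped hardware):

```cpp
#include <cstdio>

// Theoretical peak = cores × SIMD lanes × FLOPs per lane per cycle × clock.
// A fused multiply-add counts as 2 FLOPs per lane per cycle.
double peakGflops(int cores, int lanes, double clockGhz)
{
    return cores * lanes * 2.0 * clockGhz;
}

int main()
{
    std::printf("32 cores: %4.0f GFLOPS\n", peakGflops(32, 16, 2.0)); // 2048
    std::printf("48 cores: %4.0f GFLOPS\n", peakGflops(48, 16, 2.0)); // 3072
    return 0;
}
```

By this measure, the projected 48-core, 32 nm part would have had a theoretical peak of about 3 TFLOPS.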
Public demonstrations
A public demonstration of the Larrabee ray-tracing capabilities took place at the Intel Developer Forum in San Francisco on September 22, 2009. An experimental version of Enemy Territory: Quake Wars titled Quake Wars: Ray Traced was shown in real-time. The scene contained a ray-traced water surface that accurately reflected the surrounding objects, such as a ship and several flying vehicles.[25][26]
A second demo was given at the SC09 conference in Portland on November 17, 2009, during a keynote by Intel CTO Justin Rattner. A Larrabee card was able to achieve 1006 GFLOPS in a 4K×4K SGEMM calculation.
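For context, multiplying two N×N single-precision matrices (SGEMM) takes roughly 2N³ floating-point operations, so the demo figure can be sanity-checked with a few lines of arithmetic (a sketch only; the demo's exact methodology was not published):

```cpp
#include <cstdio>

int main()
{
    const double n = 4096.0;              // 4K × 4K matrices
    const double flop = 2.0 * n * n * n;  // ~1.374e11 FLOPs per SGEMM
    const double rate = 1006e9;           // sustained FLOP/s from the demo
    std::printf("%.1f GFLOPs per SGEMM, ~%.2f s per call\n",
                flop / 1e9, flop / rate);
    return 0;
}
```

At 1006 GFLOPS, one 4K×4K SGEMM completes in roughly 0.14 seconds. Note also that if the demo chip had 32 cores running near 1 GHz, its theoretical peak would be 32 × 16 × 2 × 1 GHz = 1024 GFLOPS, which would make 1006 GFLOPS nearly full utilization; the demo's actual core count and clock were not published.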
An engineering sample of a Larrabee card was procured and reviewed by Linus Sebastian in a video published on May 14, 2018. However, he was unable to get the card to produce video output, with the motherboard displaying POST code D6.[27]
See also
- Xeon Phi
- Intel740
- Intel GMA
- x86
- x86-64
- P5 (microarchitecture)
- Bonnell (microarchitecture)
- List of Intel CPU microarchitectures
- Intel MIC
- Nvidia Tesla
- AMD Fusion
- AVX-512
References
- ^ Forsyth, Tom. "SMACNI to AVX512 the life cycle of an instruction set" (PDF).
- ^ Forsyth, Tom (2020-12-22). "Tom Forsyth on Naming of Larrabee Instruction Set". Archived from the original on 2020-12-22. Retrieved 2020-12-22.
- ^ a b Crothers, Brooke (December 4, 2009). "Intel: Initial Larrabee graphics chip canceled". CNET. CBS Interactive.
- ^ Charlie Demerjian (December 4, 2009). "Intel kills consumer Larrabee, focuses on future variants - SemiAccurate". SemiAccurate.com. Retrieved April 9, 2017.
- ^ a b Smith, Ryan (May 25, 2010). "Intel Kills Larrabee GPU, Will Not Bring a Discrete Graphics Product to Market". AnandTech.
- ^ Smith, Ryan (June 13, 2018). "Intel's First (Modern) Discrete GPU Set For 2020". Anandtech. Retrieved November 4, 2018.
- ^ Stokes, Jon (5 December 2009). "Intel's Larrabee GPU put on ice, more news to come in 2010". Ars Technica. Condé Nast.
- ^ Smith, Ryan. "Intel Cancels Larrabee Retail Products, Larrabee Project Lives On". AnandTech.com. Retrieved April 9, 2017.
- ^ "Blogs@Intel - Intel Blogs". Intel.com. Retrieved April 9, 2017.
- ^ Stokes, Jon. "Intel picks up gaming physics engine for forthcoming GPU product". Ars Technica. Retrieved 2007-09-17.
- ^ Stokes, Jon. "Clearing up the confusion over Intel's Larrabee". Ars Technica. Retrieved June 1, 2007.
- ^ "Larrabee performance--beyond the sound bite". CNet.com. Retrieved April 9, 2017.
- ^ "Intel's 'Larrabee' on Par With GeForce GTX 285". TomsHardware.com. June 2, 2009. Retrieved April 9, 2017.
- ^ a b c d e f g h i j k l m n o Seiler, L.; Carmean, D.; Sprangle, E.; Forsyth, T.; Abrash, M.; Dubey, P.; Junkins, S.; Lake, A.; Sugerman, J.; Cavin, R.; Espasa, R.; Grochowski, E.; Juan, T.; Hanrahan, P. (August 2008). "Larrabee: A Many-Core x86 Architecture for Visual Computing" (PDF). ACM Transactions on Graphics. Proceedings of ACM SIGGRAPH 2008. 27 (3): 18:11. doi:10.1145/1360612.1360617. ISSN 0730-0301. Archived from the original (PDF) on 2021-03-07. Retrieved 2008-08-06.
- ^ "Intel's Larrabee GPU based on secret Pentagon tech, sorta [Updated]". Ars Technica. Retrieved 2008-08-06.
- ^ Glaskowsky, Peter. "Intel's Larrabee--more and less than meets the eye". CNET. Retrieved 2008-08-20.
- ^ Stokes, Jon. "Clearing up the confusion over Intel's Larrabee, part II". Ars Technica. Retrieved 2008-01-16.
- ^ "Intel to use Larrabee graphics on CPUs - SemiAccurate". SemiAccurate.com. August 19, 2009. Retrieved April 9, 2017.
- ^ Chris Leyton (August 13, 2008). "Intel's Larrabee Shaping Up For Next-Gen Consoles?". Archived from the original on August 17, 2008. Retrieved August 24, 2008.
- ^ Charlie Demerjian (February 5, 2009). "Intel Will Design PlayStation 4 GPU". Archived from the original on May 11, 2009. Retrieved August 28, 2009.
- ^ Shimpi, Anand Lal; Wilson, Derek. "Intel's Larrabee Architecture Disclosure: A Calculated First Move". AnandTech.com. Retrieved April 9, 2017.
- ^ Ng, Jansen (May 13, 2009). "Intel Visual Computing Institute Opens, Will Spur "Larrabee" Development". DailyTech. Archived from the original on May 16, 2009. Retrieved May 13, 2009.
- ^ Steve Seguin (August 20, 2008). "Intel's 'Larrabee' to Shakeup [sic] AMD, Nvidia". Tom's Hardware. Retrieved August 24, 2008.
- ^ "Intel is promoting the 32 core CPU "Larrabee"" (in Japanese). pc.watch.impress.co.jp. Retrieved August 6, 2008.translation
- ^ Geeks3D (2008-06-12). "Ray Traced Quake Wars". Archived from the original on 2021-09-17. Retrieved 2022-03-07.
- ^ "Light It Up! Quake Wars* Gets Ray Traced" (PDF). Archived (PDF) from the original on February 15, 2010. Retrieved 2022-03-07.
- ^ Linus Tech Tips (2018-05-14). "WE GOT INTEL'S PROTOTYPE GRAPHICS CARD!!". Archived from the original on 2021-12-21. Retrieved 2019-05-10.
External links
- Video of a raytracer running on one of the first Larrabee cards at IDF '09
- Whitepapers on LRBni, Physics Simulations and more using Larrabee
- Rasterization on Larrabee
- A First Look at the Larrabee New Instructions (LRBni)
- C++ implementation of the Larrabee new instructions
- Game Physics Performance on Larrabee
- Intel fact sheet about Larrabee
- Intel's SIGGRAPH 2008 paper on Larrabee
- Techgage.com - Discusses how Larrabee differs from normal GPUs, includes block diagram illustration
- Intel's Larrabee Architecture Disclosure: A Calculated First Move