Kepler (microarchitecture)

Kepler is a GPU microarchitecture developed by Nvidia as the successor to the Fermi microarchitecture. It is the first Nvidia GPU architecture to focus on energy efficiency.

Overview

Where the goal of the previous architecture, Fermi, was to increase raw performance (particularly for compute and tessellation), Nvidia's goal with the Kepler architecture was to increase performance per watt, while still striving for overall performance increases.[1] The primary way Nvidia achieved this goal was through the use of a unified clock. Abandoning the separate shader clock found in its previous GPU designs increases efficiency, even though more cores are required to achieve similar levels of performance. This is not only because the cores themselves are more power efficient (two Kepler cores using about 90% of the power of one Fermi core, according to Nvidia's numbers), but also because the reduction in clock speed by itself delivers a 50% reduction in power consumption in that area of the chip.[2]

Kepler also introduced a new form of texture handling known as bindless textures. Previously, textures needed to be bound by the CPU to a particular slot in a fixed-size table before the GPU could reference them. This led to two limitations: one was that because the table was fixed in size, there could only be as many textures in use at one time as could fit in this table (128). The second was that the CPU was doing unnecessary work: it had to load each texture, and also bind each texture loaded in memory to a slot in the binding table.[1] With bindless textures, both limitations are removed. The GPU can access any texture loaded into memory, increasing the number of available textures and removing the performance penalty of binding.
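
On the compute side, the same bindless idea is exposed through CUDA texture objects (available from CUDA 5.0 on Kepler-class hardware), where a texture is referenced through an ordinary handle rather than a fixed binding slot. The following is a minimal sketch, not taken from the cited sources; the buffer size and kernel are illustrative.

    // Minimal sketch: a CUDA texture object, the compute-side expression of
    // Kepler's bindless textures. The texture is an ordinary 64-bit handle
    // passed to the kernel like any other argument; no fixed binding slot.
    #include <cuda_runtime.h>

    __global__ void sampleKernel(cudaTextureObject_t tex, float *out, int n) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n)
            out[i] = tex1Dfetch<float>(tex, i);  // fetch through the handle
    }

    int main() {
        const int n = 1024;
        float *devBuf, *devOut;
        cudaMalloc(&devBuf, n * sizeof(float));
        cudaMalloc(&devOut, n * sizeof(float));
        cudaMemset(devBuf, 0, n * sizeof(float));

        // Describe the underlying memory and the sampling parameters.
        cudaResourceDesc resDesc = {};
        resDesc.resType = cudaResourceTypeLinear;
        resDesc.res.linear.devPtr = devBuf;
        resDesc.res.linear.desc = cudaCreateChannelDesc<float>();
        resDesc.res.linear.sizeInBytes = n * sizeof(float);

        cudaTextureDesc texDesc = {};
        texDesc.readMode = cudaReadModeElementType;

        // Create the texture object: no binding table, just a handle.
        cudaTextureObject_t tex = 0;
        cudaCreateTextureObject(&tex, &resDesc, &texDesc, nullptr);

        sampleKernel<<<(n + 255) / 256, 256>>>(tex, devOut, n);
        cudaDeviceSynchronize();

        cudaDestroyTextureObject(tex);
        cudaFree(devBuf);
        cudaFree(devOut);
        return 0;
    }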

Finally, with Kepler, Nvidia was able to increase the effective memory clock to 6 GHz. To accomplish this, it needed to design an entirely new memory controller and bus. While still shy of the theoretical 7 GHz limit of GDDR5, this is well above the 4 GHz speed of the memory controller for Fermi.[2]
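
As a worked example of what the higher memory clock means in practice (assuming the 256-bit memory bus of the launch GK104 board, the GeForce GTX 680, a figure not given in the text above), the peak memory bandwidth works out to:

    \text{bandwidth} = \frac{6\ \text{Gbit/s per pin} \times 256\ \text{pins}}{8\ \text{bits per byte}} = 192\ \text{GB/s}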

Features

The GeForce 600 Series contains products from both the older Fermi and newer Kepler generations of Nvidia GPUs. Kepler-based members of the 600 series add the following standard features to the GeForce family:

  • PCI Express 3.0 interface
  • DisplayPort 1.2
  • HDMI 1.4a 4K x 2K video output
  • PureVideo VP5 hardware video acceleration (up to 4K x 2K H.264 decode)
  • Hardware H.264 encoding acceleration block (NVENC)
  • Support for up to 4 independent 2D displays, or 3 stereoscopic/3D displays (NV Surround)
  • Next Generation Streaming Multiprocessor (SMX)
  • A New Instruction Scheduler
  • Bindless Textures
  • CUDA Compute Capability 3.0
  • GPU Boost
  • TXAA Support
  • Manufactured by TSMC on a 28 nm process

Next Generation Streaming Multiprocessor (SMX)

The Kepler architecture employs a new streaming multiprocessor design called SMX. The SMX is the key to Kepler's power efficiency, as the whole GPU runs from a single "core clock" rather than the double-pumped "shader clock" used by Fermi.[2] Running the shaders at the lower unified clock improves power efficiency (two Kepler CUDA cores consume about 90% of the power of one Fermi CUDA core), but it means the SMX needs additional execution units to execute a whole warp per cycle. Kepler also needed to increase raw GPU performance to remain competitive. As a result, Nvidia doubled the CUDA cores per array from 16 to 32, increased the number of CUDA core arrays per multiprocessor from 3 to 6, and went from 1 load/store and 1 SFU group to 2 of each. The supporting GPU processing resources were also doubled: from 2 warp schedulers to 4, from 4 dispatch units to 8, and the register file doubled to 64K entries. Because the doubling of processing units and resources consumes die area, the PolyMorph Engine was not doubled but enhanced, making it capable of putting out a polygon in 2 cycles instead of 4.[3] With Kepler, Nvidia had to work not only on power efficiency but also on area efficiency; since not all Kepler CUDA cores are FP64-capable, Nvidia opted to use 8 dedicated FP64 CUDA cores per SMX to save die space while still offering FP64 capability. The result of these changes is an increase in graphics performance while FP64 performance is de-emphasized.
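
A quick tally of the figures above (a worked check, not an additional claim from the cited sources) shows how the per-multiprocessor execution resources changed:

    \underbrace{3 \times 16}_{\text{Fermi SM, as above}} = 48\ \text{CUDA cores}, \qquad \underbrace{6 \times 32}_{\text{Kepler SMX}} = 192\ \text{CUDA cores}

Because the Fermi cores ran at a doubled shader clock, 48 cores delivered roughly 96 core-operations per core clock, so the 192 unified-clock cores of an SMX represent about a 2x increase in per-clock throughput per multiprocessor rather than a 4x increase.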

A New Instruction Scheduler

Additional die area is freed by replacing the complex hardware scheduler with a simpler software-assisted scheduler. With software scheduling, warp scheduling was moved into Nvidia's compiler, and because the GPU math pipeline now has a fixed latency, the compiler can exploit instruction-level parallelism and superscalar execution in addition to thread-level parallelism. As instructions are statically scheduled and the latency of the math pipeline is already known, hardware dependency checking inside a warp becomes redundant. This results in savings in both die area and power.[2][4][1]
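
As an illustrative sketch (not from the cited sources), the following CUDA program shows the kind of instruction-level parallelism a static, compile-time schedule can exploit: each thread carries several independent fused multiply-adds whose fixed, known latencies let the compiler interleave them without hardware scoreboarding. The kernel and sizes are purely illustrative.

    #include <cuda_runtime.h>

    // Four independent FMAs per thread; none depends on another's result, so
    // the compiler is free to schedule them back-to-back to hide pipeline
    // latency (instruction-level parallelism within one thread).
    __global__ void axpy4(const float *x, const float *y, float *out, float a, int n) {
        int i = (blockIdx.x * blockDim.x + threadIdx.x) * 4;
        if (i + 3 < n) {
            out[i + 0] = a * x[i + 0] + y[i + 0];
            out[i + 1] = a * x[i + 1] + y[i + 1];
            out[i + 2] = a * x[i + 2] + y[i + 2];
            out[i + 3] = a * x[i + 3] + y[i + 3];
        }
    }

    int main() {
        const int n = 1 << 16;
        float *x, *y, *out;
        cudaMalloc(&x, n * sizeof(float));
        cudaMalloc(&y, n * sizeof(float));
        cudaMalloc(&out, n * sizeof(float));
        axpy4<<<(n / 4 + 255) / 256, 256>>>(x, y, out, 2.0f, n);
        cudaDeviceSynchronize();
        cudaFree(x); cudaFree(y); cudaFree(out);
        return 0;
    }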

GPU Boost

GPU Boost is a new feature which is roughly analogous to the turbo boosting of a CPU. The GPU is always guaranteed to run at a minimum clock speed, referred to as the "base clock". This clock speed is set to a level that ensures the GPU stays within its TDP specification even at maximum load.[1] When loads are lower, however, there is room to increase the clock speed without exceeding the TDP. In these scenarios, GPU Boost gradually increases the clock speed in steps until the GPU reaches a predefined power target (170 W by default).[2] By taking this approach, the GPU ramps its clock up or down dynamically, providing the maximum possible speed while remaining within its TDP specification.

The power target, as well as the size of the clock increase steps that the GPU will take, are both adjustable via third-party utilities and provide a means of overclocking Kepler-based cards.[1]
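
The control behaviour can be pictured with the conceptual sketch below. It is not Nvidia's implementation; the clock values, step size, and structure fields are purely illustrative, and only the 170 W default power target comes from the text above.

    #include <stdio.h>

    // Conceptual model of a boost controller: sample board power, then step the
    // clock up while under the power target or down while over it, never
    // dropping below the guaranteed base clock.
    struct BoostState {
        float base_clock_mhz;     // guaranteed minimum clock
        float max_boost_mhz;      // top of the boost range (illustrative)
        float step_mhz;           // size of one boost step (illustrative)
        float power_target_w;     // e.g. 170 W by default
        float current_clock_mhz;
    };

    static void boost_step(BoostState *s, float measured_power_w) {
        if (measured_power_w < s->power_target_w &&
            s->current_clock_mhz + s->step_mhz <= s->max_boost_mhz) {
            s->current_clock_mhz += s->step_mhz;   // headroom left: raise the clock
        } else if (measured_power_w > s->power_target_w &&
                   s->current_clock_mhz - s->step_mhz >= s->base_clock_mhz) {
            s->current_clock_mhz -= s->step_mhz;   // over target: back off, never below base
        }
    }

    int main(void) {
        BoostState s = {1006.0f, 1110.0f, 13.0f, 170.0f, 1006.0f};
        for (int t = 0; t < 5; ++t) {              // simulate a light 120 W load
            boost_step(&s, 120.0f);
            printf("clock after step %d: %.0f MHz\n", t + 1, s.current_clock_mhz);
        }
        return 0;
    }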

Microsoft Direct3D Support

Nvidia Fermi and Kepler GPUs of the GeForce 600 series support the Direct3D 11.0 specification. Nvidia originally stated that the Kepler architecture has full DirectX 11.1 support, which includes the Direct3D 11.1 path.[5] The following "Modern UI" Direct3D 11.1 features, however, are not supported:[6][7]

  • Target-Independent Rasterization (2D rendering only).
  • 16xMSAA Rasterization (2D rendering only).
  • Orthogonal Line Rendering Mode.
  • UAV (Unordered Access View) in non-pixel-shader stages.

According to Microsoft's definition, Direct3D feature level 11_1 must be supported completely, otherwise the Direct3D 11.1 path cannot be used.[8] The integrated Direct3D features of the Kepler architecture are the same as those of the GeForce 400 series Fermi architecture.[7]

TXAA Support

Exclusive to Kepler GPUs, TXAA is a new anti-aliasing method from Nvidia that is designed for direct implementation into game engines. TXAA is based on the MSAA technique and custom resolve filters. It is designed to address a key problem in games known as shimmering or temporal aliasing; TXAA resolves this by smoothing out the scene in motion, making sure that any in-game scene is cleared of aliasing and shimmering.[9]

NVENC

NVENC is Nvidia's power-efficient fixed-function encoder that can take in, decode, preprocess, and encode H.264-based content. NVENC's output is limited to the H.264 format, but within that limitation it supports encoding at resolutions up to 4096x4096.[10]

Like Intel’s Quick Sync, NVENC is currently exposed through a proprietary API, though Nvidia does have plans to provide NVENC usage through CUDA.[10]

GK110 Overview

With GK110, Nvidia focuses on compute performance. With 7.1 billion transistors it is the biggest GPU in terms of transistor count, dwarfing the GK104 and GF110. Nothing rivals GK110 in fabrication cost and power consumption, but the end result is that its performance per watt is likewise unmatched, because so many tasks (graphical and compute) are massively parallel and map well to the large arrays of streaming processors found in GK110.

GK110 also increases both the capacity and the bandwidth of the register file and the L2 cache. At the SMX level, the register file has grown to 256 KB, composed of 65,536 32-bit registers, compared with Fermi. The L2 cache has grown to 1.5 MB, twice as large as on GF110. Bandwidth for both the L2 cache and the register file has also doubled. Performance in register-starved scenarios improves because more registers are available to each thread. This goes hand in hand with an increase in the total number of registers each thread can address, moving from 63 registers per thread on Fermi to 255 registers per thread with GK110.
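
As a check on the register file figure above:

    65{,}536\ \text{registers} \times 4\ \text{bytes per 32-bit register} = 262{,}144\ \text{bytes} = 256\ \text{KB per SMX}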

With GK110, Nvidia also reworked the GPU texture cache to be usable for compute. The 48 KB texture cache can act as a read-only data cache for compute workloads, which is particularly useful for unaligned memory access patterns. Furthermore, error-detection capabilities have been added to make it safer for use with workloads that rely on ECC.[11]

Features

The GeForce 700 Series contains features from both GK104 and GK110. Kepler-based members of the 700 series add the following standard features to the GeForce family.

Derived from GK104:

  • DisplayPort 1.2
  • HDMI 1.4a 4K x 2K video output
  • PureVideo VP5 hardware video acceleration (up to 4K x 2K H.264 decode)
  • Hardware H.264 encoding acceleration block (NVENC)
  • Support for up to 4 independent 2D displays, or 3 stereoscopic/3D displays (NV Surround)
  • Bindless Textures
  • GPU Boost
  • TXAA
  • Manufactured by TSMC on a 28 nm process

New features from GK110:

  • Compute Focus SMX Improvement
  • CUDA Compute Capability 3.5
  • New Shuffle Instructions
  • Dynamic Parallelism
  • Hyper-Q (Hyper-Q's MPI functionality reserved for Tesla only)
  • Grid Management Unit
  • NVIDIA GPUDirect (GPUDirect's RDMA functionality reserved for Tesla only)

Compute Focus SMX Improvement

With GK110, Nvidia opted to increase compute performance. The single biggest change from GK104 is that rather than 8 dedicated FP64 CUDA cores, a GK110 SMX has up to 64, giving it 8 times the FP64 throughput of a GK104 SMX. The SMX also sees an increase in register file space, which has grown to 256 KB compared with Fermi. The texture cache is also improved: its 48 KB can act as a read-only cache for compute workloads.[11]

New Shuffle Instructions

At a low level, GK110 gains additional instructions and operations to further improve performance. New shuffle instructions allow threads within a warp to share data without going back to memory, making the process much quicker than the previous load/share/store method. Atomic operations are also overhauled, speeding up their execution and adding some FP64 operations that were previously available only for FP32 data.[11]
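
A minimal sketch of the new shuffle instructions in use (illustrative, not from the cited sources): a warp-wide sum built from __shfl_down, the Kepler intrinsic (compute capability 3.0 and later) that moves a value between lanes register-to-register, followed by one atomic add per warp.

    #include <cuda_runtime.h>
    #include <cstdio>

    // Warp-level sum: each step pulls a value from the lane `offset` positions
    // over, so after log2(32) steps lane 0 holds the sum of all 32 lanes.
    __inline__ __device__ float warpReduceSum(float val) {
        for (int offset = 16; offset > 0; offset /= 2)
            val += __shfl_down(val, offset);
        return val;
    }

    __global__ void sumKernel(const float *in, float *out, int n) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        float v = (i < n) ? in[i] : 0.0f;
        v = warpReduceSum(v);
        if ((threadIdx.x & 31) == 0)       // one atomic add per 32-thread warp
            atomicAdd(out, v);
    }

    int main() {
        const int n = 1024;
        float *in, *out, result = 0.0f;
        float ones[n];
        for (int i = 0; i < n; ++i) ones[i] = 1.0f;   // expected sum is n
        cudaMalloc(&in, n * sizeof(float));
        cudaMalloc(&out, sizeof(float));
        cudaMemset(out, 0, sizeof(float));
        cudaMemcpy(in, ones, n * sizeof(float), cudaMemcpyHostToDevice);
        sumKernel<<<(n + 255) / 256, 256>>>(in, out, n);
        cudaMemcpy(&result, out, sizeof(float), cudaMemcpyDeviceToHost);
        printf("sum = %f (expected %d)\n", result, n);
        cudaFree(in); cudaFree(out);
        return 0;
    }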

Hyper-Q

Hyper-Q expands the number of hardware work queues in GK110 from 1 to 32. The significance of this is that with a single work queue, Fermi could be under-occupied at times, as there was not enough work in that queue to fill every SM. With 32 work queues, GK110 can in many scenarios achieve higher utilization by placing different task streams on what would otherwise be an idle SMX. The simple nature of Hyper-Q is further reinforced by how easily it maps to MPI, a message-passing interface frequently used in HPC. Legacy MPI-based algorithms that were originally designed for multi-CPU systems, and that became bottlenecked by false dependencies, now have a solution: by increasing the number of MPI jobs, it is possible to use Hyper-Q on these algorithms to improve efficiency without changing the code itself.[11]
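
From the programmer's side, Hyper-Q is reached through ordinary CUDA streams. The sketch below (kernel, sizes, and stream count are illustrative) submits independent kernels on separate streams so that, on GK110, they can occupy separate hardware work queues instead of serializing behind a single queue.

    #include <cuda_runtime.h>

    __global__ void busyKernel(float *data, int n) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n) data[i] = data[i] * 2.0f + 1.0f;
    }

    int main() {
        const int kStreams = 8, n = 1 << 16;
        cudaStream_t streams[kStreams];
        float *buffers[kStreams];

        for (int s = 0; s < kStreams; ++s) {
            cudaStreamCreate(&streams[s]);
            cudaMalloc(&buffers[s], n * sizeof(float));
            // Each small launch goes to its own stream; with Hyper-Q these can
            // run concurrently on otherwise idle SMX units.
            busyKernel<<<(n + 255) / 256, 256, 0, streams[s]>>>(buffers[s], n);
        }

        cudaDeviceSynchronize();
        for (int s = 0; s < kStreams; ++s) {
            cudaFree(buffers[s]);
            cudaStreamDestroy(streams[s]);
        }
        return 0;
    }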

Dynamic Parallelism

Dynamic Parallelism is the ability of kernels to dispatch other kernels. With Fermi, only the CPU could dispatch a kernel, which incurs a certain amount of overhead from having to communicate back to the CPU. By giving kernels the ability to dispatch their own child kernels, GK110 both saves the time of a round trip to the CPU and frees the CPU to work on other tasks.[11]
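
A minimal sketch of Dynamic Parallelism (compute capability 3.5; the kernels and sizes are illustrative, not from the cited sources): a parent kernel launches a child kernel directly on the device. Such code is built with relocatable device code and the device runtime, for example nvcc -arch=sm_35 -rdc=true -lcudadevrt.

    #include <cuda_runtime.h>

    __global__ void childKernel(float *data, int n) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n) data[i] += 1.0f;
    }

    __global__ void parentKernel(float *data, int n) {
        if (threadIdx.x == 0 && blockIdx.x == 0) {
            // The launch is queued by the device itself; no CPU round trip.
            childKernel<<<(n + 255) / 256, 256>>>(data, n);
            cudaDeviceSynchronize();   // device-side wait for the child to finish
        }
    }

    int main() {
        const int n = 1 << 16;
        float *data;
        cudaMalloc(&data, n * sizeof(float));
        cudaMemset(data, 0, n * sizeof(float));
        parentKernel<<<1, 1>>>(data, n);   // one thread is enough to launch the child
        cudaDeviceSynchronize();
        cudaFree(data);
        return 0;
    }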

Grid Management Unit

Enabling Dynamic Parallelism requires a new grid management and dispatch control system. The new Grid Management Unit (GMU) manages and prioritizes grids to be executed. The GMU can pause the dispatch of new grids and queue pending and suspended grids until they are ready to execute, providing the flexibility to enable powerful runtimes such as Dynamic Parallelism. The CUDA Work Distributor (CWD) in Kepler holds grids that are ready to dispatch, and is able to dispatch 32 active grids, double the capacity of the Fermi CWD. The Kepler CWD communicates with the GMU via a bidirectional link that allows the GMU to pause the dispatch of new grids and to hold pending and suspended grids until needed. The GMU also has a direct connection to the Kepler SMX units, so that grids which launch additional work on the GPU via Dynamic Parallelism can send the new work back to the GMU to be prioritized and dispatched. If the kernel that dispatched the additional workload pauses, the GMU will hold it inactive until the dependent work has completed.[12]

NVIDIA GPUDirect

NVIDIA GPUDirect is a capability that enables GPUs within a single computer, or GPUs in different servers located across a network, to directly exchange data without needing to go through CPU/system memory. The RDMA feature in GPUDirect allows third-party devices such as SSDs, NICs, and InfiniBand adapters to directly access memory on multiple GPUs within the same system, significantly decreasing the latency of MPI send and receive messages to and from GPU memory. It also reduces demands on system memory bandwidth and frees the GPU DMA engines for use by other CUDA tasks. Kepler GK110 also supports other GPUDirect features, including Peer-to-Peer and GPUDirect for Video.
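
The peer-to-peer part of GPUDirect is visible to CUDA programs through the runtime's peer-access calls. The sketch below (device numbering and transfer size are illustrative) enables direct access between two GPUs in the same system and performs a copy that does not stage through system memory.

    #include <cuda_runtime.h>

    int main() {
        int canAccess = 0;
        cudaDeviceCanAccessPeer(&canAccess, 0, 1);   // can GPU 0 reach GPU 1?
        if (!canAccess) return 1;

        float *src, *dst;
        const size_t bytes = 1 << 20;

        cudaSetDevice(0);
        cudaDeviceEnablePeerAccess(1, 0);            // let GPU 0 address GPU 1's memory
        cudaMalloc(&src, bytes);

        cudaSetDevice(1);
        cudaMalloc(&dst, bytes);

        // Direct GPU-to-GPU copy over PCI Express; no bounce buffer in system memory.
        cudaMemcpyPeer(dst, 1, src, 0, bytes);
        cudaDeviceSynchronize();

        cudaFree(dst);
        cudaSetDevice(0);
        cudaFree(src);
        return 0;
    }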

References

  1. ^ a b c d e "NVIDIA GeForce GTX 680 Whitepaper" (PDF). Nvidia. 2012.
  2. ^ a b c d e Smith, Ryan (March 22, 2012). "NVIDIA GeForce GTX 680 Review: Retaking The Performance Crown". AnandTech. Retrieved November 25, 2012.
  3. ^ "GK104: The Chip And Architecture GK104: The Chip And Architecture". Tom;s Hardware. March 22, 2012.
  4. ^ "NVIDIA Kepler GK110 Architecture Whitepaper" (PDF).
  5. ^ "NVIDIA Launches First GeForce GPUs Based on Next-Generation Kepler Architecture". Nvidia. March 22, 2012.
  6. ^ Edward, James (November 22, 2012). "NVIDIA claims partially support DirectX 11.1". TechNews.
  7. ^ a b "Nvidia Doesn't Fully Support DirectX 11.1 with Kepler GPUs, But…". BSN.
  8. ^ "D3D_FEATURE_LEVEL enumeration (Windows)". MSDN.
  9. ^ "Introducing The GeForce GTX 680 GPU". Nvidia. March 22, 2012.
  10. ^ a b "Benchmark Results: NVEnc And MediaEspresso 6.5". Tom’s Hardware. March 22, 2012.
  11. ^ a b c d e "NVIDIA Launches Tesla K20 & K20X: GK110 Arrives At Last". AnandTech. 11/12/2012. {{cite web}}: Check date values in: |date= (help)
  12. ^ "NVIDIA-Kepler-GK110-Architecture-Whitepaper" (PDF). {{cite web}}: Cite has empty unknown parameter: |1= (help)