GeForce 40 series
Release date | October 12, 2022 |
---|---|
Codename | AD10x |
Architecture | Ada Lovelace |
Models | GeForce RTX series |
Fabrication process | TSMC 4N[1] |
API support | |
DirectX | Direct3D 12 Ultimate (feature level 12_2) |
OpenCL | OpenCL 3.0 |
OpenGL | OpenGL 4.6 |
Vulkan | Vulkan 1.3 |
History | |
Predecessor | GeForce 30 series |
The GeForce 40 series is a family of graphics processing units developed by Nvidia, succeeding the GeForce 30 series. The series was announced on September 20, 2022, at the "GeForce Beyond: A Special Broadcast at GTC" event, with cards shipping from October 12, 2022.[1] The cards are based on the Ada Lovelace architecture and feature hardware-accelerated ray tracing (RTX) with Nvidia's third-generation RT cores and fourth-generation Tensor Cores.
Details
Architectural highlights of the Ada Lovelace architecture include the following:[2]
- CUDA Compute Capability 8.9[3]
- TSMC 4N process (custom designed for NVIDIA)[1] – not to be confused with N4
- Fourth-generation Tensor Cores with FP8, FP16, bfloat16, TensorFloat-32 (TF32) and sparsity acceleration
- Third-generation Ray Tracing Cores, along with concurrent ray tracing, shading and compute
- Shader Execution Reordering (SER), which must be enabled by the developer[4]
- NVENC with 8K 10-bit 60FPS AV1 fixed function hardware encoding[5][6]
- A new generation of Optical Flow Accelerator to aid DLSS 3.0 intermediate AI-based frame generation[7]
- No NVLink support[8]
Products
- Double-precision (FP64) performance of the Ada Lovelace chips is 1/64 of single-precision (FP32) performance.
- All the cards feature GDDR6X video memory.
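The memory-bandwidth figures in the table below follow directly from the GDDR6X effective transfer rate and the bus width. A minimal sketch of that arithmetic, using values taken from the table (the function name is illustrative, not an Nvidia API):

```python
def bandwidth_gbs(transfer_gts, bus_width_bits):
    # Bandwidth (GB/s) = effective transfer rate (GT/s)
    #                    x bus width in bytes (bits / 8)
    return transfer_gts * bus_width_bits / 8

print(bandwidth_gbs(22.4, 256))  # RTX 4080: 716.8 GB/s
print(bandwidth_gbs(21.0, 384))  # RTX 4090: 1008.0 GB/s
```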
Model | Launch | Launch MSRP (USD) | Code name | Transistors (billion) | Die size (mm²) | Core config[a] | SM count[b] | L2 cache (MB) | Core clock (MHz)[c] | Memory (GT/s) | Pixel fillrate (Gpx/s)[d] | Texture fillrate (Gtex/s)[e] | Memory size (GB) | Memory bandwidth (GB/s) | Bus width (bit) | Half precision TFLOPS (boost) | Single precision TFLOPS (boost) | Double precision TFLOPS (boost) | Tensor compute TFLOPS [sparse] | TDP (watts) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
GeForce RTX 4080[9] | Nov 16, 2022 | $1,199 | AD103-300 | 45.9 | 378.6 | 9728 (304:112:76:304) | 76 | 64 | 2210 (2505) | 22.4 | 247.5 (280.6) | 671.8 (761.5) | 16 | 716.8 | 256 | 42.998 (48.737) | 42.998 (48.737) | 0.672 (0.762) | 194.9 [389.8] | 320 |
GeForce RTX 4090[10] | Oct 12, 2022 | $1,599 | AD102-300 | 76.3 | 608.5 | 16384 (512:176:128:512) | 128 | 72 | 2230 (2520) | 21.0 | 392.5 (443.5) | 1141.8 (1290.2) | 24 | 1008 | 384 | 73.073 (82.575) | 73.073 (82.575) | 1.142 (1.290) | 330.3 [660.6] | 450 |
- ^ Shader Processors : Texture mapping units : Render output units : Ray tracing cores : Tensor Cores
- ^ The number of streaming multiprocessors (SMs) on the GPU.
- ^ Core boost values (if available) are stated in brackets after the base value.
- ^ Pixel fillrate is calculated as the lowest of three numbers: number of ROPs multiplied by the base core clock speed, number of rasterizers multiplied by the number of fragments they can generate per rasterizer multiplied by the base core clock speed, and the number of streaming multiprocessors multiplied by the number of fragments per clock that they can output multiplied by the base clock rate.
- ^ Texture fillrate is calculated as the number of texture mapping units (TMUs) multiplied by the base (or boost) core clock speed.
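The fillrate formulas in the notes above, together with the standard 2-FLOPs-per-core-per-clock figure for fused multiply-add throughput and the 1/64 FP64 ratio stated earlier, can be checked against the RTX 4090 row of the table. A sketch (values from the table; function names are illustrative):

```python
def pixel_fillrate_gpxs(rops, clock_mhz):
    # Simplest of the three pixel-fillrate bounds: ROPs x core clock
    return rops * clock_mhz / 1000

def texture_fillrate_gtexs(tmus, clock_mhz):
    # Texture fillrate: TMUs x core clock
    return tmus * clock_mhz / 1000

def fp32_tflops(cuda_cores, clock_mhz):
    # FP32 throughput: 2 FLOPs per core per clock (FMA) x cores x clock
    return 2 * cuda_cores * clock_mhz / 1e6

# RTX 4090: 16384 cores, 512 TMUs, 176 ROPs, 2230 MHz base, 2520 MHz boost
print(pixel_fillrate_gpxs(176, 2230))     # ~392.5 Gpx/s (base)
print(texture_fillrate_gtexs(512, 2520))  # ~1290.2 Gtex/s (boost)
print(fp32_tflops(16384, 2520))           # ~82.6 TFLOPS (boost)
print(fp32_tflops(16384, 2520) / 64)      # ~1.29 FP64 TFLOPS (1/64 ratio)
```

Each result matches the corresponding table entry to rounding precision.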
RTX 4080 12GB naming and pricing controversy
Numerous outlets, prominent YouTubers, reviewers, and the wider community criticized Nvidia for naming the 12GB card based on the AD104 chip an RTX 4080 rather than an RTX 4070, given previous Nvidia naming conventions and the large gap in specifications and performance between the two RTX 4080 models.[11][12][13][14] Unlike earlier cases where same-named products differed only in memory configuration and performed very similarly, the 12GB RTX 4080 uses an entirely different chip and configuration: 27% fewer CUDA cores (with correspondingly less related hardware) and a cut-down 192-bit memory bus of the kind typically used for xx60-class cards.[citation needed] This leaves it up to 30% slower than the 16GB RTX 4080 in raw performance, before even considering the VRAM difference, which is indicative of a different performance tier that would normally carry a different name. Some commentators speculated that pricing was a factor in the naming: at $900, the card was priced significantly higher than previous xx70 cards (e.g. $500 for the RTX 3070, i.e. $400 or 80% more expensive), and the RTX 4080 branding may have been chosen to justify that price.[citation needed]
Aftermath and "unlaunching"
On October 14, 2022, Nvidia announced that, due to the confusion caused by the naming scheme, it would be "unlaunching", i.e. pausing the launch of, the 12GB RTX 4080; the 16GB RTX 4080's launch remains unaffected. It is currently unclear whether the product will be rebranded or when it will launch.[15][16]
See also
- GeForce 10 series
- GeForce 16 series
- GeForce 20 series
- GeForce 30 series
- Nvidia Workstation GPUs (formerly Quadro)
- Nvidia Data Center GPUs (formerly Tesla)
- List of Nvidia graphics processing units
Notes
References
- ^ a b c "NVIDIA Delivers Quantum Leap in Performance, Introduces New Era of Neural Rendering With GeForce RTX 40 Series". NVIDIA Newsroom. September 20, 2022. Retrieved October 8, 2022.
- ^ "NVIDIA Ada Lovelace Architecture". NVIDIA.
- ^ "I.7. Compute Capability 9.x". docs.nvidia.com.
- ^ Palumbo, Alessio (September 23, 2022). "NVIDIA Ada Lovelace Follow-Up Q&A - DLSS 3, SER, OMM, DMM and More". Wccftech. Retrieved September 25, 2022.
- ^ "Creativity At The Speed of Light: GeForce RTX 40 Series Graphics Cards Unleash Up To 2X Performance in 3D Rendering, AI, and Video Exports For Gamers and Creators". NVIDIA.
- ^ "Nvidia Video Codec SDK". August 23, 2013.
- ^ Chiappetta, Marco (September 22, 2022). "NVIDIA GeForce RTX 40 Architecture Overview: Ada's Special Sauce Unveiled". HotHardware. Retrieved September 25, 2022.
- ^ "Jensen Confirms: NVLink Support in Ada Lovelace is Gone". TechPowerUp. September 21, 2022.
- ^ "NVIDIA GeForce RTX 4080 Graphics Cards for Gaming". Nvidia. Retrieved October 14, 2022.
- ^ "NVIDIA Ada GPU Architecture" (PDF). Nvidia. Retrieved October 1, 2022.
- ^ Laird, Jeremy (September 22, 2022). "We've run the numbers and Nvidia's RTX 4080 cards don't add up". PC Gamer. Retrieved September 27, 2022.
- ^ "Why the RTX 4080 12GB feels a lot like a rebranded RTX 4070". Digital Trends. September 21, 2022. Retrieved September 27, 2022.
- ^ Walton, Jarred (September 23, 2022). "Why Nvidia's RTX 4080, 4090 Cost so Damn Much". Tom's Hardware. Retrieved September 27, 2022.
- ^ Guyton, Christian (September 22, 2022). "Buyer beware: the 12GB RTX 4080 is hiding a dirty little secret". TechRadar. Retrieved September 27, 2022.
- ^ "Unlaunching The 12GB 4080". NVIDIA. Retrieved October 14, 2022.
- ^ Warren, Tom (October 14, 2022). "Nvidia says it's 'unlaunching' the 12GB RTX 4080 after backlash". The Verge. Retrieved October 14, 2022.