Recommended System Requirements

Game | GeForce RTX 3080 | GeForce RTX 2080 Super 8GB
---|---|---
Cyberpunk 2077 | 55% | 46% |
Hitman 3 | 41% | 29% |
Assassin's Creed Valhalla | 59% | 50%
Call of Duty: Black Ops Cold War | 60% | 52% |
FIFA 21 | 77% | 73% |
Grand Theft Auto VI | 36% | 23% |
Far Cry 6 | 35% | 21% |
Genshin Impact | 55% | 46% |
World of Warcraft: Shadowlands | 35% | 22% |
Battlefield 6 | 41% | 29% |
In terms of overall gaming performance, the Nvidia GeForce RTX 3080 is significantly more capable than the Nvidia GeForce RTX 2080 Super 8GB.
The RTX 2080 Super has a 210 MHz higher core clock speed than the GeForce RTX 3080, but the GeForce RTX 3080 has 52 more Texture Mapping Units. As a result, the GeForce RTX 3080 delivers a 41.3 GTexel/s higher texture fill rate. Fill rate still carries some weight, but shader performance is generally more relevant, particularly since both of these GPUs support DirectX 12.
That same 210 MHz clock advantage, combined with an identical number of Render Output Units, gives the RTX 2080 Super a 13.4 GPixel/s higher pixel fill rate. However, pixel fill rate is only really a deciding factor when comparing much older cards.
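Both fill-rate figures come from the same simple formula: clock speed multiplied by the number of texture or pixel units. The sketch below reproduces the numbers quoted above from the core speeds and unit counts in the specification tables further down; treat it as an approximation, since real cards spend much of their time at boost clocks.

```python
# Fill-rate estimates from the spec tables below (core speed x unit count).
# Approximate: published figures vary depending on which clock is used.

def texture_fill_rate_gtexel(core_mhz: float, tmus: int) -> float:
    """Texture fill rate in GTexel/s = core clock (GHz) * texture mapping units."""
    return core_mhz / 1000 * tmus

def pixel_fill_rate_gpixel(core_mhz: float, rops: int) -> float:
    """Pixel fill rate in GPixel/s = core clock (GHz) * render output units."""
    return core_mhz / 1000 * rops

# GeForce RTX 3080: 1440 MHz core, 212 TMUs, 64 ROPs
print(texture_fill_rate_gtexel(1440, 212))  # ~305.3 GTexel/s
print(pixel_fill_rate_gpixel(1440, 64))     # ~92.2 GPixel/s

# GeForce RTX 2080 Super 8GB: 1650 MHz core, 160 TMUs, 64 ROPs
print(texture_fill_rate_gtexel(1650, 160))  # ~264 GTexel/s
print(pixel_fill_rate_gpixel(1650, 64))     # ~105.6 GPixel/s
```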
The GeForce RTX 3080 was released just over a year after the RTX 2080 Super, so it is likely to have better driver support and to be better optimized for running the latest games.
Both GPUs are very powerful, so it probably isn't worth upgrading from one to the other: each is capable of running even the most demanding games at the highest settings.
The GeForce RTX 3080 has 2048 MB more video memory than the RTX 2080 Super, so it is likely to be much better at handling game textures at higher resolutions. Note that this comparison lists no memory bandwidth figure for the GeForce RTX 3080; its 320-bit GDDR6X interface is both wider and faster than the RTX 2080 Super's 256-bit GDDR6, so the newer card leads on memory bandwidth as well, rather than trailing the RTX 2080 Super's 496.6 GB/s.
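Peak memory bandwidth is simply the effective data rate multiplied by the bus width. The sketch below reproduces the RTX 2080 Super's 496.6 GB/s figure from the 1940 MHz memory clock and 256-bit bus listed in the memory table; the eight-times data-rate multiplier for GDDR6 is the assumption here.

```python
def peak_bandwidth_gb_s(mem_clock_mhz: float, rate_multiplier: int, bus_bits: int) -> float:
    """Peak memory bandwidth in GB/s.

    effective data rate (Gbps) = memory clock (GHz) * data-rate multiplier
    bandwidth (GB/s)           = effective rate * bus width (bits) / 8
    """
    effective_gbps = mem_clock_mhz / 1000 * rate_multiplier
    return effective_gbps * bus_bits / 8

# GeForce RTX 2080 Super 8GB: 1940 MHz GDDR6 (x8) on a 256-bit bus
print(peak_bandwidth_gb_s(1940, 8, 256))  # ~496.6 GB/s, matching the table below

# The GeForce RTX 3080 uses GDDR6X on a wider 320-bit bus, so at any comparable
# effective data rate its bandwidth works out higher than the RTX 2080 Super's.
```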
The GeForce RTX 3080 has 8704 Shader Processing Units, while the GeForce RTX 2080 Super 8GB has 3072. This comparison rates their actual shader performance at 14884 and 5576 respectively, giving the GeForce RTX 3080 a lead of 9308. That is a very significant advantage: modern games are predominantly shader-limited, and this gap is the main reason the GeForce RTX 3080 is the faster card overall.
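The shader-performance scores above are this comparison's own metric. A more standard way to gauge raw shader throughput is peak FP32 compute, estimated as shader units times two operations per clock times the boost clock; the sketch below uses the boost clocks from the specification table further down, and the results are theoretical peaks rather than in-game frame rates.

```python
def peak_fp32_tflops(shader_units: int, boost_clock_mhz: float) -> float:
    """Theoretical peak FP32 throughput: shaders * 2 FLOPs/cycle * boost clock."""
    return shader_units * 2 * boost_clock_mhz / 1e6

print(peak_fp32_tflops(8704, 1710))  # ~29.8 TFLOPS, GeForce RTX 3080
print(peak_fp32_tflops(3072, 1815))  # ~11.2 TFLOPS, GeForce RTX 2080 Super 8GB
```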
The GeForce RTX 3080 is manufactured on a process node 4 nm (nanometers) smaller than the RTX 2080 Super's (8 nm versus 12 nm). All else being equal, this means the GeForce RTX 3080 can be expected to run slightly cooler and reach higher clock frequencies than the RTX 2080 Super.
The GeForce RTX 3080 requires 320 Watts to run and the GeForce RTX 2080 Super 8GB requires 215 Watts. We would recommend a PSU with at least 750 Watts for the GeForce RTX 3080 and a PSU with at least 600 Watts for the RTX 2080 Super. The GeForce RTX 3080 draws 105 Watts more than the RTX 2080 Super, a difference large enough to have a noticeable effect on your yearly electricity bill.
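To put that 105 W difference into perspective, here is a rough running-cost estimate; the hours of use per day and the electricity price are illustrative assumptions, not figures from this comparison.

```python
# Extra yearly running cost of the 105 W difference (illustrative assumptions).
extra_watts = 320 - 215      # 105 W higher board power for the RTX 3080
hours_per_day = 3            # assumed daily gaming time
price_per_kwh = 0.15         # assumed electricity price (USD per kWh)

extra_kwh_per_year = extra_watts / 1000 * hours_per_day * 365
extra_cost = extra_kwh_per_year * price_per_kwh

print(round(extra_kwh_per_year))  # ~115 kWh per year
print(round(extra_cost, 2))       # ~17 USD per year under these assumptions
```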
 | GeForce RTX 3080 | GeForce RTX 2080 Super 8GB
---|---|---
Core Speed | 1440 MHz | 1650 MHz
Boost Clock | 1710 MHz | 1815 MHz
Architecture | Ampere GA102 | Turing TU104
OC Potential | - | -
Driver Support | - | -
Release Date | 17 Sep 2020 | 09 Jul 2019
GPU Link | GD Link | GD Link
Resolution | GeForce RTX 3080 | GeForce RTX 2080 Super 8GB
---|---|---
1366x768 | 10 | 10
1600x900 | 10 | 10
1920x1080 | 10 | 10
2560x1440 | 10 | 10
3840x2160 | 8.9 | 8.2
 | GeForce RTX 3080 | GeForce RTX 2080 Super 8GB
---|---|---
Memory | 10240 MB | 8192 MB
Memory Speed | 1750 MHz | 1940 MHz
Memory Bus | 320 Bit | 256 Bit
Memory Type | GDDR6X | GDDR6
Memory Bandwidth | - | 496.6 GB/sec
L2 Cache | 0 KB | 4096 KB
Delta Color Compression | no | no
Memory Performance | - | -
 | GeForce RTX 3080 | GeForce RTX 2080 Super 8GB
---|---|---
Shader Processing Units | 8704 | 3072
Actual Shader Performance | 100% | 100%
Technology | 8nm | 12nm
Texture Mapping Units | 212 | 160
Texture Rate | 305.3 GTexel/s | 264 GTexel/s
Render Output Units | 64 | 64
Pixel Rate | 92.2 GPixel/s | 105.6 GPixel/s
 | GeForce RTX 3080 | GeForce RTX 2080 Super 8GB
---|---|---
Max Digital Resolution (WxH) | 7680x4320 | 7680x4320
VGA Connections | 0 | 0
DVI Connections | 0 | 1
HDMI Connections | 1 | 1
DisplayPort Connections | 3 | 2
 | GeForce RTX 3080 | GeForce RTX 2080 Super 8GB
---|---|---
Max Power | 320 Watts | 215 Watts
Recommended PSU | 750 Watts | 600 Watts
 | GeForce RTX 3080 | GeForce RTX 2080 Super 8GB
---|---|---
DirectX | 12.1 | 12.1
Shader Model | 6.1 | 6.4
OpenGL | 4.6 | 4.6
OpenCL | - | -
Notebook GPU | no | no
SLI/Crossfire | no | no
Dedicated | yes | yes
 | GeForce RTX 3080 | GeForce RTX 2080 Super 8GB
---|---|---
Recommended Processor | - | Intel Core i9-9900K 8-Core 3.6GHz
Recommended RAM | 16 GB | 16 GB
Maximum Recommended Gaming Resolution | 3840x2160 | 3840x2160
Mini Review

GeForce RTX 3080

Overview: The GeForce RTX 3080 is a High-End Graphics Card based on the Ampere Architecture. Before launch it was rumoured that this card would be named the GeForce GTX 2180.

Architecture: It equips an Ampere GPU codenamed GA102, a first-gen Ampere chip with 68 SMs activated, offering 8704 Shader Processing Units, 212 TMUs and 64 ROPs. The Ampere architecture provides the world's best consumer support for real-time ray-tracing technology, or RTX. It also has 272 Tensor Cores for AI purposes.

GPU: The central unit runs at 1440 MHz and goes up to 1710 MHz in Turbo Mode.

Memory: The GPU accesses a 10 GB frame buffer of GDDR6X through a 320-bit memory interface, while the memory clock operates at 1750 MHz.

Power Consumption: With a rated board TDP of 320 W, it requires at least a 750 W PSU with the appropriate PCIe power connectors.

Performance: The GeForce RTX 3080 is faster than the GeForce RTX 2080 Ti, making it a suitable graphics card for 4K gaming.

System Suggestions: The GeForce RTX 3080 is best suited for resolutions up to and including 3840x2160. We recommend a high-end processor and 16 GB of RAM for optimal performance.

GeForce RTX 2080 Super 8GB

Overview: The GeForce RTX 2080 Super 8GB is a High-End Graphics Card based on the Turing Architecture. This is the 'Super' variant of the RTX 2080.

Architecture: It equips a Turing GPU codenamed TU104-450-A11, a 2nd-gen Turing chip with 48 SMs activated, offering 3072 CUDA Shader Processing Units, 192 TMUs and 64 ROPs. The Turing architecture provides the world's first consumer support for real-time ray-tracing technology, or RTX. It is equipped with 384 Tensor Cores for AI purposes.

GPU: The central unit runs at 1650 MHz and goes up to 1815 MHz in Turbo Mode.

Memory: The GPU accesses an 8 GB frame buffer of GDDR6 through a 256-bit memory interface, while the memory clock operates at 1940 MHz, or 15.5 GHz effective. Total memory bandwidth is 496.6 GB/s.

Power Consumption: With a rated board TDP of 215 W, it requires at least a 600 W PSU with one available 8-pin plus 6-pin power connector.

Performance: Performance of the GeForce RTX 2080 Super 8GB puts it between the RTX 2080 and the GeForce RTX 2080 Ti, making it a suitable graphics card for 4K gaming.

System Suggestions: The GeForce RTX 2080 Super 8GB is best suited for resolutions up to and including 3840x2160. We recommend a high-end processor and 16 GB of RAM for optimal performance.