The Radeon RX 6700 XT is the new kid on the block from AMD, and arguably its most important RX 6000 series graphics card launched to date, as it's the most affordable (on paper) and targets the heart of the performance segment. The card is designed for high refresh-rate 1440p gaming and is capable of real-time raytracing. It introduces the company's second discrete GPU based on the RDNA2 graphics architecture, the 7 nm Navi 22. AMD also claims that the RX 6700 XT should disrupt the sub-$500 graphics market, taking the fight to two of NVIDIA's popular Ampere products, the GeForce RTX 3060 Ti and RTX 3070.
The new RDNA2 graphics architecture from AMD breathed life back into the consumer graphics market by competing with NVIDIA at the highest market segments with the RX 6800 series and the flagship RX 6900 XT Big Navi. It offers full DirectX 12 Ultimate readiness, including real-time raytracing, variable-rate shading, mesh shaders, and sampler feedback. Raytracing is the holy grail of 3D graphics, and while fully raytraced interactive 3D is beyond the capabilities of consumer hardware, it’s possible to combine conventional raster 3D graphics with certain real-time raytraced elements, such as lighting, shadows, reflections, global illumination, and so on, to significantly increase realism.
Even this much raytracing requires an enormous amount of compute power. The most compute intensive task of ray intersection is handled by special hardware AMD calls Ray Accelerators, while shaders handle other related tasks, such as denoising. A side-effect of this approach is that AMD has had to boost shader performance significantly over the past generation, which means most games that only use raster 3D graphics should see enormous performance gains over the previous RDNA generation.
The Radeon RX 6700 XT debuts the Navi 22 silicon, which is leaner and more space efficient than the Big Navi silicon powering the larger RX 6000 series cards. The chip physically packs 40 RDNA2 compute units, working out to 2,560 stream processors and 40 Ray Accelerators. The number of stream processors is identical to that of the Navi 10-based RX 5700 XT, but the performance uplift comes from the higher IPC of the RDNA2 compute unit, alongside much higher engine clocks—2424 MHz vs. 1755 MHz (game clocks).
AMD has made a significant yet frugal change to the memory setup. You now get 12 GB of GDDR6 memory, 50% more than the 8 GB of the RX 5700 XT, but at 192-bit wide, the memory bus is 25% narrower. AMD has tried to make up for this by using the fastest JEDEC-standard 16 Gbps GDDR6 memory chips, resulting in 384 GB/s of bandwidth. This is still much lower than the 448 GB/s of the RX 5700 XT, so the company also deployed the Infinity Cache technology it debuted with Big Navi: a 96 MB fast cache on the GPU die that cushions memory access and operates at 1.5 TB/s.
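Those bandwidth figures are straightforward to verify with simple arithmetic. Below is a quick sketch; the RX 5700 XT's 14 Gbps/256-bit configuration is recalled from that card's specifications rather than stated above:

```python
# Peak GDDR6 bandwidth = per-pin data rate (Gbps) * bus width (bits) / 8 bits-per-byte
def gddr6_bandwidth_gbs(data_rate_gbps: float, bus_width_bits: int) -> float:
    """Return peak memory bandwidth in GB/s."""
    return data_rate_gbps * bus_width_bits / 8

# RX 6700 XT: 16 Gbps chips on a 192-bit bus
print(gddr6_bandwidth_gbs(16, 192))  # 384.0 GB/s

# RX 5700 XT: 14 Gbps chips on a 256-bit bus
print(gddr6_bandwidth_gbs(14, 256))  # 448.0 GB/s
```

The narrower bus costs the RX 6700 XT 64 GB/s of raw bandwidth on paper, which is the deficit Infinity Cache is meant to hide.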
AMD is pricing the Radeon RX 6700 XT at US$479 for the reference design, undercutting the $499 price of the GeForce RTX 3070, but $479 is higher than the $399 starting price of the RTX 3060 Ti the card is extensively compared against in AMD’s marketing materials. The card also faces some internal competition from the $100 pricier RX 6800, which AMD is marketing as a 4K-capable 1440p card. All these prices are pure fiction; real-world graphics card pricing is completely whack right now. In this review, we’ll focus on how the card competes with others in its vicinity on our swanky new March 2021 test system.
Our Radeon RX 6700 XT launch-day coverage includes six articles including this one. Do check them out! PowerColor Radeon RX 6700 XT Red Devil | MSI Radeon RX 6700 XT Gaming X | ASUS ROG Strix Radeon RX 6700 XT OC | Sapphire Radeon RX 6700 XT NITRO+ | XFX Radeon RX 6700 XT Speedster Merc 319
We have with us the ASUS ROG STRIX Radeon RX 6700 XT O12G, the company’s top custom-design RX 6700 XT graphics card. AMD today launched the RX 6700 XT along with the new Navi 22 silicon, to take the fight to NVIDIA’s sub-$500 performance segment, including popular SKUs such as the GeForce RTX 3060 Ti and the RTX 3070. It establishes 12 GB as the new memory size standard, and promises full DirectX 12 Ultimate capability, including real-time raytracing. The target buyer is a serious gamer looking to play both AAA and e-sports at 1440p with maxed out settings.
The Radeon RX 6700 XT is based on the latest RDNA2 graphics architecture, which brings a compelling feature set to the PC market, nearly leveling up to NVIDIA. AMD's approach to real-time raytracing is the use of special-purpose components called Ray Accelerators, which calculate ray intersections, while compute shaders handle everything else, including de-noising. To achieve this, AMD significantly bolstered the programmable shaders of this architecture, improving their IPC and running them at extremely high engine clocks. A side-effect of this is the card's strong performance outlook in games that don't use raytracing.
The Navi 22 silicon at the heart of the RX 6700 XT physically features 40 RDNA2 compute units, working out to 2,560 stream processors, 40 Ray Accelerators, 160 TMUs, and 64 ROPs. The increased 12 GB memory size over the past generation comes with a catch—the memory bus width is narrowed to 192-bit. AMD attempted to overcome this by using the fastest standard GDDR6 memory chips that run at 16 Gbps, and the use of Infinity Cache, a fast on-die 96 MB cache memory that cushions data transfers between the GPU and memory.
The ROG STRIX RX 6700 XT OC comes with the latest-generation DirectCU III cooler used across ASUS's RTX 30-series and RX 6000 series ROG cards. It features a meaty aluminium fin-stack heatsink ventilated by a trio of Axial-Tech fans. You also get goodies such as dual-BIOS, a blinding amount of RGB LED illumination, additional fan headers that let you sync your case ventilation to the card, and an additional RGB output. ASUS hasn't provided any pricing info yet; we suspect the card will end up costing close to $800 in current market conditions.
We have with us the MSI Radeon RX 6700 XT Gaming X, a premium custom-design graphics card based on the new RX 6700 XT AMD is debuting today. With this, AMD is taking the fight to NVIDIA's GeForce RTX 3060 Ti and RTX 3070, heating things up in the sub-$500 market segment. With the new RDNA2 Radeon RX 6000 series, the playing field is mostly leveled, as AMD now supports real-time raytracing. AMD took everyone by surprise with its Radeon RX 6800/6900 series launch, springboarding the company back to the high-end segment, and it is planning a similar move on the performance segment.
The Radeon RX 6700 XT debuts AMD's new 7 nm Navi 22 silicon that's roughly as big as the GA104, with a similar transistor count. It packs 40 RDNA2 compute units, which translate to 2,560 stream processors running at speeds in excess of 2.40 GHz; 40 Ray Accelerators, which are specialized hardware that accelerate raytracing by calculating ray intersections; 160 TMUs, and 64 ROPs. AMD has also raised the standard memory size with this generation, and the RX 6700 XT comes with 12 GB of it—a 50% increase over the RX 5700 XT. The only gotcha is the narrower 192-bit memory bus width. The memory clock is increased to 16 Gbps, and AMD deployed Infinity Cache, a fast on-die 96 MB cache that works to improve the overall memory system bandwidth. We get into the details on the next page.
The MSI Radeon RX 6700 XT Gaming X features the company's latest Twin Frozr cooling solution. While this is smaller than the triple-fan Tri Frozr, it's one of the heavier cards in our lab today, with a chunky triple-slot heatsink. You also get plenty of RGB goodness. As a Gaming X SKU, MSI has given the card a factory-overclocked maximum boost of 2.63 GHz, compared to 2.58 GHz reference. MSI hasn't provided any pricing info; we expect it to end up selling for $750 or higher.
The Sapphire Radeon RX 6700 XT Nitro+ is the company's most premium take on AMD's new RX 6700 XT graphics card, which debuts today. Positioned bang in the middle of the performance segment, with a starting price under $500, the RX 6700 XT launches AMD's second, smaller silicon based on the RDNA2 architecture and brings the full DirectX 12 Ultimate experience from the AMD stable to this segment, including real-time raytracing. Although targeting the GeForce RTX 3060 Ti in performance, AMD claims that the card can trade blows with even the pricier RTX 3070.
The new RDNA2 graphics architecture powers not just AMD’s Radeon RX 6000 series discrete graphics, but also the latest consoles, including the PlayStation 5 and Xbox Series X/S, making it easier for game developers to optimize for the cards. The company has mostly leveled up to NVIDIA on the features front, thanks to DirectX 12 Ultimate. Its approach to real-time raytracing involves using special hardware called Ray Accelerators to calculate ray intersections; and a hefty compute muscle for everything else, including de-noising.
The Radeon RX 6700 XT maxes out the new 7 nm Navi 22 silicon, which packs 40 RDNA2 compute units, working out to 2,560 stream processors, 40 Ray Accelerators, 160 TMUs, and 64 ROPs. The company went with 12 GB as the standard memory amount and uses fast 16 Gbps GDDR6 memory chips; however, the memory bus width is narrowed to 192-bit, so the card needs only six 16 Gbit memory chips. The resulting bandwidth deficit over the previous-gen RX 5700 XT is claimed to be overcome by the Infinity Cache technology—a fast 96 MB scratchpad directly on the die, operating at over five times the speed, and with much lower latency, than the GDDR6 memory.
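The six-chip count follows directly from the chip density and capacity. A minimal sketch; the 32-bit-per-chip interface is standard for GDDR6 and is an assumption not stated above:

```python
# Each 16 Gbit GDDR6 chip stores 16 / 8 = 2 GB and presents a 32-bit interface.
CHIP_DENSITY_GBIT = 16
CHIP_BUS_WIDTH_BITS = 32  # per GDDR6 chip (standard)

total_memory_gb = 12
chip_count = total_memory_gb // (CHIP_DENSITY_GBIT // 8)
total_bus_width = chip_count * CHIP_BUS_WIDTH_BITS

print(chip_count)       # 6 chips
print(total_bus_width)  # 192-bit total bus
```

Fewer chips means a simpler, cheaper board, which is the "frugal" side of the 192-bit trade-off.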
The Sapphire RX 6700 XT Nitro+ comes with the slimmest iteration of the company's Nitro+ cooling solution, which has many innovations: dedicated memory/VRM heatsinks with aluminium fins, wave-shaped aluminium fins that add turbulence to improve heat dissipation, double ball-bearing fans, and a generous amount of addressable RGB bling. Sapphire has given this card its highest factory overclock, with the maximum boost frequency set at 2.65 GHz (compared to 2.58 GHz reference). Sapphire is pricing the card at $579, a $100 premium over the $479 baseline price.
The XFX Radeon RX 6700 XT Speedster Merc 319 is the company's premium custom-design RX 6700 XT card debuting today. With this, AMD intends to dominate the performance segment, taking the fight to popular NVIDIA Ampere SKUs such as the GeForce RTX 3060 Ti and even the RTX 3070. The card is targeted at serious gamers seeking maxed-out 1440p gaming, and also supports real-time raytracing, as it supports the full DirectX 12 Ultimate feature set. It's based on the same RDNA2 graphics architecture as the RX 6900 XT "Big Navi."
The latest RDNA2 graphics architecture debuted with the next-generation consoles before making its way to the PC with the Radeon RX 6000 series. This gives AMD a unique advantage, as game developers optimizing for console also end up doing so for Radeon. AMD's raytracing architecture involves specialized hardware called Ray Accelerators, which compute ray intersections, while much of the rest is handled by the enormous compute muscle of these cards. This also means increased performance in non-raytraced games, as these programmable shaders can be made to do anything.
The Radeon RX 6700 XT is based on the new 7 nm Navi 22 silicon, and maxes it out. The chip is equipped with 40 RDNA2 compute units, which means 2,560 stream processors, 40 Ray Accelerators, 160 TMUs, and 64 ROPs. The company has also generationally increased the memory amount to 12 GB, which is certainly welcome; however, the memory bus is narrower at 192-bit. The company worked to overcome this bandwidth deficit by using the fastest 16 Gbps JEDEC-standard GDDR6 memory chips and deploying its Infinity Cache technology, a fast 96 MB cache on the GPU die that operates at much higher bandwidth and lower latency, cushioning data transfers between the GPU and memory.
The XFX RX 6700 XT Speedster Merc 319 features a large triple-slot cooling solution with a heatsink that outgrows the PCB not just lengthwise, but also in height, which means a significant amount of airflow from the three fans flows right through, resulting in much better ventilation. The design has certainly come a long way from the THICC series. XFX is giving the card its highest factory OC, running it at 2.65 GHz max boost, up from 2.58 GHz reference. The company is pricing the card at $570, a $90 premium over AMD's reference price. Both these prices are fantasy in today's market situation, and one can expect to pick this up closer to $750. In this review, we take the card for a spin across our brand-new test bench.
PowerColor announced its top custom design AMD Radeon RX 6700 XT graphics card, the RX 6700 XT Red Devil. After surprising everyone with competitive graphics cards in the enthusiast segment with the RX 6800 series and the flagship RX 6900 XT, AMD is turning its attention to the segment that earns NVIDIA the most attention from serious gamers—the sub-$500 performance segment, where it’s looking to take on established rivals, the GeForce RTX 3060 Ti and the RTX 3070. Unlike the last time, AMD has largely leveled up to NVIDIA on the features front, with the RX 6700 XT being full DirectX 12 Ultimate capable, including real-time raytracing. The target user of this card is someone who games at 1440p with settings maxed out.
At the heart of the RX 6700 XT is AMD's new 7 nm Navi 22 silicon, which has half the compute muscle of the Navi 21 powering the RX 6900 XT. The chip has 40 RDNA2 compute units, which means 2,560 stream processors, 40 Ray Accelerators, 160 TMUs, and 64 ROPs. AMD also increased the memory size to 12 GB compared to the previous generation, but the memory bus width is narrowed to 192-bit. The company attempted to make up for this by increasing the memory clocks and using the new Infinity Cache on-die cache memory, which the company claims significantly improves effective bandwidth. The new RDNA2 graphics architecture uses fixed-function hardware to accelerate raytracing intersections, but the tech also relies heavily on the compute shaders. A side-effect of this is a massive raster 3D performance gain over the previous generation. We detail the silicon on the next page.
The PowerColor RX 6700 XT Red Devil uses a lavish triple-slot cooling solution that looks a segment above when installed in your case. Thick aluminium fin-stack heatsinks peek through the cooler shroud, giving it an industrial look. All this cooling muscle comes together to support factory overclocked speeds of up to 2.65 GHz max boost engine clocks, a roughly 100 MHz increase over the reference design. You get plenty of goodies, including RGB LED lighting, dual-BIOS, including a noise-optimized Silent BIOS, and a 3-pin addressable-RGB header, letting you sync your lighting to the card. We expect PowerColor to price the card at a roughly $100 premium over the $479 reference MSRP, like most other custom RX 6700 XT cards we’re reviewing today.
Unless AMD actually sells them for $479, of course
AMD and Nvidia are pretending they live in a fantasy world, one where you can buy a state-of-the-art graphics card for under $700. Heck, a fantasy world where you can buy a new AMD GPU at all — though AMD has repeatedly promised it would stock additional RX 6800 and RX 6800 XT graphics cards at its website for $579 and $649 respectively, I’ve seen no evidence the company has ever replenished those supplies since those cards first debuted four months ago.
Today, the company's launching a GPU that might change that: the $479 Radeon RX 6700 XT. AMD tells The Verge it will have "significantly more GPUs available." If that's true — if this is the moment the clouds part, the GPU shortage recedes, and you can actually buy an RX 6700 XT for $479 for even a limited time — you absolutely should. It's a solid performer at 1440p.
But amusingly, in the fantasy world where we’re pretending a $400 GeForce RTX 3060 Ti and $500 RTX 3070 still exist, the RX 6700 XT actually feels outgunned. It’s not the clear-cut Nvidia competitor you might have hoped for.
The AMD Radeon RX 6700 XT comes with a simple pitch: a way to max out your 2560 x 1440 monitor with the latest games at maximum settings, all for $100 less than the RX 6800 I reviewed late last year. If that sounds familiar, that’s because it’s the same playbook Nvidia used last fall, where its $400 RTX 3060 Ti undercut the $500 RTX 3070 by the same amount.
So in our fantasy world where those prices held, this would be the lay of the land:
Mid-range desktop gaming GPUs in 2021

Price   Product              Promise
$399    Nvidia RTX 3060 Ti   Solid 1440p
$479    AMD RX 6700 XT       Maxed 1440p
$499    Nvidia RTX 3070      Maxed 1440p / entry-level 4K
$579    AMD RX 6800          Maxed 1440p / decent 4K
Given those prices, I’d expect a $480 AMD card to soundly thrash a $400 Nvidia card. (That’s what the $580 Radeon RX 6800 did to the $500 RTX 3070, after all.) I’d also expect it to be within spitting distance of the $500 Nvidia card if there were only a $20 bill between the two.
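That expectation is just price arithmetic. A quick sketch using the fantasy-world list prices from the table above:

```python
# Fantasy-world list prices from the table above
prices = {"RTX 3060 Ti": 399, "RX 6700 XT": 479, "RTX 3070": 499}

# Premium the RX 6700 XT commands over the RTX 3060 Ti
premium = prices["RX 6700 XT"] / prices["RTX 3060 Ti"] - 1
print(f"{premium:.0%} premium over the RTX 3060 Ti")  # 20%

# Gap up to the RTX 3070
print(f"${prices['RTX 3070'] - prices['RX 6700 XT']} below the RTX 3070")  # $20
```

A 20 percent price premium should buy roughly 20 percent more performance; the benchmarks below show whether it does.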
But running AMD’s card through my 15-game gauntlet on my own 1440p monitor, AMD’s new card sometimes lost to Nvidia’s 3060 Ti — sometimes by a lot — and it can even lose to Nvidia’s vanilla $329 RTX 3060 in tests where I had ray tracing turned on.
1440p gaming (with Core i7-7700K, 32GB DDR4)

Game                        RTX 3060  RTX 3060 Ti  RX 6700 XT  RTX 3070  Does AMD pull its weight?
AC: Odyssey                 57        66           58          72        No
AC: Valhalla                50        63           75          70        Yes
Arkham Knight               126       147          140         156       No
Borderlands 3 (Badass)      50        65           82          81        Yes
Borderlands 3 (Ultra)       55        69           87          86        Yes
CS: GO                      231       236          225         238       No
Control                     50        63           65          76        Not quite
Control (RT)                29        37           25          42        Definitely no
Control (RT+DLSS)           51        64           N/A         68        N/A
Cyberpunk 2077              38        49           52          55        Yes
Cyberpunk 2077 (RT)         18        22           N/A         27        No AMD RT support yet
Cyberpunk 2077 (RT+DLSS)    39        45           N/A         52        N/A
COD: Warzone                87        102          114         120       Yes
DX: Mankind Divided         57        74           89          88        Yes
Metro Exodus (Extreme)      27        37           38          44        No
Metro Exodus (Ultra+RT)     35        47           46          64        No
Metro Exodus (RT+DLSS)      46        61           N/A         72        N/A
Shadow of the Tomb Raider   78        99           105         105       Yes
SotR (RT)                   47        59           55          72        No
SotR (RT+DLSS)              57        72           N/A         82        N/A
Star Wars Squadrons         125       151          167         165       Yes
Watch Dogs: Legion          46        61           64          70        Not quite
WD: Legion (RT)             27        35           25          42        Definitely no
WD: Legion (RT+DLSS)        50        62           N/A         62        N/A
Valheim                     60        81           70          89-91     Definitely no
Frames per second; all games tested at highest graphical preset unless specified.
As you’ll see in the chart above, AMD does notch some wins, and in many cases lives up to its promise of maxed settings at 1440p. Shadow of the Tomb Raider looks glorious on my G-Sync / FreeSync monitor at an average 105 frames per second with all the eye candy turned up, and the difference between 120 frames and 114 frames in Call of Duty: Warzone at max settings (probably) isn’t worth quibbling about.
With a beefier processor, you should be able to get 60+ frames per second in a maxed-out Cyberpunk 2077 experience, too — I’m still testing with my old Core i7-7700K, which is more than good enough for most of the games on this list (SotR shows I’m 100 percent GPU-bound), but a few titles like Microsoft Flight Simulator and Cyberpunk are still notoriously CPU limited and more cores could help. As another example, my colleague Tom Warren saw identical framerates to me in Watch Dogs: Legion and Metro Exodus pairing the 6700 XT with a far newer Intel chip, but slightly higher in Assassin’s Creed Valhalla.
But look at Valheim. Look at Assassin's Creed Odyssey. Look at Control. How is AMD underperforming or merely tying these lower-tier cards? Things aren't any rosier at 4K resolution, in case you're wondering: while I still recommend Nvidia's 3070 for entry-level 4K, the 6700 XT just doesn't have the same oomph. In games like Control and Metro Exodus, I had to drop the settings to a comparatively dull "low," while the 3070 managed to play those games at 4K and medium-spec without trouble. And of course, AMD doesn't yet have a DLSS competitor if you're a fan of AI-upscaled resolution.
It's not like AMD is winning in any other particular way, either. While I'm actually fond of the company's new two-slot design (clearly inspired by a certain late 1960s muscle car) and I don't hugely mind going from three to two axial fans, the RX 6700 XT is just as chunky and nearly as power-hungry as the RX 6800 (230W vs. 250W) without as much to show for it. AMD still recommends a 650W power supply, a pricey proposition for those of us with tall PC cases, and you'll need both 6- and 8-pin PCIe connectors, while Nvidia's comparable cards make do with a single 8-pin.
I also saw the 6700 XT hit 89 degrees Celsius on one occasion. While I don’t have enough evidence to say that’s a problem, I’ve yet to see Nvidia’s 3070 cross 82 degrees C even with my case fans unplugged.
But as I alluded to in my intro, very little of this will matter if you can actually find this card for $479.
In December, I reported that the true street price of a $399 Nvidia RTX 3060 Ti was actually $675, and that the street price of a $499 RTX 3070 was actually $819. Those were the average prices people actually paid on eBay that month.
If you think that sounds unbelievable, hear this: the street prices of the RTX 3060 Ti and RTX 3070 are now approaching $1,200 each. People are now paying triple for what should have been Nvidia's bang-for-the-buck graphics card, and the launch of a "$329" RTX 3060 didn't slow that at all.
To me, this indicates two things: First, AMD probably isn’t going to be able to make anywhere near enough RX 6700 XTs to hit anywhere near that $479 price for the vast majority of buyers, even if it produces “significantly” more cards. I wouldn’t expect today to play out any differently than any previous GPU launch since the pandemic began. It’s a well-established pattern now. Second, none of these fantasyland prices will necessarily keep the RX 6700 XT from being a hit if the supply is there. As you can see from the similar street prices for the 3060 Ti and 3070, the market has a way of balancing prices out.
If you miraculously see this card for its list price, buy it, because you won’t get anything better for anywhere near that price in the near future. But it feels inferior to every Nvidia GPU that can compare — and I wouldn’t trade my 3060 Ti for one.
Tightly curved monitors like the MSI MPG Artymis 343CQR can really enhance gameplay, especially in first-person environments. With class-leading contrast, accurate out-of-box color and superb HDR, the 343CQR should be on everyone’s curved screen short list.
For
High contrast
Accurate out-of-box color
Solid gaming performance
1000R curve
Against
Slightly light gamma
Blur reduction feature makes the screen too bright
Higher input lag than some 144 Hz screens
Features and Specifications
In the world of curved monitors, there are more things to consider than just screen size. Not only do they come in three different aspect ratios (16:9, 21:9 and 32:9), they also come in a wide variety of curve radii. This number is expressed in millimeters, like 1500R or 1800R. Larger numbers indicate less curvature. When you see 1000R, you know the curve is as extreme as it gets.
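To see what those radii mean physically, you can compute the screen's depth — how far the center sits forward of the edges — by treating the panel width as a chord of the circle. A hedged sketch, assuming an illustrative 800 mm panel width for a 34-inch ultrawide (close to, but not exactly, the 343CQR's stated 795 mm):

```python
import math

def curve_depth_mm(radius_mm: float, panel_width_mm: float) -> float:
    """Sagitta of the arc: distance between the screen's edges and its center,
    with the panel width taken as the chord of a circle of the given radius."""
    half_chord = panel_width_mm / 2
    return radius_mm - math.sqrt(radius_mm**2 - half_chord**2)

# Tighter radius -> deeper curve (hypothetical ~800 mm-wide panel)
for r in (1000, 1500, 1800):
    print(f"{r}R: center sits ~{curve_depth_mm(r, 800):.0f} mm forward of the edges")
```

At 1000R the curve is nearly twice as deep as at 1800R, which is why it's considered the extreme end of the scale.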
MSI has jumped on the 1000R train with its MPG Artymis 343CQR. In addition to that tight curve, it sports a high-contrast VA panel running at 3440×1440 resolution with USB-C, HDR support, Adaptive-Sync and an impressive 165 Hz refresh rate worthy of competing with the best gaming monitors. Selling for a premium price ($900 as of writing), the 343CQR is a sharply focused display that is at its best when gaming — going even as far as to include an aim magnifier for shooters.
MSI MPG Artymis 343CQR Specs
Panel Type / Backlight: VA / W-LED, edge array
Screen Size, Aspect Ratio & Curve: 34 inches / 21:9 / 1000mm curve radius
Max Resolution & Refresh: 3440×1440 @ 165 Hz; FreeSync 48-165 Hz
Native Color Depth & Gamut: 10-bit (8-bit+FRC) / DCI-P3; DisplayHDR 400, HDR10
Response Time (MPRT): 1ms
Brightness (mfr): SDR 350 nits; HDR 550 nits
Contrast (mfr): 3,000:1
Speakers: None
Video Inputs: 1x DisplayPort 1.4, 2x HDMI 2.0, 1x USB-C
Audio: 3.5mm headphone output
USB 3.2: 1x up, 2x down
Power Consumption: 32.6w @ 200 nits brightness
Panel Dimensions (WxHxD w/base): 31.3 x 16.5-20.5 x 12.4 inches (795 x 419-521 x 315mm)
Panel Thickness: 6.5 inches (165mm)
Bezel Width: top/sides 0.4 inch (9mm); bottom 0.9 inch (22mm)
Weight: 20.2 pounds (9.2kg)
Warranty: 3 years
The 343CQR is all about gaming with support for AMD FreeSync from 48-165 Hz. It’s not G-Sync Compatible-certified, but we still got Nvidia G-Sync to work (see our How to Run G-Sync on a FreeSync Monitor article for instructions).
MSI's spec sheet claims nearly 85% coverage of the DCI-P3 color gamut. You'll be using that gamut for all content, SDR and HDR alike, because there is no sRGB mode available.
MSI designed the 343CQR with consoles in mind too. It will accept 4K resolution signals and down-convert them to 3440 x 1440 resolution. The 343CQR is also the first monitor we’ve seen with HDMI CEC (Consumer Electronics Control). Originally developed to support universal remotes, the CEC implementation in this monitor is designed to sense whether the incoming signal is coming from a PC or a console and adjust its picture mode based on designated profiles. The feature supports both PlayStation and Nintendo Switch.
Assembly and Accessories of MSI MPG Artymis 343CQR
To assemble the MSI MPG Artymis 343CQR, the panel and upright are mated with four fasteners, so you'll need to have a Phillips-head screwdriver handy. Next, you attach the base with a captive bolt. The resulting package is rock-solid and shows impressive build quality. It certainly meets the standard one expects for the price.
Bundled cables include IEC for the internal power supply, DisplayPort, HDMI and USB. A small snap-on cover hides the panel’s mounting hardware. And if you’d rather use a monitor arm, the bolt holes are in a 100mm VESA pattern with large-head bolts included. In a nice touch, a small hook snaps onto the bottom of the panel to help manage your best gaming mouse’s cable.
MSI MPG Artymis Product 360
From the front, the MSI MPG Artymis 343CQR is all business with a thin flush bezel around the top and sides and a molded strip across the bottom adorned only with a small MSI logo. A tiny LED appears red in standby mode and white when the power’s on. Around the back right is a joystick and two buttons. One activates the Gaming OSD (on-screen display) app, and the other toggles power.
The upright is very solid with a stiff-moving 4-inch height adjustment. You also get 30 degrees of swivel to both sides and 5/20 degrees of tilt. There isn't even a hint of slop or wobble. A small hole helps tidy up cables. The base is solid metal with thin legs that go more than 1 foot deep. That, combined with the fact that the panel is 6.5 inches thick, means you'll need a bit of extra desktop space to accommodate the 343CQR.
From the top, you can see the 1000R curvature clearly. That radius means that if you made a circle from 343CQRs, it would be just two meters in diameter. If you have the room for three of them, they'll wrap around almost 180 degrees. They would make a great flight-simulator setup or, perhaps, a solid solution for Zwift (a virtual cycling training app).
The back of the MSI MPG Artymis 343CQR is styled nicely with a variety of different textures and an RGB effect that shows as a strip and an MSI shield graphic with a dragon. The color breathes gently through different shades. You can turn it on and off in the OSD and control it even further with the Gaming OSD app. You can also sync the lighting effect with that of other MSI products that support the vendor's Mystic Light-branded RGB. That way, you can create a custom light show with everything working in concert.
The input panel includes two HDMI 2.0 ports that support refresh rates up to 100 Hz with Adaptive-Sync and HDR. Meanwhile, the DisplayPort 1.4 and USB-C inputs accept 165 Hz signals, also with HDR and Adaptive-Sync. There are no built-in speakers, but you get a 3.5mm audio port for headphones.
OSD Features of MSI MPG Artymis
Pressing the joystick brings up the MSI MPG Artymis 343CQR's OSD, which is divided into seven sub-menus. There are plenty of gaming features as well as most of what you'll need for calibration.
The Gaming menu offers five picture modes. Four are game genre-specific, and there’s also the default mode, User. User’s the mode to pick because it delivers solid accuracy with no need for calibration. There are a few minor flaws, but the 343CQR definitely makes our Calibration Not Required list.
The Night Vision option is a shadow detail enhancer. We didn’t need it because the monitor’s black levels are both deep and richly detailed. Response Time is a three-level overdrive. Fast, the middle setting, is best. Next, MPRT is a backlight strobe that reduces motion blur and cancels out Adaptive-Sync.
MPRT also pegs the brightness at over 860 nits, which is unusual. You can reduce this with the contrast control, but that removes much of the picture's depth and quality. We recommend sticking with Adaptive-Sync and leaving MPRT off. Finally, Zero Latency should always be turned on for the lowest possible input lag.
Additional features include a frame rate indicator, alarm clock, aiming points and an Optix Scope feature. This is geared at fans of first-person shooters and lets you magnify the area underneath your crosshair in multiple levels using hot keys. As this will take some finessing to execute smoothly and without slowing down your gameplay, it won’t be for everyone.
The OSD will always show you the MSI MPG Artymis 343CQR’s signal status at the top with resolution, refresh rate, HDR status, FreeSync status and the active video input.
The Image menu offers three color temperature presets, plus a User mode. Normal is the default and best choice. We were unable to make a visual improvement to the color temp with calibration. The test numbers show a tiny gain but not one that can be seen with the naked eye. The only thing we wished for was a gamma control. The default luminance curve is a tad light, though that’s somewhat mitigated by the 343CQR’s extremely high contrast.
Calibration Settings of MSI MPG Artymis 343CQR
You can simply unpack the MSI MPG Artymis 343CQR, plug it in and enjoy. The image is very accurate by default — even the brightness is already set close to 200 nits in the User picture mode. We attempted a calibration and made no visible improvement.
Our settings are below if you want to try them. Note that in the User color temp, the RGB sliders start at 50%, which reduces brightness by roughly that amount. We turned them all up to 100%, then adjusted from there to achieve maximum dynamic range.
Picture Mode: User
Brightness (for 200 nits): 49
Brightness (for 120 nits): 6 (minimum is 109 nits)
Contrast: 70
Color Temp: User (Red 100, Green 93, Blue 93)
HDR signals lock out all picture controls. You can still access the modes, but changing them does not affect the image. HDR grayscale runs a tad red, but the EOTF is spot-on, as is the color tracking. The 343CQR also uses dynamic contrast to achieve tremendous contrast for HDR content.
Gaming and Hands-on with MSI MPG Artymis 343CQR
At 1000R, the MSI MPG Artymis 343CQR is as curvy as a gaming monitor gets today. At first, we noticed a little image distortion when working in productivity apps, like word processors and spreadsheets. However, we got used to the look after a short time.
When browsing the web, that distortion became unnoticeable. The monitor’s image is sharp and contrast-y enough to overshadow any horizontal line curvature. It’s best to set the panel exactly vertical with no back or forward tilt. Adjusting the height so our eyes were centered on the panel made all parts of the screen roughly equidistant from our eyes. The 343CQR is perfectly usable for workday tasks.
Color was nicely balanced with slightly more than sRGB saturation but not so much that it looked unnatural. MSI has tuned the gamut so it renders SDR content more accurately without the need to switch color spaces, a capability the MSI MPG Artymis 343CQR lacks. When HDR was on, color looked far more vibrant, as it should. This is one of the few monitors that you could leave in HDR mode all the time for Windows apps. Brightness is reasonable with the highest levels reserved only for small highlights.
The monitor also supports 10-bit color, though the panel uses FRC (frame rate control) to achieve this. Despite the internal upconversion, we didn’t see any banding artifacts.
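For readers unfamiliar with FRC: an 8-bit-plus-FRC panel approximates intermediate 10-bit levels by rapidly alternating the two nearest 8-bit values across successive frames, so the time-average lands on the target. The sketch below is a simplified illustration using a plain 4:1 level mapping, not how the panel’s controller actually works.

```python
# Illustrative sketch of FRC temporal dithering (simplified 4:1 mapping
# of 10-bit 0-1023 onto 8-bit levels; real controllers differ at the extremes).

def frc_sequence(level_10bit, frames=4):
    """Return the 8-bit levels shown over `frames` frames for a 10-bit input."""
    target = level_10bit / 4          # 10-bit value on the 8-bit scale
    lo = int(target)                  # nearest 8-bit level below the target
    frac = target - lo                # fraction of frames needing the higher level
    hi_frames = round(frac * frames)  # frames that display lo + 1
    return [lo + 1] * hi_frames + [lo] * (frames - hi_frames)

seq = frc_sequence(514)               # 10-bit 514 -> 8-bit target 128.5
print(seq, sum(seq) / len(seq))      # alternates 129/128 so the average hits 128.5
```

The eye integrates the alternating frames, so the perceived level sits between the two 8-bit steps, which is why a well-implemented FRC panel shows no visible banding.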
Gaming tests started with our usual trip through Tomb Raider, which clipped along at a sprightly 165 fps on a Radeon RX 5700 XT and GeForce RTX 3090. Both FreeSync and G-Sync worked without a hitch. The MSI MPG Artymis 343CQR’s middle overdrive setting, Fast, struck the best balance between ghosting and blur reduction. The MPRT backlight strobe feature also worked well at reducing blur without artifacts but at the cost of a very bright and overly harsh image. Playing games at over 800 nits peak grew tiring after a short time.
Engaging HDR for a few hours of Call of Duty: WWII proved to be a singular experience. The MSI MPG Artymis 343CQR comes close to equaling a FALD display when it comes to HDR contrast and color. Every hue, down to the murkiest greens and browns, leapt from the screen. Black levels seemed almost OLED-like in their depth and detail, offset by perfectly balanced highlight areas. Color accuracy was also top-notch. Though we noted a slight red tint during the grayscale tests, it did not affect games or movies we played. This is one of the best HDR monitors we’ve seen in a while.
If you download MSI’s Dragon Center software, you can also use the 343CQR’s Sound Tune feature, which uses “AI calculations” to block out background noise coming through a plugged-in headset. Since it requires software and many of the best gaming headsets include similar tech on their own, its usefulness will vary depending on the gamer.
Another unique feature comes in what MSI calls Mobile Projector. It lets you display your phone’s screen in a 5:9 column on the side of the monitor. Although having your phone on your computer screen could generally be distracting, if you have a specific task that requires using both your smartphone and PC, this could come in handy.
An update to Overwatch’s Public Test Region (PTR) is bringing Nvidia’s latency-reduction tech, called Reflex, to the popular esports title (via Engadget). The tech aims to reduce the amount of time between when you click your mouse and when you see the resulting action on screen, making the game feel more responsive. The fact that it’s coming to Overwatch was announced back in January, but it’s now available to players who can access the PTR and who have the latest Nvidia drivers.
If you haven’t been able to get your hands on one of Nvidia’s latest graphics cards, there’s still hope that you’ll be able to try Reflex out for yourself in Overwatch — the tech was announced alongside the 30-series graphics cards, but works on cards going back to the GTX 900-series.
Nvidia has an incredibly in-depth explainer on how the tech works, but the very surface-level overview is that the game will work with your GPU to make sure that frames are made “just-in-time” to be shown on your monitor, so you should theoretically always be seeing the latest information.
It’s worth noting that latency can have a few meanings, especially when it comes to online games. Reflex isn’t designed to help improve your network latency, so if you’ve got a bad internet connection it probably won’t help improve your gaming experience all that much.
Whether the difference in latency will be noticeable will depend a lot on the type of equipment you’re using, how much better it is, and how eagle-eyed you are. Still, if you’re one of the testers, it’s probably worth turning it on to try it out, and seeing if you notice the improvement. For everyone else, it’s something to look forward to trying out in a future update.
AMD also has a feature meant to reduce input latency on its graphics cards, called Radeon Anti-Lag, which can also be turned on for Overwatch.
VideoCardz has just leaked renders of one of PowerColor’s new Hellhound series graphics cards. As the name makes clear, the Radeon RX 6700 XT Hellhound is based on AMD’s latest Radeon RX 6700 XT and designed to contend with the best graphics cards on the market.
For the Radeon RX 6700 XT Hellhound, PowerColor is experimenting with a black and blue theme. The graphics card arrives with a dual-slot black cooler that employs a trio of cooling fans with translucent fan blades. PowerColor even dipped the bracket in black paint, which is a nice finishing touch on the vendor’s part.
The cooling fans feature blue lighting, but it’s unclear whether a full RGB palette is available. The Radeon RX 6700 XT Hellhound also incorporates a full-cover backplate that bears the new Hellhound logo. The cutout on the backplate should help with heat dissipation.
The clock speeds for the Radeon RX 6700 XT Hellhound remain a mystery. Given the tier of the Hellhound series, it should come with lower operating clocks than PowerColor’s other higher tier models, such as the Liquid Devil, Red Devil or Red Dragon family of graphics cards.
The Radeon RX 6700 XT Hellhound may be using a custom PCB, as the PCIe power connector layout is different from AMD’s reference design. The vanilla Radeon RX 6700 XT utilizes one 6-pin and one 8-pin PCIe power connector. The Radeon RX 6700 XT Hellhound, on the other hand, resorts to two 8-pin PCIe power connectors, which also hints at a strong factory overclock.
The display outputs on the Radeon RX 6700 XT Hellhound fall in line with the reference design though. You get access to one HDMI 2.1 port and three DisplayPort 1.4a outputs with DSC support.
The Radeon RX 6700 XT will have its official launch on March 18, so we should know pricing for the Radeon RX 6700 XT Hellhound in the coming days. For reference, the Radeon RX 6700 XT will start at $479. Taking into account the amount of customization on the Radeon RX 6700 XT Hellhound, it’ll probably carry a small premium.
Our GPU pricing index tracks all the best graphics cards and pretty much everything else in our GPU benchmarks hierarchy. Yes, they’re still sold out at retail, which means the fastest way to buy one is to pay extreme pricing on eBay. Sadly, after a drop in Ethereum mining profitability the previous week, things mostly rebounded and the latest GPUs continue to sell for double to triple their official launch prices.
Our data tracks eBay sold listings, using code originally developed by Michael Driscoll, modified for our purposes. Michael deserves an extra shout out this week, as eBay changed its XML format and that necessitated changes to the code, which Michael helpfully provided. (I’m basically just beta testing his code and making minor tweaks to the charting at this point.) We’re looking at the past week of sales, but we also have a longer look at pricing history on the next page.
We’ll update this article on a weekly basis with the latest pricing trends, as long as it makes sense to do so (meaning, until prices return to normal — at which point we may shift to monthly updates). We’ve included all major GPUs of the past several generations and ran queries against eBay’s sold auctions, filtering for junk data. While we do our best to exclude listings where it’s an image of a card, or just the box, some of those may slip through, but we’re mostly interested in the overall trends.
Demand for new graphics cards remains high, fueled by pandemic-induced shortages and cryptocurrency mining. The trend over the past seven days is mostly flat — some cards have a slight upward slope, others are heading down, but things clearly aren’t improving any time soon.
Here are the charts for Mar 4–11, 2021. We’ve gathered data on 61 different GPUs, which we’ll organize into groups based on the GPU generations.
Ampere and RDNA2 Graphics Cards
The latest generation graphics cards from AMD and Nvidia see the most demand, from gamers and miners alike. More people want the ‘best’ option available, and shiny new hardware tends to meet that requirement more than dusty cards from yesteryear. With retail supplies still limited, everything continues to sell for far above the official launch prices.
Not much has changed here, and after a slight downward trend a week ago, several of the cards went back up in average eBay pricing. RTX 3090, 3080, and 3060 Ti show the steepest upward slopes, while the new RTX 3060 12GB and AMD RDNA2 cards are all headed in the right direction (down) — but they’re not getting back to MSRP any time soon.
Total numbers of each GPU sold (on eBay) continue to strongly favor Ampere over RDNA2, and the RTX 3060 12GB comes in as the second most popular card in this group. Interestingly, the RTX 3090 actually outsold the RTX 3080 for a change, as well as the RTX 3060 Ti. Also, the RTX 3090 alone sold as many units as the three RDNA2 cards combined — though retail sales (i.e., not eBay) could be very different.
Turing and RDNA1 Graphics Cards
The previous generation Turing and RDNA1 cards are now coming up on two, sometimes three years old. Prices remain severely inflated on these cards, with all of them above their original MSRPs. As above, most cards are showing generally flat pricing trends, with a few exceptions. It’s particularly surprising to see the GTX 1650 and 1650 Super — cards with 4GB VRAM that really aren’t much good for mining — selling at such high prices.
AMD’s RX 5700 XT and RX 5600 XT continue to be popular GPUs. The 5700 XT ranks as the highest selling AMD card, with a massive lead over the RX 5700. We have to assume there just weren’t that many vanilla 5700 cards produced, which makes sense considering both use the same Navi 10 die. Those that couldn’t make it as a full RX 5700 XT apparently ended up in the RX 5600 XT more often than not, at least in the later days of Navi 10.
Pascal, Vega, and Polaris Graphics Cards
Even with cards that mostly launched three or four years back (or more), prices remain high. That’s especially true of AMD’s 8GB Polaris cards, which are all still selling for double their launch prices. The Vega cards also carry a significant price premium, with the Radeon VII at nearly triple its launch price. Miners, right?
Only cards like the GTX 1050 and 1060 3GB come close to MSRP, and the 4GB Polaris cards like the 570 and 580 aren’t priced too badly. Well, until you remember they’re three or more years old. Even the relatively ancient RX 400-series cards with 8GB command a premium.
If you just need a graphics card that can run games at modest settings, meaning 1080p medium or high, these old GPUs are the best bet right now. You’re still looking at $200–$250, on eBay, but maybe you can find one for closer to $150 if you shop around.
Legacy GPUs and Titans
Last, we’ve lumped together some legacy GPUs along with Nvidia’s four most recent Titan cards — the latter mostly being to satisfy our own curiosity. Not many people are buying or selling Titans, which makes perfect sense considering the price to performance ratio. (We didn’t provide the charts, but we’ve also looked at Quadro cards, and most of those have tiny volumes as well.)
Nvidia’s Maxwell series might also suffice in a pinch for modest gaming requirements. The 970 can’t do 60 fps in the latest titles, but it’s still fast enough to be playable, and at around $200 it’s a reasonable choice to tide you over. The 960 also lands fairly close to the 1050 Ti in performance, and the GTX 950 comes close to the GTX 1050, for a lower price (but higher power requirements).
The GTX 980 and 980 Ti, along with AMD’s Fiji GPUs (Fury, Fury X, and Nano) perform reasonably well, in the 1060 to 1070 range for the most part. Hawaii cards like the 390 and 390X that have 8GB still end up relatively close to their launch prices, sadly, and gaming performance is pretty underwhelming.
Weekly Summary: On the Rebound
After a slight dip in prices last week, many GPUs are trending back up again. That’s not particularly surprising if you follow the cryptocurrency scene, as Bitcoin is back in the $56,000 range and Ethereum is around $1,775, both up over 15 percent in the past week. Will we get hit with another surge in crypto prices, leading to even more mining, or will this just be a temporary correction before things drop? That’s the question.
We mentioned this as a possibility last week, though we didn’t really hope it would happen. Mining and crypto prices still feel inflated, but then we thought the same thing when Bitcoin was at $10,000 so clearly we’re not the best judges of value. All that volatility ends up being perfect for market manipulation and day trading, apparently.
With additional reports of various shortages popping up — mobile chips from Qualcomm have joined the ‘party’ — most analysts now predict supply problems and inflated prices to last throughout 2021. Here’s hoping 2022 proves better.
Flip to the next page for a look at historical charts going back three months. (eBay doesn’t provide public data going back further, unfortunately.)
ASRock has revealed its lineup of custom-designed Radeon RX 6700 XT graphics cards. The family includes three boards that offer different levels of performance and features, and therefore will sell at three different price points. All of the cards feature enhanced voltage regulating modules (VRMs), require two eight-pin power connectors, and come with sophisticated cooling systems. So expect them to be faster than AMD’s reference designs.
ASRock’s lineup features the Radeon RX 6700 XT Phantom Gaming D 12GB OC (RX6700XT PGD 12GO), the Radeon RX 6700 XT Challenger Pro 12GB OC (RX6700XT CLP 12GO), and the Radeon RX 6700 XT Challenger D 12GB (RX6700XT CLD 12G). All of these cards are aimed at enthusiasts, but for some reason ASRock does not disclose their exact frequencies. So it’s impossible to make guesses about how exactly these boards stack up against each other and against AMD’s reference Radeon RX 6700 XT. That said, ASRock is not the only company to keep the specifications of its Radeon RX 6700 XT cards a secret.
ASRock’s Phantom Gaming is the company’s premium gaming brand, so it’s not surprising that the Radeon RX 6700 XT Phantom Gaming D 12GB OC is the company’s top-of-the-range Navi 22-powered product. The graphics card relies on an ASRock-designed black printed circuit board (PCB) featuring DrMOS power stage devices, 60A inductors, and solid-state capacitors to ensure very clean power is delivered to the GPU.
The card is equipped with a sophisticated cooling solution comprising two massive aluminum heatsinks, four heat pipes, three striped axial fans, a reinforced metal frame, and a metal backplate. Traditionally for higher-end products, the card is outfitted with addressable RGB LEDs (which can be disabled with a switch). The cooler seems to be at least 2.5 slots wide, so the Radeon RX 6700 XT Phantom Gaming will need to be installed into fairly roomy cases.
ASRock’s Radeon RX 6700 XT Challenger Pro 12GB and the Radeon RX 6700 XT Challenger D 12GB sit below the Phantom Gaming, yet they feature the same AG2005 revision 1.00 PCB as their higher-end counterpart. So from power delivery, components, and build quality standpoints, all three products are exactly the same (or at least very similar).
As the name suggests, the Challenger Pro is positioned above the Challenger D, which is why it is equipped with a larger triple-fan cooling system with three heat-pipes. By contrast, the Challenger D is slightly more compact and has two fans. Meanwhile, both boards are two slots wide, so they will fit into smaller cases.
ASRock has not revealed any MSRPs for its custom Radeon RX 6700 XT graphics cards, which is not particularly surprising given shortages of GPUs and components. Whatever the launch price, expect them to sell out faster than you can fat-finger them into a shopping cart.
TechPowerUp is one of the most highly cited graphics card review sources on the web, and we strive to keep our testing methods, game selection, and, most importantly, test bench up to date. Today, I am pleased to announce our new March 2021 VGA test system, which brings several firsts for TechPowerUp. This is our first graphics card test bed powered by an AMD CPU. We are using the Ryzen 7 5800X 8-core processor based on the “Zen 3” architecture. The new test setup fully supports the PCI-Express 4.0 x16 bus interface to maximize performance of the latest generation of graphics cards from both NVIDIA and AMD. The platform also enables PCI-SIG’s Resizable BAR feature, which allows the processor to see the whole video memory as a single addressable block and could potentially improve performance.
A new test system means completely retesting every single graphics card used in our performance graphs. It allows us to kick out some of the older graphics cards and game tests to make room for newer cards and games. It also lets us refresh our OS and testing tools, update games to their latest versions, and explore new game settings, such as real-time raytracing and newer APIs.
A VGA rebench is a monumental task for TechPowerUp. This time, I’m testing 26 graphics cards in 22 games at 3 resolutions, or 66 game tests per card, which works out to 1,716 benchmark runs in total. In addition, we have doubled our raytracing testing from two to four titles. We also made some changes to our power consumption testing, which is now more detailed and more in-depth than ever.
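The workload arithmetic above is easy to verify; this snippet simply restates the numbers from the paragraph:

```python
# Sanity check of the rebench workload described above.
cards, games, resolutions = 26, 22, 3
tests_per_card = games * resolutions   # 66 game tests per card
total_runs = cards * tests_per_card    # 1,716 benchmark runs in total
print(tests_per_card, total_runs)
```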
In this article, I’ll share some thoughts on what was changed and why, while giving you a first look at the performance numbers obtained on the new test system.
Hardware
Below are the hardware specifications of the new March 2021 VGA test system.
Windows 10 Professional 64-bit Version 20H2 (October 2020 Update)
Drivers:
AMD: 21.2.3 Beta
NVIDIA: 461.72 WHQL
The AMD Ryzen 7 5800X has emerged as the fastest processor we can recommend to gamers for play at any resolution. We could have gone with the 12-core Ryzen 9 5900X or even maxed out this platform with the 16-core 5950X, but neither would be faster at gaming, and both would be significantly more expensive. AMD certainly wants to sell you the more expensive (overpriced?) CPU, but the Ryzen 7 5800X is actually the fastest option because of its single-CCD architecture. Our goal with GPU test systems over the past decade has consistently been to use the fastest mainstream-desktop processor. Over the years, this meant a $300-something Core i7 K-series LGA115x chip making room for the $500 i9-9900K. The 5900X doesn’t sell for anywhere close to this mark, and we’d rather not use an overpriced processor just because we can. You’ll also notice that we skipped upgrading to the 10-core “Comet Lake” Core i9-10900K processor from the older i9-9900K because it offered only negligible gaming performance gains, especially considering the large overclock on the i9-9900K. The additional two cores do squat for nearly all gaming situations, which is the second reason besides pricing that had us decide against the Ryzen 9 5900X.
We continue using our trusted Thermaltake TOUGHRAM 16 GB dual-channel memory kit, which has served us well for many years. 32 GB is nowhere near necessary for gaming, and I didn’t want to suggest otherwise, especially to less experienced readers checking out the test system. We’re running the most desirable memory configuration for Zen 3 to reduce latencies inside the processor: Infinity Fabric at 2000 MHz, memory clocked at DDR4-4000, in 1:1 sync with the Infinity Fabric clock. Timings are at a standard CL19 configuration that’s easily found on affordable memory modules—spending extra for super-tight timings is usually overkill and not worth it for the added performance.
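The 1:1 sync mentioned above follows from DDR memory being double data rate: DDR4-4000 transfers 4000 MT/s on a 2000 MHz memory clock, which matches a 2000 MHz Infinity Fabric. This hypothetical helper just illustrates the arithmetic, nothing more:

```python
# Illustrative check of the FCLK:MCLK relationship described above.
# DDR4 is double data rate, so DDR4-4000 runs a 2000 MHz memory clock.

def is_coupled(ddr_rating_mts, fclk_mhz):
    mclk_mhz = ddr_rating_mts / 2   # actual memory clock from the DDR rating
    return mclk_mhz == fclk_mhz     # 1:1 sync avoids the async latency penalty

print(is_coupled(4000, 2000))       # DDR4-4000 with FCLK 2000 MHz: in sync
print(is_coupled(4400, 2000))       # DDR4-4400 would run out of sync
```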
The MSI B550-A PRO was an easy choice for a motherboard. We wanted a cost-effective motherboard for the Ryzen 7 5800X and don’t care at all about RGB or other bling. The board can handle the CPU and memory settings we wanted for this test bed, and the VRM barely gets warm. It also doesn’t come with any PCIe gymnastics—a simple PCI-Express 4.0 x16 slot wired to the CPU without any lane switches along the way. The slot is metal-reinforced and looks like it can take quite some abuse over time. Even though I admittedly swap cards hundreds of times each year, probably even 1000+ times, it has never been an issue—insertion force just gets a bit softer, which I actually find nice.
Software and Games
Windows 10 was updated to 20H2
The AMD graphics driver used for all testing is now 21.2.3 Beta
All NVIDIA cards use 461.72 WHQL
All existing games have been updated to their latest available version
The following titles were removed:
Anno 1800: old, not that popular, CPU limited
Assassin’s Creed Odyssey: old, DX11, replaced by Assassin’s Creed Valhalla
Hitman 2: old, replaced by Hitman 3
Project Cars 3: not very popular, DX11
Star Wars: Jedi Fallen Order: horrible EA Denuvo makes hardware changes a major pain, DX11 only, Unreal Engine 4, of which we have several other titles
Strange Brigade: old, not popular at all
The following titles were added:
Assassin’s Creed Valhalla
Cyberpunk 2077
Hitman 3
Star Wars Squadrons
Watch Dogs: Legion
I considered Horizon Zero Dawn, but rejected it because it uses the same game engine as Death Stranding. World of Warcraft or Call of Duty won’t be tested because of their always-online nature, which enforces game patches that mess with performance—at any time. Godfall is a bad game, Epic exclusive, and commercial flop.
The full list of games now consists of Assassin’s Creed Valhalla, Battlefield V, Borderlands 3, Civilization VI, Control, Cyberpunk 2077, Death Stranding, Detroit Become Human, Devil May Cry 5, Divinity Original Sin 2, DOOM Eternal, F1 2020, Far Cry 5, Gears 5, Hitman 3, Metro Exodus, Red Dead Redemption 2, Sekiro, Shadow of the Tomb Raider, Star Wars Squadrons, The Witcher 3, and Watch Dogs: Legion.
Raytracing
We previously tested raytracing using Metro Exodus and Control. For this round of retesting, I added Cyberpunk 2077 and Watch Dogs Legion. While Cyberpunk 2077 does not support raytracing on AMD, I still felt it was one of the most important titles to test raytracing with.
While Godfall and DIRT 5 support raytracing, too, neither has had sufficient commercial success to warrant inclusion in the test suite.
Power Consumption Testing
The power consumption testing changes have been live for a couple of reviews already, but I still wanted to detail them a bit more in this article.
After our first Big Navi reviews, I realized that something was odd about the power consumption testing method I’d been using for years without issue. It seemed the Radeon RX 6800 XT was just SO much more energy efficient than NVIDIA’s RTX 3080. It definitely is more efficient because of the 7 nm process and AMD’s monumental improvements to the architecture, but the lead just didn’t look right. After further investigation, I realized that the RX 6800 XT was getting CPU-bottlenecked in Metro: Last Light even at the higher resolutions, whereas the NVIDIA card ran without a bottleneck. This of course meant NVIDIA’s card consumed more power in this test because it could run faster.
The problem here is that I used the power consumption numbers from Metro for the “Performance per Watt” results under the assumption that the test loaded the card to the max. The underlying reason for the discrepancy is AMD’s higher DirectX 11 overhead, which only manifested itself enough to make a difference once AMD actually had cards able to compete in the high-end segment.
While our previous physical measurement setup was better than what most other reviewers use, I always wanted something with a higher sampling rate, better data recording, and a more flexible analysis pipeline. Previously, we recorded at 12 samples per second, but could only store minimum, maximum, and average. Starting and stopping the measurement process was a manual operation, too.
The new data acquisition system also uses professional lab equipment and collects data at 40 samples per second, which is four times faster than even NVIDIA’s PCAT. Every single data point is recorded digitally and stashed away for analysis. Just like before, all our graphics card power measurement is “card only”, not the “whole system” or “GPU chip only” (the number displayed in the AMD Radeon Settings control panel).
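The difference between the old and new logging approaches can be sketched as follows; the wattage figures and sample spacing are invented for illustration, not measured data:

```python
# Old vs. new power logging, conceptually. With only min/max/avg stored,
# the shape of the run is lost; keeping every sample preserves it.

samples = [210.0, 215.5, 330.2, 214.8, 212.1]   # watts, at 40 samples/second

# Old approach: only three aggregates survive the run.
old_summary = (min(samples), max(samples), sum(samples) / len(samples))

# New approach: every sample is kept as a (seconds, watts) point, so
# power-over-time charts and later analysis (spikes, efficiency) stay possible.
full_trace = [(i / 40, w) for i, w in enumerate(samples)]

print(old_summary)      # the brief 330 W excursion is invisible here as a shape
print(full_trace)       # ...but fully recoverable from the trace
```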
Having all data recorded means we can finally chart power consumption over time, which makes for a nice overview. Below is an example data set for the RTX 3080.
The “Performance per Watt” chart has been simplified to “Energy Efficiency” and is now based on the actual power and FPS achieved during our “Gaming” power consumption testing run (Cyberpunk 2077 at 1440p, see below).
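In other words, the new metric reduces to FPS divided by average power draw from the same run; the numbers below are made up for illustration, not our measured results:

```python
# Energy efficiency as described above: frames per second per watt,
# which works out to frames rendered per joule consumed.

def energy_efficiency(avg_fps, avg_power_w):
    return avg_fps / avg_power_w

# Hypothetical gaming-run figures (not real measurements).
print(round(energy_efficiency(95.0, 320.0), 3))
```

Because both inputs come from the same Cyberpunk 2077 run, a CPU bottleneck can no longer inflate one card's efficiency relative to another, which was the flaw in the old method.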
The individual power tests have also been refined:
“Idle” testing now measures at 1440p, whereas it previously used 1080p. This follows the increasing adoption of high-resolution monitors.
“Multi-monitor” is now 2560×1440 over DP + 1920×1080 over HDMI—to test how well power management works with mixed resolutions over mixed outputs.
“Video Playback” records power usage of a 4K 30 FPS video that’s encoded with H.264 AVC at a 64 Mbps bitrate—similar enough to most streaming services. I considered using something like madVR to further improve video quality, but rejected it because I felt it to be too niche.
“Gaming” power consumption is now using Cyberpunk 2077 at 1440p with Ultra settings—this definitely won’t be CPU bottlenecked. Raytracing is off, and we made sure to heat up the card properly before taking data. This is very important for all GPU benchmarking—in the first seconds, you will get unrealistic boost rates, and the lower temperature has the silicon operating at higher efficiency, which screws with the power consumption numbers.
“Maximum” uses Furmark at 1080p, which pushes all cards into its power limiter—another important data point.
As somewhat of a bonus, though I wasn’t sure how useful it would be, I added another run of Cyberpunk at 1080p, capped to 60 FPS, to simulate a “V-Sync” usage scenario. Running with V-Sync not only removes tearing, but also reduces the power consumption of the graphics card, which is perfect for slower single-player titles where you don’t need the highest FPS and would rather conserve some energy and have less heat dumped into your room. To clarify, we’re technically running a 60 FPS soft cap so that weaker cards that can’t hit 60 FPS (GTX 1650S and GTX 1660) won’t drop to 30 or 20 FPS as with hard V-Sync, but go as high as they’re able.
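A soft cap of this kind can be sketched as a simple frame pacer; this is only an illustration of the concept, not the game's actual limiter:

```python
# Minimal sketch of a 60 FPS *soft* cap: fast cards wait out the rest of
# the ~16.7 ms frame budget, slow cards simply run as fast as they can
# (unlike hard V-Sync, which would drop them to 30 or 20 FPS).

FRAME_BUDGET = 1.0 / 60              # seconds per frame at 60 FPS

def pace_frame(render_time_s):
    """Return how long to sleep after a frame; 0 if the card is too slow."""
    return max(0.0, FRAME_BUDGET - render_time_s)

print(round(pace_frame(0.010), 4))   # fast frame: sleep the remaining ~6.7 ms
print(pace_frame(0.025))             # slow frame: no sleep, runs below 60 FPS
```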
Last but not least, a “Spikes” measurement was added, which reports the highest 20 ms spike recorded in this whole test sequence. This spike usually appears at the start of Furmark, before the card’s power limiting circuitry can react to the new conditions. On RX 6900 XT, I measured well above 600 W, which can trigger the protections of certain power supplies, resulting in the machine suddenly turning off. This happened to me several times with a different PSU than the Seasonic, so it’s not a theoretical test.
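Conceptually, the “Spikes” metric is just a scan of the recorded trace for its highest short-duration excursion; the sketch below uses invented numbers, not our lab data:

```python
# Sketch of the "Spikes" measurement: find the highest reading in the
# recorded power trace. At 40 samples/second, each sample covers a
# window on the order of the 20 ms spikes described above.

def highest_spike(trace_w):
    """Highest single reading in a list of power samples (watts)."""
    return max(trace_w)

# Hypothetical trace around the start of Furmark (values are illustrative).
trace = [305.2, 310.7, 641.9, 318.4, 300.1]
print(highest_spike(trace))
```

A brief excursion like this barely moves the run's average, which is exactly why it only shows up once every sample is recorded, and why it can still trip a PSU's over-current protection.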
Radeon VII Fail
Since we’re running with Resizable BAR enabled, we also have to boot with UEFI instead of CSM. When it was time to retest the Radeon VII, I got no POST, and it seemed the card was dead. Since there’s plenty of drama around Radeon VII cards suddenly dying, I had already started looking for a replacement, but wanted to give it another chance in a second machine, where it worked perfectly fine. WTF?
After some googling, I found our article detailing the lack of UEFI support on the Radeon VII. So that was the problem: the card simply didn’t have the BIOS update AMD released after our article. Well, FML, the page with the BIOS update no longer exists on AMD’s website.
Really? Someone on their web team made the decision to just delete the pages that contain an important fix to get the product working, a product that’s not even two years old? (launched Feb 7 2019, page was removed no later than Nov 8 2020).
Luckily, I found the updated BIOS in our VGA BIOS collection, and the card is working perfectly now.
Performance results are on the next page. If you have more questions, please do let us know in the comments section of this article.
Sales of PCs increased sharply in 2020 as people needed new desktops and notebooks to work and learn from home. As a result, shipments of GPUs (graphics processing units) also increased compared to 2019, mostly because the majority of today’s CPUs (central processing units) come with integrated GPUs. But while many people were hoping to pick up one of the best graphics cards or best CPUs, availability was severely limited, particularly in the second half of the year. The surprising result is that, despite record demand for gaming PCs and hardware, sales of dedicated GPUs were not exactly exceptional in 2020. Data from Jon Peddie Research (JPR) confirms what we experienced, but let’s look at the details.
As PC Sales Increase, Intel’s GPUs Eat AMD’s and Nvidia’s Lunch
According to Gartner, PC shipments increased 10.7% year-over-year to 79.392 million units in Q4 2020. Quarter-over-quarter, PC sales increased by 11.2%. For the year, they totaled 274.147 million units, up 4.8% from 2019. JPR says that shipments of both integrated and discrete GPUs in the fourth quarter were up 20.5% quarter-over-quarter, as some chips are sold well before actual systems become available. That sounds good, but the actual GPU sales aren’t quite as impressive.
Since Intel remains the leading supplier of CPUs, it’s also the No. 1 supplier of GPUs. In Q4 2020, Intel actually managed to solidify its lead by increasing its market share to 69%. Meanwhile, the three GPU vendors posted mixed results in the fourth quarter compared to the previous quarter: AMD’s shipments were up 6.4% and Intel’s sales increased 33.2%, while Nvidia’s unit shipments decreased by a rather significant 7.3%.
As far as market share goes, Intel controlled 69% of the PC GPU market in Q4 2020, Nvidia’s share dropped to 17%, and AMD’s share fell to 15%. We’ve been talking about graphics card shortages since the Ampere launch back in September, but it’s good to have hard numbers.
Both Intel and AMD sell loads of CPUs with integrated graphics, and last year the companies released the quite successful 11th-Gen Core ‘Tiger Lake’ chips as well as Ryzen 4000-series ‘Renoir’ processors for laptops and compact desktops. As such, it isn’t surprising that the two companies increased GPU sales as shipments of notebooks boomed. Meanwhile, note that when counting CPU shipments, some of these chips may be sold to fill backlog or to sit in the inventory of PC makers.
Unlike AMD and Intel, Nvidia supplies only discrete GPUs and its unit shipments decreased because it suffered severe shortages of its desktop products in Q4. Despite the drop in unit sales and the deficit, Nvidia’s position as the leading discrete GPU supplier didn’t suffer. More on that later.
Sales of Desktop Graphics Cards Hit by Shortages, but ASPs Set Records
Unit sales of desktop discrete graphics cards were down 3.9% sequentially in Q4 2020. Jon Peddie Research reports that around 11 million add-in-boards (AIBs) were sold during the quarter.
At approximately 11 million units, graphics card sales in the fourth quarter look rather modest (below Q4 2019, Q4 2017, Q4 2016, and Q4 2015), but keeping in mind how significantly average selling prices of AIBs have increased recently, 11 million GPUs means a lot of money for AMD, Nvidia, and their AIB partners. Meanwhile, since it’s extremely hard to get a new graphics board these days, it’s rather evident that there is a great undersupply.
Overall, the market shipped around 41.5 million discrete graphics cards for desktop PCs in 2020, about 3 million more than in 2019 but still below sales in prior years. Jon Peddie Research reports that the AIB market reached $14.8 billion last year, which puts the average selling price of a graphics card at roughly $360.
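The arithmetic behind that average is straightforward; as a quick sanity check (not JPR’s own methodology), dividing the reported revenue by the reported unit count lands right where the article does:

```python
# JPR's $14.8B AIB revenue spread over roughly 41.5 million cards
revenue = 14.8e9
units = 41.5e6
asp = revenue / units
print(round(asp))  # 357, which the article rounds to ~$360
```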
In recent months we saw numerous reports and videos about cryptocurrency mining farms using hundreds or thousands of graphics AIBs to mine Ethereum. While there certainly are mining farms using loads of GPUs, it doesn’t appear that they substantially increased the total available market of video cards. Most probably, they added to the deficit of graphics cards on the market, but neither miners nor scalpers are the key reason for the shortages.
Nvidia has been leading the desktop discrete GPU market since the early 2000s, and in Q4 2020 it actually captured its highest unit market share ever. Last quarter the company controlled 83% of the shipments. By contrast, AMD’s share hit an all-time low of just 17%.
Throughout the fourth quarter, Nvidia complained about shortages of GPUs and other components. However, it still managed to ship over 9 million standalone graphics processors for desktops throughout the quarter, which is its best result in two years. We do not know the share of Nvidia’s Ampere GPUs in its shipments in Q4 2020, but previously the company indicated that it was draining its Turing inventory for a couple of quarters, so the GeForce RTX 30-series was probably sold in noticeable quantities in Q4.
AMD’s desktop discrete AIB market share dropped sharply in the fourth quarter of 2020, down six percentage points sequentially. The company commanded 17% of shipments and sold about 1.87 million standalone GPUs for desktops in Q4 2020, 780 thousand fewer than in Q3 2020, according to Jon Peddie Research. For AMD, this is the worst result in two years.
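Those unit figures line up with the market-share percentages; a quick back-of-the-envelope check against the roughly 11 million desktop AIBs shipped in the quarter:

```python
# 17% of ~11M desktop AIBs for AMD, 83% for Nvidia (JPR's Q4 2020 shares)
total_aibs = 11.0e6
amd = 0.17 * total_aibs     # about 1.87 million cards
nvidia = 0.83 * total_aibs  # about 9.13 million cards
print(round(amd / 1e6, 2), round(nvidia / 1e6, 2))  # 1.87 9.13
```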
During the last quarter, AMD launched its latest Radeon RX 6800/6900-series graphics cards based on the Navi 21 GPU that were sold out as soon as they reached store shelves. The company also complained about shortages of components. In addition to its desktop discrete GPUs, AMD also had to ramp up production of its Ryzen 5000-series processors as well as system-on-chips (SoCs) for the latest game consoles in Q4, which naturally decreased the number of pre-allocated wafers it could use for the Radeon products.
Nvidia Solidifies Position as Leading dGPU Supplier
Although Nvidia’s unit sales dropped by 7.3% quarter-over-quarter in Q4 2020, the company still managed to post massive gaming revenue gains and actually increased its discrete GPU market share (which includes graphics processors for both desktops and laptops) to 82%, based on data from Jon Peddie Research.
In recent years, Nvidia has outsold AMD at ratios between 7:3 and 4:1 in the discrete GPU market, which is a significant lead. Nvidia has also long dominated the standalone GPU market for laptops with its gaming parts. Historically, this market was small, but it has grown 7X in seven years, according to Nvidia. So far, the company has sold 50 million mobile GeForce graphics processors for gaming, which is a lot.
“Laptops right now, gaming laptops, is probably the fastest-growing gaming platform that is out there,” said Colette Kress, CFO of Nvidia, at the Morgan Stanley Technology, Media and Telecom Broker Conference (via SeekingAlpha). “It is up 7 times in just 7 years. Q4, for example, was our 12th consecutive quarter of double-digit year-over-year growth in our overall laptops. Our GeForce RTX 30 series laptops launch was one of our largest launches ever with more than 70-plus different devices. […] We have got 50 million GeForce laptop gamers at this time.”
Summary
Demand for PCs is booming, which helps AMD and Intel sell tens of millions of CPUs every quarter. Over 274 million systems were sold in 2020, which means these two companies supplied over 274 million client processors throughout the year, and most of these CPUs featured an integrated GPU. In contrast, the situation looks different when it comes to discrete GPU sales.
Traditionally, GPU sales increase when new GPUs and new games arrive, or at least stay at high levels. Both AMD and Nvidia started shipments of their new AIBs based on the latest RDNA2 and Ampere architectures in Q4 2020. Several new AAA games were released during the quarter as well, including Cyberpunk 2077, Assassin’s Creed Valhalla, Marvel’s Avengers, and Microsoft Flight Simulator. However, despite new hardware and game launches, actual discrete GPU shipments dropped in Q4 vs Q3, which was almost certainly caused by component shortages.
Nvidia sold over 9.13 million desktop discrete GPUs in Q4 2020, up slightly from the prior quarter, but shipments were constrained. With its Radeon RX 6800/6900-series graphics cards based on the Navi 21 GPU, AMD’s Radeon products turned out to be competitive against Nvidia’s in the lucrative enthusiast segment for the first time in years. Unfortunately, the company’s market share and unit shipments declined sequentially because of the shortages and because the company had to ramp up production of other products, which lowered its ability to produce enough Radeons for the market.
In general, discrete desktop GPU shipments in 2020 totaled approximately 41.5 million units and exceeded shipments of graphics cards in 2019. That’s likely due to AMD and Nvidia both selling out of previous generation cards, rather than significant numbers of the latest generation GPUs. Shortages constrained and continue to limit AIB sales, and it’s difficult to estimate just how high the actual demand for standalone desktop GPUs was in 2020. Looking forward, there’s still unmet demand, and the GPU and graphics card makers would need to produce plenty of products to keep up in 2021. Unfortunately, that’s likely not possible, as the shortages continue to plague the industry.