A few days ago AMD's new Radeon RX 6800 and RX 6800 XT graphics cards went on sale as reference models, sold under the brands that normally distribute graphics cards. Today, starting at 16:11, the custom models from those same partners go on sale, and we can see what each manufacturer offers on top of the reference design. To get an idea, the Red Dragon model of the PowerColor Radeon RX 6800 XT has already been leaked; we will see what the rest of the manufacturers have to offer.
The Radeon RX 6800 and 6800 XT have been released to compete with the new NVIDIA RTX 30 series, also launched recently, and the AMD Radeons have obtained results that place them on a par with the NVIDIA cards. The new Radeon RX 6800 has 60 Compute Units, a game clock of 1815 MHz and a 2105 MHz boost, 128 MB of Infinity Cache and 16 GB of GDDR6 memory, with a TDP of 250 W. Its bigger sibling, the Radeon RX 6800 XT, reaches 72 Compute Units with a 2015 MHz game clock and 2250 MHz boost; it also has 128 MB of Infinity Cache and 16 GB of GDDR6 memory, with a TDP of 300 W this time.
| GPU | Compute Units | Game Clock (MHz) | Boost Clock (MHz) | Infinity Cache | RAM | TDP (W) | Launch Price |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Radeon RX 6800 | 60 | 1815 | 2105 | 128 MB | 16 GB GDDR6 | 250 | $579 |
| Radeon RX 6800 XT | 72 | 2015 | 2250 | 128 MB | 16 GB GDDR6 | 300 | $649 |
Given what happened with the NVIDIA cards and their stock shortages, something that repeated itself with the Ryzen 5000 processors on launch day and again with the Radeon RX 6800 and 6800 XT reference models, you will have to watch the pages of the usual retailers in Spain very closely to get one of these custom AMD Radeon cards. Recall that the Radeon RX 6900 XT will go on sale on December 8, so we will have to wait a few days if we want to get that model.
The development studio IO Interactive is cooperating with Intel in the optimization of “Hitman 3”. The chip manufacturer’s software developers should help ensure that the game directly supports Intel’s upcoming graphics chips. Intel has entered into such partnerships earlier, for example for its own processor optimizations or smooth frame rates with the integrated GPUs.
With the action game “Hitman 3”, Intel apparently goes a step further: as IO Interactive explains, a patch after release will integrate ray tracing effects into the game. Intel will accelerate the calculation of the virtual rays in hardware with its gaming GPU Xe HPG. The developers have also built in Variable Rate Shading (VRS), which groups several pixels into blocks to reduce the amount of shading work needed for the color gradations and thus increase the frame rate.
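In DirectX 12, per-draw VRS boils down to a single call on the command list. The snippet below is a minimal sketch of that mechanism and is not taken from the Hitman 3 patch; the function name and the chosen 2x2 rate are purely illustrative.

```cpp
// Minimal per-draw Variable Rate Shading sketch for Direct3D 12.
// Assumes the device reports at least Tier 1 VRS support via
// D3D12_FEATURE_DATA_D3D12_OPTIONS6::VariableShadingRateTier.
#include <d3d12.h>
#include <wrl/client.h>
using Microsoft::WRL::ComPtr;

void DrawWithCoarseShading(const ComPtr<ID3D12GraphicsCommandList5>& cmdList)
{
    // Shade once per 2x2 pixel block for the following draws; geometry and
    // depth are still rasterized at full resolution, only pixel shading is coarser.
    cmdList->RSSetShadingRate(D3D12_SHADING_RATE_2X2, nullptr);
    // ... record draws that tolerate coarser shading (distant terrain, fog, etc.) ...

    // Restore full-rate shading for detail-critical passes such as UI and text.
    cmdList->RSSetShadingRate(D3D12_SHADING_RATE_1X1, nullptr);
}
```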
Ray tracing optimizations of this kind are common practice. Intel’s Xe-HPG GPU, which is manufactured at TSMC on a 7-nanometer process, is expected to appear in 2021. Cooperating with developer studios should ensure that showcase projects for the company’s own hardware are playable at launch. AMD (“Godfall” with Radeon ray tracing) and Nvidia (including “Cyberpunk 2077” with GeForce ray tracing) proceed in a similar way. Ultimately, however, the graphics cards of all three manufacturers should be able to display the ray tracing effects thanks to standardized APIs – possibly just less well optimized for a given architecture.
Further optimizations concern processors: IO Interactive wants to utilize eight computing cores and more, for example for more complex destruction physics in cities or larger crowds of NPCs. The first “Hitman” from 2016 was one of the first titles with DirectX 12 support, including multi-core utilization.
Nvidia’s RTX 3060 Ti has been highly rumored by now, to the point where we now know the card is in the works, with a rumored debut on December 2nd. To add even more information to the mix, we’ve spotted an RTX 3060 Ti benchmark run in Ashes of the Singularity, giving us our first inkling as to how the GPU will perform.
In the Ashes of the Singularity benchmark, the RTX 3060 Ti received a total score of 7900 points, running the “Crazy_1080P” preset and DX12 API. The 3060 Ti was able to maintain an average (GPU) frame rate of 95.3 FPS. For comparison with other current-generation GPUs, the RTX 3070 scores 9000 points, and the Radeon RX 6800 scores 9500 points.
But of course, we already knew it would be less powerful than the RTX 3070. What’s most interesting is how it compares to Nvidia’s previous-generation Turing architecture. The only GPU that can keep up with the 3060 Ti from Turing is the RTX 2080 Super, and it manages to pull just ahead of the 3060 Ti with 8400 points and an average frame rate of 101.6 FPS.
Note that Ashes is very much a CPU benchmark, but both GPUs were tested on a Core i7-8700K. There are other differences that may account for some of the change in performance, but things are still looking quite good.
For a 60 series card, this is certainly excellent performance and something we usually see with every new generation of Nvidia GPUs. For example, Nvidia’s Pascal-based GTX 1060 6GB was able to match the Maxwell-based GTX 980 in performance for half the price.
If Nvidia follows this same trend, mathematically the RTX 3060 Ti could land with a $349 MSRP. More likely, it will take over the 2060 Super’s $399 price point. Either way, this will be one of the best Ampere based cards for the price. You will get roughly 13% less performance than the RTX 3070, but it will cost 20-30% less.
Again, the price is purely speculation but at least we have some idea as to where the RTX 3060 Ti will land in rasterization performance. Hopefully, Nvidia will maintain an excellent price to performance ratio for the 3060 Ti, as is usual for Nvidia’s beloved 60 series cards.
The coronavirus has thrown the world economy into a tailspin, but according to industry soothsayer IC Insights, that hasn’t slowed down the semiconductor industry. Instead, chipmakers are enjoying booming business – the firm projects that sales will swell 13% for the entire industry in 2020, twice the original forecasts. That’s quite the bounceback from 2019, which found industry revenue shrinking by 15%.
The list is built off of the firm’s projections for the final 2020 revenue numbers, and given that we’re in the final stretch of the year, it should be close to the final numbers. The report also reminds us that AMD, Apple, and Nvidia all lack Intel’s sheer production volume – Intel generated twice the revenue of those three companies, combined.
Intel continues to post impressive revenue even as it faces an uncertain future due to its continuing manufacturing challenges. As expected, it should cling to its leading position in the market with $73.89 billion in IC sales this year. That’s an increase of four percent over 2019. Samsung comes in second, but there’s still quite the gap – its revenue trails Intel’s by 22%.
TSMC lands in third place, but this pure-play foundry has fast become the world’s foundry, and its 31% growth rate eclipses Intel’s. It isn’t surprising that this growth comes as the fruits of the company’s 7nm node, and the continuing ramp of 7nm production and the debut of its highly-anticipated 5nm process next year should hasten its rate of growth. Intel should generate ~63% more revenue than TSMC this year, but it’s rational to think that delta will shrink significantly next year.
The list includes companies that don’t make their own chips, too. The fast-rising AMD appears on the top 15 semiconductor sales leaders list for the first time, landing in fifteenth place with a 41% increase in revenue, enough to propel it onto the list from its previous position just outside the top 15.
Putting AMD’s projected $9.5 billion in IC sales next to Intel’s $73.9 billion is a stark reminder that AMD’s startling comeback truly has happened against all odds: Even after three years of explosive growth, Intel’s revenue is still 7.8 times larger. That also explains the market’s optimism for AMD; there’s plenty of room to steal share from Intel, and that appears to be moving along swimmingly. Things should accelerate for AMD this year, too, as its performant Ryzen 5000 processors now lead Intel’s, and its Radeon RX 6000 series GPUs have proven to be a formidable foe to Nvidia’s Ampere. Now all AMD needs is more supply from TSMC to hasten its rate of growth.
Interestingly, Apple, the lone producer on the list that doesn’t sell chips to the open market, lands in 13th place, beating AMD by ~5.5% with a 25% rate of growth. Now that Apple is in the desktop PC chip design game with the Apple M1 chips, we could see even larger growth next year.
In terms of yearly revenue growth, Nvidia, another fabless chip company, has the best showing with a projected 50% yearly increase in revenue, reaching the tenth spot on the 2020 projections list. That’s a lot of GeForce and data center graphics cards, but it’s important to note that while Nvidia’s growth is explosive, we aren’t sure if that includes its Mellanox acquisition. Also, Nvidia’s latest generation of graphics cards isn’t even fully on the market yet. Given the popularity of the Ampere series, we could see the company post even larger gains next year.
However, the stellar growth in the semiconductor industry is largely spurred by the pandemic, so the unanticipated demand has punished all of the companies with a series of rolling shortages. For now, the challenge is for those companies to carefully manage any production expansions to prevent sudden and massive oversupply conditions as the pandemic eventually recedes. An oversupply isn’t good for business or profits, so we can expect a slow-but-sure expansion of chip production, which could restrict potential short-term growth rates.
That also means that, despite the money the semiconductor firms are shoveling into their coffers at an astounding rate, it might not be easier to score a leading CPU or graphics card any time soon.
Following PowerColor’s official announcement of the Red Devil Radeon RX 6800 XT and RX 6800, VideoCardz has shared renders of what we can expect from the vendor’s Red Dragon series.
In regards to hierarchy, PowerColor’s Red Dragon-branded graphics cards are a notch below the Red Devil. Therefore, the offerings typically come with a more modest design and a factory overclock. As the renders show, the Red Dragon Radeon RX 6800 XT features less flamboyant aesthetics. There are zero traces of RGB lighting on the shroud, which will surely appease consumers that loathe having a Christmas tree inside their PCs.
Given Big Navi’s considerable power requirements, the Red Dragon series will still lean on a triple-fan cooling solution to remain cool. The small cooling fan in the center is in the company of two larger cooling fans – one on each side.
For the moment, the Red Dragon Radeon RX 6800 XT’s dimensions are unknown, but it looks just as long as the Red Devil iteration. While the latter occupies up to three PCI slots, the Red Dragon seems to adhere to a 2.5-slot design. The difference in thickness owes to the fact that the Red Dragon doesn’t come with an aggressive factory overclock compared to the Red Devil, and therefore, it can get away with a smaller cooler.
The Red Dragon Radeon RX 6800 XT also appears to come with a metal backplate and a dual-BIOS switch. In addition to the PCIe slot, the graphics card draws power from two 8-pin PCIe power connectors.
The display outputs on the Red Dragon Radeon RX 6800 XT differ from AMD’s reference design and the Red Devil. The latter two provide one HDMI 2.1 port, two DisplayPort 1.4 outputs, and a single USB Type-C port. On the Red Dragon, PowerColor decided to dispose of the USB Type-C port in favor of another DisplayPort 1.4 output, bringing the total to three.
AMD’s partners have the green light to launch their custom models tomorrow, so we’ll likely see both the Red Devil and Red Dragon invade the hardware shelves. We can only hope that the supply is better than the disappointment with the Radeon RX 6800 XT and RX 6800 reference edition.
The new graphics cards of AMD’s Radeon RX 6000 series and NVIDIA’s GeForce RTX 30 family are still on everyone’s lips days and weeks after their official market launch; both were finally able to significantly increase performance over the previous generations while still improving efficiency.
A hot topic in the Hardwareluxx community right now is clearly their availability, because the great interest in the new gaming graphics cards, and of course the ongoing worldwide corona pandemic, ensure low supply and high prices. While a street price of around 500 euros was originally targeted for the cheapest NVIDIA GeForce RTX 3070 models, you now have to put well over 600 euros on the virtual counter – if you are lucky and a model is actually available, of course.
Our forum moderator “ralle_h” now wants to help interested buyers and significantly increase their chances of getting a new graphics card. To that end, he has started a small community project in which numerous online shops and their APIs are automatically checked for the availability of new RTX and RX graphics cards. If sufficient stock is detected, it is announced immediately in a corresponding forum post so that interested users can strike.
A subscription with automatic email notification for new posts can be worthwhile. To keep the number of notifications as low as possible, we have closed the thread for comments. Discussions can of course take place in the corresponding availability threads.
Another little tip from the developer: since many shops deliver their product pages from a cache to save traffic and speed up the website, it can happen that the shop API already marks a certain article as “available” while the shop frontend still shows it as “not available” and an order is not (yet) possible. In some shops it can take up to 40 minutes until the product pages are updated.
ralle_h therefore recommends either refreshing the product pages manually at regular intervals or, depending on the online shop, placing the article on a wish list or watch list in order to be automatically informed by the shop once it becomes available. In addition, some buy bots are reportedly at work, snapping up graphics cards the moment they become available. All information without guarantee, of course.
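The basic principle behind such an availability checker is simple: poll each shop's product endpoint at a fixed interval and raise a notification as soon as the stock flag flips. The sketch below illustrates that idea in C++ with libcurl; the URL and the "available" field are placeholders, not the shops or APIs the Hardwareluxx project actually queries.

```cpp
// Hypothetical stock poller: the endpoint and JSON field are made up for
// illustration and do not correspond to any real shop API.
#include <curl/curl.h>
#include <chrono>
#include <iostream>
#include <string>
#include <thread>

static size_t append_to_string(char *ptr, size_t size, size_t nmemb, void *userdata)
{
    static_cast<std::string *>(userdata)->append(ptr, size * nmemb);
    return size * nmemb;
}

// Fetch the product endpoint once and report whether the stock flag is set.
static bool product_in_stock(const std::string &url)
{
    std::string body;
    CURL *handle = curl_easy_init();
    if (!handle)
        return false;
    curl_easy_setopt(handle, CURLOPT_URL, url.c_str());
    curl_easy_setopt(handle, CURLOPT_WRITEFUNCTION, append_to_string);
    curl_easy_setopt(handle, CURLOPT_WRITEDATA, &body);
    CURLcode rc = curl_easy_perform(handle);
    curl_easy_cleanup(handle);
    // Crude substring check; a real checker would parse the JSON properly.
    return rc == CURLE_OK && body.find("\"available\":true") != std::string::npos;
}

int main()
{
    curl_global_init(CURL_GLOBAL_DEFAULT);
    const std::string url = "https://shop.example/api/products/rtx-3080"; // placeholder
    while (!product_in_stock(url)) {
        std::this_thread::sleep_for(std::chrono::seconds(60)); // poll interval
    }
    std::cout << "In stock - time to post in the availability thread!\n";
    curl_global_cleanup();
    return 0;
}
```

Note the caching caveat mentioned above: even when an API reports stock, the shop frontend may lag behind, so a notification is a prompt to check, not a guarantee of a successful order.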
A bit of luck therefore remains necessary when buying an AMD Radeon RX 6800 or an NVIDIA GeForce RTX 30 card.
We wish you a lot of fun shopping and good luck with your order! More availability threads of this kind for AMD’s Ryzen processors are planned for the coming days.
It seems that AMD is preparing a new graphics card to expand its Radeon RX 6000 family, formed for the moment by the RX 6800, RX 6800 XT and the RX 6900 XT that will hit the market in the next few days.
The leak comes from Patrick Schur, a leaker with a fairly accurate record of past predictions. In it, he states that the upcoming AMD Radeon RX 6700 XT will arrive with a TGP of around 186-221 W alongside 12 GB of GDDR6 memory. It will predictably be joined by the AMD Radeon RX 6700, keeping the 12 GB of GDDR6 with a TGP between 146 and 156 W, or even lower.
The chips, based on RDNA 2, will be named Navi 22 XT for the GPU that will bring the RX 6700 XT to life and Navi 22 XTL for the GPU in the supposed RX 6700. These cards are expected to occupy the mid-range, with more affordable prices for the vast majority of buyers.
Be that as it may, as with all these leaks and rumors, you have to take the data with a grain of salt and wait for the official launch by AMD to see what final specifications they offer and where they sit in the company’s current graphics range. What there is no doubt about is that it would be very strange for AMD not to launch a model to fight in this segment, as it already hinted when it showed some designs for the RX 6000 series.
After a few months of development, the ray tracing extensions for the Vulkan API are finally ready. AMD, Intel, Nvidia and many other companies are ready to support them, also because they are easily adopted by those who already have experience with Microsoft’s DirectX Raytracing in DirectX 12.
By Manolo De Agostini, published 24 November 2020 at 12:21 in the Video Cards channel.
The Khronos Group, the consortium that oversees the development of numerous industry APIs, has released the final version of the Vulkan API extensions dedicated to ray tracing, after having published the provisional draft last March. Developers therefore now have a way to develop ray-traced games on the Vulkan API, exploiting the hardware acceleration of GPUs both with dedicated units (RT Cores for Nvidia, Ray Accelerators for AMD) and through shader units.
According to the consortium, “Vulkan Ray Tracing will be familiar to anyone who has used DirectX Raytracing (DXR) within DirectX 12”, but it also introduces other features, such as the ability to balance the load using the host CPU. “Although ray tracing will primarily be applied to desktop systems, these extensions have been designed to enable and encourage the spread of ray tracing on mobile devices as well.”
The consortium has received and integrated feedback from hardware manufacturers and software developers in recent months, and in the coming weeks several components of the ecosystem will be updated, culminating in the publication of a new Vulkan SDK (version 1.2.162.0 or later) with ray tracing support in mid-December.
In the past few hours both AMD and Nvidia have released drivers with support for Vulkan ray tracing extensions. Intel is also working to support Vulkan Ray Tracing with future Xe architecture-based gaming GPUs.
It is worth remembering that the Vulkan games with ray tracing that have already arrived on the market, such as Wolfenstein: Youngblood, are based on VKRay, a set of extensions developed by Nvidia to offer ray tracing before the main Vulkan API was ready. In the titles based on VKRay, ray tracing cannot be enabled on AMD Radeon GPUs, even though, according to Nvidia, there are no technical impediments, only AMD’s lack of support for its vendor extension.
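For developers, moving to the cross-vendor path mostly means requesting the new KHR extensions at device creation instead of Nvidia's vendor extension. Below is a minimal sketch of the device-extension list; the extension names come from the released Khronos specification, while the surrounding device-creation code is assumed to exist elsewhere.

```cpp
// Device extensions for cross-vendor ray tracing on a Vulkan 1.2 device.
// Feature structs (VkPhysicalDeviceRayTracingPipelineFeaturesKHR, etc.) still
// need to be chained into VkDeviceCreateInfo; only the names are shown here.
#include <vulkan/vulkan.h>
#include <vector>

std::vector<const char *> RayTracingDeviceExtensions()
{
    return {
        VK_KHR_ACCELERATION_STRUCTURE_EXTENSION_NAME,   // build and manage BVHs
        VK_KHR_RAY_TRACING_PIPELINE_EXTENSION_NAME,     // ray-gen/closest-hit/miss pipelines
        VK_KHR_RAY_QUERY_EXTENSION_NAME,                // inline ray queries from any shader stage
        VK_KHR_DEFERRED_HOST_OPERATIONS_EXTENSION_NAME, // required dependency of the pipeline extension
    };
}
```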
Just over a week remains until the release of the Nvidia GeForce RTX 3060 Ti graphics card, and we already know almost everything about it. The specification, which is very close to that of the GeForce RTX 3070, clearly suggests what to expect from the upcoming model’s performance. Last week, official Nvidia slides leaked comparing the GeForce RTX 3060 Ti with the RTX 2060 SUPER and RTX 2080 SUPER, and the new card came out on top in both games and applications. Now we have the opportunity to look at results from the popular Ashes of the Singularity (AotS) and Geekbench benchmarks. How does the Nvidia GeForce RTX 3060 Ti fare?
Nvidia GeForce RTX 3060 Ti, which has virtually no secrets from us, has been tested in Ashes of the Singularity and Geekbench. As expected, the card ranks just below the RTX 3070.
It’s hard to talk about any surprise. The Nvidia GeForce RTX 3060 Ti fares slightly worse than the recently released GeForce RTX 3070. It was tested in the Ashes of the Singularity benchmark at 1080p with the Crazy preset, obtaining a score of 7900 points, 1100 points less than the RTX 3070 and 1600 points less than the Radeon RX 6800. The cards were paired with an Intel Core i7-8700K processor. The RTX 3060 Ti also appeared in the Geekbench database, reaching 123,279 points (OpenCL), 133,974 (CUDA), 107,979 (Vulkan) and 326,720 (OpenCL, Geekbench 4). The GeForce worked in conjunction with an AMD Ryzen 7 5800X; the exception is the last result, which was achieved on an Intel Core i9-10900K.
The specification, revealed in rumors and various leaks, can now be considered official, as it has been confirmed by Manli. The GeForce RTX 3060 Ti will be based on the Ampere GA104-200 core with 4864 CUDA cores (1024 fewer than the RTX 3070) and will be equipped with 8 GB of GDDR6 memory on a 256-bit bus, with an effective clock of 14000 MHz and 448 GB/s of bandwidth. The card is to run at 1410 MHz (base) and 1665 MHz (boost). The TGP of non-reference models will be 200 W (in the case of the Founders Edition it will be 180 W). The GeForce RTX 3060 Ti will launch on December 2, with a suggested retail price starting at 399 US dollars (approx. 1900 PLN).
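As a quick sanity check, the quoted bandwidth follows directly from the effective data rate and the bus width; a minimal calculation:

```cpp
// Memory bandwidth from the quoted GDDR6 figures: 14 Gbps effective per pin
// on a 256-bit bus, divided by 8 to convert bits to bytes.
#include <cstdio>

int main()
{
    const double data_rate_gbps = 14.0; // effective data rate per pin
    const int bus_width_bits = 256;     // memory interface width
    const double bandwidth_gbs = data_rate_gbps * bus_width_bits / 8.0;
    std::printf("Bandwidth: %.0f GB/s\n", bandwidth_gbs); // 448 GB/s
    return 0;
}
```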
The latest Radeons from the RX 6000 series have had their official premiere, but what good is that if so many players and enthusiasts are still unable to buy them? The current availability situation for new graphics cards is not great, so the only thing we can do is be patient, calmly follow the news on this front, and admire the next non-reference models. It’s a strange feeling to look at cards you probably won’t be able to buy right now, but at least you can pick your favorite for the future. The upcoming PowerColor designs from the Red Dragon series, including the RX 6800 XT and RX 6800 models, promise to be very interesting. They should not cost a fortune, and they should offer acceptable noise levels.
The Red Dragon abandons the flashy RGB styling in favor of a simple black shroud, but most importantly, we still get a solid construction with three fans. The cooler will take up slightly more than two slots in the case, and the heatsink itself clearly extends beyond the length of the PCB. The card is equipped with dual 8-pin power connectors and has a custom video-output configuration: on the bracket we find three DisplayPort ports and one HDMI.
As reported by the source, in addition to the Red Devil and Red Dragon models, PowerColor will also release Radeon RX 6000 cards in an unnamed series of custom designs with cooling similar to the Red Dragon. Special variants are also planned, such as the Red Devil LC or Red Devil Limited Edition. All custom AMD Radeon RX 6800 XT and RX 6800 cards will officially launch tomorrow, November 25, and probably then we will learn something about the clock speeds of the mentioned cards. Unfortunately, we do not expect all interested parties to get their dream card any time soon.
The AMD Radeon RX 6800 XT and Radeon RX 6800 have arrived, joining the ranks of the best graphics cards and making some headway into the top positions in our GPU benchmarks hierarchy. Nvidia has had a virtual stranglehold on the GPU market for cards priced $500 or more, going back to at least the GTX 700-series in 2013. That’s left AMD to mostly compete in the mid-range and budget GPU markets. “No longer!” says Team Red.
Big Navi, aka Navi 21, aka RDNA2, has arrived, bringing some impressive performance gains. AMD also finally joins the ray tracing fray, both with its PC desktop graphics cards and the next-gen PlayStation 5 and Xbox Series X consoles. How do AMD’s latest GPUs stack up to the competition, and could this be AMD’s GPU equivalent of the Ryzen debut of 2017? That’s what we’re here to find out.
We’ve previously discussed many aspects of today’s launch, including details of the RDNA2 architecture, the GPU specifications, features, and more. Now, it’s time to take all the theoretical aspects and lay some rubber on the track. If you want to know more about the finer details of RDNA2, we’ll cover that as well. If you’re just here for the benchmarks, skip down a few screens because, hell yeah, do we have some benchmarks. We’ve got our standard testbed using an ‘ancient’ Core i9-9900K CPU, but we wanted something a bit more for the fastest graphics cards on the planet. We’ve added more benchmarks on both Core i9-10900K and Ryzen 9 5900X. With the arrival of Zen 3, running AMD GPUs with AMD CPUs finally means no compromises.
Update: We’ve added additional results to the CPU scaling charts. This review was originally published on November 18, 2020, but we’ll continue to update related details as needed.
AMD Radeon RX 6800 Series: Specifications and Architecture
Let’s start with a quick look at the specifications, which have been mostly known for at least a month. We’ve also included the previous generation RX 5700 XT as a reference point.
| Graphics Card | RX 6800 XT | RX 6800 | RX 5700 XT |
| --- | --- | --- | --- |
| GPU | Navi 21 (XT) | Navi 21 (XL) | Navi 10 (XT) |
| Process (nm) | 7 | 7 | 7 |
| Transistors (billion) | 26.8 | 26.8 | 10.3 |
| Die size (mm^2) | 519 | 519 | 251 |
| CUs | 72 | 60 | 40 |
| GPU cores | 4608 | 3840 | 2560 |
| Ray Accelerators | 72 | 60 | N/A |
| Game Clock (MHz) | 2015 | 1815 | 1755 |
| Boost Clock (MHz) | 2250 | 2105 | 1905 |
| VRAM Speed (MT/s) | 16000 | 16000 | 14000 |
| VRAM (GB) | 16 | 16 | 8 |
| Bus width | 256 | 256 | 256 |
| Infinity Cache (MB) | 128 | 128 | N/A |
| ROPs | 128 | 96 | 64 |
| TMUs | 288 | 240 | 160 |
| TFLOPS (boost) | 20.7 | 16.2 | 9.7 |
| Bandwidth (GB/s) | 512 | 512 | 448 |
| TBP (watts) | 300 | 250 | 225 |
| Launch Date | Nov. 2020 | Nov. 2020 | Jul. 2019 |
| Launch Price | $649 | $579 | $399 |
When AMD fans started talking about “Big Navi” as far back as last year, this is pretty much what they hoped to see. AMD has just about doubled down on every important aspect of its architecture, plus adding in a huge amount of L3 cache and Ray Accelerators to handle ray tracing ray/triangle intersection calculations. Clock speeds are also higher, and — spoiler alert! — the 6800 series cards actually exceed the Game Clock and can even go past the Boost Clock in some cases. Memory capacity has doubled, ROPs have doubled, TFLOPS has more than doubled, and the die size is also more than double.
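The TFLOPS figures in the table above follow directly from the shader counts and boost clocks, at two FP32 operations per core per clock; a quick verification:

```cpp
// FP32 throughput check for the table above: cores x boost clock x 2 ops (FMA).
#include <cstdio>

static double TeraFlops(int cores, int boost_mhz)
{
    return cores * static_cast<double>(boost_mhz) * 1e6 * 2.0 / 1e12;
}

int main()
{
    std::printf("RX 6800 XT: %.2f TFLOPS\n", TeraFlops(4608, 2250)); // ~20.7
    std::printf("RX 6800:    %.2f TFLOPS\n", TeraFlops(3840, 2105)); // ~16.2
    std::printf("RX 5700 XT: %.2f TFLOPS\n", TeraFlops(2560, 1905)); // ~9.7
    return 0;
}
```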
Support for ray tracing is probably the most visible new feature, but RDNA2 also supports Variable Rate Shading (VRS), mesh shaders, and everything else that’s part of the DirectX 12 Ultimate spec. There are other tweaks to the architecture, like support for 8K AV1 decode and 8K HEVC encode. But a lot of the underlying changes don’t show up as an easily digestible number.
For example, AMD says it reworked much of the architecture to focus on a high speed design. That’s where the greater than 2GHz clocks come from, but those aren’t just fantasy numbers. Playing around with overclocking a bit — and the software to do this is still missing, so we had to stick with AMD’s built-in overclocking tools — we actually hit clocks of over 2.5GHz. Yeah. I saw the supposed leaks before the launch claiming 2.4GHz and 2.5GHz and thought, “There’s no way.” I was wrong.
AMD’s cache hierarchy is arguably one of the biggest changes. Besides a shared 1MB L1 cache for each cluster of 20 dual-CUs, there’s a 4MB L2 cache and a whopping 128MB L3 cache that AMD calls the Infinity Cache. It also ties into the Infinity Fabric, but fundamentally, it helps optimize memory access latency and improve the effective bandwidth. Thanks to the 128MB cache, the framebuffer mostly ends up being cached, which drastically cuts down memory access. AMD says the effective bandwidth of the GDDR6 memory ends up being 119 percent higher than what the raw bandwidth would suggest.
The large cache also helps to reduce power consumption, which all ties into AMD’s targeted 50 percent performance per Watt improvements. This doesn’t mean power requirements stayed the same — RX 6800 has a slightly higher TBP (Total Board Power) than the RX 5700 XT, and the 6800 XT and upcoming 6900 XT are back at 300W (like the Vega 64). However, AMD still comes in at a lower power level than Nvidia’s competing GPUs, which is a bit of a change of pace from previous generation architectures.
It’s not entirely clear how AMD’s Ray Accelerators stack up against Nvidia’s RT cores. Much like Nvidia, AMD is putting one Ray Accelerator into each CU. (It seems we’re missing an acronym. Should we call the ray accelerators RA? The sun god, casting down rays! Sorry, been up all night, getting a bit loopy here…) The thing is, Nvidia is on its second-gen RT cores that are supposed to be around 1.7X as fast as its first-gen RT cores. AMD’s Ray Accelerators are supposedly 10 times as fast as doing the RT calculations via shader hardware, which is similar to what Nvidia said with its Turing RT cores. In practice, it looks as though Nvidia will maintain a lead in ray tracing performance.
That doesn’t even get into the whole DLSS and Tensor core discussion. AMD’s RDNA2 chips can do FP16 via shaders, but they’re still a far cry from the computational throughput of Tensor cores. That may or may not matter, as perhaps the FP16 throughput is enough for real-time inference to do something akin to DLSS. AMD has talked about FidelityFX Super Resolution, which it’s working on with Microsoft, but it’s not available yet, and of course, no games are shipping with it yet either. Meanwhile, DLSS is in a couple of dozen games now, and it’s also in Unreal Engine, which means uptake of DLSS could explode over the coming year.
Anyway, that’s enough of the architectural talk for now. Let’s meet the actual cards.
Meet the Radeon RX 6800 XT and RX 6800 Reference Cards
We’ve already posted an unboxing of the RX 6800 cards, which you can see in the above video. The design is pretty traditional, building on previous cards like the Radeon VII. There’s no blower this round, which is probably for the best if you’re worried about noise levels. Otherwise, you get a similar industrial design and aesthetic with both the reference 6800 and 6800 XT. The only real change is that the 6800 XT has a fatter heatsink and weighs 115g more, which helps it cope with the higher TBP.
Both cards are triple fan designs, using custom 77mm fans that have an integrated rim. We saw the same style of fan on many of the RTX 30-series GPUs, and it looks like the engineers have discovered a better way to direct airflow. Both cards have a Radeon logo that lights up in red, but it looks like the 6800 XT might have an RGB logo — it’s not exposed in software yet, but maybe that will come.
Otherwise, you get dual 8-pin PEG power connections, which might seem a bit overkill on the 6800 — it’s a 250W card, after all, why should it need the potential for up to 375W of power? But we’ll get into the power stuff later. If you’re into collecting hardware boxes, the 6800 XT box is also larger and a bit nicer, but there’s no real benefit otherwise.
The one potential concern with AMD’s reference design is the video ports. There are two DisplayPort outputs, a single HDMI 2.1 connector, and a USB Type-C port. It’s possible to use four displays with the cards, but the most popular gaming displays still use DisplayPort, and very few options exist for the Type-C connector. There also aren’t any HDMI 2.1 monitors that I’m aware of, unless you want to use a TV for your monitor. But those will eventually come. Anyway, if you want a different port selection, keep an eye on the third party cards, as I’m sure they’ll cover other configurations.
And now, on to the benchmarks.
Radeon RX 6800 Series Test Systems
It seems AMD is having a microprocessor renaissance of sorts right now. First, it has Zen 3 coming out and basically demolishing Intel in every meaningful way in the CPU realm. Sure, Intel can compete on a per-core basis … but only up to 10-core chips without moving into HEDT territory. The new RX 6800 cards might just be the equivalent of AMD’s Ryzen CPU launch. This time, AMD isn’t making any apologies. It intends to go up against Nvidia’s best. And of course, if we’re going to test the best GPUs, maybe we ought to look at the best CPUs as well?
For this launch, we have three test systems. First is our old and reliable Core i9-9900K setup, which we still use as the baseline and for power testing. We’re adding both AMD Ryzen 9 5900X and Intel Core i9-10900K builds to flesh things out. In retrospect, trying to do two new testbeds may have been a bit too ambitious, as we have to test each GPU on each testbed. We had to cut a bunch of previous-gen cards from our testing, and the hardware varies a bit among the PCs.
For the AMD build, we’ve got an MSI X570 Godlike motherboard, which is one of only a handful that supports AMD’s new Smart Memory Access technology. Patriot supplied us with two kits of single bank DDR4-4000 memory, which means we have 4x8GB instead of our normal 2x16GB configuration. We also have the Patriot Viper VP4100 2TB SSD holding all of our games. Remember when 1TB used to feel like a huge amount of SSD storage? And then Call of Duty: Modern Warfare (2019) happened, sucking down over 200GB. Which is why we need 2TB drives.
Meanwhile, the Intel LGA1200 PC has an Asus Maximus XII Extreme motherboard, 2x16GB DDR4-3600 HyperX memory, and a 2TB XPG SX8200 Pro SSD. (I’m not sure if it’s the old ‘fast’ version or the revised ‘slow’ variant, but it shouldn’t matter for these GPU tests.) Full specs are in the table below.
Anyway, the slightly slower RAM might be a bit of a handicap on the Intel PCs, but this isn’t a CPU review — we just wanted to use the two fastest CPUs, and time constraints and lack of duplicate hardware prevented us from going full apples-to-apples. The internal comparisons among GPUs on each testbed will still be consistent. Frankly, there’s not a huge difference between the CPUs when it comes to gaming performance, especially at 1440p and 4K.
Besides the testbeds, I’ve also got a bunch of additional gaming tests. First is the suite of nine games we’ve used on recent GPU reviews like the RTX 30-series launch. We’ve done some ‘bonus’ tests on each of the Founders Edition reviews, but we’re shifting gears this round. We’re adding four new/recent games that will be tested on each of the CPU testbeds: Assassin’s Creed Valhalla, Dirt 5, Horizon Zero Dawn, and Watch Dogs Legion — and we’ve enabled DirectX Raytracing (DXR) on Dirt 5 and Watch Dogs Legion.
There are some definite caveats, however. First, the beta DXR support in Dirt 5 doesn’t look all that different from the regular mode, and it’s an AMD promoted game. Coincidence? Maybe, but it’s probably more likely that AMD is working with Codemasters to ensure it runs suitably on the RX 6800 cards. The other problem is probably just a bug, but AMD’s RX 6800 cards seem to render the reflections in Watch Dogs Legion with a bit less fidelity.
Besides the above, we have a third suite of ray tracing tests: nine games (or benchmarks of future games) and 3DMark Port Royal. Of note, Wolfenstein Youngblood with ray tracing (which uses Nvidia’s pre-VulkanRT extensions) wouldn’t work on the AMD cards, and neither would the Bright Memory Infinite benchmark. Also, Crysis Remastered had some rendering errors with ray tracing enabled (on the nanosuits). Again, that’s a known bug.
Radeon RX 6800 Gaming Performance
We’ve retested all of the RTX 30-series cards on our Core i9-9900K testbed … but we didn’t have time to retest the RTX 20-series or RX 5700 series GPUs. The system has been updated with the latest 457.30 Nvidia drivers and AMD’s pre-launch RX 6800 drivers, as well as Windows 10 20H2 (the October 2020 update to Windows). It looks like the combination of drivers and/or Windows updates may have dropped performance by about 1-2 percent overall, though there are other variables in play. Anyway, the older GPUs are included mostly as a point of reference.
We have 1080p, 1440p, and 4K ultra results for each of the games, as well as the combined average of the nine titles. We’re going to dispense with the commentary for individual games right now (because of a time crunch), but we’ll discuss the overall trends below.
[Benchmark chart galleries: 9 Game Average, Borderlands 3, The Division 2, Far Cry 5, Final Fantasy XIV, Forza Horizon 4, Metro Exodus, Red Dead Redemption 2, Shadow of the Tomb Raider, and Strange Brigade.]
AMD’s new GPUs definitely make a good showing in traditional rasterization games. At 4K, Nvidia’s 3080 leads the 6800 XT by three percent, but it’s not a clean sweep — AMD comes out on top in Borderlands 3, Far Cry 5, and Forza Horizon 4. Meanwhile, Nvidia gets modest wins in The Division 2, Final Fantasy XIV, Metro Exodus, Red Dead Redemption 2, Shadow of the Tomb Raider, and the largest lead is in Strange Brigade. But that’s only at the highest resolution, where AMD’s Infinity Cache may not be quite as effective.
Dropping to 1440p, the RTX 3080 and 6800 XT are effectively tied — again, AMD wins several games, Nvidia wins others, but the average performance is the same. At 1080p, AMD even pulls ahead by two percent overall. Not that we really expect most gamers forking over $650 or $700 or more on a graphics card to stick with a 1080p display, unless it’s a 240Hz or 360Hz model.
Flipping over to the vanilla RX 6800 and the RTX 3070, AMD does even better. On average, the RX 6800 leads by 11 percent at 4K ultra, nine percent at 1440p ultra, and seven percent at 1080p ultra. Here the 8GB of GDDR6 memory on the RTX 3070 simply can’t keep pace with the 16GB of higher clocked memory — and the Infinity Cache — that AMD brings to the party. The best Nvidia can do is one or two minor wins (e.g., Far Cry 5 at 1080p, where the GPUs are more CPU limited) and slightly higher minimum fps in FFXIV and Strange Brigade.
But as good as the RX 6800 looks against the RTX 3070, we prefer the RX 6800 XT from AMD. It only costs $70 more, which is basically the cost of one game and a fast food lunch. Or put another way, it’s 12 percent more money, for 12 percent more performance at 1080p, 14 percent more performance at 1440p, and 16 percent better 4K performance. You also get AMD’s Rage Mode pseudo-overclocking (really just increased power limits).
Radeon RX 6800 CPU Scaling and Overclocking
Our traditional gaming suite is due for retirement, but we didn’t want to toss it out at the same time as a major GPU launch — it might look suspicious. We didn’t have time to do a full suite of CPU scaling tests, but we did run 13 games on the five most recent high-end/extreme GPUs on our three test PCs. Here’s the next series of charts, again with commentary below.
[CPU scaling chart galleries: 13-Game Average, Assassin’s Creed Valhalla, Borderlands 3, The Division 2, Dirt 5, Far Cry 5, Final Fantasy XIV, Forza Horizon 4, Horizon Zero Dawn, Metro Exodus, Red Dead Redemption 2, Shadow of the Tomb Raider, Strange Brigade, and Watch Dogs Legion.]
These charts are a bit busy, perhaps, with five GPUs and three CPUs each, plus overclocking. Take your time. We won’t judge. Nine of the games are from the existing suite, and the trends noted earlier basically continue.
Looking just at the four new games, AMD gets a big win in Assassin’s Creed Valhalla (it’s an AMD promotional title, so future updates may change the standings). Dirt 5 is also a bit of an odd duck for Nvidia, with the RTX 3090 actually doing quite badly on the Ryzen 9 5900X and Core i9-10900K for some reason. Horizon Zero Dawn ends up favoring Nvidia quite a bit (but not the 3070), and lastly, we have Watch Dogs Legion, which favors Nvidia a bit (more at 4K), but it might have some bugs that are currently helping AMD’s performance.
Overall, the 3090 still maintains its (gold-plated) crown, which you’d sort of expect from a $1,500 graphics card that you can’t even buy right now. Meanwhile, the RX 6800 XT mixes it up with the RTX 3080, coming out slightly ahead overall at 1080p and 1440p but barely trailing at 4K. Meanwhile, the RX 6800 easily outperforms the RTX 3070 across the suite, though a few games and/or lower resolutions do go the other way.
Oddly, my test systems ended up with the Core i9-10900K and even the Core i9-9900K often leading the Ryzen 9 5900X. The 3090 did best with the 5900X at 1080p, but then went to the 10900K at 1440p and both the 9900K and 10900K at 4K. The other GPUs also swap places, though usually the difference between CPUs is pretty negligible (and a few results just look a bit buggy).
It may be that the beta BIOS for the MSI X570 board (which enables Smart Memory Access) still needs more tuning, or that the differences in memory came into play. I didn’t have time to check performance without enabling the large PCIe BAR feature either. But these are mostly very small differences, and any of the three CPUs tested here are sufficient for gaming.
As for overclocking, it’s pretty much what you’d expect. Increase the power limit, GPU core clocks, and GDDR6 clocks, and you get more performance. It’s not a huge improvement, though. Overall, the RX 6800 XT was 4-6 percent faster when overclocked (the higher results were at 4K). The RX 6800 did slightly better, improving by 6 percent at 1080p and 1440p, and 8 percent at 4K. GPU clocks were also above 2.5GHz for most of the testing of the RX 6800, and its default lower boost clock gave it a bit more room for improvement.
Radeon RX 6800 Series Ray Tracing Performance
So far, most of the games haven’t had ray tracing enabled. But that’s the big new feature for RDNA2 and the Radeon RX 6000 series, so we definitely wanted to look into ray tracing performance more. Here’s where things take a turn for the worse because ray tracing is very demanding, and Nvidia has DLSS to help overcome some of the difficulty by doing AI-enhanced upscaling. AMD can’t do DLSS since it’s Nvidia proprietary tech, which means to do apples-to-apples comparisons, we have to turn off DLSS on the Nvidia cards.
That’s not really fair because DLSS 2.0 and later actually look quite nice, particularly when using the Balanced or Quality modes. What’s more, native 4K gaming with ray tracing enabled is going to be a stretch for just about any current GPU, including the RTX 3090 — unless you’re playing a lighter game like Pumpkin Jack. Anyway, we’ve looked at ray tracing performance with DLSS in a bunch of these games, and performance improves by anywhere from 20 percent to as much as 80 percent (or more) in some cases. DLSS may not always look better, but a slight drop in visual fidelity for a big boost in framerates is usually hard to pass up.
We’ll have to see if AMD’s FidelityFX Super Resolution can match DLSS in the future, and how many developers make use of it. Considering AMD’s RDNA2 GPUs are also in the PlayStation 5 and Xbox Series S/X, we wouldn’t count AMD out, but for now, Nvidia has the technology lead. Which brings us to native ray tracing performance.
[DXR chart galleries: 10-Game DXR Average, 3DMark Port Royal, Boundary Benchmark, Call of Duty Black Ops Cold War, Control, Crysis Remastered, Dirt 5, Fortnite, Metro Exodus, Shadow of the Tomb Raider, and Watch Dogs Legion.]
Well. So much for AMD’s comparable performance. AMD’s RX 6800 series can definitely hold its own against Nvidia’s RTX 30-series GPUs in traditional rasterization modes. Turn on ray tracing, even without DLSS, and things can get ugly. AMD’s RX 6800 XT does tend to come out ahead of the RTX 3070, but then it should — it costs more, and it has twice the VRAM. But again, DLSS (which is supported in seven of the ten games/tests we used) would turn the tables, and even the DLSS quality mode usually improves performance by 20-40 percent (provided the game isn’t bottlenecked elsewhere).
Ignoring the often-too-low framerates, overall, the RTX 3080 is nearly 25 percent faster than the RX 6800 XT at 1080p, and that lead only grows at 1440p (26 percent) and 4K (30 percent). The RTX 3090 is another 10-15 percent ahead of the 3080, which is very much out of AMD’s reach if you care at all about ray tracing performance — ignoring price, of course.
The RTX 3070 comes out with a 10-15 percent lead over the RX 6800, but individual games can behave quite differently. Take the new Call of Duty: Black Ops Cold War. It supports multiple ray tracing effects, and even the RTX 3070 holds a significant 30 percent lead over the 6800 XT at 1080p and 1440p. Boundary, Control, Crysis Remastered, and (to a lesser extent) Fortnite also have the 3070 leading the AMD cards. But Dirt 5, Metro Exodus, Shadow of the Tomb Raider, and Watch Dogs Legion have the 3070 falling behind the 6800 XT at least, and sometimes the RX 6800 as well.
There is a real question about whether the GPUs are doing the same work, though. We haven’t had time to really dig into the image quality, but Watch Dogs Legion for sure doesn’t look the same on AMD compared to Nvidia with ray tracing enabled. Check out these comparisons:
Apparently Ubisoft knows about the problem. In a statement to us, it said, “We are aware of the issue and are working to address it in a patch in December.” But right now, there’s a good chance that AMD’s performance in Watch Dogs Legion at least is higher than it should be with ray tracing enabled.
Overall, AMD’s ray tracing performance looks more like Nvidia’s RTX 20-series GPUs than the new Ampere GPUs, which was sort of what we expected. This is first gen ray tracing for AMD, after all, while Nvidia is on round two. Frankly, looking at games like Fortnite, where ray traced shadows, reflections, global illumination, and ambient occlusion are available, we probably need fourth gen ray tracing hardware before we’ll be hitting playable framerates with all the bells and whistles. And we’ll likely still need DLSS, or AMD’s Super Resolution, to hit acceptable frame rates at 4K.
Radeon RX 6800 Series: Power, Temps, Clocks, and Fans
We’ve got our usual collection of power, temperature, clock speed, and fan speed testing using Metro Exodus running at 1440p, and FurMark running at 1600×900 in stress test mode. While Metro is generally indicative of how other games behave, we loop the benchmark five times, and there are dips where the test restarts and the GPU gets to rest for a few seconds. FurMark, on the other hand, is basically a worst-case scenario for power and thermals. We collect the power data using Powenetics software and hardware, which uses GPU-Z to monitor GPU temperatures, clocks, and fan speeds.
GPU Total Power
AMD basically sticks to the advertised 300W TBP on the 6800 XT with Metro Exodus, and even comes in slightly below the 250W TBP on the RX 6800. Enabling Rage Mode on the 6800 XT obviously changes things, and you can also see our power figures for the manual overclocks. Basically, Big Navi can match the RTX 3080 when it comes to power if you increase the power limits.
FurMark pushes power on both cards a bit higher, which is pretty typical. If you check the line graphs, you can see our 6800 XT OC starts off at nearly 360W in FurMark before it throttles down a bit and ends up at closer to 350W. There are some transient power spikes that can go a bit higher as well, which we’ll discuss more later.
GPU Core Clocks
Looking at the GPU clocks, AMD is pushing some serious MHz for a change. This is now easily the highest clocked GPU we’ve ever seen, and when we manually overclocked the RX 6800, we were able to hit a relatively stable 2550 MHz. That’s pretty damn impressive, especially considering power use isn’t higher than Nvidia’s GPUs. Both cards also clear their respective Game Clocks and Boost Clocks, which is a nice change of pace.
GPU Core Temp
GPU Fan Speed
Temperatures and fan speeds are directly related to each other. Ramp up the fan speed — which we did for the overclocked 6800 cards — and you can get lower temperatures, at the cost of noise levels. We’re still investigating overclocking as well, as there’s a bit of odd behavior so far. The cards will run fine for a while, and then suddenly drop into a weak performance mode where performance might be half the normal level, or even worse. That’s probably due to the lack of overclocking support in MSI Afterburner for the time being. By default, though, the cards have a good balance of cooling and noise. We’ll get exact SPL readings later (still benchmarking a few other bits), but it’s interesting that all of the new GPUs (RTX 30-series and RX 6000) have lower fan speeds than the previous generation.
We observed some larger-than-expected transient power spikes with the RX 6800 XT, but to be absolutely clear, these transient power spikes shouldn’t be an issue — particularly if you don’t plan on overclocking. However, it is important to keep these peak power measurements in mind when you spec out your power supply.
Transient power spikes are common but are usually of such short duration (in the millisecond range) that our power measurement gear, which records measurements at roughly a 100ms granularity, can’t catch them. Typically you’d need a quality oscilloscope to measure transient power spikes accurately, but we did record several spikes even with our comparatively relaxed polling.
The charts above show total power consumption of the RX 6800 XT at stock settings, overclocked, and with Rage Mode enabled. In terms of transient power spikes, we don’t see any issues at all with Metro Exodus, but we see brief peaks during FurMark of 425W with the manually overclocked config, 373W with Rage Mode, and 366W with the stock setup. Again, these peaks were measured within one 100ms polling cycle, which means they could certainly trip a PSU’s over power protection if you’re running close to max power delivery.
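For readers who log power with similar polling-based tools, flagging these single-sample transients is straightforward; the sketch below uses made-up sample values, not our Powenetics capture, and simply marks any sample that sits well above the running average.

```cpp
// Flag single-sample transient spikes in a power log polled at ~100 ms.
// The sample values are illustrative only.
#include <cstdio>
#include <vector>

struct Sample { double seconds; double watts; };

int main()
{
    std::vector<Sample> log = {
        {0.0, 301.0}, {0.1, 305.0}, {0.2, 366.0}, {0.3, 303.0}, {0.4, 299.0},
    };

    // A spike is any sample exceeding the running mean of all previous
    // samples by a fixed margin (40 W here).
    double sum = 0.0;
    for (size_t i = 0; i < log.size(); ++i) {
        if (i > 0 && log[i].watts > sum / i + 40.0)
            std::printf("Transient at %.1f s: %.0f W\n", log[i].seconds, log[i].watts);
        sum += log[i].watts;
    }
    return 0;
}
```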
To drill down on the topic, we split out our power measurements from each power source, which you’ll see above. The RX 6800 XT draws power from the PCIe slot and two eight-pin PCIe connectors (PEG1/PEG2).
Power consumption over the PCIe slot is well managed during all the tests (as a general rule of thumb, this value shouldn’t exceed 71W, and the 6800 XT is well below that mark). We also didn’t catch any notable transient spikes during our real-world Metro Exodus gaming test at either stock or overclocked settings.
However, during our FurMark test at stock settings, we see a power consumption spike to 206W on one of the PCIe cables for a very brief period (we picked up a single measurement of the spike during each run). After overclocking, we measured a simultaneous spike of 231W on one cable and 206W on the other for a period of one measurement taken at a 100ms polling rate. Naturally, those same spikes are much less pronounced with Rage Mode overclocking, measuring only 210W and 173W. A PCIe cable can easily deliver ~225W safely (even with 18AWG), so these transient power spikes aren’t going to melt connectors, wires, or harm the GPU in any way — they would need to be of much longer duration to have that type of impact.
But the transient spikes are noteworthy because some CPUs, like the Intel Core i9-9900K and i9-10900K, can consume more than 300W, adding to the total system power draw. If you plan on overclocking, it would be best to factor the RX 6800 XT’s transient power consumption into the total system power.
Power spikes of 5-10ms can trip the overcurrent protection (OCP) on some multi-rail power supplies because they tend to have relatively low OCP thresholds. As usual, a PSU with a single 12V rail tends to be the best solution because they have much better OCP mechanisms, and you’re also better off using dedicated PCIe cables for each 8-pin connector.
Radeon RX 6800 Series: Prioritizing Rasterization Over Ray Tracing
It’s been a long time since AMD had a legitimate contender for the GPU throne. The last time AMD was this close … well, maybe Hawaii (Radeon R9 290X) was competitive in performance at least, while using quite a bit more power. That’s sort of been the standard disclaimer for AMD GPUs for quite a few years. Yes, AMD has some fast GPUs, but they tend to use a lot of power. The other alternative was best illustrated by one of the best budget GPUs of the past couple of years: AMD isn’t the fastest, but dang, look how cheap the RX 570 is! With the Radeon RX 6800 series, AMD is mostly able to put questions of power and performance behind it. Mostly.
The RX 6800 XT ends up just a bit slower than the RTX 3080 overall in traditional rendering, but it costs less, and it uses a bit less power (unless you kick on Rage Mode, in which case it’s a tie). There are enough games where AMD comes out ahead that you can make a legitimate case for AMD having the better card. Plus, 16GB of VRAM is definitely helpful in a few of the games we tested — or at least, 8GB isn’t enough in some cases. The RX 6800 does even better against the RTX 3070, generally winning most benchmarks by a decent margin. Of course, it costs more, but if you have to pick between the 6800 and 3070, we’d spend the extra $80.
The problem is, that’s a slippery slope. At that point, we’d also spend an extra $70 to go to the RX 6800 XT … and $50 more for the RTX 3080, with its superior ray tracing and support for DLSS, is easy enough to justify. Now we’re looking at a $700 graphics card instead of a $500 graphics card, but at least it’s a decent jump in performance.
Of course, you can’t buy any of the Nvidia RTX 30-series GPUs right now. Well, you can, if you get lucky. It’s not that Nvidia isn’t producing cards; it’s just not producing enough cards to satisfy the demand. And, let’s be real for a moment: There’s not a chance in hell AMD’s RX 6800 series are going to do any better. Sorry to be the bearer of bad news, but these cards are going to sell out. You know, just like every other high-end GPU and CPU launched in the past couple of months. (Update: Yup, every RX 6800 series GPU sold out within minutes.)
What's more, AMD is better off producing more Ryzen 5000 series CPUs than Radeon RX 6000 GPUs. Just look at the chip sizes and other components. A Ryzen 9 5900X packs two roughly 80mm² compute chiplets and a 12nm I/O die into a relatively compact package, and AMD is currently selling every single one of those CPUs for $550 ($800 for the 5950X). The Navi 21 GPU, by comparison, is made on the same TSMC N7 wafers but occupies 519mm², and it also needs GDDR6 memory, a beefy cooler and fans, and all sorts of other components. Yet it still sells for roughly the same price as the 5900X.
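To put rough numbers behind that silicon-economics argument, here is a naive back-of-the-envelope sketch; it assumes a 300mm wafer, roughly 80mm² per Zen 3 compute chiplet and roughly 519mm² for Navi 21, and it ignores edge loss, scribe lines and defect yield entirely, so treat the output as order-of-magnitude only.

```cpp
// Naive dies-per-wafer estimate: wafer area divided by die area.
// Real candidate counts are lower (edge loss, scribe lines, defects), but the
// ratio between the two products is the interesting part.
#include <cstdio>

int main() {
    const double pi           = 3.141592653589793;
    const double waferAreaMm2 = pi * 150.0 * 150.0;  // 300 mm wafer, 150 mm radius
    const double zen3CcdMm2   = 80.0;                // one Zen 3 compute chiplet (approx.)
    const double navi21Mm2    = 519.0;               // Navi 21 GPU die (approx.)

    std::printf("~%.0f Zen 3 chiplet candidates vs ~%.0f Navi 21 candidates per wafer\n",
                waferAreaMm2 / zen3CcdMm2, waferAreaMm2 / navi21Mm2);

    // A Ryzen 9 5900X uses two chiplets, so even then one N7 wafer covers several
    // times more CPUs than Navi 21 GPUs, at a broadly similar selling price.
    std::printf("Roughly %.0f 5900X-class CPU sets vs %.0f Navi 21 dies per wafer (naive)\n",
                waferAreaMm2 / (2.0 * zen3CcdMm2), waferAreaMm2 / navi21Mm2);
    return 0;
}
```

Naive as the math is, it shows why the same wafer area stretches a lot further as Ryzen chiplets than as Big Navi dies at comparable prices.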
Which isn’t to say you shouldn’t want to buy an RX 6800 card. It’s really going to come down to personal opinions on how important ray tracing will become in the coming years. The consoles now support the technology, but even the Xbox Series X can’t keep up with an RX 6800, never mind an RTX 3080. Plus, while some games like Control make great use of ray tracing effects, in many other games, the ray tracing could be disabled, and most people wouldn’t really miss it. We’re still quite a ways off from anything approaching Hollywood levels of fidelity rendered in real time.
In terms of features, Nvidia still comes out ahead. Faster ray tracing, plus DLSS — and whatever else those Tensor cores might be used for in the future — seems like the safer bet long term. But there are still a lot of games forgoing ray tracing effects, or games where ray tracing doesn’t make a lot of sense considering how it causes frame rates to plummet. Fortnite in creative mode might be fine for ray tracing, but I can’t imagine many competitive players being willing to tank performance just for some eye candy. The same goes for Call of Duty. But then there’s Cyberpunk 2077 looming, which could be the killer game that ray tracing hardware needs.
We asked earlier if Big Navi, aka RDNA2, was AMD’s Ryzen moment for its GPUs. In a lot of ways, it’s exactly that. The first generation Ryzen CPUs brought 8-core CPUs to mainstream platforms, with aggressive prices that Intel had avoided. But the first generation Zen CPUs and motherboards were raw and had some issues, and it wasn’t until Zen 2 that AMD really started winning key matchups, and Zen 3 finally has AMD in the lead. Perhaps it’s better to say that Navi, in general, is AMD trying to repeat what it did on the CPU side of things.
RX 6800 (Navi 21) is literally a bigger, enhanced version of last year’s Navi 10 GPUs. It’s up to twice the CUs, twice the memory, and is at least a big step closer to feature parity with Nvidia now. If you can find a Radeon RX 6800 or RX 6800 XT in stock any time before 2021, it’s definitely worth considering. RX 6800 and Big Navi aren’t priced particularly aggressively, but they do slot in nicely just above and below Nvidia’s competing RTX 3070 and 3080.
The first images of the PowerColor RX 6800 XT Red Dragon have been leaked; it is one of the few cards based on the AMD Radeon RX 6800 and RX 6800 XT whose design had not yet been officially revealed, and key specifications such as operating frequencies also remain unknown.
As we can see on Videocardz, the leaked cards correspond to the Red Dragon and Red Devil variants, the names this manufacturer uses for its high-performance cards.
Both cards share a broadly similar layout, with three DisplayPort outputs and one HDMI port on the rear, no USB-C port (which we do find on the reference graphics cards), and triple-fan coolers that should keep the new AMD GPUs suitably cool.
As for the design, although the two are very similar, the Red Dragon appears to have a more rectilinear, all-black look, while the Red Devil sports a brushed-metal finish with colors slightly reminiscent of military hardware. In any case, the details that remain unknown about these cards will probably not be released until their official launch, so we will have to wait before knowing everything about them.
End of Article. Tell us something in the Comments or come to our Forum!
Jordi Bercial
Avid enthusiast of technology and electronics. I messed around with computer components almost since I learned to ride. I started working at Geeknetic after winning a contest on their forum for writing hardware articles. Drift, mechanics and photography lover. Don’t be shy and leave a comment on my articles if you have any questions.
In the system requirements for Cyberpunk 2077, AMD's Radeon graphics cards naturally play a role for the standard settings. If you look at the system requirements for ray tracing, however, only graphics cards with NVIDIA GPUs are listed.
Cyberpunk 2077 is a title whose development is being supported by NVIDIA. Still, one might ask why the developers do not recommend any Radeon hardware for ray tracing when Big Navi has ray tracing accelerators and the two cards have already proven in tests that they offer quite appealing performance, depending on the game.
According to the developers, there will be a patch for Radeon RX 6000 graphics cards that activates ray tracing as well. Buyers and owners of a Radeon RX 6800, Radeon RX 6800 XT and, from early December, also a Radeon RX 6900 XT will come away empty-handed at the launch on December 10, however. The developers cite the lack of the necessary optimization for the RDNA 2 architecture as the reason for the missing support.
Ray tracing on the new consoles, the PlayStation 5, Xbox Series X and Xbox Series S, is also only due to be added later, although that has been known for somewhat longer. It is still a bit surprising that the new Radeon cards are being excluded entirely for the time being.
Raytracing via DXR is not proprietary
Of course, there are once again claims that NVIDIA has created a proprietary interface and is now putting obstacles in AMD's way. That is not the case. From the very beginning, NVIDIA has targeted DXR, an interface developed by Microsoft as part of DirectX 12 and now DirectX 12 Ultimate, and has implemented hardware acceleration for its calculations. Only Vulkan ray tracing, as used in Wolfenstein: Youngblood or Quake II RTX, is not yet part of a finalized standard and is therefore an extension that not every manufacturer has implemented.
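Because DXR is exposed through the standard Direct3D 12 feature query, any application can ask the installed driver whether hardware ray tracing is available, regardless of the GPU vendor. Below is a minimal, Windows-only sketch of that check (compile against the Windows SDK and link d3d12.lib).

```cpp
// Minimal DXR capability check through the standard DirectX 12 API.
// The same query works for AMD, NVIDIA and Intel adapters.
#include <d3d12.h>
#include <cstdio>

int main() {
    ID3D12Device* device = nullptr;
    // nullptr selects the default adapter; any D3D12-capable GPU will do.
    if (FAILED(D3D12CreateDevice(nullptr, D3D_FEATURE_LEVEL_12_0,
                                 __uuidof(ID3D12Device),
                                 reinterpret_cast<void**>(&device)))) {
        std::puts("No Direct3D 12 capable adapter found.");
        return 1;
    }

    D3D12_FEATURE_DATA_D3D12_OPTIONS5 options5 = {};
    if (SUCCEEDED(device->CheckFeatureSupport(D3D12_FEATURE_D3D12_OPTIONS5,
                                              &options5, sizeof(options5)))) {
        // Any tier of 1.0 or higher means the driver implements the standard
        // DXR interface, whatever the underlying hardware architecture.
        std::puts(options5.RaytracingTier >= D3D12_RAYTRACING_TIER_1_0
                      ? "DXR (DirectX Raytracing) is supported."
                      : "DXR is not supported by this adapter/driver.");
    }
    device->Release();
    return 0;
}
```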
All games that use Microsoft's DXR interface worked with the Big Navi cards in our tests. However, here and there either major performance problems occur or individual effects are not yet displayed, which looks more like a bug than an intentional omission.
A few bugs are particularly noticeable in Watch Dogs: Legion and Deliver Us The Moon, where ray tracing effects are calculated but not displayed: performance drops as you would expect with ray tracing enabled, yet the effects are missing from the image. Fundamentally, the Big Navi cards based on the RDNA 2 architecture appear to offer ray tracing performance roughly on par with NVIDIA's Turing and Ampere GPUs. Due to a lack of optimization, games like Battlefield V and Control don't run that well.
For Cyberpunk 2077, the lack of optimization means that Radeon players will initially have to do without the ray tracing effects. That is no basis for concluding that NVIDIA deliberately engineered an obstacle.
The Polish game company has updated the system requirements for the upcoming action role-playing game Cyberpunk 2077. The development studio first announced details of the requirements in September. As the new specifications show, there will be no ray tracing support for AMD's Radeon RX 6800 (XT) for the time being, although CD Projekt Red has already announced that it intends to add it in the future. Anyone who wants to play Cyberpunk 2077 with ray tracing at 1080p needs at least a GeForce RTX 2060 graphics card with 6 GB of VRAM, along with 16 GB of RAM and an Intel Core i7-4790 or an AMD Ryzen 3 3200G.
Anyone who would like to immerse themselves in the world of Cyberpunk at 1440p with the RT Ultra graphics settings needs a GeForce RTX 3070 with 8 GB of VRAM in their computer, with an Intel Core i7-6700 or an AMD Ryzen 5 3600 as the CPU and 16 GB of main memory. To display the top tier, RT Ultra at 2160p, gamers must own at least an NVIDIA GeForce RTX 3080 GPU with 10 GB of VRAM; here, too, an Intel Core i7-6700 or an AMD Ryzen 5 3600 is mandatory.
Both for the minimum requirements and for RT Ultra, the Polish game maker recommends an SSD with at least 70 GB of free space and a 64-bit installation of Windows 10. Anyone still running Windows 7 on their gaming rig will only be able to play Cyberpunk 2077 at minimal settings, and using an operating system that is now over eleven years old is not recommended in any case.
CD Projekt Red also released a new episode of its Cyberpunk show Night City Wire. The episode began with the unveiling of a trailer focused on Johnny Silverhand and the connection he shares with V, the game's protagonist. This was followed by a behind-the-scenes video in which Keanu Reeves talks about his transformation into Night City's rebel rocker boy, including voice and motion capture recordings.
During the past week, numerous exciting articles once again went online at Hardwareluxx.de. We not only put the Gigabyte AORUS 15P through an extensive everyday gaming test, but also tested the Seasonic SYNCRO Q704, the be quiet! Silent Base 802 and the MSI GeForce RTX 3080 Suprim X. The highlights of the week were of course the market launch of the new AMD Radeon RX 6800 and RX 6800 XT, with which AMD has caught up strongly in the high-end segment, and that of the Sony PlayStation 5. We also tried out a self-built high-end NAS and spent time with the Gigabyte M27F.
At this point we have summarized all the articles from the last week and provided them with a small reading sample. With this in mind: Have fun reading!
Friday, November 13, 2020: Gigabyte AORUS 15P WB in the test: sensibly slimmed down
The Gigabyte AORUS 15P is another compact-class gaming notebook that slots in below the AORUS 15G family, but it has to accept minor compromises in terms of keyboard, connectivity and model variety. How the slim, light gaming machine with its Intel Core i7-10750H, NVIDIA GeForce RTX 2070 Max-Q and fast 15.6-inch 144 Hz display performs is what you can find out in this Hardwareluxx article on the following pages…
Saturday, November 14, 2020: Seasonic SYNCRO Q704 with CONNECT in the test: A PC case for a power supply unit
Seasonic stands for power supplies and has so far only offered products in the PSU segment. It is all the more surprising that we can now test a Seasonic PC case for the first time. The SYNCRO Q704 was developed around a power supply, namely Seasonic's SYNCRO CONNECT, a PSU series intended to deliver massively streamlined cable management. Naturally, we want to find out how well the SYNCRO Q704 really works…
Monday, November 16, 2020: Self-made NAS tried out: A story of suffering in several acts
A self-built NAS (Network Attached Storage) is, for some users, the perfect alternative to the ready-made solutions from manufacturers such as Synology, QNAP or ASUSTOR. The countless ways to adapt it to your own needs, plus “full access” to the system, create the impression that there is really only one way to go: building your own personal cloud. Let's take a closer look at the problems we ran into in our NAS tinkering project…
Tuesday, November 17, 2020: be quiet! Silent Base 802 in the test: one case, two faces
With the Silent Base 802, be quiet! brings a case to market that should make two very different user groups happy. Thanks to exchangeable covers for the front and top, it can be used either as a silent case or as an airflow case. But how well does this balancing act work in practice…
Wednesday, November 18, 2020: The next generation is here: PlayStation 5 tried out
With the PlayStation 5, Sony released a new generation of its console series on November 19, 2020, after more than seven years. Even though the PS4 came out years ago, the Slim and Pro versions brought numerous improvements over the launch model along the way. With the new console generation, however, the manufacturer intends to set completely new standards. Hardwareluxx took this as an opportunity to take a close look at the new Sony console…
Wednesday, November 18, 2020: Finally another duel on equal footing: Radeon RX 6800 and Radeon RX 6800 XT in the test
Today the day has finally come! The Radeon RX 6000 series introduced a few weeks ago launches with two Big Navi models, so we are looking at the reference versions of the Radeon RX 6800 and Radeon RX 6800 XT. After the test, we will also know whether AMD can keep its own promises and whether it really is an alternative to NVIDIA's current generation in all market segments…
Thursday, November 19, 2020: Gigabyte M27F: Fast gaming display with KVM switch [Sponsored]
The Gigabyte M27F is the first gaming monitor with a built-in KVM switch, letting you quickly switch between different sources. But that is not the only highlight of this fast 27-inch display…
Friday, November 20, 2020: Massive performance: MSI GeForce RTX 3080 Suprim X in the test
For each of the new Ampere variants we have looked at a model from MSI, but so far there has been no sign of a Lightning variant, and there is a reason for that: with the Suprim X, MSI is introducing a new series, which launches today with the GeForce RTX 3080 and GeForce RTX 3090. To kick things off, we took a look at the MSI GeForce RTX 3080 Suprim X…