AMD Big Navi and RDNA 2 GPUs: Release Date, Specs, Everything We Know

Source: Tom's Hardware, Oct. 21, 2020




(Image credit: AMD)

AMD Big Navi, RX 6000, Navi 2x, RDNA 2. Whatever the name, AMD’s next-generation GPUs are promising big performance and efficiency gains, along with feature parity with Nvidia in terms of ray tracing support. Will Team Red finally take the pole position in our GPU hierarchy and lay claim to the crown for the best graphics card, or will the Nvidia Ampere architecture cards keep the top spots? It’s too soon to say, but here’s everything we know about AMD Big Navi, including the RDNA 2 architecture, potential specifications, performance, release date and pricing.

With Nvidia’s GeForce RTX 3090, GeForce RTX 3080, and GeForce RTX 3070 now revealed, and the first two officially launched, the ball is in AMD’s court. There are various ways of looking at the Nvidia Ampere launch. It’s Nvidia doing its best to bury AMD before Big Navi even steps out the door, or Nvidia is scared of what AMD is doing with RDNA 2, or Nvidia rushed the launch to get ahead of the holiday shopping spree, or … you get the point. The RTX 3080 and 3070 appear to be priced reasonably (relative to the Turing launch at least), and demand right now is very high. Frankly, AMD would have likely benefitted if it could have launched Big Navi already, but it has a lot of other balls it’s juggling (like Zen 3).

We’ve done our best to sort fact from fiction, but even without hard numbers from AMD, we have a good idea of what to expect. The Xbox Series X and PlayStation 5 hardware are basically a marriage of Big Navi with a Zen 2 CPU, giving us clues as to where Big Navi is likely to land in the PC world. If AMD plays its cards right, perhaps Big Navi will finally put AMD’s reputation for high graphics card power consumption behind it. Nvidia’s RTX 30-series cards leave plenty of room for AMD to catch up, considering the 3080 and 3090 carry the highest TDPs Nvidia has ever given its single-GPU cards. Let’s start at the top, with the new RDNA 2 architecture that powers RX 6000 / Big Navi / Navi 2x. Here’s what we know, expect, and occasionally guess about AMD’s upcoming GPUs.

Big Navi / RDNA 2 at a Glance

  • Up to 80 CUs / 5120 shaders
  • 50% better performance per watt
  • Coming October 28 (confirmed)
  • Pricing of $549-$599 for RX 6900 XT (rumor, big spoonful of salt)

(Image credit: AMD)

The RDNA 2 Architecture in Big Navi 

Every generation of GPUs is built from a core architecture, and each architecture offers improvements over the previous generation. It’s an iterative and additive process that never really ends. AMD’s GCN architecture went from first generation for its HD 7000 cards in 2012 up through fifth gen in the Vega and Radeon VII cards in 2017-2019. The RDNA architecture that powers the RX 5000 series of AMD GPUs arrived in mid 2019, bringing major improvements to efficiency and overall performance. RDNA 2 looks to double down on those improvements in late 2020.

First, a quick recap of RDNA 1 is in order. The biggest changes with RDNA 1 over GCN involve a redistribution of resources and a change in how instructions are handled. In some ways, RDNA doesn’t appear to be all that different from GCN. The instruction set is the same, but how those instructions are dispatched and executed has been improved. RDNA also adds working support for primitive shaders, something present in the Vega GCN architecture that never got turned on due to complications.

Perhaps the most noteworthy update is that the wavefronts—the core unit of work that gets executed—have been changed from being 64 threads wide with four SIMD16 execution units, to being 32 threads wide with a single SIMD32 execution unit. SIMD stands for Single Instruction, Multiple Data; it’s a vector processing element that optimizes workloads where the same instruction needs to be run on large chunks of data, which is common in graphics workloads.

This matching of the wavefront size to the SIMD size helps improve efficiency. GCN issued one instruction per wave every four cycles; RDNA issues an instruction every cycle. GCN used a wavefront of 64 threads (work items); RDNA supports 32- and 64-thread wavefronts. GCN has a Compute Unit (CU) with 64 GPU cores, 4 TMUs (Texture Mapping Units) and memory access logic. RDNA implements a new Workgroup Processor (WGP) that consists of two CUs, with each CU still providing the same 64 GPU cores and 4 TMUs plus memory access logic.
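To make the issue-rate difference concrete, here is a minimal sketch in Python. It is purely conceptual, not a hardware model; it simply turns the claim above (one instruction per wave every four cycles on GCN versus every cycle on RDNA) into best-case cycle counts for an arbitrary chain of dependent instructions.

```python
# Rough illustration of the issue-rate difference described above (not a
# hardware simulator): GCN issues one instruction per wavefront every 4 cycles,
# while RDNA can issue one per cycle, so a dependent instruction chain finishes
# issuing much sooner on RDNA for a single wave.

def cycles_to_issue(num_instructions: int, issue_interval: int) -> int:
    """Best-case cycles to issue a chain of dependent instructions for one wave."""
    return num_instructions * issue_interval

chain = 16  # arbitrary example: 16 dependent vector instructions
print("GCN  (wave64 on SIMD16, issue every 4 cycles):", cycles_to_issue(chain, 4))
print("RDNA (wave32 on SIMD32, issue every cycle):   ", cycles_to_issue(chain, 1))
```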

How much do these changes matter when it comes to actual performance and efficiency? It’s perhaps best illustrated by looking at the Radeon VII, AMD’s last GCN GPU, and comparing it with the RX 5700 XT. Radeon VII has 60 CUs, 3840 GPU cores, 16GB of HBM2 memory with 1 TBps of bandwidth, a GPU clock speed of up to 1750 MHz, and a theoretical peak performance rating of 13.8 TFLOPS. The RX 5700 XT has 40 CUs, 2560 GPU cores, 8GB of GDDR6 memory with 448 GBps of bandwidth, and clocks at up to 1905 MHz with peak performance of 9.75 TFLOPS.
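Those theoretical TFLOPS figures follow directly from the core counts and clock speeds. Here's a quick sketch of the standard calculation (shaders, times two FMA ops per clock, times clock speed), using the RX 5700 XT numbers above; this is just arithmetic, not anything AMD-specific.

```python
# Theoretical FP32 throughput: each GPU core performs one fused multiply-add
# (2 floating-point ops) per clock, so TFLOPS = cores * 2 * clock (MHz) / 1e6.

def peak_tflops(gpu_cores: int, boost_clock_mhz: int) -> float:
    return gpu_cores * 2 * boost_clock_mhz / 1_000_000

print(peak_tflops(2560, 1905))  # RX 5700 XT: ~9.75 TFLOPS, matching the spec above
```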

On paper, Radeon VII looks like it should come out with an easy victory. In practice, across a dozen games that we’ve tested, the RX 5700 XT is slightly faster at 1080p gaming and slightly slower at 1440p. Only at 4K is the Radeon VII able to manage a 7% lead, helped no doubt by its memory bandwidth. Overall, the Radeon VII only has a 1-2% performance advantage, but it uses 300W compared to the RX 5700 XT’s 225W.

In short, AMD is able to deliver roughly the same performance as the previous generation, with a third fewer cores, less than half the memory bandwidth and using 25% less power. That’s a very impressive showing, and while TSMC’s 7nm FinFET manufacturing process certainly warrants some of the credit (especially in regards to power), the performance uplift is mostly thanks to the RDNA architecture.

(Image credit: AMD)

That’s a lot of RDNA discussion, but it’s important because RDNA 2 appears to carry over all of that, with one major new addition: Support for ray tracing. It also supports Variable Rate Shading (VRS), which is part of the DirectX 12 Ultimate spec. There will certainly be other tweaks to the architecture, as AMD is making some big claims about Big Navi / RDNA 2 / Navi 2x when it comes to performance per watt. Specifically, AMD says RDNA 2 will offer 50% more performance per watt than RDNA 1, which is frankly a huge jump—the same large jump RDNA 1 saw relative to GCN.

It means AMD claims RDNA 2 will deliver either the same performance while using 33% less power, or 50% higher performance at the same power, or (most likely) something in between, with higher performance and lower power requirements. Of course, there’s another way to read things: RDNA 2 might only hit 1.5X performance per watt if you restrict it to the same performance level as RDNA 1. That’s pretty much what Nvidia is saying with its 1.9X efficiency increase on Ampere. Again, #salt.
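The arithmetic behind those two readings is simple enough to show directly; this is just the math, not an AMD disclosure:

```python
# Two ways to read a 1.5X performance-per-watt claim (pure arithmetic):
perf_per_watt_gain = 1.5

# 1) Same performance, less power: power drops to 1/1.5 of the original.
power_at_same_perf = 1 / perf_per_watt_gain
print(f"Same performance at {power_at_same_perf:.0%} of the power (~33% less)")

# 2) Same power, more performance: performance rises by the full 1.5X.
print(f"Or {perf_per_watt_gain:.1f}X the performance at the same power")
```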

The one thing we know for certain is that RDNA 2 / Big Navi / RX 6000 GPUs will all support ray tracing. That will bring AMD up to feature parity with Nvidia. There was some question as to whether AMD would use the same BVH approach to ray tracing calculations as Nvidia, and with the PlayStation 5 and Xbox Series X announcements out of the way, the answer appears to be yes.

If you’re not familiar with the term BVH, it stands for Bounding Volume Hierarchy and is used to efficiently find ray and triangle intersections; you can read more about it in our discussion of Nvidia’s Turing architecture and its ray tracing algorithm. While AMD didn’t provide much detail on its BVH hardware, BVH as a core aspect of ray tracing was definitely mentioned, and we heard similar talk about ray tracing and BVH with the VulkanRT and DirectX 12 Ultimate announcements.
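For readers who want a concrete picture of what a BVH buys you, here's a minimal CPU-side sketch in Python. It's purely illustrative; the node layout, slab test, and traversal order are our own simplifications, not AMD's or Nvidia's hardware implementation, but the core idea is the same: one cheap ray-vs-box test can cull an entire subtree of triangles.

```python
# Illustrative BVH traversal (not AMD's or Nvidia's implementation). The point
# of a Bounding Volume Hierarchy: a cheap ray-vs-box test on an internal node
# lets the traversal skip every triangle underneath it when the ray misses.

from dataclasses import dataclass, field
from typing import Callable, List, Optional, Tuple

Vec3 = Tuple[float, float, float]

@dataclass
class BVHNode:
    box_min: Vec3   # axis-aligned bounding box of this subtree
    box_max: Vec3
    children: List["BVHNode"] = field(default_factory=list)
    triangles: List[int] = field(default_factory=list)  # triangle indices (leaves only)

def ray_hits_box(origin: Vec3, inv_dir: Vec3, box_min: Vec3, box_max: Vec3) -> bool:
    """Slab test: does the ray (with precomputed 1/direction) pass through the box?"""
    t_near, t_far = 0.0, float("inf")
    for axis in range(3):
        t1 = (box_min[axis] - origin[axis]) * inv_dir[axis]
        t2 = (box_max[axis] - origin[axis]) * inv_dir[axis]
        t_near = max(t_near, min(t1, t2))
        t_far = min(t_far, max(t1, t2))
    return t_near <= t_far

def traverse(node: BVHNode, origin: Vec3, inv_dir: Vec3,
             hit_triangle: Callable[[int], Optional[float]]) -> Optional[float]:
    """Return the nearest hit distance, testing triangles only in boxes the ray enters."""
    if not ray_hits_box(origin, inv_dir, node.box_min, node.box_max):
        return None                    # whole subtree culled by a single box test
    if node.triangles:                 # leaf: run the expensive ray/triangle tests
        hits = [t for t in map(hit_triangle, node.triangles) if t is not None]
        return min(hits) if hits else None
    hits = [traverse(c, origin, inv_dir, hit_triangle) for c in node.children]
    hits = [h for h in hits if h is not None]
    return min(hits) if hits else None
```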

We don’t know how much ray tracing hardware is present, or how fast it will be. Even if AMD takes the same approach as Nvidia and puts one RT core (or whatever AMD wants to call it) into each CU, the comparison between AMD and Nvidia isn’t clear cut. Nvidia, for example, says it roughly doubled the performance of its RT cores in Ampere. Will AMD’s RT cores be like Nvidia’s RT Gen1, RT Gen2, or something else? There are at least a few rumors or hints that Big Navi might not even have RT cores as such, but will instead use optimized shader logic and large caches to boost RT calculations. The fact is, we don’t know yet and won’t know until AMD says more.

Note that Nvidia also has Tensor cores in its Turing architecture, which are used for deep learning and AI computations, as well as DLSS (Deep Learning Super Sampling), which has now been generalized with DLSS 2.0 (and DLSS 2.1) to improve performance and image quality and make it easier for games to implement DLSS. So far, AMD has said nothing about RDNA 2 / Navi 2x including Tensor cores or an equivalent to DLSS, though AMD’s CAS (Contrast Adaptive Sharpening) and RIS (Radeon Image Sharpening) do overlap with DLSS in some ways. Recently, Sony patents detailed a DLSS-like technique for image reconstruction, presumably for the PlayStation 5. It may be possible to do that without any Tensor cores, using just the FP16 or INT8 capabilities of Navi 2x.

We also know that AMD is planning multiple Navi 2x products, and we expect to see extreme, high-end and mainstream options—though budget Navi 2x seems unlikely in the near term, given RX 5500 XT launched in early 2020. AMD could launch multiple GPUs in a relatively short period of time, but more likely we’ll see the highest performance options first, followed by high-end and eventually mid-range solutions. Some of those may not happen until 2021, however. 

(Image credit: AMD)

Potential Big Navi / Navi 2x Specifications 

What does all of this mean for RX 6000 / Big Navi / RDNA 2 desktop GPUs? Based on the Xbox Series X, AMD is fully capable of building an RDNA 2 / Big Navi GPU with at least 52 CUs, and very likely can and will go much higher. AMD is also using two completely different GPU configurations for the Xbox Series X and PlayStation 5, and a third configuration for Xbox Series S, though likely none of those precise configurations will actually end up in a PC graphics card. Regardless, the upcoming consoles give us a minimum baseline for what AMD can do with Big Navi.

AMD has a lot of options available. The PC Navi 2x GPUs are focused purely on graphics, unlike the consoles. AMD also doesn’t benefit from the console sales or subsidies from Sony and Microsoft—each of the new consoles will likely ship close to 100 million units over the coming years, and Sony and MS can take a loss on the hardware because they make it back on software sales. There’s a balancing act between chip size, clock speed, and power, and every processor can prioritize things differently. Larger chips use more power and cost more to manufacture, and they typically run at lower clock speeds to compensate. Smaller chips have better yields, cost less, and use less power, but for GPUs there’s a lot of base functionality that has to be present, so a chip that’s half the performance usually isn’t half the size.

Looking at Navi 10 and RDNA 1, it’s not a stretch to imagine AMD shoving twice the number of GPU cores into a Navi 2x GPU. Navi 10 is relatively small at just 251mm², and AMD has used much larger die sizes in the past. Anyway, let’s cut to the chase. There have been lots of rumors floating around, but with only weeks separating us from the official Big Navi launch, we’re relatively confident in many of the core specs. A GPU’s maximum CU count can’t be exceeded, but disabling parts of each GPU is common practice and has been for years.

The following table lists potential specs, based on our best information. The question marks indicate our own best guesses based on rumors, previous GPU launches, and the current graphics card market. We’ve run some numbers to help fill in the remaining data, though there’s still plenty of wiggle room for AMD. It’s unlikely AMD will go significantly higher or lower than these estimates, but anywhere within about 10% is feasible.

AMD RX 6000 / Big Navi / Navi 2x Rumored Specifications

Graphics Card          RX 6900 XT     RX 6800 XT   RX 6700 XT   RX 6500 XT?
GPU                    Navi 21 XT     Navi 21 XL   Navi 22?     Navi 23?
Process (nm)           7              7            7            7
Transistors (billion)  23?            23?          ?            ?
Die size (mm^2)        536? (#salt)   536?         ?            236?
CUs                    80             64           40           32?
GPU cores              5120           4096         2560         2048?
Max Clock (MHz)        2100?          2100?        2000?        2000?
VRAM Speed (MT/s)      16000          14000        16000?       14000?
VRAM (GB)              16             16           12           8
Bus width (bits)       256            256          192          128
ROPs                   64             64           64           32?
TMUs                   320            256          160          128
TFLOPS (boost)         21.5           17.2         10.2         8.2
Bandwidth (GB/s)       512            448          384?         224?
TBP (watts)            320?           275?         225?         150?
Launch Date            Nov 2020       Nov 2020     Nov 2020?    Jan 2021?
Launch Price           $599?          $499?        $399?        $299?
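For what it's worth, the TFLOPS and bandwidth rows above are derived from the other (rumored) figures, so here's the arithmetic we used; everything else in the table is rumor plus our own guesses.

```python
# How the derived rows of the table fall out of the rumored specs:
#   TFLOPS    = GPU cores * 2 ops (FMA) * boost clock (MHz) / 1e6
#   Bandwidth = bus width (bits) / 8 * memory speed (MT/s) / 1000  -> GB/s

def tflops(cores: int, clock_mhz: int) -> float:
    return cores * 2 * clock_mhz / 1_000_000

def bandwidth_gbps(bus_bits: int, mem_mtps: int) -> float:
    return bus_bits / 8 * mem_mtps / 1000

# Rumored RX 6900 XT figures from the table above
print(tflops(5120, 2100))          # ~21.5 TFLOPS
print(bandwidth_gbps(256, 16000))  # 512 GB/s
```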

The highest spec rumors point to a Navi 21 GPU with 80 CUs and 5120 GPU cores, and more than double the size (536mm²) of the current Navi 10 used in the RX 5700 XT. While there are very good sources on the CU and core counts, we’d take the die size with a scoop of salt. It’s entirely possible AMD has gone with a huge die for Navi 21, but if that figure is correct, it’s the biggest AMD GPU since 2015’s Fiji (R9 Fury X).

That also means it’s likely very power hungry, and while some leaks on TBP (Total Board Power) have come out, the way AMD calculates TBP vs. TGP (Total Graphics Power) is a bit muddy. Based on the TBP figures, it looks like AMD will likely have a chip that’s close to GeForce RTX 3080 in terms of power (give or take).

Big Navi / RDNA 2 has to add support for ray tracing and some other tech, which should require quite a few transistors. AMD may also go with very large caches, which would help overcome potential bandwidth limitations caused by the somewhat narrow 256-bit and 192-bit bus widths. Note that Nvidia has opted for a 320-bit bus on the 3080 and 384-bit on the 3090, plus faster GDDR6X memory.

The real question is whether AMD has tuned the shader cores similar to what Nvidia did with Turing, adding concurrent FP32 and INT32 pipelines. If so, performance on the biggest of the Big Navi chips could definitely give the RTX 3080 some needed competition. The ray tracing hardware may still not be up to Turing levels, however, never mind Ampere. Based on some of the information surrounding the Xbox Series X, it seems like the RT support will end up with lower ray/triangle intersection performance than Nvidia’s hardware.

Not surprisingly, clock speeds are all still unknown. Plenty of ‘leaks’ have put the maximum boost clock as high as 2.4GHz or as low as 1.7GHz. It’s impossible to know for certain where AMD will land, but we’ve aimed at a medium/high value that would deliver the promised performance per watt gains. TSMC’s N7 process is generally better than the Samsung 8N that Nvidia’s using for Ampere, but then AMD has generally lagged behind Nvidia when it comes to architecture designs (at least for the past seven years).

(Image credit: AMD)

Getting back to the memory side of things, AMD’s configurations are interesting but leave us with a lot of questions. Most rumors and leaks point to 16GB of GDDR6 for the top RX 6900 XT and RX 6800 XT, but only 12GB for the RX 6700 XT. The memory capacities look good, with all of the GPUs at least matching the RTX 3080, but bus widths and speeds could be a big problem.

If AMD uses 16GB and 256-bit as expected, even with the fastest 16Gbps GDDR6 that’s still only 512GBps of bandwidth. The 6900 XT potentially doubles the compute performance of the current RX 5700 XT, with twice the VRAM capacity, and yet it would only have 14% more bandwidth. The 16GB 6800 XT with 14Gbps GDDR6 would end up with the same bandwidth as the current RX 5700 series, while the RX 6500 XT would have 8GB and an even narrower bus. Depending on the architecture, it could still come close to RX 5700 levels of performance, but we’ll have to wait and see.

This is why many expect Big Navi / RDNA 2 to come with massive L2 caches. Double the cache size and you can avoid hitting memory as hard, which might be sufficient to get around the GDDR6 bandwidth limitations. We’ve also heard ray tracing shader calculations can get a hefty performance boost by adding more cache, and as noted above there are hints this is what AMD is doing. We’ll know more by the end of the month.

Big Navi / Navi 2x Graphics Card Model Names 

(Image credit: AMD)

What will AMD call the retail products using Big Navi / Navi 2x GPUs? AMD has at least revealed that the Navi 2x family will be sold under the RX 6000 series, which is what most of us expected. Beyond that, there are still a few remaining questions.

AMD has said it will launch a whole series of Navi 2x GPUs. The Navi 1x family consists of RX 5700 XT, RX 5700, RX 5600 XT, and RX 5500 XT (in 4GB and 8GB models), along with RX 5600/5500/5300 models for the OEM market that lack the XT suffix. AMD could simply add 1000 points to the current models, but we expect there will be a few more options this round.

The top model will almost certainly be called RX 6900 XT, with the various performance tiers below it as RX 6800 XT, 6700 XT, etc. It also looks like AMD is moving into higher performance segments, going by the 6900 and 6800 model numbers, with the 5600 XT replacement ending up as the 6700 XT. Our assumption is that the 5500 XT will eventually be replaced by a 6500 XT, leaving a 200-point gap between the high-end 6700 and the mainstream/budget 6500.

All of this could of course change, as model names are relatively easy to update (though packaging has to be complete as well). So, the above is what leaks and rumors currently indicate. We expect the consumer models will keep the XT suffix, and AMD can continue to do non-XT models for the OEM market. We think it would be great to have more consistent branding, but we’ll have to see what AMD decides to do.

RX 6000 / Big Navi / RDNA 2 Release Date 

AMD has reiterated many times this year that RDNA 2, aka Big Navi—which AMD is even using now in homage to the enthusiast community’s adoption of that moniker—will arrive before the end of 2020. AMD has now announced a Future of Radeon PC Gaming event that will take place on October 28.

AMD could potentially launch the RX 6000 GPUs at that time, but it’s more likely that it will first reveal the architecture, specs, and other details, similar to what Nvidia did with its Ampere announcement. That means actual GPUs will probably arrive in November, just in time for the holiday shoppers.

While the impact of COVID-19 around the globe is immense, AMD still plans on launching at least some Navi 2x parts in 2020. However, given the late date of the event, it’s possible we will only see the top two products from RDNA 2 in 2020. It might be more than that, but most new GPU families roll out over a period of several months.

RX 6000 / Big Navi / Navi 2x Cost 

(Image credit: AMD)

We provided our own estimated pricing based on the potential performance and graphics card market in the table near the top. We’ve changed those estimates quite a bit since the Nvidia Ampere announcement, as AMD can’t hope to sell slower cards at equal or higher pricing. On the other hand, some rumors suggest RX 6900 XT won’t be far from RTX 3080 performance, so higher prices are certainly possible.

Officially, AMD hasn’t said anything in regards to pricing yet, and that will likely remain the case until the actual launch. Other factors, like the price of competing Nvidia (and maybe even Intel DG1) GPUs, will be considered as well. We can look back at the Navi 10 / RX 5700 XT launch for context.

Rumors listing various prices came out more than six months before launch. We saw everything from RTX 2080 performance for anywhere between $250 and $500, to RTX 2060 performance for under $200. AMD officially revealed prices of $449 for the RX 5700 XT and $379 for the RX 5700 about a month before launch.

After the initial RX 5700 XT reveal, Nvidia (to the surprise of pretty much no one) launched its RTX 2070 Super and RTX 2060 Super, providing improved performance at lower prices. (The RTX 2080 Super was also announced, but it didn’t launch until two weeks after the RX 5700 series.) Just a few days before launch, AMD then dropped the prices of its RX 5700 XT to $399, and the RX 5700 to $349, making them far more appealing. (The RX 5600 XT arrived about six months later priced at $299.) AMD would later go on to state that this was all premeditated—gamesmanship to get Nvidia to reveal its hand early.

The bottom line is that no one, including AMD itself, knows what the final pricing will be on a new graphics card months before launch. There are plans with multiple contingencies, and ultimately the market will help determine the price. We now have Nvidia’s Ampere pricing of $1,499, $699, and $499 for the 3090, 3080, and 3070, respectively. Only AMD knows for sure how RX 6000 stacks up to RTX 30-series in performance, and it will tweak prices accordingly.

There are also multiple reports of a 500mm² or larger die size, and if that’s correct we have to assume Big Navi / Navi 2x graphics cards will go after the enthusiast segment—meaning, $600 or more. TSMC’s 7nm FinFET lithography is more expensive than its 12nm, and larger chips mean yields and dies per wafer are both going to be lower. Plus, 16GB and 12GB of GDDR6 will increase both the memory and board price. Big chips lead to big prices, in other words.

The only real advice we can give right now is to wait and see. AMD will do its best to deliver RDNA 2 and Navi 2x GPUs at compelling prices. That doesn’t mean we’ll get RTX 3080 performance for $500, sadly, but if Big Navi can give Nvidia some much-needed competition in the enthusiast graphics card segment, we should see bang-for-the-buck improvements across the entire spectrum of GPUs. And if AMD really does have an 80-CU monster Navi 21 GPU coming that will match the RTX 3080 in performance, we expect it will charge accordingly — just like it’s doing with Zen 3 CPUs now that it appears to have a clear lead over Intel.

Big Navi and RX 6000 Closing Thoughts

AMD has a lot riding on Big Navi, RDNA 2, and the Radeon RX 6000 series. Just like Nvidia’s Ampere, AMD has a lot to prove. This is the GPU architecture that powers the next generation of consoles, which tend to have much longer shelf lives than PC graphics cards. Look at the PS4 and Xbox One: both launched in late 2013 and are still in use today. There are also still PC gamers with GTX 700-series or R9 200-series graphics cards, but if you’re running such a GPU, we feel for you.

We’re very interested in finding out how Big Navi performs, with and without ray tracing. AMD’s RX 6000 performance teaser only serves to whet our appetites. 50% better performance per watt can mean a lot of different things, and AMD hasn’t shied away from 300W GPUs for the past several generations of hardware. A 300W part with 50% better performance per watt would basically be double the performance of the current RX 5700 XT, and that’s enough to potentially compete with whatever Nvidia has to offer.
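That "double the performance" figure is simple arithmetic rather than a benchmark result; roughly:

```python
# A hypothetical 300W RDNA 2 card vs. the 225W RX 5700 XT, assuming the full
# 50% performance-per-watt uplift holds at that power level (a big assumption).
relative_perf = (300 / 225) * 1.5
print(f"~{relative_perf:.1f}x the RX 5700 XT")  # ~2.0x
```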

Realistically, AMD’s 50% PPW improvements probably only occur in specific scenarios, just like Nvidia’s 90% PPW improvements on Ampere. Particularly for the higher performance parts, we’re skeptical of claims of 50% improvements, but we’ll withhold any final judgement for now. About all we can say is that Nvidia has left the door open for AMD to walk through.

We also hope AMD will manage to avoid the shortages that have plagued Nvidia’s RTX 3080 and 3090 cards. Part of that comes from demand for new levels of performance, so AMD will need to keep pace with Ampere if it hopes to see similar demand. Also, every Ampere GPU purchase prior to Big Navi’s launch means one less buyer for AMD’s GPUs. Still, TSMC can only produce so many N7 wafers per month, and AMD has Navi 10 chips still in production, along with Zen 2 and Zen 3 CPUs, and now Navi 2x. Add in wafers from other companies (Apple, Nvidia, and Intel are all using TSMC N7) and we could see Big Navi shortages until 2021.

Without actual hardware in hand, running actual gaming benchmarks, we can’t declare a victor. Give it another month and we should have all the final details and data in place. The last months of 2020 are shaping up to be very exciting in the GPU world, which is good as the first part of 2020 sucked. Considering it’s been more than a year since AMD’s Navi architecture launched, we’re definitely ready for the next-gen GPUs.