A Lenovo product manager has published the first pictures of its upcoming Radeon RX 6900 XT Legion graphics card. The board looks as monumental as the renders released a few months ago, and the very emergence of the photos may indicate that the product is close to release.
Lenovo’s Radeon RX 6900 XT Legion carries AMD’s flagship Navi 21 GPU in its maximum configuration with 5120 stream processors as well as 16 GB of GDDR6 memory connected to the chip using a 256-bit interface. Just like AMD’s reference cards, it has two eight-pin auxiliary PCIe power connectors. That means Lenovo isn’t going after extreme factory overclocking for this board, unlike traditional add-in-board (AIB) makers that install three eight-pin power connectors on their custom Radeon RX 6900 XT products.
The AIB presumably uses a proprietary printed circuit board (PCB) and comes equipped with a massive triple-slot cooling system featuring three fans that resemble a cooler used on reference AMD Radeon VII graphics cards several years ago.
Meanwhile, to appeal to modern enthusiasts, Lenovo equipped its cooling system with RGB LEDs that highlight the Radeon RX 6900 XT model on top and the Legion brand on the back. Also, there is a highlighted ‘R’ located on one edge of the card.
The graphics card was pictured by WolStame, who happens to be a Lenovo China Gaming Desktop Product Planning Manager (according to VideoCardz), and published on his Weibo page. WolStame said that the AIB is an engineering sample, though it looks rather solid.
At this point, it is still unclear whether Lenovo will use its Radeon RX 6900 XT graphics cards exclusively with its Savior Blade 7000P 2021 gaming PCs or will also sell them separately, just like it does with its Legion-branded monitors and other gear. After all, what’s the point of developing an exclusive graphics card for just one PC?
In recent years a number of PC makers have introduced small form-factor (SFF) and ultra-compact form-factor (UCFF) computers based on AMD’s latest accelerated processing units (APUs), but few of those systems are as tiny as Intel’s NUCs. Asus is one of the few manufacturers whose AMD-powered UCFF machines, the Mini PC PN series, are just as compact as NUCs, and it has just updated the line to feature AMD Ryzen 5000-series APUs.
Asus’ freshly introduced Mini PC PN51 packs AMD’s Ryzen 5000U-series mobile processors with up to eight Zen 3 cores as well as up to Radeon Vega 7 graphics. The APU can be paired with a maximum of 32 GB of DDR4-3200 memory. Storage is handled by an M.2-2280 SSD with up to 1 TB of capacity plus a 2.5-inch 1 TB 7200-RPM hard drive. Meanwhile, the system still measures 115×115×49 mm, giving it a 0.62-liter volume.
The diminutive size of the Asus Mini PC PN51 does not impact the choice of ports on offer. The desktop computer is equipped with an Intel Wi-Fi 6 + Bluetooth 5.0 module (or a Wi-Fi 5 + BT 4.2), a 2.5 GbE or a GbE connector, three USB 3.2 Gen 1 Type-A ports, two USB 3.2 Gen 2 Type-C receptacles (supporting DisplayPort mode), an HDMI output with CEC support, a 3-in-1 card reader, a configurable port (which can be used for Ethernet, DisplayPort, D-Sub or COM ports), and a combo audio jack.
Asus seems to position the Mini PC PN51 as a universal PC suitable for both home and office. The configurable I/O port that can add an Ethernet connector or a COM header is obviously aimed at corporate and business users. In addition, the PC has a TPM module on board. Meanwhile, the system also has an IR receiver (something that many modern UCFF and SFF PCs lost following Apple’s Mac Mini) and a Microsoft Cortana-compatible microphone array that will be particularly beneficial for home users who plan to use the PN51 as an HTPC.
As far as power consumption and noise levels are concerned, the PN51 consumes as little as 9W at idle, produces 21.9 dBA of noise at idle, and 34.7 dBA at full load.
The Asus Mini PC PN51 will be available shortly. Pricing will depend on configuration as Asus plans to offer numerous models based on the AMD Ryzen 3 5300U, AMD Ryzen 5 5500U, and AMD Ryzen 7 5700U processors with various memory and storage configurations.
An unidentified Radeon Pro graphics card has emerged in China. The graphics card, which appeared on the Chiphell forums, could be one of AMD’s forthcoming Big Navi Radeon Pro offerings. But take this leak with a grain of salt.
Like AMD’s other Radeon Pro SKUs, the mysterious graphics card retains the dual-slot design and a shroud with the characteristic blue and silver theme. Given the silver stripe in the middle of the shroud, it should be a Radeon Pro W-series card as opposed to a Radeon Pro WX-series model. The cooler itself doesn’t resemble the existing designs on the Radeon Pro W5700 or W5500. Therefore, it’s safe for us to assume that the enigmatic graphics card may be a next-generation RDNA 2 Radeon Pro graphics card. The sticker clearly states that this particular unit is an engineering sample so the final design could be completely different.
The Chiphell forum user covered the serial number for obvious reasons, and the barcode is too small to decrypt. However, AMD’s Radeon Pro graphics cards typically take after their mainstream counterparts. So far the chipmaker has released the Radeon RX 6900 XT, RX 6800 XT and RX 6700 XT, so the graphics card in question is possibly based on one of the three launched models. If we had to guess, the graphics card is probably the Radeon Pro W6800. If so, then it should leverage the Navi 21 silicon, which is the die that AMD utilizes for the current Radeon RX 6900 XT and Radeon RX 6800 (XT).
Another sticker on the back of the graphics card points to Samsung 16GB, which alludes to the memory chips that are inside the graphics card. Sadly, this tiny bit of information doesn’t help us decode the exact silicon that the graphics card is based around. The Radeon RX 6900 XT, RX 6800 XT and RX 6700 XT all employ Samsung 16 Gbps GDDR6 memory chips. There is also mention of “Full Secure TT GLXL A1 ASIC,” which we haven’t been able to decipher.
The user’s photographs don’t reveal the graphics card’s power connectors or display outputs. AMD has been known to mix things up so the graphics card may offer standard DisplayPort or mini-DisplayPort outputs. However, the photographs show that AMD has finally endowed the Radeon Pro graphics card with a backplate. Unfortunately for us, it also blocks the back of the PCB so we can’t dig deeper into the memory chips.
Because the world desperately needed another GPU people can’t actually buy, AMD decided to release a limited-edition “Midnight Black” version of its Radeon RX 6800 XT early Wednesday morning, and it sold out before most people even saw that the card was there.
You might be thinking: “Why would you release a limited-edition version of something that’s practically a collector’s piece even without a new coat of paint?” (AMD’s recent GPUs are even rarer than Nvidia’s, though both currently command two to three times their retail price on eBay.)
Or you might be thinking: “When, exactly, did AMD announce a new GPU? I don’t remember that.” That’s because the company didn’t formally announce it: according to VideoCardz, AMD quietly told its “AMD Red Team community” fanbase by email at around midnight that they should watch for the card at 6AM PT, an announcement that did not stay under wraps, to put it mildly.
Personally, I’m just wondering: If AMD actually wanted to put video cards in the hands of its fans, why not verify their emails, or email out unique, non-transferrable passwords, or raffle them off, or do basically anything other than put them on the same website where bots, scalpers, and everyone else already knows to look — a website that some people have their browser set to refresh all day long?
There does seem to have been a special “Red Team link,” but a bunch of would-be buyers reported it didn’t work — while a few others claimed they were able to buy one on the main store page by hammering the refresh button or by using a JavaScript shortcut to trick the website.
#GameOnAMD Sums up the experience. Red Team link was to a page you can’t buy from, only way I saw was from main store, and add to cart never worked. Cool. I feel so included in this exclusive offer. pic.twitter.com/YomensLwQY
— Jon VR Viking (@Bounty_V) April 7, 2021
There are potential solutions to these issues, but the gaming industry does not seem to be terribly interested in finding them. Still, gotta give credit to AMD for selling it at the original retail price of $649 instead of charging more.
It’s not clear how many Midnight Black cards were produced; at press time, two buyers were trying to hawk their confirmed orders on eBay, while a third had pulled their listing due to an unspecified error.
AMD tells The Verge it’s “continuing to focus on delivering the latest Radeon graphics cards to as many gamers as possible at SEP,” and counts the brief appearance of the AMD Radeon RX 6800 XT Midnight Black as part of that.
“We continue to make reference cards available on AMD.com and will continue to replenish supply for the foreseeable future,” the company says, something it had not managed to do the last time I wrote about a similar claim. Since then, however, AMD has (very briefly) delivered supplies of GPUs at least three times that I’m aware of.
According to a leak reported at VideoCardz, at 6am PST AMD plans to release a special black edition of its Radeon RX 6800 XT.
AMD’s Radeon RX 6800 XT Midnight Black edition graphics board is based on the Navi 21 GPU featuring 4608 stream processors, 288 texture units (TUs), and 128 render output units (ROPs) that is paired with 16 GB of GDDR6 memory. As the name suggests, the Midnight Black edition is supposed to be all black, so expect it to look different from AMD’s usual Radeon RX 6800 XT reference design.
There is a catch about AMD’s Radeon RX 6800 XT Midnight Black though: it will be available only from AMD.com to members of the AMD Red Team community for a limited time and while supplies last. The product will be available starting from 6am PST/9am EST April 7, 2021.
“Based on community feedback and popular demand, we have created a select quantity of AMD Radeon RX 6800 XT Midnight Black graphics cards featuring the same great performance of the widely popular AMD Radeon RX 6800 XT,” a statement by AMD published by VideoCardz reads. “This is an exclusive advance notice to members of the AMD Red Team community and this offer has limited availability, while supplies last.”
At this point it is unclear whether the Radeon RX 6800 XT Midnight Black will cost $649, like other reference design RX 6800 XT boards, or will cost more since it is an exclusive product. Furthermore, it is unknown how many such graphics cards will be made available.
We have with us the ASRock Radeon RX 6700 XT Phantom Gaming D 12 GB OC graphics card. The latest entrant to the custom-design graphics card space, having debuted with the RX 5000 series, ASRock has established itself as a serious design house for premium custom Radeon RX graphics cards. The Phantom Gaming D is the company’s top RX 6700 XT product so far. A successor to the Radeon RX 5700 XT, which stirred things up in the sub-$500 segment last year, the new RX 6700 XT is based on the RDNA2 graphics architecture and provides full DirectX 12 Ultimate support, including real-time raytracing. It’s being offered as a maxed-out 1440p gaming product, and AMD claims competitiveness with not only the GeForce RTX 3060 Ti, but also the RTX 3070.
The new RDNA2 graphics architecture powers not just AMD’s Radeon RX 6000 series, but also next-generation game consoles. This makes it easier for game developers to optimize for the RX 6000. AMD’s approach to real-time raytracing involves Ray Accelerators, special hardware for ray intersection computation, while compute shaders are used for almost every other raytracing aspect, including denoising. To achieve this, AMD has had to significantly increase the SIMD performance of its new generation GPUs through not just higher IPC for the new RDNA2 compute units, but also significantly higher engine clocks. A side-effect of this approach is that these GPUs offer high levels of performance on the majority of conventional raster 3D games.
With the RX 6700 XT, AMD has increased the standard memory amount for this segment to 12 GB, up from 8 GB on the RX 5700 XT, but the memory bus is narrower, at 192-bit. AMD has attempted to shore up the memory bus width deficit by using the fastest JEDEC-standard 16 Gbps memory chips and Infinity Cache, a fast 96 MB on-die cache that speeds up the memory sub-system.
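For context, the raw numbers behind that narrower bus work out with simple arithmetic: per-pin data rate times bus width, divided by eight. The short Python sketch below is purely illustrative, with the RX 5700 XT’s reference configuration (14 Gbps GDDR6 on a 256-bit bus) assumed for comparison; it shows the deficit the 96 MB Infinity Cache is meant to paper over.

```python
def gddr_bandwidth_gbs(bus_width_bits: int, data_rate_gbps: float) -> float:
    """Raw memory bandwidth in GB/s: per-pin data rate times bus width, divided by 8 bits per byte."""
    return bus_width_bits * data_rate_gbps / 8

# RX 6700 XT: 16 Gbps GDDR6 on a 192-bit bus
print(gddr_bandwidth_gbs(192, 16))   # 384.0 GB/s
# RX 5700 XT reference (assumed for comparison): 14 Gbps GDDR6 on a 256-bit bus
print(gddr_bandwidth_gbs(256, 14))   # 448.0 GB/s
```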
The ASRock RX 6700 XT Phantom Gaming D comes with a powerful triple-slot, triple-fan cooling solution that doesn’t shy away from copious amounts of RGB LED bling. It also has an ARGB header you may use to synchronize the rest of your lighting to the card. ASRock is also packing a factory overclock of up to 2548 MHz Game Clock (vs. 2424 MHz reference).
Capcom has published the official system requirements for the PC version of Resident Evil Village on Steam. The sequel follows the saga of Ethan Winters, this time with some apparently very large vampire ladies. Based on what we’ve seen, you’ll benefit from having one of the best graphics cards along with something from our list of the best CPUs for gaming when the game arrives on May 7.
The eighth entry in the series (VIII from Village), this will be the first Resident Evil to feature ray tracing technology. The developers have tapped AMD to help with the ray tracing implementation, however, so it’s not clear whether it will run on Nvidia’s RTX cards at launch, or if it will require a patch — and it’s unlikely to get DLSS support, though it could make for a stunning showcase for AMD’s FidelityFX Super Resolution if AMD can pull some strings.
We’ve got about a month to wait before the official launch. In the meantime, here are the official system requirements.
Minimum System Requirements for Resident Evil Village
Capcom notes that in either case, the game targets 1080p at 60 fps, though the framerate “might drop in graphics-intensive scenes.” While the minimum requirements specify using the “Prioritize Performance” setting, it’s not clear what settings are used for the recommended system.
The Resident Evil Village minimum system requirements are also for running the game without ray tracing, with a minimum requirement of an RTX 2060 (and likely future AMD GPUs like Navi 23), and a recommendation of at least an RTX 2070 or RX 6700 XT if you want to enable ray tracing. There’s no mention of installation size yet, so we’ll have to wait and see just how much of our SSD the game wants to soak up.
The CPU specs are pretty tame, and it’s very likely you can use lower spec processors. For example, the Ryzen 3 1200 is the absolute bottom of the entire Ryzen family stack, with a 4-core/4-thread configuration running at up to 3.4GHz. The Core i5-7500 also has a 4-core/4-thread configuration, but runs at up to 3.8GHz, and it’s generally higher in IPC than first generation Ryzen.
You should be able to run the game on even older/slower CPUs, though perhaps not at 60 fps. The recommended settings are a decent step up in performance potential, moving to 6-core/12-thread CPUs for both AMD and Intel, which are fairly comparable processors.
The graphics card will almost certainly play a bigger role in performance than the CPU, and while the baseline GTX 1050 Ti and RX 560 4GB are relatively attainable (the game apparently requires, maybe, 4GB or more VRAM), we wouldn’t be surprised if that’s with some form of dynamic resolution scaling enabled. Crank up the settings and the GTX 1070 and RX 5700 are still pretty modest cards, though the AMD card is significantly faster — not that you can find either in stock at acceptable prices these days, as we show in our GPU pricing index. But if you want to run the full-fat version of Resident Evil Village, with all the DXR bells and whistles at 1440p or 4K, you’re almost certainly going to need something far more potent.
Full size images: RE Village RT On / RE Village RT Off
AMD showed a preview of the game running with and without ray tracing during its Where Gaming Begins, Episode 3 presentation in early March. The pertinent section of the video starts at the 9:43 mark, though we’ve snipped the comparison images above for reference. The improved lighting and reflections are clearly visible in the RT enabled version, but critically we don’t know how well the game runs with RT enabled.
We’re looking forward to testing Resident Evil Village on a variety of GPUs and CPUs next month when it launches on PC, Xbox, and PlayStation. Based on what we’ve seen from other RT-enabled games promoted by AMD (e.g. Dirt 5), we expect frame rates will take a significant hit.
But like we said, this may also be the debut title for FidelityFX Super Resolution, and if so, that’s certainly something we’re eager to test. What we’d really like to see is a game that supports both FidelityFX Super Resolution and DLSS, just so we could do some apples-to-apples comparisons, but it may be a while before such a game appears.
Asus is apparently preparing what could be the ultimate AMD gaming laptop. According to a CPU-Z posting, the new iteration of the ROG Strix G15 will arrive with a lethal combination of AMD’s Ryzen 9 5900HX (Cezanne) processor and Radeon RX 6800M graphics cards.
The unreleased laptop (via Tum_Apisak) sports the G513QY model number. Unless Asus is working on a new gaming laptop, the G513 corresponds to the brand’s ROG Strix G15 G513, which was previously only available with discrete graphics options from Nvidia.
For starters, the G513QY will be based on the flagship Ryzen 9 5900HX processor. Asus already offers the ROG Strix G15 with the aforementioned processor, though. The octa-core Zen 3 chip features a 3.3 GHz base clock and a 4.6 GHz boost clock. However, the Ryzen 9 5900HX is unlocked and supports a cTDP of up to 54W, so there is some wiggle room for overclocking.
In terms of discrete graphics, the G513QY will rely on the forthcoming Radeon RX 6800M, which is the mobile version of the Radeon RX 6800. AMD hasn’t officially announced the mobile RDNA 2 (Big Navi) graphics cards yet, so the specifications are unknown. However, the CPU-Z submission points to the Radeon RX 6800M having up to 12GB of GDDR6 memory, only 4GB less than the desktop counterpart.
Having an AMD processor and graphics card in the same device obviously brings benefits. The fusion will enable the G513QY to leverage AMD’s SmartShift technology that balances the power between the processor and graphics card according to the workload. AMD touts a performance boost of up to 14% with SmartShift enabled. The technology debuted with Dell’s G5 15 SE, so it’s good to see other vendors going all-in with AMD.
The Radeon RX 6800M will logically be the bell cow of the mobile RDNA 2 army. Assuming that AMD will replace every mobile RDNA 1 part with an equivalent, we could be seeing up to three more SKUs, such as the Radeon RX 6700M, RX 6600M and maybe even a RX 6500M. AMD hasn’t given any clues when it will unleash its mobile RDNA 2 offerings though.
How much power does your graphics card use? It’s an important question, and while the performance we show in our GPU benchmarks hierarchy is useful, one of the true measures of a GPU is how efficient it is. To determine GPU power efficiency, we need to know both performance and power use. Measuring performance is relatively easy, but measuring power can be complex. We’re here to press the reset button on GPU power measurements and do things the right way.
There are various ways to determine power use, with varying levels of difficulty and accuracy. The easiest approach is via software like GPU-Z, which will tell you what the hardware reports. Alternatively, you can measure power at the outlet using something like a Kill-A-Watt power meter, but that only captures total system power, including PSU inefficiencies. The best and most accurate means of measuring the power use of a graphics card is to measure power draw in between the power supply (PSU) and the card, but it requires a lot more work.
We’ve used GPU-Z in the past, but it had some clear inaccuracies. Depending on the GPU, it can be off by anywhere from a few watts to potentially 50W or more. Thankfully, the latest generation AMD Big Navi and Nvidia Ampere GPUs tend to report relatively accurate data, but we’re doing things the right way. And by “right way,” we mean measuring in-line power consumption using hardware devices. Specifically, we’re using Powenetics software in combination with various monitors from TinkerForge. You can read our Powenetics project overview for additional details.
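The arithmetic behind those in-line monitors is simple even if the wiring isn’t: each monitored rail reports voltage and current samples, per-rail power is V × I, and board power is the sum across the PCIe slot’s 12V and 3.3V rails plus every PEG connector. The Python sketch below is purely illustrative (the sample values are invented and this is not the Powenetics software), but it shows the shape of the calculation.

```python
from statistics import mean

def rail_power_watts(samples):
    """Average power on one rail from (volts, amps) samples: P = V * I."""
    return mean(v * i for v, i in samples)

def card_power_watts(rails):
    """Total board power: the sum of average power on every monitored rail."""
    return sum(rail_power_watts(samples) for samples in rails.values())

# Hypothetical snippet of logged data, just to show the shape of the math.
log = {
    "slot_12v": [(12.05, 4.9), (12.04, 5.1)],
    "slot_3v3": [(3.31, 1.0), (3.30, 1.1)],
    "peg1_12v": [(12.02, 12.3), (12.01, 12.6)],
    "peg2_12v": [(12.03, 11.8), (12.02, 12.1)],
}
print(round(card_power_watts(log), 1))  # total board power in watts
```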
Tom’s Hardware GPU Testbed
After assembling the necessary bits and pieces — some soldering required — the testing process is relatively straightforward. Plug in a graphics card and the power leads, boot the PC, and run some tests that put a load on the GPU while logging power use.
We’ve done that with all the legacy GPUs we have from the past six years or so, and we do the same for every new GPU launch. We’ve updated this article with the latest data from the GeForce RTX 3090, RTX 3080, RTX 3070, RTX 3060 Ti, and RTX 3060 12GB from Nvidia; and the Radeon RX 6900 XT, RX 6800 XT, RX 6800, and RX 6700 XT from AMD. We use the reference models whenever possible, which means only the EVGA RTX 3060 is a custom card.
If you want to see power use and other metrics for custom cards, all of our graphics card reviews include power testing. So for example, the RX 6800 XT roundup shows that many custom cards use about 40W more power than the reference designs, thanks to factory overclocks.
Test Setup
We’re using our standard graphics card testbed for these power measurements, and it’s what we’ll use on graphics card reviews. It consists of an MSI MEG Z390 Ace motherboard, Intel Core i9-9900K CPU, NZXT Z73 cooler, 32GB Corsair DDR4-3200 RAM, a fast M.2 SSD, and the other various bits and pieces you see to the right. This is an open test bed, because the Powenetics equipment essentially requires one.
There’s a PCIe x16 riser card (which is where the soldering came into play) that slots into the motherboard, and then the graphics cards slot into that. This is how we accurately capture actual PCIe slot power draw, from both the 12V and 3.3V rails. There are also 12V kits measuring power draw for each of the PCIe Graphics (PEG) power connectors — we cut the PEG power harnesses in half and run the cables through the power blocks. RIP, PSU cable.
Powenetics equipment in hand, we set about testing and retesting all of the current and previous generation GPUs we could get our hands on. You can see the full list of everything we’ve tested in the list to the right.
From AMD, all of the latest generation Big Navi / RDNA2 GPUs use reference designs, as do the previous gen RX 5700 XT, RX 5700 cards, Radeon VII, Vega 64 and Vega 56. AMD doesn’t do ‘reference’ models on most other GPUs, so we’ve used third party designs to fill in the blanks.
For Nvidia, all of the Ampere GPUs are Founders Edition models, except for the EVGA RTX 3060 card. With Turing, everything from the RTX 2060 and above is a Founders Edition card — which includes the 90 MHz overclock and slightly higher TDP on the non-Super models — while the other Turing cards are all AIB partner cards. Older GTX 10-series and GTX 900-series cards use reference designs as well, except where indicated.
Note that all of the cards are running ‘factory stock,’ meaning no manual overclocking or undervolting is involved. Yes, the various cards might run better with some tuning and tweaking, but this is the way the cards will behave if you just pull them out of their box and install them in your PC. (RX Vega cards in particular benefit from tuning, in our experience.)
Our testing uses the Metro Exodus benchmark looped five times at 1440p ultra (except on cards with 4GB or less VRAM, where we loop 1080p ultra — that uses a bit more power). We also run Furmark for ten minutes. These are both demanding tests, and Furmark can push some GPUs beyond their normal limits, though the latest models from AMD and Nvidia both tend to cope with it just fine. We’re only focusing on power draw for this article, as the temperature, fan speed, and GPU clock results continue to use GPU-Z to gather that data.
GPU Power Use While Gaming: Metro Exodus
Due to the number of cards being tested, we have multiple charts. The average power use charts show average power consumption during the approximately 10 minute long test. These charts do not include the time in between test runs, where power use dips for about 9 seconds, so it’s a realistic view of the sort of power use you’ll see when playing a game for hours on end.
Besides the bar chart, we have separate line charts segregated into groups of up to 12 GPUs, and we’ve grouped cards from similar generations into each chart. These show real-time power draw over the course of the benchmark using data from Powenetics. The 12 GPUs per chart limit is to try and keep the charts mostly legible, and the division of what GPU goes on which chart is somewhat arbitrary.
Kicking things off with the latest generation GPUs, the overall power use is relatively similar. The 3090 and 3080 use the most power (for the reference models), followed by the three Navi 21 cards. The RTX 3070, RTX 3060 Ti, and RX 6700 XT are all pretty close, with the RTX 3060 dropping power use by around 35W. AMD does lead Nvidia in pure power use when looking at the RX 6800 XT and RX 6900 XT compared to the RTX 3080 and RTX 3090, but then Nvidia’s GPUs are a bit faster so it mostly equals out.
Step back one generation to the Turing GPUs and Navi 1x, and Nvidia had far more GPU models available than AMD. There were 15 Turing variants — six GTX 16-series and nine RTX 20-series — while AMD only had five RX 5000-series GPUs. Comparing similar performance levels, Nvidia Turing generally comes in ahead of AMD, despite using a 12nm process compared to 7nm. That’s particularly true when looking at the GTX 1660 Super and below versus the RX 5500 XT cards, though the RTX models are closer to their AMD counterparts (while offering extra features).
It’s pretty obvious how far AMD fell behind Nvidia prior to the Navi generation GPUs. The various Vega and Polaris AMD cards use significantly more power than their Nvidia counterparts. RX Vega 64 was particularly egregious, with the reference card using nearly 300W. If you’re still running an older generation AMD card, this is one good reason to upgrade. The same is true of the legacy cards, though we’re missing many models from these generations of GPU. Perhaps the less said, the better, so let’s move on.
GPU Power with FurMark
FurMark, as we’ve frequently pointed out, is basically a worst-case scenario for power use. Some of the GPUs tend to be more aggressive about throttling with FurMark, while others go hog wild and dramatically exceed official TDPs. Few if any games can tax a GPU quite like FurMark, though things like cryptocurrency mining can come close with some algorithms (but not Ethereum’s Ethash, which tends to be limited by memory bandwidth). The chart setup is the same as above, with average power use charts followed by detailed line charts.
The latest Ampere and RDNA2 GPUs are relatively evenly matched, with all of the cards using a bit more power in FurMark than in Metro Exodus. One thing we’re not showing here is average GPU clocks, which tend to be far lower than in gaming scenarios — you can see that data, along with fan speeds and temperatures, in our graphics card reviews.
The Navi / RDNA1 and Turing GPUs start to separate a bit more, particularly in the budget and midrange segments. AMD didn’t really have anything to compete against Nvidia’s top GPUs, as the RX 5700 XT only matched the RTX 2070 Super at best. Note the gap in power use between the RTX 2060 and RX 5600 XT, though. In gaming, the two GPUs were pretty similar, but in FurMark the AMD chip uses nearly 30W more power. Actually, the 5600 XT used more power than the RX 5700, but that’s probably because the Sapphire Pulse we used for testing has a modest factory overclock. The RX 5500 XT cards also draw more power than any of the GTX 16-series cards.
With the Pascal, Polaris, and Vega GPUs, AMD’s GPUs fall toward the bottom. The Vega 64 and Radeon VII both use nearly 300W, and considering the Vega 64 competes with the GTX 1080 in performance, that’s pretty awful. The RX 570 4GB (an MSI Gaming X model) actually exceeds the official power spec for an 8-pin PEG connector with FurMark, pulling nearly 180W. That’s thankfully the only GPU to go above spec, for the PEG connector(s) or the PCIe slot, but it does illustrate just how bad things can get in a worst-case workload.
The legacy charts are even worse for AMD. The R9 Fury X and R9 390 go well over 300W with FurMark, though perhaps that’s more of an issue with the hardware not throttling to stay within spec. Anyway, it’s great to see that AMD no longer trails Nvidia as badly as it did five or six years ago!
Analyzing GPU Power Use and Efficiency
It’s worth noting that we’re not showing or discussing GPU clocks, fan speeds or GPU temperatures in this article. Power, performance, temperature and fan speed are all interrelated, so a higher fan speed can drop temperatures and allow for higher performance and power consumption. Alternatively, a card can drop GPU clocks in order to reduce power consumption and temperature. We dig into this in our individual GPU and graphics card reviews, but we just wanted to focus on the power charts here. If you see discrepancies between previous and future GPU reviews, this is why.
The good news is that, using these testing procedures, we can properly measure the real graphics card power use and not be left to the whims of the various companies when it comes to power information. It’s not that power is the most important metric when looking at graphics cards, but if other aspects like performance, features and price are the same, getting the card that uses less power is a good idea. Now bring on the new GPUs!
Here’s the final high-level overview of our GPU power testing, showing relative efficiency in terms of performance per watt. The power data listed is a weighted geometric mean of the Metro Exodus and FurMark power consumption, while the FPS comes from our GPU benchmarks hierarchy and uses the geometric mean of nine games tested at six different settings and resolution combinations (so 54 results, summarized into a single fps score).
This table combines the performance data for all of the tested GPUs with the power use data discussed above, sorts by performance per watt, and then scales all of the scores relative to the most efficient GPU (currently the RX 6800). It’s a telling look at how far behind AMD was, and how far it’s come with the latest Big Navi architecture.
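For readers who want to reproduce the sort order, the calculation is straightforward: take a weighted geometric mean of the two power figures, divide the composite fps score by it, and normalize everything to the most efficient card. The Python sketch below uses made-up fps and power numbers and an assumed 75/25 weighting (the exact weights aren’t published here), purely to illustrate the method.

```python
import math

def weighted_geomean(values, weights):
    """Weighted geometric mean: exp(sum(w * ln(x)) / sum(w))."""
    return math.exp(sum(w * math.log(v) for v, w in zip(values, weights)) / sum(weights))

# Hypothetical inputs: fps score, Metro Exodus watts, FurMark watts.
gpus = {
    "GPU A": (100.0, 220.0, 240.0),
    "GPU B": (130.0, 310.0, 330.0),
}
weights = (0.75, 0.25)  # assumed weighting: gaming power counted more heavily than FurMark

ppw = {name: fps / weighted_geomean((metro, furmark), weights)
       for name, (fps, metro, furmark) in gpus.items()}

best = max(ppw.values())
for name, value in sorted(ppw.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{name}: {100 * value / best:.1f}% relative efficiency")
```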
Efficiency isn’t the only important metric for a GPU, and performance definitely matters. Also of note is that none of the performance data includes newer technologies like ray tracing and DLSS.
The most efficient GPUs are a mix of AMD’s Big Navi GPUs and Nvidia’s Ampere cards, along with some first generation Navi and Nvidia Turing chips. AMD claims the top spot with the Navi 21-based RX 6800, and Nvidia takes second place with the RTX 3070. Seven of the top ten spots are occupied by either RDNA2 or Ampere cards. However, Nvidia’s GDDR6X-equipped GPUs, the RTX 3080 and 3090, rank 17th and 20th, respectively.
Given the current GPU shortages, finding a new graphics card in stock is difficult at best. By the time things settle down, we might even have RDNA3 and Hopper GPUs on the shelves. If you’re still hanging on to an older generation GPU, upgrading might be problematic, but at some point it will be the smart move, considering the added performance and efficiency offered by more recent products.
The first benchmark (via Tum_Apisak) of Intel’s Iris Xe DG1 is out. The graphics card’s performance is in the same ballpark as AMD’s four-year-old Radeon RX 550 – at least in the Basemark GPU benchmark.
If we compare manufacturing processes, the DG1 is obviously the more advanced offering. The DG1 is based on Intel’s latest 10nm SuperFin process node, and the Radeon RX 550 utilizes the Lexa die, which was built with GlobalFoundries’ 14nm process. Both the DG1 and Radeon RX 550 hail from Asus’ camp. The Asus DG1-4G features a passive heatsink, while the Asus Radeon RX 550 4G does require active cooling in the form of a single fan. The Radeon RX 550 is rated for 50W and the DG1 for 30W, which is why the latter can get away with a passive cooler.
The Asus DG1-4G features a cut-down variant of the Iris Xe Max GPU, meaning the graphics card only has 80 execution units (EUs) at its disposal. This configuration amounts to 640 shading units with a peak clock of 1,500 MHz. On the memory side, the Asus DG1-4G features 4GB of LPDDR4X-4266 memory across a 128-bit memory interface.
On the other side of the ring, the Asus Radeon RX 550 4G comes equipped with 512 shading units with a 1,100 MHz base clock and a 1,183 MHz boost clock. The graphics card’s 4GB of 7 Gbps GDDR5 memory communicates through a 128-bit memory bus to deliver up to 112 GBps of memory bandwidth.
In terms of FP32 performance, the DG1 delivers up to 2.11 TFLOPs whereas the Radeon RX 550 offers up to 1.21 TFLOPs. On paper, the DG1 should be superior, but we know that FP32 performance isn’t the most important metric.
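Those peak figures come from the usual back-of-the-envelope formulas: shaders × 2 FP32 operations per clock × clock speed for compute, and bus width × per-pin data rate for bandwidth. Here is a quick Python sketch using the specifications listed above.

```python
def fp32_tflops(shaders: int, clock_ghz: float) -> float:
    """Peak FP32 throughput in TFLOPS: shaders * 2 ops per clock (FMA) * clock."""
    return shaders * 2 * clock_ghz / 1000

def bandwidth_gbs(bus_width_bits: int, data_rate_gbps: float) -> float:
    """Raw memory bandwidth in GB/s."""
    return bus_width_bits * data_rate_gbps / 8

# Radeon RX 550: 512 shaders at a 1,183 MHz boost, 7 Gbps GDDR5 on a 128-bit bus.
print(fp32_tflops(512, 1.183))   # ~1.21 TFLOPS, matching the figure above
print(bandwidth_gbs(128, 7))     # 112.0 GB/s

# DG1: 640 shaders. At the listed 1,500 MHz peak this works out to ~1.92 TFLOPS,
# so the quoted 2.11 TFLOPS implies a somewhat higher effective clock.
print(fp32_tflops(640, 1.5))
```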
Both systems from the Basemark GPU submissions were based on the same processor, the Intel Core i3-10100F. Therefore, the DG1 and Radeon RX 550 were on an equal footing as far as the processor is concerned. Let’s not forget that the DG1 is picky when it comes to platforms. The graphics card is only compatible with 9th and 10th Generation Core processors and B460, H410, B365 and H310C motherboards. Even then, special firmware is necessary to get the DG1 working.
The DG1 puts up a Vulkan score of 17,289 points, while the Radeon RX 550 scored 17,619 points. Therefore, the Radeon RX 550 was up to 1.9% faster than the DG1. Of course, this is just one benchmark so it’s too soon to declare a definite winner without more thorough tests.
Intel never intended for the DG1 to be a strong performer, but rather an entry-level graphics card that can hang with the competition. Thus far, the DG1 seems to trade blows with the Radeon RX 550.
Through its GeForce 465 driver update, NVIDIA formally introduced the PCI-Express Resizable BAR feature to its GeForce RTX 30-series “Ampere” graphics cards. This feature was invented by PCI-SIG, custodians of the PCI-Express bus, but only became relevant for the PC when AMD decided to re-brand it as “AMD Smart Access Memory” (which we separately reviewed here) and introduce it with the Radeon RX 6000 series RDNA2 graphics cards. That’s probably when NVIDIA realized they too could implement the feature to gain additional performance for GeForce.
How Resizable BAR Works
Until now, your CPU could only see your graphics card’s memory through 256 MB apertures (that’s 256 MB at a time). Imagine you’re in a dark room with a tiny pocket flashlight that can only illuminate a small part of a page of a book at a time. You can still read the whole page, but you’ll have to move the flashlight to wherever you’re looking. Resizable BAR is the equivalent of illuminating the whole room with a lamp.
This becomes even more important if you consider that with modern APIs, multiple CPU-to-GPU memory transfers can be active at the same time. With only a single, small aperture, these transfers have to be executed in sequence—if the whole VRAM is mapped, they can operate in parallel. Going back to our reading in the dark example, we now assume that there are multiple people trying to read a book, but they only have one flashlight. Everyone has to wait their turn, illuminate the book, read a bit of text and then pass the flashlight on to the next person. With Resizable BAR enabled, everybody can read the book at the same time.
The 256 MB size of the aperture is arbitrary and dates back to the 32-bit era when address space was at a premium. Even with the transition to x86-64, the limit stayed as newer 3D graphics APIs such as DirectX 11 relied less on mirroring data between the system memory and the video memory. Perhaps the main reason nobody bothered to implement Resizable BAR until now was that modern GPUs come with such enormous video memory bandwidths that the act of reading memory through apertures had minimal performance impact, and it’s only now that both NVIDIA and AMD feel the number-crunching power of their GPUs has far outpaced their memory bandwidth requirements.
To use Resizable BAR, a handful of conditions must be met. For starters, you need a modern processor that supports it. From the AMD camp, Ryzen 3000 “Zen 2” and Ryzen 5000 “Zen 3” processors support it. In the Intel camp, hardware support technically dates back to the 4th Gen “Haswell,” but most motherboard vendors for whatever reason restricted their Resizable BAR-enabling BIOS updates to the 300-series chipset, or 8th Gen “Coffee Lake” (and later) architectures, along with X299, or 7th Gen “Skylake-X” HEDT (and later). You’ll also need a compatible graphics card—NVIDIA RTX 30-series or AMD RX 6000 series. Lastly, your PC must boot in UEFI mode with CSM disabled for UEFI GOP support. With these conditions met, you’ll need to enable Resizable BAR in your motherboard’s UEFI setup program.
There are multiple methods to check if Resizable BAR is enabled. The easiest is to use GPU-Z, which now shows the Resizable BAR status on its main screen. The other options are using NVIDIA’s Control Panel and Windows Device Manager.
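On Linux there is no GPU-Z, but the same information is exposed through sysfs: each PCI device’s resource file lists the start and end address of every BAR, so the aperture size is simply end - start + 1. The sketch below is a minimal, illustrative example under the assumption that the GPU sits at the hypothetical address 0000:01:00.0 (find yours with lspci); with Resizable BAR active, the large prefetchable VRAM BAR grows from 256 MB to roughly the card’s full memory size.

```python
# Minimal sketch: print the size of each standard PCI BAR for a GPU on Linux.
from pathlib import Path

# Hypothetical device address; adjust to match your GPU.
resource_file = Path("/sys/bus/pci/devices/0000:01:00.0/resource")

# The first six entries of the resource file correspond to BAR 0 through BAR 5.
for index, line in enumerate(resource_file.read_text().splitlines()[:6]):
    start, end, _flags = (int(field, 16) for field in line.split())
    if end:  # unused BARs show up as all zeros
        size_mb = (end - start + 1) / (1024 * 1024)
        print(f"BAR {index}: {size_mb:.0f} MB")
```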
In this review, we will be testing four NVIDIA GeForce RTX 30-series Ampere models—RTX 3090, RTX 3080, RTX 3070, and RTX 3060 Ti, all Founders Edition cards. Each of these will have Resizable BAR enabled and disabled, across our entire test-suite of 22 games with a rich diversity of game engines and APIs.
Yesterday, AMD released a new Adrenalin driver to the public, version 21.3.2, with support for several new titles including Dirt 5, along with several bug fixes. Specifically, driver 21.3.2 adds support for Dirt 5’s new DirectX Raytracing (DXR) update.
Dirt 5 originally launched late last year, and Codemasters worked with AMD on the title. Not long after launch, AMD provided the press with early access to a beta DXR branch of the game, with the promise that DXR support would eventually get rolled into the public build. It took longer than expected, but with the latest update you can now try Dirt 5’s ray tracing feature on AMD’s current RX 6000 series GPUs. (It also works with Nvidia RTX GPUs.) We’re planning a more extensive look at the state of ray tracing in games in the coming weeks, both to see how much DXR impacts performance and how much ray tracing improves the look of various games.
AMD added support for the new Outriders RPG and Evil Genius 2: World Domination as well. There’s no indication of major performance improvements or bug fixes for those games, but the latest drivers are game ready.
Bug Fixes
Besides the above, here are the five bugs squashed in this update:
The Radeon RX 6700 will no longer report incorrect clock values in AMD’s software.
Shadow corruption is fixed in Insurgency: Sandstorm when running on RX 6000 series hardware.
There is no longer an issue where the desktop resolution in Windows may change when turning a monitor off then back on again.
The start and cancel buttons should no longer disappear when resizing the Radeon Software.
You should no longer get a black screen when enabling Radeon FreeSync and setting a game to borderless fullscreen/windowed mode on RX 6000 series GPUs.
AMD (via Kepler_L2) released a new Linux patch that exposes the cache configuration for its Navi 21, Navi 22 and Navi 23 silicon. The last is rumored to power the chipmaker’s upcoming Radeon RX 6600 series (or maybe RX 6500 series).
The description for the patch reads: “The L1 cache information has been updated and the L2/L3 information has been added. The changes have been made for Vega10 and newer ASICs. There are no changes for the older ASICs before Vega10.” Therefore, it holds a ton of valuable information on both existing and future AMD products.
Introduced with RDNA 2, Infinity Cache basically acts as a big L3 cache that’s accessible by the GPU. It’s there to help improve performance since AMD’s RDNA 2 graphics cards employ relatively narrow memory interfaces. The Radeon RX 6800 XT, for example, uses a 256-bit bus but manages to mostly keep pace with the GeForce RTX 3080, which pairs a 320-bit bus with higher-clocked GDDR6X memory.
Navi 21 (Sienna Cichlid) and Navi 22 (Navy Flounder) sport 128MB and 96MB of Infinity Cache, respectively. According to the new information, Navi 23 will wield 32MB of Infinity Cache. In comparison to Navi 22, we’re looking at a 66.7% reduction on Navi 23. That should also help cut down the die size, though at the cost of performance.
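As a quick sanity check on that figure, the cache sizes from the patch reduce to a one-line calculation:

```python
# Infinity Cache sizes exposed by the Linux patch, in MB.
infinity_cache_mb = {"Navi 21": 128, "Navi 22": 96, "Navi 23": 32}

reduction = 1 - infinity_cache_mb["Navi 23"] / infinity_cache_mb["Navi 22"]
print(f"Navi 23 vs Navi 22: {reduction:.1%} less Infinity Cache")  # 66.7%
```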
The jury is still out on whether AMD will use Navi 23 for the Radeon RX 6600 series, though. Some think that Navi 23 could find its way into the Radeon RX 6500 series instead. Regardless, AIDA64, a popular monitoring and diagnostics tool, recently received support for the Radeon RX 6600 series. Assuming that the software’s release notes are accurate, the Radeon RX 6600 XT and RX 6600 will indeed be based around the Navi 23 die.
ASRock registered a couple of Radeon RX 6600 XT models with the Eurasian Economic Commission (EEC) not so long ago. It’s important to highlight that not every product makes it to the market, but if what ASRock submitted is true, the Radeon RX 6600 XT may feature 12GB of GDDR6 memory. Realistically, it makes more sense for the Radeon RX 6600 XT to have 8GB of GDDR6 memory across a 128-bit memory interface.
The fact that AIDA64 already supports the Radeon RX 6600 series hints that a potential launch may not be too far around the corner. We’re still waiting for a trimmed down Radeon RX 6700 using Navi 22, which we expect to see some time in April.
As tweeted by @momomo_us, it appears that ASRock has teased a brand new RX 6900 XT model in Asia called the Formula OCF 16G. We don’t know much, but we can assume it comes with 16GB of GDDR6. Presumably this will be ASRock’s flagship model of the RX 6900 XT, built specifically for overclocking.
If you are unfamiliar with the “Formula” branding, it’s something ASRock came up with years ago for its motherboard lineup. These boards were aimed squarely at overclockers, with excellent power delivery systems and extra features designed to give users the best overclocking experience the company can offer.
From what we can see, the RX 6900 XT Formula OCF 16G is a beefy triple-slot card with a triple-fan cooler and a heatsink that covers the full length of the card. Aesthetically the card is rather neutral in color, with a grey and black theme, but there are yellow accents on the side of the card, showing that this is a Formula product. The only RGB we can see is a small light bar on the side of the card, right next to the Radeon branding.
Looking at the PCB, we can see what seems to be a BIOS switch, so hopefully this means the Formula will be packing multiple BIOSes. We will probably see one BIOS optimized for quiet operation and the other for pure performance, like other dual-BIOS graphics cards.
Unfortunately, we don’t know the actual specs for clock speeds or things such as power delivery, so hopefully ASRock will release more info on this card soon. But as with all graphics cards at the moment, good luck trying to purchase one at all.
(Pocket-lint) – The 13-inch Intel MacBook Pro was upgraded with the latest specs in early 2020 to bring it in line with 2019’s MacBook Pro 16-inch.
However, there’s also a version – released in November 2020 – with Apple’s own M1 processor. However, we’re only looking at Intel machines in this guide. If you want to think about an Apple Silicon Mac instead, check out our bigger MacBook guide.
All these Macs run Apple’s latest version of its Mac operating system – macOS 11 Big Sur.
So which is the model for you – the larger 16-incher or the more manageable 13-inch? Let’s find out!
Intel MacBook Pro 16-inch vs MacBook Pro 13-inch: Design and build
All models have Touch Bar and Touch ID
New style keyboard – dubbed the Magic Keyboard
Both sizes of MacBook Pro are available in silver and space grey and have the Touch Bar and Touch ID fingerprint authentication. Every MacBook Pro now has a Touch Bar.
The 13-inch models measure 304.1 x 212.4 x 15.6mm and weigh 1.4kg. That means they’re slightly thicker and heavier than the older model, which was 14.9mm thick and 1.37kg.
The larger 16-inch models all measure 358 x 246 x 16.2mm and weigh 2kg. Despite the larger screen size, the new 16-incher is only marginally bigger than the 15-inch it replaced.
The keyboard has been completely redesigned on both models after mass criticism of Apple’s previous Butterfly design (that was present on now end-of-life 15-inch models and pre-2020 13-inch MacBook Pros). That older keyboard design remains the subject of an ongoing recall program.
The Magic Keyboard is designed to be much more durable and with better travel for more comfortable typing. The physical Escape key has also returned.
You’ll get two USB-C/Thunderbolt 3 ports on the two lower end 13-inch models, and four on the top-end pair of models. Yep, there are four standard models of the 13-inch MacBook Pro.
The 16-inch models all have four. Every MacBook Pro retains its 3.5mm headphone jack and there’s the Force Touch trackpad, too.
Intel MacBook Pro 16-inch vs MacBook Pro 13-inch: Display
New MacBooks offer True Tone display
13-inch size and resolutions remain the same
The 16-inch model has a resolution of 3,072 x 1,920 pixels (226ppi), with almost six million pixels on board. The 13-inch model has a resolution of 2,560 x 1,600 pixels (227ppi), the same as older 13-inch MacBook Pros.
All MacBook Pro displays boast True Tone, 500 nits of brightness and a wide P3 colour gamut. True Tone is a tech that was first introduced on the iPad Pro, adjusting the screen to match the colour temperature of the lighting in the room.
Intel MacBook Pro 16-inch vs MacBook Pro 13-inch: Processor, graphics and storage
8th and 9th generation Intel Core processors for 16-inch
10th generation Intel Core processors for 13-inch
Radeon Pro graphics for 16-inch
16-inch gets 8-core Core i9 options
The 16-inch MacBook Pro has 8th generation Intel Core processors and adds some 9th generation options. Everything on the bigger model has either 6 or 8 cores. It has the ‘basic’ option of a 2.6GHz Intel Core i7 with six cores, but there are two Core i9 processors you can get in the range, too, clocked at 2.3GHz or 2.4GHz with Turbo Boost speeds of 4.5GHz or 5GHz respectively.
The 13-inch MacBook Pro boasts a quad-core Core i5 processor as standard – a 10th generation chip clocked at 2.0GHz or 2.3GHz. You can also configure up to a 10th generation Core i7 at 2.3GHz with a maximum Turbo Boost speed of 4.1GHz.
The MacBook Pro 16-inch uses AMD Radeon graphics with the AMD Radeon Pro 5300M or 5500M with 4GB of GDDR6 memory and automatic graphics switching between that and the integrated Intel graphics.
Unfortunately, there’s no discrete graphics option on the 13-inch, but Intel’s Iris Plus graphics chips are no slouch and are way better than the integrated graphics of yesteryear. They aren’t a patch on the 16-inch, however.
The 16-inch model can be topped up to 64GB of memory, while the 13-inch can have up to 32GB. 8GB of 2133MHz LPDDR3 memory is standard on the 13-inch and 16GB of 2666MHz DDR4 memory is standard on the 16-inch.
The storage tops out at a whopping 8TB on the 16-inch and 4TB on the 13-inch, but starts at 512GB. Adding more internal storage at the time of purchase ups the cost significantly.
Every MacBook Pro in the lineup has Apple’s own T2 chip. This is a chip dedicated to security that handles Touch ID and some other capabilities such as powering Siri.
Conclusions
The 16-inch MacBook Pro is a real step up in terms of the power and options it offers, but you have to really need the larger screen, dedicated graphics and sheer power to justify the expense. It’s a machine for people who edit video, photos, chop between projects and need a do-anything machine with the power to match.
The 13-inch model is still our pick for most users but with the MacBook Air now much more powerful, it’s always worth seeing if that’s actually the 13-inch laptop that you need.
Remember that Apple is transitioning all its laptops over to Apple Silicon, so Intel versions won’t be available for too much longer.
Writing by Dan Grabham.