How much power do the best graphics cards use? It’s an important question, and while the performance we show in our GPU benchmarks hierarchy is useful, one of the true measures of a GPU is how efficient it is. To determine GPU power efficiency, we need to know both performance and power use. Measuring performance is relatively easy, but measuring power can be complex. We’re here to press the reset button on GPU power measurements and do things the right way.
There are various ways to determine power use, with varying levels of difficulty and accuracy. The easiest approach is via software like GPU-Z, which will tell you what the hardware reports. Alternatively, you can measure power at the outlet using something like a Kill-A-Watt power meter, but that only captures total system power, including PSU inefficiencies. The best and most accurate approach is to measure power draw in between the power supply (PSU) and the graphics card, but it requires a lot more work.
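To see why wall measurements are imprecise, consider what it takes to back out a GPU power estimate from a Kill-A-Watt reading: you have to guess both the PSU's efficiency and how much the rest of the system draws. A minimal sketch, with all numbers purely illustrative:

```python
# Rough sketch of estimating GPU power from a wall (outlet) reading.
# Both psu_efficiency and rest_of_system_watts are guesses, which is
# exactly why this method is less accurate than in-line measurement.

def estimate_gpu_power(wall_watts, psu_efficiency, rest_of_system_watts):
    """DC power delivered inside the case, minus everything but the GPU."""
    dc_watts = wall_watts * psu_efficiency       # PSU losses happen AC-side
    return dc_watts - rest_of_system_watts       # subtract CPU, board, drives

# 450W at the wall, ~90% efficient PSU, ~120W for the rest of the system
print(round(estimate_gpu_power(450.0, 0.90, 120.0), 1))
```

Every term other than the wall reading is an estimate, so errors of tens of watts are easy to accumulate.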
We’ve used GPU-Z in the past, but it had some clear inaccuracies. Depending on the GPU, it can be off by anywhere from a few watts to potentially 50W or more. Thankfully, the latest generation AMD Big Navi and Nvidia Ampere GPUs tend to report relatively accurate data, but we’re doing things the right way. And by “right way,” we mean measuring in-line power consumption using hardware devices. Specifically, we’re using Powenetics software in combination with various monitors from TinkerForge. You can read our Powenetics project overview for additional details.
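In-line measurement sidesteps those estimates entirely: each monitored rail reports its own voltage and current, and total card power is just the sum of V×I across the PCIe slot rails and the PEG connectors. A hypothetical sketch (the rail names and readings are made up for illustration, not actual Powenetics output):

```python
# Sketch of how in-line card power is derived from per-rail readings.
# Rail names and sample values are illustrative assumptions.

def card_power(rail_samples):
    """Sum instantaneous power (watts) across all monitored rails."""
    return sum(volts * amps for volts, amps in rail_samples.values())

# One sample: (volts, amps) per monitored rail
sample = {
    "slot_12v": (12.05, 4.10),   # PCIe slot 12V rail
    "slot_3v3": (3.31, 0.90),    # PCIe slot 3.3V rail
    "peg_1":    (12.10, 9.50),   # first 8-pin PEG connector
    "peg_2":    (12.08, 8.70),   # second 8-pin PEG connector
}

print(round(card_power(sample), 1))
```

Logging samples like this many times per second over a benchmark run is what produces the power curves shown later in this article.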
Tom’s Hardware GPU Testbed
After assembling the necessary bits and pieces — some soldering required — the testing process is relatively straightforward. Plug in a graphics card and the power leads, boot the PC, and run some tests that put a load on the GPU while logging power use.
We’ve done that with all the legacy GPUs we have from the past six years or so, and we do the same for every new GPU launch. We’ve updated this article with the latest data from the GeForce RTX 3090, RTX 3080, RTX 3070, RTX 3060 Ti, and RTX 3060 12GB from Nvidia; and the Radeon RX 6900 XT, RX 6800 XT, RX 6800, and RX 6700 XT from AMD. We use the reference models whenever possible, which means only the EVGA RTX 3060 is a custom card.
If you want to see power use and other metrics for custom cards, all of our graphics card reviews include power testing. So for example, the RX 6800 XT roundup shows that many custom cards use about 40W more power than the reference designs, thanks to factory overclocks.
Test Setup
We’re using our standard graphics card testbed for these power measurements, and it’s what we’ll use on graphics card reviews. It consists of an MSI MEG Z390 Ace motherboard, Intel Core i9-9900K CPU, NZXT Z73 cooler, 32GB Corsair DDR4-3200 RAM, a fast M.2 SSD, and the other various bits and pieces you see to the right. This is an open test bed, because the Powenetics equipment essentially requires one.
There’s a PCIe x16 riser card (which is where the soldering came into play) that slots into the motherboard, and then the graphics cards slot into that. This is how we accurately capture actual PCIe slot power draw, from both the 12V and 3.3V rails. There are also 12V kits measuring power draw for each of the PCIe Graphics (PEG) power connectors — we cut the PEG power harnesses in half and run the cables through the power blocks. RIP, PSU cable.
Powenetics equipment in hand, we set about testing and retesting all of the current and previous generation GPUs we could get our hands on. You can see the full list of everything we’ve tested in the list to the right.
From AMD, all of the latest generation Big Navi / RDNA2 GPUs use reference designs, as do the previous gen RX 5700 XT and RX 5700 cards, Radeon VII, Vega 64, and Vega 56. AMD doesn’t do ‘reference’ models on most other GPUs, so we’ve used third-party designs to fill in the blanks.
For Nvidia, all of the Ampere GPUs are Founders Edition models, except for the EVGA RTX 3060 card. With Turing, everything from the RTX 2060 and above is a Founders Edition card — which includes the 90 MHz overclock and slightly higher TDP on the non-Super models — while the other Turing cards are all AIB partner cards. Older GTX 10-series and GTX 900-series cards use reference designs as well, except where indicated.
Note that all of the cards are running ‘factory stock,’ meaning no manual overclocking or undervolting is involved. Yes, the various cards might run better with some tuning and tweaking, but this is the way the cards will behave if you just pull them out of their box and install them in your PC. (RX Vega cards in particular benefit from tuning, in our experience.)
Our testing uses the Metro Exodus benchmark looped five times at 1440p ultra (except on cards with 4GB or less VRAM, where we loop 1080p ultra — that uses a bit more power). We also run FurMark for ten minutes. These are both demanding tests, and FurMark can push some GPUs beyond their normal limits, though the latest models from AMD and Nvidia both tend to cope with it just fine. We’re only focusing on power draw for this article; we continue to use GPU-Z to gather the temperature, fan speed, and GPU clock data.
GPU Power Use While Gaming: Metro Exodus
Due to the number of cards being tested, we have multiple charts. The average power use charts show average power consumption during the approximately 10-minute test. These charts do not include the time in between test runs, where power use dips for about 9 seconds, so it’s a realistic view of the sort of power use you’ll see when playing a game for hours on end.
Besides the bar chart, we have separate line charts segregated into groups of up to 12 GPUs, and we’ve grouped cards from similar generations into each chart. These show real-time power draw over the course of the benchmark using data from Powenetics. The 12 GPUs per chart limit is to keep the charts mostly legible, and the division of which GPU goes on which chart is somewhat arbitrary.
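The averaging described above can be sketched simply: discard samples from the brief idle dips between benchmark runs, then average what remains. The cutoff threshold and the sample log below are assumptions for illustration, not our actual pipeline:

```python
# Sketch of averaging power while excluding the idle dips between runs.
# idle_cutoff_w and the sample log are illustrative assumptions.

def average_load_power(samples_w, idle_cutoff_w=50.0):
    """Average only the samples taken while the GPU was under load."""
    load = [w for w in samples_w if w >= idle_cutoff_w]
    return sum(load) / len(load)

# Two benchmark runs with a short idle dip between them
log = [230.1, 228.4, 231.7, 18.2, 17.9, 229.5, 232.0]
print(round(average_load_power(log), 1))
```

Including the dips would understate the power you actually see during hours of gaming, which is why they are filtered out.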
Kicking things off with the latest generation GPUs, the overall power use is relatively similar. The 3090 and 3080 use the most power (for the reference models), followed by the three Navi 21 cards. The RTX 3070, RTX 3060 Ti, and RX 6700 XT are all pretty close, with the RTX 3060 dropping power use by around 35W. AMD does lead Nvidia in pure power use when looking at the RX 6800 XT and RX 6900 XT compared to the RTX 3080 and RTX 3090, but then Nvidia’s GPUs are a bit faster, so it mostly evens out.
Step back one generation to the Turing GPUs and Navi 1x, and Nvidia had far more GPU models available than AMD. There were 15 Turing variants — six GTX 16-series and nine RTX 20-series — while AMD only had five RX 5000-series GPUs. Comparing similar performance levels, Nvidia Turing generally comes in ahead of AMD, despite using a 12nm process compared to 7nm. That’s particularly true when looking at the GTX 1660 Super and below versus the RX 5500 XT cards, though the RTX models are closer to their AMD counterparts (while offering extra features).
It’s pretty obvious how far AMD fell behind Nvidia prior to the Navi generation GPUs. The various Vega and Polaris AMD cards use significantly more power than their Nvidia counterparts. RX Vega 64 was particularly egregious, with the reference card using nearly 300W. If you’re still running an older generation AMD card, this is one good reason to upgrade. The same is true of the legacy cards, though we’re missing many models from these generations of GPU. Perhaps the less said, the better, so let’s move on.
GPU Power with FurMark
FurMark, as we’ve frequently pointed out, is basically a worst-case scenario for power use. Some GPUs tend to be more aggressive about throttling with FurMark, while others go hog wild and dramatically exceed official TDPs. Few if any games can tax a GPU quite like FurMark, though things like cryptocurrency mining can come close with some algorithms (but not Ethereum’s Ethash, which tends to be limited by memory bandwidth). The chart setup is the same as above, with average power use charts followed by detailed line charts.
The latest Ampere and RDNA2 GPUs are relatively evenly matched, with all of the cards using a bit more power in FurMark than in Metro Exodus. One thing we’re not showing here is average GPU clocks, which tend to be far lower than in gaming scenarios — you can see that data, along with fan speeds and temperatures, in our graphics card reviews.
The Navi / RDNA1 and Turing GPUs start to separate a bit more, particularly in the budget and midrange segments. AMD didn’t really have anything to compete against Nvidia’s top GPUs, as the RX 5700 XT only matched the RTX 2070 Super at best. Note the gap in power use between the RTX 2060 and RX 5600 XT, though. In gaming, the two GPUs were pretty similar, but in FurMark the AMD chip uses nearly 30W more power. Actually, the 5600 XT used more power than the RX 5700, but that’s probably because the Sapphire Pulse we used for testing has a modest factory overclock. The RX 5500 XT cards also draw more power than any of the GTX 16-series cards.
With the Pascal, Polaris, and Vega GPUs, AMD’s GPUs fall toward the bottom. The Vega 64 and Radeon VII both use nearly 300W, and considering the Vega 64 competes with the GTX 1080 in performance, that’s pretty awful. The RX 570 4GB (an MSI Gaming X model) actually exceeds the official power spec for an 8-pin PEG connector with FurMark, pulling nearly 180W. That’s thankfully the only GPU to go above spec, for the PEG connector(s) or the PCIe slot, but it does illustrate just how bad things can get in a worst-case workload.
The legacy charts are even worse for AMD. The R9 Fury X and R9 390 go well over 300W with FurMark, though perhaps that’s more of an issue with the hardware not throttling to stay within spec. Anyway, it’s great to see that AMD no longer trails Nvidia as badly as it did five or six years ago!
Analyzing GPU Power Use and Efficiency
It’s worth noting that we’re not showing or discussing GPU clocks, fan speeds, or GPU temperatures in this article. Power, performance, temperature, and fan speed are all interrelated, so a higher fan speed can drop temperatures and allow for higher performance and power consumption. Alternatively, a card can drop GPU clocks in order to reduce power consumption and temperature. We dig into this in our individual GPU and graphics card reviews, but we wanted to focus on the power charts here. If you see discrepancies between previous and future GPU reviews, this is why.
The good news is that, using these testing procedures, we can properly measure the real graphics card power use and not be left to the whims of the various companies when it comes to power information. It’s not that power is the most important metric when looking at graphics cards, but if other aspects like performance, features and price are the same, getting the card that uses less power is a good idea. Now bring on the new GPUs!
Here’s the final high-level overview of our GPU power testing, showing relative efficiency in terms of performance per watt. The power data listed is a weighted geometric mean of the Metro Exodus and FurMark power consumption, while the FPS comes from our GPU benchmarks hierarchy and uses the geometric mean of nine games tested at six different settings and resolution combinations (so 54 results, summarized into a single fps score).
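The efficiency score described above boils down to two geometric means: one over the power results and one over the game fps results, then fps-per-watt scaled against the best card. A sketch of that math, where the card names, numbers, and the 2:1 weighting are illustrative placeholders rather than our real data or weights:

```python
import math

# Sketch of a performance-per-watt ranking using weighted geometric means.
# All card data and the weighting are illustrative assumptions.

def geomean(values, weights=None):
    """Weighted geometric mean: exp of the weighted mean of the logs."""
    weights = weights or [1.0] * len(values)
    total = sum(weights)
    return math.exp(sum(w * math.log(v) for v, w in zip(values, weights)) / total)

# (metro_watts, furmark_watts, fps) per hypothetical card
cards = {
    "card_a": (220.0, 240.0, 90.0),
    "card_b": (300.0, 330.0, 110.0),
}

# Weight the gaming power result more heavily than the FurMark stress result
scores = {name: fps / geomean([metro, fur], weights=[2.0, 1.0])
          for name, (metro, fur, fps) in cards.items()}

# Scale everything relative to the most efficient card (score 100)
best = max(scores.values())
for name, score in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(name, round(100 * score / best, 1))
```

A geometric mean is used rather than an arithmetic one so that no single outlier result (a game that heavily favors one vendor, or FurMark’s worst-case draw) dominates the summary score.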
This table combines the performance data for all of the tested GPUs with the power use data discussed above, sorts by performance per watt, and then scales all of the scores relative to the most efficient GPU (currently the RX 6800). It’s a telling look at how far behind AMD was, and how far it’s come with the latest Big Navi architecture.
Efficiency isn’t the only important metric for a GPU, and performance definitely matters. Also of note is that the performance data does not account for newer technology like ray tracing and DLSS.
The most efficient GPUs are a mix of AMD’s Big Navi GPUs and Nvidia’s Ampere cards, along with some first generation Navi and Nvidia Turing chips. AMD claims the top spot with the Navi 21-based RX 6800, and Nvidia takes second place with the RTX 3070. Seven of the top ten spots are occupied by either RDNA2 or Ampere cards. However, Nvidia’s GDDR6X-equipped GPUs, the RTX 3080 and 3090, rank 17th and 20th, respectively.
Given the current GPU shortages, finding a new graphics card in stock is difficult at best. By the time things settle down, we might even have RDNA3 and Hopper GPUs on the shelves. If you’re still hanging on to an older generation GPU, upgrading might be problematic, but at some point it will be the smart move, considering the added performance and efficiency offered by more recent cards.