A new member of the Rocket Lake family has been spotted on Geekbench, this time an ultra-low-power variant called the Core i9-11900T. With 8 cores and a 35W TDP, this chip is the most power-efficient Rocket Lake SKU to date, but its performance might surprise you. As these are unverified results, take the data with a pinch of salt.
From what we can tell on Geekbench’s spec sheet, the Core i9-11900T features a very low 1.51 GHz base frequency but maintains a surprisingly high 4.9 GHz maximum boost frequency. While 35W may not sound like much power, it seems Rocket Lake’s cores are efficient enough to run one or maybe two cores at a boost frequency typically found on higher-wattage SKUs.
Looking at the results, the 11900T managed 1717 points in the single-threaded test and 8349 points in the multi-threaded test. The single-threaded score, in particular, is impressive: the 11900T beats the soon-to-be previous-gen Core i9-10900K (1402 points) by roughly 22%.
Switching over to Intel’s main competitor, AMD, the Ryzen 7 5800X gets very close to the 11900T, scoring 1674 points, just 2.5% behind.
However, in multi-threaded tests, the 11900T’s 35W TDP really hampers performance. Even last-gen’s Core i7-10700K (never mind the 10900K) managed to be 7% faster than the 11900T, and against the Ryzen 7 5800X, the gap stretches to 22%.
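For the curious, here’s how those percentage gaps fall out of the raw scores; a minimal Python sketch using the figures quoted above:

```python
# Quick sketch: deriving the percentage gaps quoted above from the
# raw Geekbench scores (scores are the ones cited in this article).
def percent_faster(a: float, b: float) -> float:
    """How much faster score a is than score b, in percent."""
    return (a / b - 1) * 100

print(round(percent_faster(1717, 1402), 1))  # 11900T vs 10900K: ~22.5
print(round(percent_faster(1717, 1674), 1))  # 11900T vs 5800X: ~2.6
```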
Overall, the Core i9-11900T is an impressive chip. Even constrained to just 35W, it can outpace the best Comet Lake-S chips in the single-threaded department and get close to Comet Lake-S’ best 8-core CPU, the 10700K, in multi-threaded performance.
This will really expand the chip’s appeal for users who require low-power CPUs. Typically, lower-wattage chips like this come with major performance penalties. But if these numbers hold up, the i9-11900T could legitimately make a nice gaming CPU for ultra-compact/portable gaming systems thanks to its excellent single-threaded performance.
Hopefully, this kind of performance will hold once the chip goes on sale and we can benchmark it for ourselves. We still don’t know when the 11900T will be released; Intel usually delays the launch of its ultra-low-power SKUs until well after its vanilla and overclockable CPUs.
Intel’s 11th Generation Rocket Lake-S processors aren’t in stores yet, but engineering and qualification samples of the chips are evidently going around the black market. Romanian news outlet Lab501 and a Chinese YouTuber have released early reviews of the Core i7-11700K and Core i9-11900K, respectively. Since these are not retail samples, we recommend caution when approaching the results.
By now, Rocket Lake-S shouldn’t require any introductions. The forthcoming chips are still on Intel’s 14nm process node but wield the new Cypress Cove cores, which Intel claims will bring IPC uplifts up to 19%. AMD’s Ryzen 5000 (codename Vermeer) chips have dethroned Intel as the best gaming processor on the market, and the Blue Team is keen to recover its title. On the graphics end, Rocket Lake-S comes equipped with Intel’s 12th Generation Xe LP graphics with a maximum configuration of up to 32 Execution Units (EUs).
The Core i7-11700K and Core i9-11900K are reportedly eight-core, 16-thread processors with a 125W TDP. Intel usually differentiates its Core i7 and i9 lineups by adding more cores (or threads) to the i9 series, but given Rocket Lake’s hard cap of eight cores, it appears that clock rates are the only difference between the two families.
The Core i7-11700K has been rumored to feature a 3.6 GHz base clock, 5 GHz boost clock, and a 4.6 GHz all-core boost clock. Being the flagship part, the Core i9-11900K appears to have a 3.5 GHz base clock, 5.3 GHz boost clock, and a 4.8 GHz all-core boost clock.
Intel Core i7-11700K Benchmarks
Processor | 3ds Max 2020* | Blender* | DaVinci Resolve 15* | HandBrake 1.2.2* | WinRAR 5.91 | 7-Zip 19 | Cinebench R20 | POV-Ray 3.7 | PCMark 10 | Power (W)*
Ryzen 7 5800X | 859 | 575 | 133 | 47 | 32,588 | 94,765 | 6,035 | 5,422 | 8,325 | 224
Core i7-11700K | 917 | 631 | 154 | 48 | 28,072 | 76,816 | 5,615 | 4,505 | 7,927 | 286
*Lower is better.
AMD’s Ryzen 7 5800X simply dominated the Core i7-11700K across the board in application performance. In some benchmarks the margins were less than 10%, while in others, like WinRAR and 7-Zip, the Ryzen 7 5800X delivered 16.1% and 23.3% higher performance, respectively.
The Core i7-11700K’s power consumption also stood out, and not in a good way. With a Prime95-induced load, the Core i7-11700K drew up to 286W. Unfortunately, Lab501 didn’t include the Core i7-10700K, so we can’t gauge generation-over-generation power consumption. However, the Core i7-11700K pulled up to 27.7% more power than the Ryzen 7 5800X. So the Core i7-11700K wasn’t just slower than the Ryzen 7 5800X; it was more power-hungry as well.
Processor | Average (fps) | 4K | WQHD | FHD
Ryzen 7 5800X | 132.76 | 89.20 | 136.80 | 163.15
Core i7-11700K | 131.27 | 89.80 | 133.90 | 161.15
According to Lab501’s results, the Ryzen 7 5800X was, on average, 1.1% faster than the Core i7-11700K. Looking at the individual resolutions, the Ryzen 7 5800X was marginally ahead at WQHD and FHD, with leads of 2.2% and 1.2%, respectively, while the two were essentially tied at 4K.
Obviously, gaming is important for Intel, but the Core i7-11700K failed to help the chipmaker recover the lost ground. With such slim performance deltas, however, pricing could define the winner, and we just don’t know Rocket Lake’s pricing yet.
Intel Core i9-11900K Benchmarks
Processor | PCMark 10 | Blender* | X264 FHD Benchmark | V-Ray | Cinebench R15 | CPU-Z Single Thread | CPU-Z Multi Thread
Core i9-11900K | 14,536 | 142.06 | 72.8 | 17,181 | 2,526 | 719.6 | 7,035.5
Ryzen 7 5800X | 14,062 | 164.49 | 64.2 | 16,317 | 2,354 | 657.0 | 6,366.0
*Lower is better.
The Core i9-11900K, on the other hand, had no problems outperforming the Ryzen 7 5800X in application workloads. Intel’s chip pumped out between 3% and 13% more performance than the Ryzen 7 5800X.
Processor | Wolfenstein: Youngblood | Total War: Three Kingdoms | PlayerUnknown’s Battlegrounds | Cyberpunk 2077 | Hitman 3 | League of Legends | Assassin’s Creed Valhalla
Ryzen 7 5800X | 366 | 117 | 215 | 113 | 156 | 473 | 123
Core i9-11900K | 353 | 117 | 215 | 110 | 158 | 361 | 122
All figures in frames per second.
It would seem that even the Core i9-11900K had trouble beating the Ryzen 7 5800X in gaming. Out of the seven titles, the Ryzen 7 5800X outpaced the Core i9-11900K in four of them. Both chips tied in two games, and the Core i9-11900K only managed to defeat the Ryzen 7 5800X in Hitman 3.
From what we’ve seen so far, the Core i7-11700K is no match for the Ryzen 7 5800X in either application or gaming workloads. Intel redeemed itself with the Core i9-11900K, which offers better application performance than the Ryzen 7 5800X.
Gaming, which Intel is big on, still seems to be on the Ryzen 7 5800X’s side though. Of course, we can’t pass judgment until proper reviews come out.
Although it’s hard to find any Zen 3 chips nowadays, the Ryzen 7 5800X retails for $449 when in stock. We can’t be certain of Rocket Lake’s pricing until the processors officially come out. However, if preliminary retailer listings are even remotely accurate, the Core i7-11700K and Core i9-11900K may well end up with official price tags in the $450 and $600 range, respectively.
What’s the best mining GPU, and is it worth getting into the whole cryptocurrency craze? Bitcoin and Ethereum mining are making headlines again; prices and mining profitability are way up compared to the last couple of years. Everyone who didn’t start mining last time is kicking themselves for their lack of foresight. Not surprisingly, the best graphics cards and those chips at the top of our GPU benchmarks hierarchy end up being very good options for mining as well. How good? That’s what we’re here to discuss, as we’ve got hard numbers on hashing performance, prices, power, and more.
We’re not here to encourage people to start mining, and we’re definitely not suggesting you should mortgage your house or take out a big loan to try and become the next big mining sensation. Mostly, we’re looking at the hard data based on current market conditions. Predicting where cryptocurrencies will go next is even more difficult than predicting the weather, politics, or the next big meme. Chances are, if you don’t already have the hardware required to get started on mining today (or really, about two months ago), you’re already late and won’t see the big gains that others are talking about. Like the old gold rush, the ones most likely to strike it rich are those selling equipment to the miners rather than the miners themselves.
If you’ve looked for a new (or used) graphics card lately, the current going prices probably caused at least a raised eyebrow, maybe even two or three! We’ve heard from people who have said, in effect, “I figured with the Ampere and RDNA2 launches, it was finally time to retire my old GTX 1070/1080 or RX Vega 56/64. Then I looked at prices and realized my old card is selling for as much as I paid over three years ago!” They’re not wrong. Pascal and Vega cards from three or four years ago are currently selling at close to their original launch prices — sometimes more. If you’ve got an old graphics card sitting around, you might even consider selling it yourself (though finding a replacement could prove difficult).
Ultimately, we know many gamers and PC enthusiasts are upset at the lack of availability for graphics cards (and Zen 3 CPUs), but we cover all aspects of hardware — not just gaming. We’ve looked at GPU mining many times over the years, including back in 2011, 2014, and 2017. Those are all times when the price of Bitcoin shot up, driving interest and demand. 2021 is just the latest in the crypto coin mining cycle. About the only prediction we’re willing to make is that prices on Bitcoin and Ethereum will change in the months and years ahead — sometimes up, and sometimes down. And just like we’ve seen so many times before, the impact on graphics card pricing and availability will continue to exist. You should also be aware that, based on past personal experience that some of us have running consumer graphics cards 24/7, it is absolutely possible to burn out the fans, VRMs, or other elements on your card. Proceed at your own risk.
The Best Mining GPUs Benchmarked, Tested and Ranked
With that preamble out of the way, let’s get to the main point: What are the best mining GPUs? This is somewhat theoretical, as you can’t actually buy the cards at retail for the most part, but we have a solution for that as well. We’re going to use eBay pricing on sold listings from the past seven days. We’ll also provide some charts showing pricing information from the past three months (90 days) on eBay, where most GPUs show a clear upward trend. How much can you make by mining Ethereum with a graphics card, and how long will it take to recover the cost of the card at the currently inflated eBay prices? Let’s take a look.
For this chart, we’ve used the current difficulty and price of Ethereum — because nothing else is coming close to GPU Ethereum for mining profitability right now. We’ve tested all of these GPUs on our standard test PC, which uses a Core i9-9900K, MSI MEG Z390 ACE motherboard, 2x16GB Corsair DDR4-3600 RAM, a 2TB XPG M.2 SSD, and a SeaSonic 850W 80 Plus Platinum certified PSU. We’ve tuned mining performance using either NBminer or PhoenixMiner, depending on the GPU, with an eye toward minimizing power consumption while maximizing hash rates. We’ve used $0.10 per kWh for power costs, which is much lower than some areas of the world but also higher than others. Then we’ve used the approximate eBay price divided by the current daily profits to come up with a time to repay the cost of the graphics card.
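As a rough illustration of that payback arithmetic, here’s a minimal Python sketch. The function name and the revenue-per-MH/s figure are placeholder assumptions of ours; real profitability tracks Ethereum’s price and difficulty, which change constantly:

```python
def payback_days(card_price, hash_mhs, watts, usd_per_mhs_day,
                 usd_per_kwh=0.10):
    """Days to recover a card's cost at current (assumed) profitability."""
    revenue = hash_mhs * usd_per_mhs_day           # $/day from mining
    power_cost = watts / 1000 * 24 * usd_per_kwh   # $/day for electricity
    profit = revenue - power_cost
    return card_price / profit if profit > 0 else float("inf")

# Example: a $700 card hashing 60 MH/s at 120W, assuming Ethereum
# pays roughly $0.075 per MH/s per day (placeholder, not measured).
print(round(payback_days(700, 60, 120, 0.075)))  # ~166 days
```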
It’s rather surprising to see older GPUs at the very top of the list, but that’s largely based on the current going prices. GTX 1060 6GB and RX 590 can both hit modest hash rates, and they’re the two least expensive GPUs in the list. Power use isn’t bad either, meaning it’s feasible to potentially run six GPUs off a single PC — though then you’d need PCIe riser cards and other extras that would add to the total cost.
Note that the power figures for all GPUs are before taking PSU efficiency into account. That means actual power use (not counting the CPU, motherboard, and other PC components) will be higher. For the RTX 3080 as an example, total wall outlet power for a single GPU on our test PC is about 60W more than what we’ve listed in the chart. If you’re running multiple GPUs off a single PC, total waste power would be somewhat lower, though it really doesn’t impact things that much. (If you take the worst-case scenario and add 60W to every GPU, the time to break even only increases by 4-5 days.)
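Reusing the hypothetical payback_days() sketch from above, the wall-power caveat is easy to sanity-check; adding 60W of overhead per card only moves the break-even point by a few days at these assumed rates:

```python
base = payback_days(700, 60, 120, 0.075)     # GPU-only power draw
at_wall = payback_days(700, 60, 180, 0.075)  # +60W measured at the wall
print(round(at_wall - base, 1))              # ~5.9 days later
```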
It’s also fair to say that our test results are not representative of all graphics cards of a particular model. The RTX 3090 and RTX 3080 can run at high GDDR6X temperatures without some tweaking, but if you do make the effort, the 3090 can potentially do 120-125MH/s. That would still only put the 3090 at third from the bottom in terms of time to break even, but it’s quite good in terms of power efficiency, and it’s the fastest GPU around. There’s certainly something to be said for mining with fewer, higher-efficiency GPUs if you can acquire them.
Here’s the real problem: Nothing in the above table can predict the price of Ethereum or the mining difficulty. Guessing at the price is like guessing at the value of any other commodity: It may go up or down, and Ethereum, Bitcoin, and other cryptocurrencies are generally more volatile than even the most volatile of stocks. Mining difficulty, on the other hand, tends to increase over time and rarely goes down, as the rate of increase is directly tied to how many devices (PCs, GPUs, ASICs, etc.) are mining.
So, the above is really a best-case scenario for when you’d break even on the cost of a GPU. Actually, that’s not true. The best-case scenario is that the price of Ethereum doubles or triples or whatever, and then everyone holding Ethereum makes a bunch of money. Until people start to cash out and the price drops, triggering panic sells and a plummeting price. That happened in 2018 with Ethereum, and it’s happened at least three times during the history of Bitcoin. Like we said: Volatile. But here we are at record highs, so everyone is happy and nothing could possibly ever go wrong this time. Until it does.
Still, there are obviously plenty of people who believe in the potential of Ethereum, Bitcoin, and blockchain technologies. Even at today’s inflated GPU prices, which are often double the MSRPs for the latest cards, and higher than MSRP for just about everything, the worst cards on the chart (RTX 3090 and RX 6900 XT) would still theoretically pay for themselves in less than seven months. And even if the value of the coins drops, you still have the hardware that’s at least worth something (provided the card doesn’t prematurely die due to heavy mining use). Which means, despite the overall rankings (in terms of time to break even), you’re generally better off buying newer hardware if possible.
Here’s a look at what has happened with GPU pricing during the past 90 days, using tweaked code from:
GeForce RTX 3060 Ti: The newest and least expensive of the Ampere GPUs, it’s just as fast as the RTX 3070 and sometimes costs less. After tuning, it’s also the most efficient GPU for Ethereum right now, using under 120W while breaking 60MH/s.
Radeon RX 5700: AMD’s previous generation Navi GPUs are very good at mining, and can break 50MH/s while using about 135W of power. The vanilla 5700 is as fast as the 5700 XT and costs less, making it a great overall choice.
GeForce RTX 2060 Super: Ethereum mining needs a lot of memory bandwidth, and all of the RTX 20-series GPUs with 8GB end up at around 44MH/s and 130W of power, meaning you should buy whichever is cheapest. That’s usually the RTX 2060 Super.
Radeon RX 590: All the Polaris GPUs with 8GB of GDDR5 memory (including the RX 580 8GB, RX 570 8GB, RX 480 8GB, and RX 470 8GB) end up with relatively similar performance, depending on how well your card’s memory overclocks. The RX 590 is currently the cheapest (theoretically), but all of the Polaris 10/20 GPUs remain viable. Just don’t get the 4GB models!
Radeon RX Vega 56: Overall performance is good, and some cards can perform much better — our reference models used for testing are more of a worst-case choice for most of the GPUs. After tuning, some Vega 56 cards might even hit 45-50MH/s, which would put this at the top of the chart.
Radeon RX 6800: Big Navi is potent when it comes to hashing, and all of the cards we’ve tested hit similar hash rates of around 65MH/s at 170W power use. The RX 6800 is generally several hundred dollars cheaper than the other Big Navi cards and uses a bit less power, making it the clear winner. Plus, when you’re not mining, it’s a very capable gaming GPU.
GeForce RTX 3080: This is the second-fastest graphics card right now, for mining and gaming purposes. The time to break even is only slightly worse than the other GPUs, after which profitability ends up being better overall. And if you ever decide to stop mining, this is the best graphics card for gaming — especially if it paid for itself! At around 95MH/s, it will also earn money faster after you recover the cost of the hardware (if you break even, of course).
What About Ethereum ASICs?
One final topic worth discussing is ASIC mining. Bitcoin (SHA256), Litecoin (Scrypt), and many other popular cryptocurrencies have reached the point where companies have put in the time and effort to create dedicated ASICs — Application Specific Integrated Circuits. Just like GPUs were originally ASICs designed for graphics workloads, ASICs designed for mining are generally only good at one specific thing. Bitcoin ASICs do SHA256 hashing really, really fast (some can do around 25TH/s while using 1000W — that’s trillions of hashes per second), Litecoin ASICs do Scrypt hashing fast, and there are X11, Equihash, and even Ethereum ASICs.
The interesting thing with hashing is that many crypto coins and hashing algorithms have been created over the years, some specifically designed to thwart ASIC mining. Usually, that means creating an algorithm that requires more memory, and Ethereum falls into that category. Still, it’s possible to build dedicated hardware for the job. Some of the fastest Ethereum ASICs (e.g. Innosilicon A10 Pro) can reportedly do around 500MH/s while using only 1000W. That’s the output of roughly eight well-tuned GPUs from a single device, at broadly similar efficiency per watt. Naturally, the cost of such ASICs is prohibitively expensive, and every big miner and their dog wants a bunch of them. They’re all sold out, in other words, just like GPUs.
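To put that in perspective, here’s the per-watt math using the figures cited in this section (the ASIC numbers are reported specs, not something we’ve tested):

```python
# Hashing efficiency in MH/s per watt for the devices mentioned above.
def mh_per_watt(mhs, watts):
    return mhs / watts

print(mh_per_watt(500, 1000))  # Innosilicon A10 Pro (reported): 0.5
print(mh_per_watt(60, 120))    # tuned RTX 3060 Ti (our chart): 0.5
```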
Ethereum has actually tried to deemphasize mining, but obviously that didn’t quite work out. Ethereum 2.0 was supposed to put an end to proof of work hashing, transitioning to a proof of stake model. We won’t get into the complexities of the situation, other than to note that Ethereum mining very much remains a hot item, and there are other non-Ethereum coins that use the same hashing algorithm (though none are as popular / profitable as ETH). Eventually, the biggest cryptocurrencies inevitably end up being supported by ASICs rather than GPUs — or CPUs or FPGAs. But we’re not at that point for Ethereum yet.
Intel just spilled the beans on an Intel Core i9-10900KS CPU that could be coming soon. An Intel Software Advantage Program qualifying list shows the SKU, leaving us wondering what it may offer over the standard Intel Core i9-10900K.
The document also states that buying the Core i9-10900KS or one of the (many) other qualifying CPUs will net you a free copy of Crysis Remastered. However, the 10900KS isn’t out yet. In fact, this is the first time we’ve heard of the CPU.
This is not Intel’s first rodeo with special edition SKUs. Processors such as the Core i7-8086K and Core i9-9900KS were both limited/special edition products that offered the highest binned Intel silicon you could buy, as well as the highest stock core frequencies possible at the time.
For instance, the Core i9-9900KS features a beefy all-core turbo of 5 GHz flat on all 8 CPU cores, even under AVX workloads. The vanilla i9-9900K could boost to 5 GHz, but only on a few cores. As you loaded up more cores, the boost frequency would gradually drop until you hit the CPU’s all-core turbo of 4.7 GHz (unless you enabled multi-core enhancement, which would auto-overclock all cores to 5 GHz flat).
The 10900KS could end up being a similar offering, though it’s difficult to fathom a 5.0 or 5.1 GHz all-core turbo frequency on ten 14nm cores without serious power and heat issues.
Also possible is a higher all-core turbo clock, with Intel still using a turbo core hierarchy, where some cores boost higher than others.
Again though, this is purely speculation. This is the first we’ve heard of the 10900KS so we have no idea when it will come out or what it will offer. But with Intel’s past two generations of Core microarchitectures featuring a “Special Edition” SKU, it seems reasonable that Intel would continue the tradition with Comet Lake-S.
We’ve already had one Intel Core i9-11900K benchmark leak, but an updated test run this week shows improved performance. In a new Geekbench 5 run, the Core i9-11900K scored over 1,900 points in the single-core benchmark and 11,000 points in the multi-core benchmark.
With a top single-core score of 1905 and a multi-core score of 11048, the 8C/16T Intel Core i9-11900K beats its predecessor by about 35% in the single-core test while scoring a similar result in the multi-core test, despite featuring two fewer cores and four fewer threads. These scores are also improved over the 11900K benchmark we saw in January, where a sample scored 1892 in the single-core test and 10934 in the multi-core benchmark.
The Core i9 processor in these tests was running on a Gigabyte Z490 Aorus Master with 32GB of DDR4-3600 memory. It’s unclear whether the processor was overclocked, but during both runs it was frequently running at 5.2GHz, with a few operating-frequency drops.
In the Geekbench 5 single-core test, it beat the fastest AMD Ryzen processor (the Ryzen 9 5950X) by about 13%, but it is still far behind in the multi-core test due to its lower core and thread count: there, the AMD Ryzen 9 5950X scores about 52% higher than the Intel Core i9-11900K.
The fairest comparison with an AMD processor is the Ryzen 7 5800X, which scores 1682 in the single-core test and 10427 in the multi-core test. Compared to those scores, the Intel processor is 14% faster in single-core and 5.5% faster in multi-core.
At the moment, Intel is expected to launch its 11th Gen Core series processors in March 2021.
KitGuru says: Are you impressed with these scores? How do you think the Intel Core i9-11900K will fare in real-world applications?
A company called Expanscape has created the most Inspector Gadget-like device that I’ve ever seen. It’s a laptop prototype called the Aurora 7 (a working title), and attached to its humongous black box of a chassis are six extra displays that extend out in every direction away from the main screen, each showing its own windows and applications.
If you’re like me, the first thought that comes to mind is “that poor hinge!” Yeah, poor hinge, indeed. Many laptop hinges don’t gracefully handle having one screen attached, let alone seven. Piggybacking on the main 17.3-inch 4K display are three other screens of the same size and resolution. Above each of the left and right displays sits a seven-inch 1200p monitor. You’ll also find one more seven-inch 1200p touchscreen display mounted into the wrist rest. This prototype weighs about 26 pounds and is 4.3 inches thick. It has an imposing, intimidating presence, and I haven’t even seen it in person.
What GPU is responsible for powering its four 4K displays? None other than the midrange Nvidia GTX 1060, which isn’t exactly a powerhouse. It also has an Intel Core i9-9900K processor and 64GB of RAM. You can find more specs here. In future revisions, Expanscape wants to use the Nvidia RTX 2070 instead, with options for the AMD Ryzen 9 3950X processor or Intel’s i9-10900K.
Even though it’s built primarily to be a mobile security operations station (and to stay plugged in pretty much all the time), maybe it’ll be able to run some games, too. Gizmodo noticed in its write-up of this gadget that its current prototype can last for just one hour before the battery cries for more power, which is frankly longer than I expected. It uses a secondary 148Wh battery just to power its additional displays, and that’s over the FAA’s legal limit to fly in a plane. Expanscape says it’s working to remedy this in future prototypes. In other words, the company is committed to letting you bring a seven-screen laptop onto a plane. You’d probably have to buy a whole row of seats for the necessary space to use it, though. (If you’re reading this in the future, please take a picture of one of these if you see it on your plane.)
Sure, the Aurora 7 looks rougher around the edges than Razer’s triple-screened Project Valerie laptop from a few years ago. But nevertheless, Expanscape claims it’s willing to actually sell this thing, which is more than Razer can say about its Valerie concept. If you want to buy one, Expanscape says it can help interested parties reserve a prototype of its upcoming revision. As for the price, the company will ask you to sign a nondisclosure agreement, prohibiting you from publicly sharing the cost. That doesn’t bode well for the bank account.
I look forward to hearing more about future revisions of the Aurora 7, especially if it gets a button that makes all of the displays pop open in a comical fashion. Currently, it seems like an extremely manual process.
Gigabyte’s Z590 Aorus Tachyon has yet to debut in the U.S. market, but the overclocking-oriented motherboard has already emerged at overseas stores. Two Austrian retailers (as spotted by momomo_us) have listed the Z590 Aorus Tachyon for €509 (~$612.59).
Deducting the 20% VAT rate would bring the price down to $510.49, which is around the pricing that we can expect in the U.S. market.
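For reference, the VAT arithmetic works out like this (a quick sketch; the exchange rate is the one implied by the article’s own conversion):

```python
gross_eur = 509.0
eur_to_usd = 612.59 / gross_eur          # rate implied by the $612.59 figure
net_usd = gross_eur / 1.20 * eur_to_usd  # strip Austria's 20% VAT
print(round(net_usd, 2))                 # 510.49
```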
Gigabyte hasn’t uploaded the product page for the Z590 Aorus Tachyon, so a bit of mystery still engulfs the motherboard. Built with overclocking in mind, the Z590 Aorus Tachyon flaunts an overpowered 12+1-phase power delivery subsystem. Gigabyte has confirmed that each phase offers up to 100A, bringing the total power delivery capability to a whopping 1,300A. That design, in addition to the twin 8-pin EPS power connectors, will feed even the most power-hungry Intel Rocket Lake-S processors, including the flagship Core i9-11900K.
Adhering to the E-ATX form factor, the Z590 Aorus Tachyon has more than enough landscape to accommodate the tools needed to help elite overclockers break world records. There’s an array of buttons and switches to directly modify the installed processor’s operating frequencies, as well as voltage readouts to get precise measurements.
The Z590 Aorus Tachyon, like other offerings seeking to compete with the best motherboards for overclockers, only comes equipped with two DDR4 RAM slots. However, the slots are placed as close as possible to the CPU socket, allowing for minimum signal noise and interference. This should allow overclockers to hit higher memory frequencies.
The available storage options on the Z590 Aorus Tachyon include six SATA III ports and one or two M.2 ports. We’re unsure about the latter because the huge passive PCH heatsink covers the ports in photos.
As for expansion, the motherboard offers four PCIe x16 expansion slots. Without the specification sheet, however, it’s unknown if they all adhere to the PCIe 4.0 standard, which is one of the selling points for Rocket Lake-S CPUs.
The Z590 Aorus Tachyon’s rear panel exposes two buttons. One’s for flashing the motherboard’s firmware, but the function of the other button is unknown.
There are seven USB ports in total, including one USB Type-C port. The motherboard also provides one Ethernet port, as well as wireless connectivity. Keeping it old school, the Z590 Aorus Tachyon even supplies two PS/2 ports for ancient keyboards and mice. For audio, the motherboard has six 3.5mm audio jacks and one optical S/PDIF out connector.
Today we’re looking at our first custom 3070 card, the Asus GeForce RTX 3070 TUF Gaming OC. Like all the other recent GPUs, Nvidia’s GeForce RTX 3070 continues to be highly sought after — by gamers and miners alike. Originally revealed with a $500 base price, with performance relatively close to the previous generation RTX 2080 Ti, the GPU looked to land right in the sweet spot. The theoretical price easily earns the card a place on our best graphics cards list, and it sits in seventh place in our GPU benchmarks hierarchy (not including the Titan RTX). What does the Asus card bring to the table? Less and more, depending on your perspective.
Here’s a quick comparison of the reference 3070 Founders Edition with the Asus 3070 TUF Gaming. All of the core features and specs are the same, so the only real change is in clock speeds and the card’s design.
Nvidia GeForce RTX 3070 Specifications Comparison
Spec | Asus RTX 3070 TUF Gaming | RTX 3070 Founders Edition
Architecture | GA104 | GA104
Process Technology | Samsung 8N | Samsung 8N
Transistors (Billion) | 17.4 | 17.4
Die size (mm^2) | 392.5 | 392.5
SMs / CUs | 46 | 46
GPU Cores | 5888 | 5888
Tensor Cores | 184 | 184
RT Cores | 46 | 46
Boost Clock (MHz) | 1845 | 1725
VRAM Speed (Gbps) | 14 | 14
VRAM (GB) | 8 | 8
VRAM Bus Width | 256 | 256
ROPs | 96 | 96
TMUs | 184 | 184
TFLOPS FP32 (CUDA) | 21.7 | 20.3
TFLOPS FP16 (Tensor) | 87 (174) | 81 (163)
RT TFLOPS | 42.4 | 39.7
Bandwidth (GBps) | 448 | 448
TDP (watts) | 275 | 220
Dimensions (LxHxW mm) | 300x127x51.7 | 242x112x38
Weight (g) | 1096 | 1034
Launch Price | $549 ($649) | $499
As with most Asus graphics cards, the RTX 3070 TUF Gaming OC has multiple clock speed options. A switch on the top of the card can toggle between ‘quiet’ and ‘performance’ modes (reboot required), but that’s not the full story. The OC Mode has a boost clock of 1845 MHz, compared to 1815 MHz in the default Gaming mode, and 1785 MHz in Quiet Mode. However, you can only use the OC Mode if you install the Asus GPU Tweak II software (see below) — otherwise, you’ll get the slightly lower Gaming Mode clocks.
Asus is basically straddling the fence with this approach. It gets to claim higher boost clocks, but we suspect a lot of users won’t bother installing GPU Tweak and will end up with (slightly) lower performance — and lower power draw as well. Realistically, most people won’t notice the difference either way, but cutting power use by 25W and dropping temperatures a bit are both desirable things with PC hardware. We’ve opted to run the performance tests with OC Mode engaged, but we also collected power and temperature data running in Gaming Mode.
The RTX 3070 TUF’s design is nearly identical to that of the Asus RTX 3080 TUF, with a few minor adjustments. The 3070 has the same dimensions as the more potent 3080 and 3090 cards, but it weighs around 300g less. That’s because the GPU and GDDR6 memory won’t run as hot, so the heatsink isn’t quite as bulky. The overall appearance is nearly the same as the higher-end Asus TUF models as well, though there are a few small differences in the backplate (there are a few extra cutouts on the 3070).
While the reference 3070 has an official TGP (Total Graphics Power) of 220W, Asus doesn’t explicitly list a TGP and instead recommends at least a 750W power supply. The 3070 TUF still requires dual 8-pin power connectors, just like the 3080 and 3090 variants, which is a bit interesting to see. Based on our power testing, it will be challenging to push the card beyond 300W, and an 8-pin plus 6-pin setup would have been sufficient, but it was probably easier to just keep the dual 8-pin connections used on other models.
RGB lighting is present, but it’s very tame compared to other GPUs. The TUF logo on the top of the card lights up, and there’s a small RGB strip on the front edge of the card (linked to the same lights as the logo), and that’s it. If you’re after more bling, Asus has the Strix line for that. TUF is the more mainstream approach to design and aesthetics. Naturally, the Strix models cost more than the TUF models, with slightly higher factory overclocks and better cooling in addition to the extra RGB lighting.
The 3070 TUF has three 90mm fans, and they’re the new style with an integrated rim that increases static pressure and helps improve airflow at the same RPM. Considering we saw very good results from the cooling on the higher power RTX 3080 TUF, the fans should be more than sufficient for the 3070 card. Asus also rotates the center fan clockwise, with the side fans spinning counterclockwise, which it says reduces turbulence and noise. Our testing (see below) generally confirms these claims.
We used GPU Tweak II during testing, setting it to OC Mode. We also did some manual overclocking, which showed similar results to what we’ve seen with other Ampere GPUs. We maxed out the power limit and managed to add 750 MHz to the GDDR6 base clock (15.5Gbps effective speed), but we could only add 75 MHz to the GPU core clocks before we encountered instability. We also ramped up fan speeds quite a bit — using the stock fan profile, we could only get around 50 MHz extra on the GPU core and a 600 MHz memory overclock.
In other words, we consider our OC’d results to be closer to the maximum you should expect to achieve, and we’re being aggressive on fan speeds to get there. If you run one of these cards with the fans usually spinning at 50-75%, the bearings are likely to wear out quicker, and we feel you’re better off just sticking with the default OC Mode for long-term use. Redlining a card for an extra 5% performance isn’t really a great idea, but YMMV.
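As a side note, the effective-speed numbers above imply a simple relationship between the memory-clock offset and the resulting data rate. Here’s a small sketch of our inference (not an official Asus or Nvidia formula):

```python
STOCK_GBPS = 14.0  # GDDR6 effective speed at stock

def effective_gbps(offset_mhz):
    # The +750 MHz offset landing at 15.5 Gbps implies each MHz of
    # offset adds 2 Mbps of effective data rate on this card.
    return STOCK_GBPS + 2 * offset_mhz / 1000

print(effective_gbps(750))  # 15.5 (our maximum manual OC)
print(effective_gbps(600))  # 15.2 (what the stock fan profile allowed)
```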
Asus RTX 3070 TUF Gaming: 1080p Gaming Performance
We’ve only tested one custom 3070 so far, which we’ll highlight in bright red, with the reference 3070 Founders Edition in a darker shade. The charts include both ‘stock’ results (using the OC Mode) and performance with our maximum manual overclock (we didn’t run benchmarks using the Gaming Mode or Quiet Mode). We didn’t run the same overclocking tests on the Founders Edition back when we first tested it, but in the overclocking tests we did run, the FE’s performance was slightly higher than what the Asus card gets in OC Mode.
Our test PC is the same Intel Core i9-9900K setup we’ve been using for over a year now. The Core i9-10900K and Ryzen 9 5900X may be slightly faster, depending on the game used and other settings. However, we’ve enabled XMP memory profiles for our GPU testbed, which seems to narrow the gap quite a bit, particularly with the RTX 3070. We’re running the RAM at DDR4-3600 with 16-18-18 timings, compared to the officially supported DDR4-2666 memory speed.
1080p continues to be the most popular resolution, according to the Steam Hardware Survey, though we figure anyone buying an RTX 3070 likely has their sights set a bit higher. However, some people prefer running a higher refresh rate display over resolution, in which case 1080p results are still important.
Despite the low resolution, there’s still a fairly large gap between the RTX 3070 and RTX 3080, thanks to the game selection and ultra quality settings. Overall, the Asus 3070 TUF ends up beating the reference 3070 FE by just four percent, while the 3080 leads the Asus card by 15 percent. If you have a choice between a heavily factory-overclocked 3070 and a reference-clocked 3080 for roughly the same price, you’ll be better off with the 3080 in every case. Not that you can find either one in stock right now.
Our gaming selection also illustrates one of the pain points with chasing higher frame rates: At maximum quality, even top tier GPUs can struggle to break 144 fps in many games, and 240 fps is basically out of the question. Unless you play Strange Brigade or other lighter fare like CS:GO, Overwatch, and League of Legends, in which case a 240Hz or even 360Hz monitor might be useful. Alternatively, you can drop the quality settings to boost performance, though some games (e.g., Assassin’s Creed Valhalla) will never get much above 120 fps.
Interestingly, the manual overclock is just enough to put the Asus card on equal footing with AMD’s reference RX 6800 (which can, of course, be overclocked for an additional 5-10% boost in performance). Some games strongly favor AMD’s RX 6800 (Valhalla, Borderlands 3, Dirt 5, The Division 2, and Forza Horizon 4). In contrast, other games favor the RTX 3070 (Far Cry 5, FFXIV sort of, Metro Exodus, Strange Brigade, and Watch Dogs Legion — along with every game that supports DXR, aka DirectX Raytracing and DLSS). Still, overall it’s a relatively close match.
Asus RTX 3070 TUF Gaming: 1440p Gaming Performance
Running at 2560×1440 is generally the best balance between resolution and frame rate, especially since 144Hz 1440p displays are relatively affordable — you can even get FreeSync and G-Sync Compatible IPS displays for around $300-$400, which is what we recommend for most people. Performance drops on average by approximately 20 percent compared to 1080p, but all of the games continue to run at more than 60 fps, outside of the two games where we’ve enabled DXR (Dirt 5 and Watch Dogs Legion — though WDL does have the option to use DLSS, which we haven’t done here.)
The factory overclock on the Asus 3070 TUF Gaming gives it a 5 percent lead over the 3070 FE, which isn’t particularly significant. Manually overclocking the Asus card also puts it (barely) ahead of the stock RX 6800 again, with a similar set of wins and losses in the individual games. This is about as far as we’d recommend pushing the RTX 3070 for most gamers.
Technically (see below), you can run at 4K as well, and with the right combination of game and settings, you might even break 60 fps still. However, 1440p 144Hz gaming simply feels much smoother than 4K gaming, even if you have a high-end 4K monitor. But let’s see the actual numbers.
Asus RTX 3070 TUF Gaming: 4K Gaming Performance
As we noted in our RTX 3070 Founders Edition review, it’s basically as fast as the previous generation RTX 2080 Ti. That’s despite the 8GB VRAM limitation — and it’s definitely a limitation. For example, Watch Dogs Legion really doesn’t seem to care for 8GB cards when all the settings are maxed out. DLSS helps, but we had to run the benchmark numerous times for the results shown in the gallery, as often we’d get stuck with extremely low performance. Overall, 4K remains viable, but it’s just not the same experience as 1440p 144Hz.
The overall rankings don’t change compared to the lower resolutions, though individual games may show a few position swaps. The gap between the 3070 and 3080 meanwhile continues to grow. It was only 15 percent at 1080p, then 20 percent at 1440p, and now it’s 30 percent at 4K. Part of that is due to CPU bottlenecks at lower resolutions, but the extra 2GB definitely helps the 3080 in some games at 4K. We’re also curious to see whether Nvidia will actually do a 3070 Ti with 16GB (or 3070 Super or whatever it decides to call it). Still, considering the ongoing GDDR6 shortages, that may not happen for a while.
Anyway, about half of the 13 games we’ve tested (six) average 60 fps or more at 4K ultra. The other half ranges from just slightly below 60 fps, where G-Sync would still make them feel smooth (Borderlands 3, Division 2, Metro Exodus, and Red Dead Redemption 2) to games that are more like the 30-45 fps console experience (Dirt 5 and Assassin’s Creed Valhalla). And then there’s Watch Dogs Legion, which sits at sub-20 fps rates and only barely reaches 30 fps with DLSS in performance mode — at least with DXR enabled. So you can’t plan on running every game maxed out at 4K ultra on the 3070, but most games are easily playable at 4K with a judicious mix of settings.
Asus RTX 3070 TUF Gaming: Power, Clocks, Thermals, Fan Speeds and Noise
For our power, thermal, etc., testing, we’ve tested the Asus card in Gaming Mode, OC Mode, and with our manual overclock. We run Metro Exodus at 1440p ultra (no DLSS or DXR) and FurMark running at 1600×900 in stress test mode. Each one loops a test sequence for about 10 minutes, and we use Powenetics software for in-line power measurement, with GPU-Z tracking clocks, temps, and fan speeds. Unfortunately, while HWInfo64 now reports GDDR6X memory temperatures, that doesn’t apply to vanilla GDDR6 memory. Presumably, it runs cooler since it’s only clocked at 14Gbps, but we weren’t able to check.
Compared to the reference model, the Asus 3070 TUF consumes about 30-35W more power using its default Gaming Mode settings. Engage the OC Mode and power use jumps another 20W, while our manual overclocking only managed an additional 8-9W. That’s not too surprising, considering the maximum power limit in GPU Tweak is 108%. Using a baseline of 250W, that would give a maximum of 270W, and then adding a bit of extra leeway accounts for the last bit of power.
As far as where the power comes from, the card’s peak power was only a few watts higher than what you see in the charts, and all three power sources are easily within spec. Even at our maximum manual overclock, the PCIe slot only provided 62W, the first PEG connector provided 127W, and the second PEG connector provided 98W.
The Asus RTX 3070 TUF comes with a boost clock of 1845 MHz in OC mode, 120 MHz higher than the Founders Edition. As usual, we saw substantially higher clocks in games, with the card averaging 1.96GHz during our Metro Exodus test — note that we’re only averaging clock speeds when the GPU load is above 95%, so the dips you see in the line charts aren’t included. Our manual overclock pushed clock speeds even higher, to an average of 2.08GHz. On the other hand, Furmark hits much higher power use per MHz, so clocks drop about 250MHz (give or take).
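For transparency, here’s roughly how that load-gated averaging works, with made-up sample data standing in for a GPU-Z log:

```python
# Average core clock counting only samples where GPU load > 95%,
# so idle dips between test scenes don't drag the average down.
samples = [(1965, 99), (1980, 98), (350, 12), (1950, 97)]  # (MHz, load %)
loaded = [mhz for mhz, load in samples if load > 95]
print(sum(loaded) / len(loaded))  # 1965.0 MHz
```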
Temperatures are directly linked to fan speeds, so higher RPMs mean lower temps. Even in OC Mode, the cooling on the 3070 TUF proves more than adequate, staying below 70C. More importantly, those temperatures come with the fans coasting along at their minimum speed of around 1000 RPM. (The average is a bit lower as the fans don’t even turn on until the GPU hits 50C.) Our maximum OC used a far more aggressive fan curve, which might be a bit overkill, but it allowed the testing to complete without issues. Thanks to the high fan speeds, temperatures while overclocked were actually lower than at stock, but some tuning to find a happy medium is possible.
Besides affecting thermals, fans also make noise — the higher the RPMs, the more noise. Like most modern GPUs, the Asus 3070 keeps the fans off until the GPU hits 50C, which means for office use and lighter tasks, the card doesn’t make any noise at all. Without manually overclocking, fan noise is only slightly above the noise floor of our test setup. At idle, with the SPL meter 15cm away from the GPU, the noise floor is 34 dB — 4 dB above the limit of the SPL meter, thanks to the CPU cooler fans and pump. During our stress testing, noise levels increased just a hair to 35.4 dB. That’s with an open testbed, so GPU noise will be even less noticeable if you have a decent case.
Asus RTX 3070 TUF Gaming OC: A Solid Offering
Some people love flashy cards with tons of RGB lighting and other extras. The Asus RTX 3070 TUF Gaming skips most of that (other than a relatively subdued RGB logo) and instead focuses on delivering great performance and cooling. We haven’t tested Asus’s higher-end 3070 ROG Strix OC card, which boasts a boost clock of 1935MHz on the top model, but we have difficulty believing it will be much faster than the TUF Gaming OC. If you want to eke out the last few MHz from a GPU, that’s fine, but for most people, it’s better to get a more reasonably priced card rather than one packing all the bells and whistles.
With the Asus 3070 TUF, you get excellent cooling and better than reference performance, nothing more, nothing less. The original launch price was $550, just $50 more than the 3070 Founders Edition, and we can easily get behind that sort of offering. Unfortunately, in the current market, it’s more difficult to say how much you should be willing to spend. The Asus Store lists the 3070 TUF OC at $650 now, supposedly due to the increased tariffs on graphics card imports from China. Obviously, it’s also due to the extreme demand for any reasonably potent graphics card, and you won’t find the card readily available at anywhere close to $650 right now.
We feel like a broken record, but until things settle down, it’s not a great time for PC gaming enthusiasts hoping to pick up a new graphics card. And with Ethereum hitting all-time highs, coin miners are only exacerbating the situation. Nvidia, AMD, and all of their AIB partners are trying to get cards out to the market as quickly as possible, but we’ve gone from thinking it would only take a few months for things to get better to wondering if we’ll see cards at MSRP at all during 2021. Hopefully, but with shortages on many of the other components that go into a graphics card (GDDR6 memory and other materials), we don’t expect major improvements until June or July at best.
This isn’t to say you shouldn’t try to buy the Asus RTX 3070 TUF Gaming OC. If you can find one in stock — either via a waiting list or a Newegg lottery or just getting lucky at a brick and mortar store — and it’s priced reasonably (under $650), this is a great card. There are other GPUs slated to launch in the coming months as well, including the RTX 3060 12GB and the RX 6700 XT. The more options we get, the more likely we will start seeing less of a crush of people trying to buy the higher-end components. But until supply improves and coin mining profitability drops, it’s going to be a tough slog for anyone trying to buy a new high-end graphics card.
What if you need a mobile system with more than three screens? Well, Expanscape has developed a prototype of a laptop with as many as seven screens, and it is already selling the prototypes to interested customers. It also comes packed with an impressive amount of power on the compute side to match.
The Aurora 7 Prototype indeed comes with four 17.3-inch monitors featuring a 4K resolution (two working in landscape, two in portrait mode) as well as three auxiliary 7-inch screens featuring a 1920 x 1200 resolution. All the monitors fold or swivel out of the primary chassis, making it a transformer of sorts, and no on-site assembly before deployment is necessary. The whole system weighs around 12 kilograms, so it is not easy to carry, but it is naturally easier to transport than a laptop along with six extra displays.
With more of us working from home due to the pandemic, multi-display setups are becoming the norm and are widely used for a variety of applications. Setting up a multi-monitor configuration at home or in an office is easy, and while attaching two more displays to a laptop is also possible, it gets slightly more complicated.
Expanscape’s Aurora 7 Prototype laptop is built with very particular applications and audiences in mind (such as security operations centers, data scientists, and content creators) that traditionally use multi-display PCs but at times need to transport and deploy them quickly. The creators wanted their seven-screen laptop to be portable, structurally rigid, and capable of running demanding programs.
As far as internal hardware is concerned, the Aurora 7 is powered by Intel’s Core i9-9900K processor that is accompanied by 64GB of DDR4-2666 memory, Nvidia’s GeForce GTX 1060 graphics card, two PCIe 3.0 x4 M.2 SSDs, one 2.5-inch MLC SSD, and a 2TB 7200RPM hard drive. The PC has all modern connectivity technologies, including Bluetooth, GbE, Wi-Fi, and USB. Since the Aurora 7 uses a fairly spacious chassis, the developer says that it can use different platforms, including AMD’s Ryzen 9 3950X or Intel’s Core i9-10900K.
Among the impressive peculiarities of Expanscape’s Aurora 7 Prototype are two internal batteries. The primary internal battery features an 82Wh capacity and powers the system itself. The secondary internal battery has a 148Wh capacity and is used to power the screens. Battery life for the whole system is about 2 hours 20 minutes, though at high clocks under heavy loads it will be shorter.
Technically, all of Expanscape’s seven-screen Aurora 7 machines are just prototypes that do not look or feel like commercial products, yet the company can build them to order and sell them to interested parties who agree to sign a contract and pay a hefty sum of money.
While we still don’t have an Intel Rocket Lake-S Core i9-11900K CPU to use for testing, the Intel Z590 motherboards are arriving in our labs and on store shelves. So while we await the ability to talk benchmarks, we’ll be walking in detail through the features of these brand-new boards. First up on our bench was the ASRock Z590 Steel Legend 6E Wi-Fi, and now we have the Gigabyte Z590 Aorus Master to dive into.
The latest version of this premium motherboard line includes an incredibly robust VRM, ultra-fast Wi-Fi and wired networking, premium audio, and more. While we don’t have exact pricing information at the time of this writing, the Z490 version came in just under $400, which is around where we expect the Z590 version to land, if not slightly higher.
Gigabyte’s current Z590 product stack consists of 13 models. There are familiar SKUs and a couple of new ones. Starting with the Aorus line, we have the Aorus Xtreme (and potentially a Waterforce version), Aorus Master, Aorus Ultra, and the Aorus Elite. Gigabyte brings back the Vision boards (for creators) and their familiar white shrouds. The Z590 Gaming X and a couple of boards from the budget Ultra Durable (UD) series are also listed. New for Z590 is the Pro AX board, which looks to slot somewhere in the mid-range. Gigabyte will also release the Z590 Aorus Tachyon, an overbuilt motherboard designed for extreme overclocking.
We’re not allowed to list any performance metrics for Rocket Lake (not that we have a CPU at this time) as the embargo wasn’t up when we wrote this article. All we’ve seen at this point are rumors and a claim from Intel of a significant increase to IPC, but the core count was lowered from 10 cores/20 threads in Comet Lake (i9-10900K) to 8 cores/16 threads in the yet-to-be-released i9-11900K. To that end, we’ll stick with specifications and features, adding a full review that includes benchmarking, overclocking and power consumption shortly.
The Z590 Aorus Master looks the part of a premium motherboard, with brushed-aluminum shrouds covering the PCIe/M.2/chipset area. The VRM heatsink and its NanoCarbon Fin-Array II provide a nice contrast against the smooth finish on the board’s bottom. Along with Wi-Fi 6E integration, it also includes Aquantia-based 10 GbE networking, while most other boards use 2.5 GbE. The Aorus Master includes a premium Realtek ALC1220 audio solution with an integrated DAC, three M.2 sockets, reinforced PCIe and memory slots and 10 total USB ports, including a rear USB 3.2 Gen2x2 Type-C port. We’ll cover those features and much more in detail below. But first, here are the full specs from Gigabyte.
Specifications – Gigabyte Z590 Aorus Master
Socket | LGA 1200
Chipset | Z590
Form Factor | ATX
Voltage Regulator | 19 Phase (18+1, 90A MOSFETs)
Video Ports | (1) DisplayPort v1.2
USB Ports | (1) USB 3.2 Gen 2x2, Type-C (20 Gbps); (5) USB 3.2 Gen 2, Type-A (10 Gbps); (4) USB 3.2 Gen 1, Type-A (5 Gbps)
Network Jacks | (1) 10 GbE
Audio Jacks | (5) Analog + SPDIF
Legacy Ports/Jacks | ✗
Other Ports/Jacks | ✗
PCIe x16 | (2) v4.0 (x16/x0 or x8/x8); (1) v3.0 x4
PCIe x8 | ✗
PCIe x4 | ✗
PCIe x1 | ✗
CrossFire/SLI | AMD Quad-GPU CrossFire and 2-Way CrossFire
DIMM slots | (4) DDR4 5000+, 128GB Capacity
M.2 slots | (1) PCIe 4.0 x4 (up to 110mm); (2) PCIe 3.0 x4 / SATA (up to 110mm)
U.2 Ports | ✗
SATA Ports | (6) SATA3 6 Gbps (RAID 0, 1, 5 and 10)
USB Headers | (1) USB v3.2 Gen 2 (Front Panel Type-C); (2) USB v3.2 Gen 1; (2) USB v2.0
Fan/Pump Headers | (10) 4-Pin
RGB Headers | (2) aRGB (3-pin); (2) RGB (4-pin)
Legacy Interfaces | ✗
Other Interfaces | FP-Audio, TPM
Diagnostics Panel | Yes, 2-character debug LED and 4-LED ‘Status LED’ display
Opening up the retail packaging, along with the board, you’re greeted by a slew of included accessories. The Aorus Master contains the basics (guides, driver CD, SATA cables) and a few other things that make this board a complete package. Below is a full list of all included accessories.
Installation Guide
User’s Manual
G-connector
Sticker sheet / Aorus badge
Wi-Fi Antenna
(4) SATA cables
(3) Screws for M.2 sockets
(2) Temperature probes
Microphone
RGB extension cable
After taking the Z590 Aorus Master out of the box, its weight was immediately apparent, with the shrouds, heatsinks and backplate making up the majority of that weight. The board sports a matte-black PCB, with black and grey shrouds covering the PCIe/M.2 area and two VRM heatsinks with fins connected by a heatpipe. The chipset heatsink has the Aorus Eagle branding lit up, while the rear IO shroud arches over the left VRM bank with more RGB LED lighting. The Gigabyte RGB Fusion 2.0 application handles RGB control. Overall, the Aorus Master has a premium appearance and shouldn’t have much issue fitting in with most build themes.
Looking at the board’s top half, we’ll first focus on the VRM heatsinks. They are physically small compared to most boards, but don’t let that fool you. The fin array uses a louvered stacked-fin design Gigabyte says increases surface area by 300% and improves thermal efficiency with better airflow and heat exchange. An 8mm heat pipe also connects them to share the load. Additionally, a small fan located under the rear IO shroud actively keeps the VRMs cool. The fan here wasn’t loud, but was undoubtedly audible at default settings.
We saw a similar configuration in the previous generation, which worked out well with an i9-10900K, so it should do well with the Rocket Lake flagship, too. We’ve already seen reports indicating the i9-11900K has a similar power profile to its predecessor. Feeding power to the VRMs are two reinforced 8-pin EPS connectors (one required).
To the right of the socket, things start to get busy. We see four reinforced DRAM slots supporting up to 128GB of RAM. Oddly enough, the specifications only list support for up to DDR4 3200 MHz, the platform’s official limit, but further down the webpage Gigabyte lists DDR4 5000. It’s an odd way to present it, though it does set the expectation that anything above 3200 MHz counts as overclocking and isn’t guaranteed to work.
Above the DRAM slots are eight voltage read points covering various relevant voltages. This includes read points for the CPU Vcore, VccSA, VccIO, DRAM, and a few others. When you’re pushing the limits and using sub-ambient cooling methods, knowing exactly what voltage the component is getting (software can be inaccurate) is quite helpful.
Above those on the top edge are four fan headers (a fifth sits next to the EPS connectors) of the board’s 10 total. According to the manual, all CPU fan and pump headers support 2A/24W each, so you shouldn’t have any issues powering fans and a water-cooling pump. Gigabyte doesn’t mention whether these headers auto-sense between DC and PWM control, but they handled both when set to ‘auto’ in the BIOS; both a PWM- and a DC-controlled fan worked without intervention.
The first two (of four) RGB LED headers live to the fan headers’ right. The Z590 Aorus Master includes two 3-pin ARGB headers and two 4-pin RGB headers. Since this board takes a minimal approach to RGB lighting, you’ll need to use these to add more bling to your rig.
We find the power button and 2-character debug LED for troubleshooting POST issues on the right edge. Below is a reinforced 24-pin ATX connector for power to the board, another fan header and a 2-pin temperature probe header. Just below all of that are two USB 3.2 Gen1 headers and a single USB 3.2 Gen2x2 Type-C front-panel header for additional USB ports.
Gigabyte chose to go with a 19-phase (18+1) setup on the power delivery front. Controlling power is an Intersil ISL6929 buck controller that manages up to 12 discrete channels; it feeds ISL6617A phase doublers and the 90A ISL99390B MOSFETs. With 18 of those phases dedicated to Vcore, this is one of the more robust VRMs we’ve seen at this price point, making a whopping 1,620A available for the CPU. You won’t have any trouble running any compatible CPU, even with sub-ambient overclocking.
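If you want to sanity-check that headline current figure, the arithmetic is simple. Below is a minimal sketch; the 18-phase Vcore count and 90A per-stage rating come from Gigabyte’s spec sheet, while the helper function itself is ours:

```python
# Rough VRM current-capacity estimate for the Z590 Aorus Master's Vcore rail.
# Assumes the 18+1 layout from Gigabyte's spec sheet: 18 phases feed Vcore,
# each built around a 90A ISL99390B power stage.
def vcore_capacity(phases=18, amps_per_stage=90):
    """Return the theoretical combined current (in amps) for the Vcore rail."""
    return phases * amps_per_stage

print(vcore_capacity())  # 18 * 90A = 1,620A
```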
The bottom half of the board is mostly covered in shrouds hiding all the unsightly but necessary bits. On the far left side, under the shrouds, you’ll find the Realtek ALC1220-VB codec along with an ESS Sabre ES9118 DAC and audiophile-grade WIMA and Nichicon Fine Gold capacitors. With the premium audio codec and DAC, an overwhelming majority of users will find the audio perfectly acceptable.
We’ll find the PCIe slots and M.2 sockets in the middle of the board. Starting with the PCIe slots, there are a total of three full-length slots (all reinforced). The first and second slots are wired for PCIe 4.0, with the primary (top) slot wired for x16, while the second maxes out at x8. Gigabyte says this configuration supports AMD Quad-GPU CrossFire and 2-Way CrossFire. We didn’t see a mention of SLI support even though the lane count supports it. The bottom full-length slot is fed from the chipset and runs at PCIe 3.0 x4 speeds. Since the board does without x1 slots, this is the only expansion slot available if you’re using a triple-slot video card. Anything less than that allows you to use the second slot.
Hidden under the shrouds around the PCIe slots are three M.2 sockets. Unique to this setup is the Aorus M.2 Thermal Guard II, which uses a double-sided heatsink design to help cool M.2 SSD devices with double-sided flash. With these devices’ capacities rising and more using flash on both sides, this is a good value-add.
The top socket (M2A_CPU) supports up to PCIe 4.0 x4 devices up to 110mm long. The second and third sockets, M2P_SB and M2M_SB, support both SATA and PCIe 3.0 x4 modules up to 110mm long. When using a SATA-based SSD on M2P_SB, SATA port 1 will be disabled. When M2M_SB (bottom socket) is in use, SATA ports 4/5 get disabled.
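Because those sharing rules are easy to trip over when populating every socket, here’s a minimal lookup sketch of the behavior described above. The socket and port labels follow Gigabyte’s manual; the dictionary and helper are our own illustration:

```python
# Which SATA ports go dark when an M.2 socket is populated, per the manual.
# M2A_CPU is PCIe-only off the CPU, so it never steals SATA ports. Note that
# M2P_SB only disables SATA port 1 when a SATA-mode SSD is installed there.
M2_SATA_CONFLICTS = {
    "M2A_CPU": [],      # PCIe 4.0 x4, no SATA sharing
    "M2P_SB": [1],      # SATA-mode SSD here disables SATA port 1
    "M2M_SB": [4, 5],   # any SSD here disables SATA ports 4 and 5
}

def disabled_sata_ports(populated_sockets):
    """Return the sorted list of SATA ports lost to the given M.2 sockets."""
    lost = set()
    for socket in populated_sockets:
        lost.update(M2_SATA_CONFLICTS[socket])
    return sorted(lost)

print(disabled_sata_ports(["M2P_SB", "M2M_SB"]))  # [1, 4, 5]
```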
To the right of the PCIe area is the chipset heatsink with the Aorus falcon lit up with RGB LEDs from below. There’s a total of six SATA ports that support RAID0, 1, 5 and 10. Sitting on the right edge are two Thunderbolt headers (5-pin and 3-pin) to connect to a Gigabyte Thunderbolt add-in card. Finally, in the bottom-right corner is the Status LED display. The four LEDs labeled CPU, DRAM, BOOT and VGA light up during the POST process. If something hangs during that time, the LED where the problem resides stays lit, identifying the problem area. This is good to have, even with the debug LED at the top of the board.
Across the board’s bottom are several headers, including more USB ports, fan headers and more. Below is the full list, from left to right:
Front-panel audio
BIOS switch
Dual/Single BIOS switch
ARGB header
RGB header
TPM header
(2) USB 2.0 headers
Noise sensor header
Reset button
(3) Fan headers
Front panel header
Clear CMOS button
The Z590 Aorus Master comes with a pre-installed rear IO panel full of ports and buttons. To start, there are a total of 10 USB ports out back, which should be plenty for most users. You have a USB 3.2 Gen2x2 Type-C port, five USB 3.2 Gen2 Type-A ports and four USB 3.2 Gen1 Type-A ports. There is a single DisplayPort output for those who would like to use the CPU’s integrated graphics. The audio stack consists of five gold-plated analog jacks and a SPDIF out. On the networking side are the Aquantia 10 GbE port and the Wi-Fi antenna. Last but not least are a Clear CMOS button and a Q-Flash button, the latter designed for flashing the BIOS without a CPU.
Firmware
The Z590 Aorus Master BIOS theme doesn’t look any different from the Z490 versions. The Aorus board still uses the black and orange theme we’re familiar with. We’ve captured a majority of the BIOS screens to share with you. Like other board partners, Gigabyte includes an Easy Mode for high-level monitoring and adjustments, along with an Advanced section. The BIOS is well organized, with many of the more commonly used functions easily accessible without drilling down multiple levels to find them. In the end, the BIOS works well and is easy to navigate and read.
Software
Gigabyte includes a few applications designed for various functions, including RGB lighting control, audio, system monitoring, and overclocking. Below, we’ve captured several screenshots of the App Center, @BIOS, SIV, RGB Fusion and Easy Tune.
Future Tests and Final Thoughts
With the release of Z590, we’re in a bit of a pickle in that we have boards in our hands, but not the Rocket Lake CPU designed for it. We know most of these boards should perform similarly to our previous Z490 motherboard reviews. And while there are exceptions, they are mostly at the bottom of the product stack. To that end, we’re posting these as detailed previews until we get data using a Rocket Lake processor.
Once we receive a Rocket Lake CPU and as soon as any embargos have expired, we’ll fill in the data points, including the benchmarking/performance results, as well as overclocking/power and VRM temperatures.
We’ll also be updating our test system hardware to include a PCIe 4.0 video card and storage. This way, we can utilize the platform to its fullest using the fastest protocols it supports. We will also update to the latest Windows 10 64-bit OS (20H2) with all threat mitigations applied, update the video card driver, and use the newest release when we start this testing. We use the latest non-beta motherboard BIOS available to the public unless otherwise noted.
While we do not have performance results from the yet-to-be-released Rocket Lake CPU, we’re sure the 90A VRMs will handle the i9-11900K processor without issue. We quickly tested the i9-10900K and found the board quite capable with that CPU, easily allowing the 5.2 GHz overclock we set. For now, we’ll focus on features, price, and appearance until we gather performance data from the new CPU.
The Gigabyte Z590 Aorus Master is a well-rounded solution, bringing a lot of premium features to the table. Baked into the chipset is USB 3.2 Gen2x2 support, and on the network side, a 10 GbE port and Intel’s Wi-Fi 6E AX210 card are basically the best you can get out of the box. The 90A 18-phase VRM for the processor has no issues with an overclocked Comet Lake CPU, so the new Rocket Lake CPUs at the same TDP shouldn’t have a problem. This board can even be used for sub-ambient overclocking (though the Gigabyte Z590 Tachyon is Gigabyte’s purpose-built board for such a thing).
Since Z590 added native PCIe 4.0 support (with Rocket Lake CPUs only) and additional PCIe lanes, we’ll see more boards with up to three M.2 sockets, just like the less-expensive ASRock Steel Legend has. The Aorus Master sports one PCIe 4.0 x4 (64 Gbps) socket and two PCIe 3.0 x4 (32 Gbps) sockets. Add to that the six SATA ports, and nearly everyone’s storage needs should be covered. The 10 USB ports on the rear IO include a USB 3.2 Gen2x2 Type-C port and should be plenty for most users.
If I had to pick out something that needs improvement, I would like to see more expansion slots; as it stands, with a triple-slot video card installed, only one other full-length PCIe slot remains usable. The $400-plus price tag will also likely put off budget users. While Gigabyte hasn’t listed an exact price for the Aorus Master, the Z490 version came in at just under $400. We expect the Z590 version to be at that point or a little higher.
Compared to similarly priced peers (think ASRock Z590 Taichi, MSI MEG Z590 Unify and the Asus ROG Strix Z590-E Gaming WiFi), the Gigabyte Z590 Aorus Master covers all the bases. If you prefer the latest audio codec and four M.2 sockets instead of three, the Asus Z590-E Gaming has you taken care of. If you need ultra-fast networking, Gigabyte has you covered with its 10 GbE. All of the comparable boards are certainly capable and include quite a few features at this price point, so it comes down to the price, appearance, and features you need.
In the end, the Gigabyte Z590 Aorus Master is, like most Z590 motherboards, an iterative update from Z490. You get Rocket Lake support out of the box, superior power delivery, ultra-fast networking, and a premium appearance. If you’re looking for a Z590 motherboard around the $400 price point, the Z590 Aorus Master should be on your shortlist. Stay tuned for benchmarking, overclocking, and power results using the new Rocket Lake CPU.
Intel’s 12th-Gen Alder Lake chip will bring the company’s hybrid architecture, which combines a mix of larger high-performance cores paired with smaller high-efficiency cores, to desktop x86 PCs for the first time. That represents a massive strategic shift as Intel looks to regain the uncontested performance lead against AMD’s Ryzen 5000 series processors. AMD’s Zen 3 architecture has taken the lead in our Best CPUs and CPU Benchmarks hierarchy, partly on the strength of their higher core counts. That’s not to mention Apple’s M1 processors that feature a similar hybrid design and come with explosive performance improvements of their own.
Intel’s Alder Lake brings disruptive new architectures and reportedly supports features like PCIe 5.0 and DDR5 that leapfrog AMD and Apple in connectivity technology, but the new chips come with significant risks. It all starts with a new way of thinking, at least as far as x86 chips are concerned, of pairing high-performance and high-efficiency cores within a single chip. That well-traveled design philosophy powers billions of Arm chips, often referred to as big.LITTLE (Intel calls its implementation Big-Bigger), but it’s a first for x86 desktop PCs.
Intel has confirmed that its Golden Cove architecture powers Alder Lake’s ‘big’ high-performance cores, while the ‘small’ Atom efficiency cores come with the Gracemont architecture, making for a dizzying number of possible processor configurations. Intel will etch the cores on its 10nm Enhanced SuperFin process, marking the company’s first truly new node for the desktop since 14nm debuted six long years ago.
As with the launch of any new processor, Intel has a lot riding on Alder Lake. However, the move to a hybrid architecture is unquestionably riskier than prior technology transitions because it requires operating system and software optimizations to achieve maximum performance and efficiency. It’s unclear how unoptimized code will impact performance.
In either case, Intel is going all-in: Intel will reunify its desktop and mobile lines with Alder Lake, and we could even see the design come to the company’s high-end desktop (HEDT) lineup.
Intel might have a few tricks up its sleeve, though. Intel paved the way for hybrid x86 designs with its Lakefield chips, the first such chips to come to market, and established a beachhead in terms of both Windows and software support. Lakefield really wasn’t a performance stunner, though, due to a focus on lower-end mobile devices where power efficiency is key. In contrast, Intel says it will tune Alder Lake for high-performance, a must for desktop PCs and high-end notebooks. There are also signs that some models will come with only the big cores active, which should perform exceedingly well in gaming.
Meanwhile, Apple’s potent M1 processors with their Arm-based design have brought a step function improvement in both performance and power consumption over competing x86 chips. Much of that success comes from Arm’s long-standing support for hybrid architectures and the requisite software optimizations. Comparatively, Intel’s efforts to enable the same tightly-knit level of support are still in the opening stages.
Potent adversaries challenge Intel on both sides. Apple’s M1 processors have set a high bar for hybrid designs, outperforming all other processors in their class with the promise of more powerful designs to come. Meanwhile, AMD’s Ryzen 5000 chips have taken the lead in every metric that matters over Intel’s aging Skylake derivatives.
Intel certainly needs a come-from-behind design to thoroughly unseat its competitors, turning the tables back in its favor like the Conroe chips did back in 2006, when the Core architecture debuted with a ~40% performance advantage that cemented Intel’s dominance for a decade. Intel’s Raja Koduri has already likened the transition to Alder Lake to the debut of Core, suggesting that Alder Lake could indeed be a Conroe-esque moment.
In the meantime, Intel’s Rocket Lake will arrive later this month, and all signs point to the new chips overtaking AMD in single-threaded performance. However, they’ll still trail in multi-core workloads due to Rocket Lake’s maximum of eight cores, while AMD has 16-core models for the mainstream desktop. That makes Alder Lake exceedingly important as Intel looks to regain its performance lead in the desktop PC and laptop markets.
While Intel hasn’t shared many of the details on the new chip, plenty of unofficial details have come to light over the last few months, giving us a broad indication of Intel’s vision for the future. Let’s dive in.
Intel’s 12th-Gen Alder Lake At a Glance
Qualification and production in the second half of 2021
Hybrid x86 design with a mix of big and small cores (Golden Cove/Gracemont)
10nm Enhanced SuperFin process
LGA1700 socket requires new motherboards
PCIe 5.0 and DDR5 support rumored
Four variants: -S for desktop PCs, -P for mobile, -M for low-power devices, -L Atom replacement
Gen12 Xe integrated graphics
New hardware-guided operating system scheduler tuned for high performance
Intel Alder Lake Release Date
Intel hasn’t given a specific date for Alder Lake’s debut, but it has said that the chips will be validated for production for desktop PCs and notebooks with the volume production ramp beginning in the second half of the year. That means the first salvo of chips could land in late 2021, though it might also end up being early 2022. Given the slew of benchmark submissions and operating system patches we’ve seen, early silicon is obviously already in the hands of OEMs and various ecosystem partners.
Intel and its partners also have plenty of incentive to get the new platform and CPUs out as soon as possible, and we could have a similar situation to 2015’s short-lived Broadwell desktop CPUs that were almost immediately replaced by Skylake. Rocket Lake seems competitive on performance, but the existing Comet Lake chips (e.g. i9-10900K) already use a lot of power, and the i9-11900K doesn’t look to change that. With Enhanced SuperFin, Intel could dramatically cut power requirements while improving performance.
Intel Alder Lake Specifications and Families
Intel hasn’t released the official specifications of the Alder Lake processors, but a recent update to the SiSoft Sandra benchmark software, along with listings to the open-source Coreboot (a lightweight motherboard firmware option), have given us plenty of clues to work with.
The Coreboot listing outlines various combinations of the big and little cores in different chip models, with some models even using only the larger cores (possibly for high-performance gaming models). The information suggests four configurations with -S, -P, and -M designators, and an -L variant has also emerged:
Alder Lake-S: Desktop PCs
Alder Lake-P: High-performance notebooks
Alder Lake-M: Low-power devices
Alder Lake-L: Listed as “Small Core” Processors (Atom)
Intel Alder Lake-S Desktop PC Specifications
Alder Lake-S*
Big + Small Cores | Cores / Threads | GPU
8 + 8 | 16 / 24 | GT1 – Gen12 32EU
8 + 6 | 14 / 22 | GT1 – Gen12 32EU
8 + 4 | 12 / 20 | GT1 – Gen12 32EU
8 + 2 | 10 / 18 | GT1 – Gen12 32EU
8 + 0 | 8 / 16 | GT1 – Gen12 32EU
6 + 8 | 14 / 20 | GT1 – Gen12 32EU
6 + 6 | 12 / 18 | GT1 – Gen12 32EU
6 + 4 | 10 / 16 | GT1 – Gen12 32EU
6 + 2 | 8 / 14 | GT1 – Gen12 32EU
6 + 0 | 6 / 12 | GT1 – Gen12 32EU
4 + 0 | 4 / 8 | GT1 – Gen12 32EU
2 + 0 | 2 / 4 | GT1 – Gen12 32EU
*Intel has not officially confirmed these configurations. Not all models may come to market. Listings assume all models have Hyper-Threading enabled on the large cores.
Intel’s 10nm Alder Lake combines large Golden Cove cores that support Hyper-Threading (Intel’s branded version of SMT, simultaneous multi-threading, which allows two threads to run on a single core) with smaller single-threaded Atom cores. That means some models could come with seemingly-odd distributions of cores and threads. We’ll jump into the process technology a bit later.
As we can see above, a potential flagship model would come with eight Hyper-Threading enabled ‘big’ cores and eight single-threaded ‘small’ cores, for a total of 24 threads. Logically we could expect the 8 + 8 configuration to fall into the Core i9 classification, while 8 + 4 could land as Core i7, and 6 + 8 and 4 + 0 could fall into Core i5 and i3 families, respectively. Naturally, it’s impossible to know how Intel will carve up its product stack due to the completely new paradigm of the hybrid x86 design.
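Since only the big cores are Hyper-Threaded, the thread counts in the table above fall out of simple arithmetic. Here’s a quick sketch of that math; it assumes, as the table’s footnote does, Hyper-Threading on every big core and none on the Atom cores, and the function name is our own:

```python
# Thread math for a hybrid Alder Lake config: big Golden Cove cores run two
# threads each (Hyper-Threading), Gracemont Atom cores run one thread each.
def hybrid_topology(big, small):
    """Return (total cores, total threads) for a big+small configuration."""
    return big + small, big * 2 + small

for big, small in [(8, 8), (8, 4), (6, 8), (4, 0)]:
    cores, threads = hybrid_topology(big, small)
    print(f"{big} + {small} -> {cores} cores / {threads} threads")
# 8 + 8 -> 16 cores / 24 threads, matching the rumored flagship.
```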
We’re still quite far from knowing particular model names, as recent submissions to public-facing benchmark databases list the chips as “Intel Corporation Alder Lake Client Platform” but use ‘0000’ identifier strings in place of the model name and number. This indicates the silicon is still in the early phases of testing, and newer steppings will eventually progress to production-class processors with identifiable model names.
Given that these engineering samples (ES) chips are still in the qualification stage, we can expect drastic alterations to clock rates and overall performance as Intel dials in the silicon. It’s best to use the test submissions for general information only, as they rarely represent final performance.
The 16-core desktop model has been spotted in benchmarks with a 1.8 GHz base and 4.0 GHz boost clock speed, but we can expect that to increase in the future. For example, a 14-core 20-thread Alder Lake-P model was recently spotted at 4.7 GHz. We would expect clock rates to be even higher for the desktop models, possibly even reaching or exceeding 5.0 GHz on the ‘big’ cores due to a higher thermal budget.
Meanwhile, it’s widely thought that the smaller efficiency cores will come with lower clock rates, but current benchmarks and utilities don’t enumerate the second set of cores with a separate frequency domain, meaning we’ll have to wait for proper software support before we can learn clock rates for the efficiency cores.
We do know from Coreboot patches that Alder Lake-S supports two eight-lane PCIe 5.0 connections and two four-lane PCIe 4.0 connections, for a total of 24 lanes. Conversely, Alder Lake-P dials back connectivity due to its more mobile-centric nature and has a single eight-lane PCIe 5.0 connection along with two four-lane PCIe 4.0 interfaces. There have also been concrete signs of support for DDR5 memory. There are some caveats, though, which you can read about in the motherboard section.
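Those lane allocations add up as advertised; below is a quick tally under the rumored Coreboot figures (the lists are our own restatement of those allocations):

```python
# CPU-attached PCIe lane tally from the rumored Coreboot allocations.
alder_lake_s = [("PCIe 5.0", 8), ("PCIe 5.0", 8), ("PCIe 4.0", 4), ("PCIe 4.0", 4)]
alder_lake_p = [("PCIe 5.0", 8), ("PCIe 4.0", 4), ("PCIe 4.0", 4)]

for name, links in (("Alder Lake-S", alder_lake_s), ("Alder Lake-P", alder_lake_p)):
    print(name, sum(width for _, width in links), "lanes")  # 24 and 16 lanes
```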
Intel Alder Lake-P and Alder Lake-M Mobile Processor Specifications
Alder Lake-P* / Alder Lake-M*
Big + Small Cores | Cores / Threads | GPU
6 + 8 | 14 / 20 | GT2 – Gen12 96EU
6 + 4 | 10 / 16 | GT2 – Gen12 96EU
4 + 8 | 12 / 16 | GT2 – Gen12 96EU
2 + 8 | 10 / 12 | GT2 – Gen12 96EU
2 + 4 | 6 / 8 | GT2 – Gen12 96EU
2 + 0 | 2 / 4 | GT2 – Gen12 96EU
*Intel has not officially confirmed these configurations. Not all models may come to market. Listings assume all models have Hyper-Threading enabled on the large cores.
The Alder Lake-P processors are listed as laptop chips, so we’ll probably see those debut in a wide range of notebooks that range from thin-and-light form factors up to high-end gaming notebooks. As you’ll notice above, all of these processors purportedly come armed with Intel’s Gen 12 Xe architecture in a GT2 configuration, imparting 96 EUs across the range of chips. That’s triple the execution units of the desktop chips and could indicate a focus on reducing the need for discrete graphics chips.
There is precious little information available for the -M variants, but they’re thought to be destined for lower-power devices and serve as a replacement for Lakefield chips. We do know from recent patches that Alder Lake-M comes with reduced I/O support, which we’ll cover below.
Finally, an Alder Lake-L version has been added to the Linux kernel, classifying the chips as ‘”Small Core” Processors (Atom),’ but we haven’t seen other mentions of this configuration elsewhere.
Intel Alder Lake 600-Series Motherboards, LGA 1700 Socket, DDR5 and PCIe 5.0
Intel’s incessant motherboard upgrades, which require new sockets or restrict support within existing sockets, have earned the company plenty of criticism from the enthusiast community – especially given AMD’s long line of AM4-compatible processors. That trend will continue with a new requirement for LGA 1700 sockets and the 600-series chipset for Alder Lake. Still, if rumors hold true, Intel will stick with the new socket for at least the next generation of processors (7nm Meteor Lake) and possibly for an additional generation beyond that, rivaling AMD’s AM4 longevity.
Last year, an Intel document revealed an LGA 1700 interposer for its Alder Lake-S test platform, confirming that the rumored socket will likely house the new chips. Months later, an image surfaced at VideoCardz, showing an Alder Lake-S chip and the 37.5 x 45.0mm socket dimensions. That’s noticeably larger than the current-gen LGA 1200’s 37.5 x 37.5mm.
Because the LGA 1700 socket is bigger than the sockets used in current LGA 1151/LGA 1200 motherboards, existing coolers will be incompatible, but we expect that cooler conversion kits could accommodate the larger socket. Naturally, the larger socket is needed to accommodate 500 more pins than the LGA 1200 socket. Those pins are needed to support newer interfaces, like PCIe 5.0 and DDR5, among other purposes, like power delivery.
PCIe 5.0 and DDR5 support are both listed in patch notes, possibly giving Intel a connectivity advantage over competing chips, but there are a lot of considerations involved with these big technology transitions. As we saw with the move from PCIe 3.0 to 4.0, a step up to a faster PCIe interface requires thicker motherboards (more layers) to accommodate wider lane spacing, more robust materials, and retimers due to stricter trace length requirements. All of these factors conspire to increase cost.
We recently spoke with Microchip, which develops PCIe 5.0 switches, and the company tells us that, as a general statement, we can expect those same PCIe 4.0 requirements to become more arduous for motherboards with a PCIe 5.0 interface, particularly because they will require retimers for even shorter lane lengths and even thicker motherboards. That means we could see yet another jump in motherboard pricing over what the industry already absorbed with the move to PCIe 4.0. Additionally, PCIe 5.0 also consumes more power, which will present challenges in mobile form factors.
Both Microchip and the PCI-SIG standards body tell us that PCIe 5.0 adoption is expected to come to the high-performance server market and workstations first, largely because of the increased cost and power consumption. That isn’t a good fit for consumer devices considering the slim performance advantages in lighter workloads. That means that while Alder Lake may support PCIe 5.0, it’s possible that we could see the first implementations run at standard PCIe 4.0 signaling rates.
Intel took a similar tactic with its Tiger Lake processors – while the chips’ internal pathways are designed to accommodate the increased throughput of the DDR5 interface via a dual ring bus, they came to market with DDR4 memory controllers, with the option of swapping in new DDR5 controllers in the future. We could see a similar approach with PCIe 5.0, with the first devices either using existing controller tech or having the PCIe 5.0 controllers merely default to PCIe 4.0 signaling rates.
Benchmarks have surfaced that indicate Alder Lake supports DDR5 memory, but as with the PCIe 5.0 interface, it remains to be seen whether Intel will enable it on the leading wave of processors. Notably, every transition to a newer memory interface has resulted in higher up-front DIMM pricing, which is concerning in the price-sensitive desktop PC market.
DDR5 is in the opening stages; some vendors, like Adata, TeamGroup, and Micron, have already begun shipping modules. The inaugural modules are expected to run in the DDR5-4800 to DDR5-6400 range. The JEDEC spec tops out at DDR5-8400, but as with DDR4, it will take some time before we see those peak speeds. Notably, several of these vendors have reported that they don’t expect the transition to DDR5 to happen until early 2022.
While the details are hazy around the separation of the Alder Lake-S, -P, -M, and -L variants, some details have emerged about the I/O allocations via Coreboot patches:
Alder Lake-P: CPU PCIe – One PCIe 5.0 x8 / Two PCIe 4.0 x4; PCH – ADP_P; PCH PCIe ports – 12; SATA ports – 6
Alder Lake-M: CPU PCIe – Unknown; PCH – ADP_M; PCH PCIe ports – 10; SATA ports – 3
Alder Lake-S: CPU PCIe – Two PCIe 5.0 x8 / Two PCIe 4.0 x4; PCH – ADP_S; PCH PCIe ports – 28; SATA ports – 6
We don’t have any information for the Alder Lake-L configuration, so it remains shrouded in mystery. However, as we can see above, the PCIe, PCH, and SATA allocations vary by the model, based on the target market. Notably, the Alder Lake-P configuration is destined for mobile devices.
Intel 12th-Gen Alder Lake Xe LP Integrated Graphics
A series of Geekbench test submissions have given us a rough outline of the graphics accommodations for a few of the Alder Lake chips. Recent Linux patches indicate the chips feature the same Gen12 Xe LP architecture as Tiger Lake, though there is a distinct possibility of a change to the sub-architecture (12.1, 12.2, etc.). Also, there are listings for a GT0.5 configuration in Intel’s media driver, but that is a new paradigm in Intel’s naming convention so we aren’t sure of the details yet.
The Alder Lake-S processors come armed with 32 EUs (256 shaders) in a GT1 configuration, and the iGPU on early samples runs at 1.5 GHz. We’ve also seen Alder Lake-P benchmarks with the GT2 configuration, which means they come with 96 EUs (768 shaders). The early Xe LP iGPU silicon on the -P model runs at 1.15GHz, but as with all engineering samples, that could change with shipping models.
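The shader counts in those listings follow from Xe LP’s layout of eight ALUs per execution unit; here’s a quick sketch of the conversion (the constant reflects Xe LP’s published design, while the loop is our own):

```python
# Xe LP shader math: each Gen12 execution unit contains eight ALUs, which is
# where the "shader" counts quoted in spec listings come from.
SHADERS_PER_EU = 8

for config, eus in (("GT1 (Alder Lake-S)", 32), ("GT2 (Alder Lake-P/M)", 96)):
    print(f"{config}: {eus} EUs -> {eus * SHADERS_PER_EU} shaders")
# GT1: 256 shaders; GT2: 768 shaders, matching the figures above.
```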
Alder Lake’s integrated GPUs support up to five display outputs (eDP, dual HDMI and dual DP++), and support the same encoding/decoding features as both Rocket Lake and Tiger Lake, including AV1 8-bit and 10-bit decode, 12-bit VP9, and 12-bit HEVC.
Intel Alder Lake CPU Architecture and 10nm Enhanced SuperFin Process
Intel pioneered the x86 hybrid architecture with its Lakefield chips, with those inaugural models coming with one Sunny Cove core paired with four Atom Tremont cores.
Compared to Lakefield, both the high- and low-performance Alder Lake-S cores take a step forward to newer microarchitectures. Alder Lake-S actually jumps forward two ‘Cove’ generations compared to the ‘big’ Sunny Cove cores found in Lakefield. The big Golden Cove cores come with increased single-threaded performance, AI performance, Network and 5G performance, and improved security features compared to the Willow Cove cores that debuted with Tiger Lake.
Alder Lake’s smaller Gracemont cores jump forward a single Atom generation and offer the benefit of being more power and area efficient (perf/mm^2) than the larger Golden Cove cores. Gracemont also comes with increased vector performance, a nod to an obvious addition of some level of AVX support (likely AVX2). Intel also lists improved single-threaded performance for the Gracemont cores.
It’s unclear whether Intel will use its Foveros 3D packaging for the chips. This 3D chip-stacking technique reduces the footprint of the chip package, as seen with the Lakefield chips. However, given the large LGA 1700 socket, that type of packaging seems unlikely for the desktop PC variants. We could see some Alder Lake-P, -M, or -L chips employ Foveros packaging, but that remains to be seen.
Lakefield served as a proving ground not only for Intel’s 3D Foveros packaging tech but also for the software and operating system ecosystem. At its Architecture Day, Intel outlined performance gains for the Lakefield chips to highlight the promise of hybrid design. Still, the results come with an important caveat: These types of performance improvements are only available through both hardware and operating system optimizations.
Due to the use of both faster and slower cores that are both optimized for different voltage/frequency profiles, unlocking the maximum performance and efficiency requires the operating system and applications to have an awareness of the chip topology to ensure workloads (threads) land in the correct core based upon the type of application.
For instance, if a latency-sensitive workload like web browsing lands in a slower core, performance will suffer. Likewise, if a background task is scheduled into the fast core, some of the potential power efficiency gains are lost. There’s already work underway in both Windows and various applications to support that technique via a hardware-guided OS scheduler.
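To make the scheduling problem concrete, here’s a deliberately naive, hypothetical sketch of hybrid-aware thread placement. The task classes, core names and placement rule are our invention for illustration; they are not Intel’s or Microsoft’s actual scheduler logic, which relies on hardware feedback rather than a static rule like this:

```python
# Naive illustration of hybrid-aware thread placement: latency-sensitive work
# goes to big cores first; background work prefers the efficiency cores.
from dataclasses import dataclass

@dataclass
class Task:
    name: str
    latency_sensitive: bool

BIG_CORES = [f"golden_cove_{i}" for i in range(8)]
SMALL_CORES = [f"gracemont_{i}" for i in range(8)]

def place(task, big_free, small_free):
    """Pick a core for a task, falling back to the other cluster if needed."""
    preferred, fallback = (big_free, small_free) if task.latency_sensitive else (small_free, big_free)
    pool = preferred if preferred else fallback
    return pool.pop(0)

print(place(Task("web browser", True), BIG_CORES[:], SMALL_CORES[:]))    # golden_cove_0
print(place(Task("file indexer", False), BIG_CORES[:], SMALL_CORES[:]))  # gracemont_0
```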
The current format for Intel’s Lakefield relies upon both core types supporting the same instruction set. Alder Lake’s larger Golden Cove cores support AVX-512, but it appears that those instructions will be disabled to accommodate the fact that the Atom Gracemont cores do not support them. There is a notable caveat that any of the SKUs that come with only big cores might still support the instructions.
Intel Chief Architect Raja Koduri mentioned that a new “next-generation” hardware-guided OS scheduler that’s optimized for performance would debut with Alder Lake, but didn’t provide further details. This next-gen OS scheduler could add in support for targeting cores with specific instruction sets to support a split implementation, but that remains to be seen.
Intel fabs Alder Lake on its Enhanced 10nm SuperFin process. This is the second-generation of Intel’s SuperFin process, which you can learn more about in our deep-dive coverage.
Intel says the first 10nm SuperFin process provides the largest intra-node performance improvement in the company’s history, unlocking higher frequencies and lower power consumption than the first version of its 10nm node. The net effect, Intel says, is the same amount of performance uplift the company would normally expect from a whole series of intra-node “+” revisions, but in just one shot.
The 10nm SuperFin transistors have what Intel calls breakthrough technology that includes a new thin barrier that reduces interconnect resistance by 30%, improved gate pitch so the transistor can drive higher current, and enhanced source/drain elements that lower resistance and improve strain. Intel also added a Super MIM capacitor that drives a 5X increase in capacitance, reducing vDroop. That’s important, particularly to avoid localized brownouts during heavy vectorized workloads and also to maintain higher clock speeds.
During its Architecture Day, Intel teased the next-gen variant of SuperFin, dubbed ’10nm Enhanced SuperFin,’ saying that this new process was tweaked to increase interconnect and general performance, particularly for data center parts (technically, this is 10nm+++, but we won’t quibble over an arguably clearer naming convention). This is the process used for Alder Lake, but unfortunately, Intel’s descriptions were vague, so we’ll have to wait to learn more.
We know that the 16-core models come armed with 30MB of L3 cache, while the 14-core / 20-thread chip has 24MB of L3 cache and 2.5MB of L2 cache. However, it is unclear how this cache is partitioned between the two types of cores, which leaves many questions unanswered.
Alder Lake also supports new instructions, like Architectural LBRs, HLAT, and SERIALIZE commands, which you can read more about here. Alder Lake also purportedly supports AVX2 VNNI, which “replicates existing AVX512 computational SP (FP32) instructions using FP16 instead of FP32 for ~2X performance gain.” This rapid math support could be part of Intel’s solution for the lack of AVX-512 support for chips with both big and small cores, but it hasn’t been officially confirmed.
Intel 12th-Generation Alder Lake Price
Intel’s Alder Lake is at least ten months away, so pricing is the wild card. Intel has boosted its 10nm production capacity tremendously over the course of 2020 and hasn’t suffered any recent shortages of its 10nm processors. That means that Intel should have enough production capacity to keep costs within reasonable expectations, but predicting Intel’s 10nm supply simply isn’t reasonable given the complete lack of substantive information on the matter.
However, Intel has proven with its Comet Lake, Ice Lake, and Cooper Lake processors that it is willing to lose margin in order to preserve its market share, and surprisingly, Intel’s recent price adjustments have given Comet Lake a solid value proposition compared to AMD’s Ryzen 5000 chips.
We can only hope that trend continues, but if Alder Lake brings forth both PCIe 5.0 and DDR5 support as expected, we could be looking at exceptionally pricey memory and motherboard accommodations.
A speed demon that prioritizes raw performance, the Alienware m17 R4 puts plenty of pop into a sleek but bulky chassis.
For
Unrivaled performance
Snappy keyboard
Attractive design
At present, RTX 3080 is the fastest laptop graphics card around, but not all RTX 3080-powered laptops are created equal. Many vendors use Nvidia’s Max-Q technology, which prioritizes power efficiency and low fan noise over high performance. Alienware’s m17 R4, however, seeks to pump out every possible frame, deploying a special cooling system and eschewing Max-Q to make its top-of-the-line configuration one of the best gaming laptops.
But the Alienware m17 R4 is not just a speed demon. Starting at $2,106 ($3,586 as tested), this laptop has a snappy keyboard, a sleek sci-fi inspired design with plenty of RGB and an optional 360 Hz screen. You just have to live with a heavy chassis and the occasional bout of fan noise.
Editor’s Note: The Alienware m17 R4 review unit we tested came with a 512GB boot drive and 2TB RAID 0 storage drive. While this hardware is for sale, it is normally shipped to consumers with the 2TB RAID 0 drive as boot drive.
Ports
3x USB 3.2 Type-A, 1x HDMI 2.1, 1x mini DisplayPort 1.4, 1x Thunderbolt 3, 1x microSD card reader
Camera
1280 x 720
Battery
86 WHr
Power Adapter
330W
Dimensions (WxDxH)
15.74 x 11.56 x 0.87 inches
Weight
6.6 pounds
Price (as configured)
$3,586
Design of the Alienware m17 R4
The Alienware m17 R4 has the same sci-fi inspired “Legend” design as both its immediate predecessor, the m17 R3, and its sibling, the Alienware m15 R4. Available in “lunar light” (white) or “dark side of the moon” (black), the m17 R4 looks like a giant starship rocketing through space. The body (ours was white) has a black rear end that juts out like the jet engine on the back of an imperial cruiser. The number 17 on the lid appears in a sci-fi font that you might find adorning a secret warehouse at Area 51.
There’s a honeycomb pattern for the vents on the back, above the keyboard and on the bottom surface. We can only assume that Alienware aliens live in some kind of hive where they are all doing CUDA core calculations.
And, of course, there’s lots of RGB lights to brighten the mood in outer space. The keyboard has four-zone RGB and there are customizable lights on the back edge and in the alien heads on the back of the lid and the power button.
The chassis is made from premium materials: a magnesium alloy with matte white or black paint, covered by a clear coat for extra durability. The interior uses Alienware’s cryo-tech cooling technology which has 12-phase graphics voltage regulation, 6-phase CPU voltage regulation and a CPU vapor chamber.
At 6.6 pounds and 15.74 x 11.56 x 0.87 inches, the Alienware m17 R4 is not exactly light or thin, not that you would expect that from a 17-inch laptop with a Core i9 CPU and RTX 3080 graphics. By comparison, the Gigabyte Aorus 17G (5.95 pounds, 15.9 x 10.8 x 1.0 inches) and Razer Blade Pro 17 (6.1 pounds, 15.6 x 10.2 x 0.8 inches) are both significantly lighter, though the Aorus is thicker. The Asus ROG Flow X13, which we’re also comparing to the m17, is much thinner and lighter (2.87 pounds, 11.77 x 8.74 x 0.62 inches), because it’s a 13-inch laptop that gets its RTX 3080 graphics via an external dock.
The Alienware m17 R4 has plenty of room for ports. On the right side, there are two USB 3.2 Type-A ports, along with a micro SD card reader. The left side contains a Killer RJ-45 Ethernet 2.5 Gbps port, a 3.5mm audio jack and another USB Type-A port. The back holds a Thunderbolt 3 port, a mini DisplayPort 1.4, an HDMI 2.1 connection, Alienware’s proprietary graphics amplifier port and the power connector.
Gaming Performance on the Alienware m17 R4
Sporting an Nvidia RTX 3080 GPU and an Intel Core i9-10980HK CPU, our review configuration of the Alienware m17 R4 is as fast a gaming laptop as you can get right now, thanks to Alienware’s strong cryo-tech cooling solution and the company’s willingness to include a full version of the RTX 3080, rather than the Max-Q variants found in some thinner notebooks.
When I played Cyberpunk 2077 at Ultra RTX settings, the game ranged between 61 and 72 frames per second, depending on how intense the action was at any given time. The frame rate improved to between 85 and 94 fps after I changed to Ultra settings with no RTX. In both cases, the fan noise was really loud by default. Changing the fan profile to quiet improved this somewhat while shaving only a couple of fps off, and only in intense scenes.
The Alienware m17 R4 hit a rate of 120 fps in Grand Theft Auto V at very high settings (1080p), eclipsing the Gigabyte Aorus 17G and its Max-Q-enabled RTX 3080 and Core i7-10870H CPU by 20%. The Asus ROG Flow X13, with its Ryzen 9 5980HS CPU and external RTX 3080 dock, was also a good 13% behind, while the RTX 2080 Super-powered Razer Blade Pro 17 brought up the rear.
On the very demanding Red Dead Redemption 2 at medium settings, the m17 R4 achieved an impressive rate of 79.7 fps, besting the Aorus 17G and ROG Flow X13 by more than 20%. Saddled with last year’s card, the Razer Blade Pro 17 was a full 29% behind.
Alienware’s behemoth exceeded 100 fps again in Shadow of the Tomb Raider, hitting 103 while the Aorus 17G and the ROG Flow X13 hovered in the mid 80s and 60s. On this test, surprisingly, the Razer Blade Pro 17 came close to matching the m17 R4.
Far Cry New Dawn at Ultra settings also provided a great example of the Alienware m17 R4’s dominance. It hit a full 105 fps, where its nearest competitor, the Gigabyte Aorus 17G, could only manage 92 fps, while the Asus ROG Flow X13 and Razer Blade Pro 17 were both in the 80s.
To see how well the Alienware m17 R4 performs over the long haul, we ran the Metro Exodus benchmark at RTX, the highest settings level, 15 times at 1080p. The laptop was remarkably consistent, averaging 75.6 fps with a high of 76.2 and a low of 75.4. During that time, the average CPU speed was 4.19 GHz with a peak of 5.088 GHz. By comparison, the Gigabyte Aorus 17G got an average frame rate of just 59.6 fps with an average CPU speed of 3.47 GHz, and the Asus ROG Flow X13 managed a slightly higher 65.2 fps with an average CPU speed of 3.89 GHz.
Productivity Performance of Alienware m17 R4
With its Core i9-10980HK CPU, 32GB of RAM and dual storage drives, which include both a 2TB RAID 0 PCIe SSD (2 x 1TB) and a 512GB SSD, and that RTX 3080, our review configuration of the Alienware m17 R4 can be a powerful work tool.
On Geekbench 5, a synthetic benchmark that measures overall performance, the m17 R4 got a single-core score of 1,318 and a multi-core score of 8,051. That put it slightly ahead of the Core i7-10870H-powered Gigabyte Aorus 17G on both counts, but behind the Asus ROG Flow X13 and its Ryzen 9 5980HS on single-core performance, while creaming the Razer Blade Pro 17, which we tested with a Core i7-10875H.
The storage in our review unit came slightly misconfigured, with a 512GB NVMe PCIe SSD as the boot drive and a significantly faster 2TB RAID 0 drive made from two 1TB NVMe PCIe SSDs. Dell sells this hardware, but consumers receive units with the 2TB array as boot and the 512GB SSD as a secondary storage drive.
In our tests, copying about 25GB of files, the 512GB drive managed a mediocre 379.7 MBps, but the 2TB drive hit an impressive 1305.5 MBps, which beats the Aorus 17G (869 MBps), the ROG Flow X13 (779.5 MBps) and the Blade Pro 17 (925.2 MBps).
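For context, the MBps figures we report are simply transfer size divided by elapsed time. Here’s a minimal sketch of that math (the 19.2-second timing below is hypothetical, chosen only to land near the 2TB drive’s result):

```python
# File-copy throughput as reported in our storage tests: data moved divided
# by elapsed time. Here size is in gigabytes and the result is in MBps.
def throughput_mbps(size_gb, seconds):
    return size_gb * 1000 / seconds

# Hypothetical timing: a 25GB copy finishing in about 19.2 seconds works out
# to roughly 1,302 MBps, in the ballpark of the 2TB RAID 0 result above.
print(round(throughput_mbps(25, 19.2), 1))
```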
The Alienware m17 R4 took just 6 minutes and 44 seconds to transcode a 4K video to 1080p in Handbrake. That time is 21% faster than the Aorus 17G, 18% quicker than the Flow X13 and a full 29% ahead of the Blade Pro 17.
Display on Alienware m17 R4
The Alienware m17 R4 comes with a choice of three different 17-inch display panels: a 1080p panel with a 144 Hz refresh rate, a 4K, 60 Hz panel and the 1080p, 360 Hz panel in our review unit. Our panel provided sharp images and accurate but mostly unexciting colors, along with smooth, tear-free gaming.
When I watched a trailer for upcoming volcano-disaster-flick Skyfire, the red-orange of lava bursts was lively and the green trees in a forest seemed true-to-life. Fine details like the wrinkles in actor Jason Isaacs’ forehead also stood out.
In a 4K nature video of a Costa Rican jungle, details like the scales on a snake and colors like the red on a parrot’s feathers were also strong, but not nearly as strong as when I viewed it on the 4K, OLED panel from the Alienware m15 R4 I tested recently. On both videos, viewing angles on the matte display were strong as colors didn’t fade even at 90 degrees to the left or right.
In Cyberpunk 2077, details like the threads on a rug or the barrel of a gun were prominent and colors like the red and yellow in the UI seemed accurate but didn’t pop.
The Alienware m17 R4’s display registered a strong 316.2 nits of brightness on our light meter, outpacing the Aorus 17G (299.6), the Razer Blade Pro 17 (304.4) and the Asus ROG Flow X13 (281.6). According to our colorimeter, the screen can reproduce a solid 80.6% of the DCI-P3 color gamut, which is about on par with the Aorus 17G and slightly behind the Razer Blade Pro 17, but miles ahead of the ROG Flow X13.
Keyboard and Touchpad on Alienware m17 R4
With a deep 1.7mm of travel, great tactile feedback and a full numeric keypad, the Alienware m17 R4 offers a fantastic typing experience. On the tenfastfingers.com typing test, I scored a strong 102 words per minute with a 3% error rate, which is a little better than my typical 95 to 100 wpm and 3 to 5% rate.
Not only does the keyboard have a full numeric keypad, but it also sports four customizable macro keys above the pad on the top row. The Alienware Command Center software allows you to set these to launch a program, enter text or play back a pre-recorded set of keystrokes when you hit them. I found programming them very unintuitive, however. Alienware Command Center also allows you to set RGB colors or lighting effects for four different zones on the keyboard.
The 3.1 x 4.1-inch glass touchpad, which uses Windows precision drivers, offers great navigation with just the right amount of friction. Whether I was navigating around the desktop or using multitouch gestures such as pinch-zoom or three-finger swipe, the pad was always accurate and responsive.
Audio on Alienware m17 R4
The Alienware m17 R4’s audio system outputs sound that’s loud enough to fill a mid-sized room and rich enough to dance to. When I played AC/DC’s “Back in Black” with the volume all the way up, the sound was mostly accurate, but some of the high-pitched percussion sounds were a little harsh. Earth, Wind and Fire’s bass-heavy “September” sounded great, with a clear separation of sound where instruments such as the horns section appeared to come from a different side of the notebook than, for example, the drums.
Gunshots and the sound of my NPC friend Jackie yelling at me to stay down sounded sharp and clear in Cyberpunk 2077. However, I had to turn the volume way up to compensate for the fan noise when the system was on high performance settings. Even on the “quiet” thermal setting, fan noise was quite prominent.
The preloaded Alienware Command Center app has an audio section that lets you tweak the sound settings and choose among profiles such as Music, Movie, Shooter and Role Play. I found that the default “Alienware” profile sounded about the same as the Music one, but disabling the audio enhancement definitely made the sound flatter.
Upgradeability of the Alienware m17 R4
The Alienware m17 R4 has three different M.2 SSD slots, all of which are accessible and user upgradeable. The first slot takes short 2230-length drives and the other two are both the normal 2280 size. Unfortunately, the RAM is soldered onto the motherboard and therefore not replaceable.
Opening the Alienware m17 R4 should be easy: there are eight Phillips-head screws on the bottom panel, some of which come out and the others of which you can just loosen. In our testing, loosening the screws was easy, but prying off the bottom panel was challenging and required several minutes with a spudger. Once the panel is off, all three SSDs are visible, but are covered by copper heat sinks you can easily unscrew.
Battery Life on Alienware m17 R4
Forget about using the Alienware m17 R4 without a power outlet for any length of time. The laptop lasted just 2 hours and 5 minutes on our battery test, which involves surfing the web over Wi-Fi at 150 nits of brightness. That’s awful in comparison to all of its competitors, as both the Gigabyte Aorus 17G and Razer Blade Pro 17 lasted for an identical 4 hours and 41 minutes. But this is a 17-inch, 6.6-pound laptop, so portability isn’t a primary concern.
Heat on Alienware m17 R4
The main touchpoints on the Alienware m17 R4 stay relatively cool when you’re not gaming and remain warm but tolerable when you are. After we streamed a YouTube video for 15 minutes, the keyboard hit a reasonable 35.5 degrees Celsius (95.9 degrees Fahrenheit), the touchpad was a chilly 26.2 degrees Celsius (79.3 degrees Fahrenheit) and the underside was just 36.6 degrees Celsius (97.9 degrees Fahrenheit).
After running the Metro Exodus benchmark for 15 minutes to simulate gaming, those temperatures were obviously higher. The keyboard hit 44.4 degrees Celsius (112 degrees Fahrenheit), the touchpad measured 35 degrees (95 degrees Fahrenheit) and the bottom hit 50 degrees (122 degrees Fahrenheit).
When I played Cyberpunk 2077, the area around the WASD keys measured about 40 degrees Celsius (104 degrees Fahrenheit), but the key caps themselves didn’t feel uncomfortably warm to the touch. At performance settings, the fan noise was extremely loud.
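Since we quote every surface temperature in both scales, here’s the conversion we’re applying; a trivial sketch:

```python
# Celsius-to-Fahrenheit conversion behind the surface temperatures above.
def c_to_f(celsius):
    return celsius * 9 / 5 + 32

for c in (26.2, 35.5, 44.4, 50.0):
    print(f"{c:.1f} C = {c_to_f(c):.1f} F")  # e.g. 35.5 C = 95.9 F, 50 C = 122 F
```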
Webcam on Alienware m17 R4
The Alienware m17 R4’s 720p webcam is nothing special. Even when I shot it in a well-lit room, an image of my face was filled with visual noise and fine details like the hairs in my beard were blurry while colors such as the blue in my shirt and the green on the walls were muted. You’ll get by with this built-in camera if you need to, but you’d be better off springing for one of the best webcams.
Software and Warranty on Alienware m17 R4
The Alienware m17 R4 comes preloaded with a handful of useful first-party utilities.
Alienware Mobile Connect allows you to control your Android handset or iPhone from your laptop, taking calls and texts from the desktop.
Alienware Command Center lets you control all the RGB lighting effects, set keyboard macros, tweak audio settings and even modify the performance settings and thermals to go for better performance or quieter and cooler temps. You can even change the max frequency, voltage and voltage offset for the CPU manually if you have an unlocked CPU and want to try overclocking.
As with any Windows laptop, there’s also a small amount of preloaded bloatware, including a trial of Microsoft Office, links to download Photoshop Express and Hulu and free-to-play games like Roblox.
Alienware backs the m17 R4 with a standard one-year warranty on parts and labor that includes in-home service (after a remote diagnosis). You can pay extra to extend the warranty up to five years, and you can add accidental damage protection with no deductible.
Configurations of Alienware m17 R4
When you purchase the Alienware m17 R4 from Dell.com, you can custom configure it with your choice of a Core i7 or Core i9 CPU, RTX 3070 or 3080 GPU, up to 32GB of RAM and up to 4TB of storage. You can choose white or black color options, and you can also pay extra to get per-key RGB lighting instead of the standard 4-zone lighting we tested.
You also get a choice of screens that includes 144 Hz and 360 Hz 1080p panels, along with a 4K, 60 Hz panel that promises to hit 100% of the Adobe RGB color gamut. If you value image quality over fps, we recommend the 4K panel, because the color on our 360 Hz panel was OK, but not exciting.
Our review configuration of the Alienware m17 R4 currently goes for $3,586.79. For that price, you get the Core i9-10980HK, RTX 3080 graphics, the 360 Hz display, 32GB of RAM and a combination of storage drives that includes two 1TB M.2 PCIe SSDs in RAID 0 and a standalone 512GB M.2 SSD, for a total of 2.5TB of storage. Dell lists the RAID drive as the boot drive in its store, but our review model came with the 512GB drive as boot and the 2TB RAID drive as storage, which seems odd.
Bottom Line
At this point, it’s hard to imagine someone making a gaming laptop that’s significantly more powerful than the Alienware m17 R4 we tested unless they use desktop parts. The RTX 3080 is currently the fastest mobile GPU around, especially since Alienware didn’t opt for Nvidia’s more power-efficient Max-Q technologies. Add a strong cooling system, pair it with a Core i9-10980HK, and you have performance that’s often 20% faster than competitors that also use RTX 3080s.
In addition to its strong performance, the Alienware m17 R4 offers a deep, tactile keyboard and a unique, attractive design that’s all its own. The 360 Hz screen is more than capable, but unless you’re a competitive gamer, you can go with the default screen or, better yet, go for the 4K panel which promises much richer colors.
The biggest drawbacks for this epic laptop are those inherent to any 17-inch laptop that turns the performance volume up to 11: it’s heavy, has short battery life, emits plenty of fan noise and is quite expensive. It would be nice if, for this price, you got a better-than-awful webcam, but most laptop webcams are terrible.
If you want to save a few dollars or you need a little more battery life, consider the Gigabyte Aorus 17G, which goes for $2,699 with similar specs (but just 1TB of storage) to our Alienware m17 R4. The 17G lasts more than twice as long on a charge and weighs 0.65 pounds less than the m17, but its gaming performance isn’t as good.
If you don’t feel attached to the 17-inch form factor, consider the Alienware m15 R4, which has the same design and keyboard but is much more portable, albeit hotter. It also has an optional, 4K OLED panel which has incredibly vibrant output. However, if you want the ultimate 17-inch gaming rig right now, the Alienware m17 R4 is your best choice.
Hardware enthusiasts have run a burn-in test on Intel’s 11th-Gen Rocket Lake processor to expose its power consumption under extreme loads and compare it to its predecessors from the Comet Lake-S family. It turns out the upcoming Core i9-11900KF can get extremely hot and power hungry under such loads, just like its Comet Lake ancestors. Intel’s upcoming eight-core Core i9-11900KF ‘Rocket Lake-S’ processors can purportedly heat up to 98C and pull 250W of power during stress tests. That means the chips should place well in our CPU Benchmarks Hierarchy, at least one would hope given all that power consumption, but they’ll run hot just like the previous-gen Intel chips.
Although Intel’s latest 10th Generation Core ‘Comet Lake-S’ processors are rated for a 125W TDP, they can actually draw 250W to 330W of power when they boost on all cores for up to 56 seconds, allowing them to provide their maximum potential in situations where it is actually needed.
Intel’s public-facing specs list power consumption based on the default power level (PL1). There’s a big difference between the default power level and an all-core turbo power level (PL2), so you’ll need an advanced motherboard, a quality PSU, and a capable cooling system to tame the Comet Lake beast. That’s because Intel had to increase the PL2 level on its Comet Lake CPUs in a bid to make them more competitive against AMD’s Ryzen lineup.
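The PL1/PL2 distinction is easier to see with numbers. Below is a deliberately simplified sketch of the behavior; real silicon tracks an exponentially weighted moving average rather than a hard cutoff, and the figures are the Comet Lake-style numbers from the text:

```python
# Simplified illustration of Intel's turbo power limits: the CPU may draw up
# to PL2 for roughly tau seconds of sustained load, then falls back to PL1.
PL1, PL2, TAU = 125, 250, 56  # watts, watts, seconds

def power_limit(seconds_under_load):
    """Return the package power cap after a given stretch of all-core load."""
    return PL2 if seconds_under_load <= TAU else PL1

for t in (10, 56, 57, 120):
    print(f"t={t:>3}s -> cap {power_limit(t)}W")
```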
Apparently, the same rules apply to Intel’s upcoming eight-core Core i9-11900KF ‘Rocket Lake-S’ processors that can heat up to 98C and pull 250W of power at 1.325V Vcore when running AIDA64’s FPU stress test, according to Chiphell. The test CPU was cooled down using an entry-level 360-mm closed-loop liquid cooling system. The chip’s exact clocks are unknown, but based on leaks, it should run at 3.50GHz by default and boost all of its cores to 4.8 GHz for short periods.
Being manufactured using a mature 14nm process, Intel’s latest enthusiast-grade processors with eight or ten cores are not exactly energy efficiency champions, which isn’t surprising because this node was not developed for CPUs that combine a high frequency and a high core count.
While the Rocket Lake-S CPU is based on a new microarchitecture and has several other advantages over Comet Lake-S processors, it looks like its thermals and power consumption will be comparable to those of its predecessors, at least as far as stress tests are concerned. Meanwhile, bear in mind that stress tests do not usually reflect real-world workloads, but are meant to reveal the weaknesses of your PC build.
As Intel is getting ready to release its 11th Generation ‘Rocket Lake’ CPUs this April, it has already begun to send its samples to a broad audience of its clients so they could prepare for the launch. As a result, certain test results will inevitably emerge well before full-fledged final hardware reviews show up. That said, the unreleased processors’ current test results should be taken with a grain of salt.
Intel has announced that it will ditch the Core i9-10900K’s fancy retail packaging in favor of the standard folding carton box. Basically, the Core i9-10900K will soon share the same packaging as the Core i9-10850K. The change will come into effect starting February 28 and will affect both global and Chinese boxed SKUs.
It’s not Intel’s first rodeo, either. The chipmaker previously switched the Core i9-9900K’s unique dodecahedron packaging to a simpler box to facilitate shipping and handling. In the case of the Core i9-10900K, the reason seems to be the same: improving shipping efficiency. The new packaging reduces the volumetric storage requirements for the Core i9-10900K, letting Intel increase the number of units per pallet from 480 to 1,620, a whopping 237.5% increase (1,140 more units on a base of 480).
Intel’s 11th Generation Rocket Lake-S processors are slated to launch in March, meaning Comet Lake-S chips like the Core i9-10900K are on their way out. It makes sense that Intel would want to optimize its logistics to get as many Comet Lake-S processors out the door as possible to focus its efforts on delivering the new Rocket Lake-S parts to the market.
The revamped packaging should also deliver a slight cost reduction for Intel since the chipmaker no longer has to spend money on the more elaborate box, not to mention the savings on shipping. In the end, the Core i9-10900K will ship in the same boring cardboard box as the other Comet Lake-S Core i7 and Core i5 SKUs.
A mysterious system packing a Core i9-10910 Comet Lake CPU and a (currently unheard of) AMD Radeon Pro 5700 XT with 65GB of RAM has been spotted in an Ashes of the Singularity benchmark result. All the data suggests this system could be a prototype for Apple’s next-generation iMac.
We covered the Core i9-10910 several months ago; the processor is rumored to be sold exclusively to Apple. The CPU features 8 cores, 16 threads, a 3.6 GHz base frequency, and a maximum turbo boost of a flat 5 GHz. The 10910 also carries a configurable TDP of 95W to 125W, which further suggests the CPU is targeted at compact systems, like an iMac. According to known Geekbench 5 results, the Core i9-10910 is roughly 10% faster than its closest sibling, the Core i9-10900.
What’s interesting about this benchmark result is the GPU in the system, a Radeon Pro 5700 XT. This is the first we’ve heard of such a GPU, so we wouldn’t be surprised if it turns out to be another Mac exclusive, presumably for the new iMac. If we had to guess, the Radeon Pro 5700 XT is simply a Pro variant of the 5700 XT with a lower TDP.
As for the benchmark itself, the Core i9-10910 system scored an average frame rate of 41.7 fps; however, the resolution used is unknown, so we don’t have enough data to make an accurate comparison against other GPUs.
So yes, this system could genuinely be a prototype for one of Apple’s new iMacs; all the evidence points that way. However, we also have contradicting info suggesting Apple will put its new M1 ARM-based chips in the unit instead. Presumably, that means Apple will launch two versions of the iMac: one with Intel silicon and the other with Apple’s M1 chips.