11th Generation Tiger Lake CPU (Image credit: Intel)
Intel’s 11th-Gen eight-core Tiger Lake-H 45W processors for mobile devices are expected to land in the first quarter, and, judging by the latest retailer listings (via momomo_us), the 10nm chips shouldn’t be far off.
Intel has launched several Tiger Lake chips that adhere to power limits under 35W. However, the chipmaker will have to roll out the 45W parts if it really wants to compete against AMD’s core-heavy Ryzen 5000 (Cezanne) processors. Like all Tiger Lake offerings, the 45W processors are based on Intel’s 10nm SuperFin (10SF) process node and wield Willow Cove cores and Xe LP graphics.
Besides the power limit increase, the 45W chips also raise Tiger Lake’s maximum core count to eight, whereas current Tiger Lake processors on the market max out at four cores. The extra cores will allow Intel to contend with AMD in high-end mobile gaming and workstation devices.
The four Tiger Lake-H processors from the Lenovo Legion listings cover only a few of the expected SKUs. Intel will likely flesh out the lineup with other configurations with slightly lower clock speeds to tend to different needs.
Barring any last-minute surprises, we expect the Core i9-11980HK to be the Tiger Lake-H flagship chip. In the same vein as its predecessors, the Core i9-11980HK should arrive with eight cores and 16 threads of processing power. The “K” suffix confirms that it will also have an unlocked multiplier for overclocking.
The Core i9-11980HK will go up against AMD’s Ryzen 9 5900HX, which has the same core configuration and allows for overclocking. AMD claims the Ryzen 9 5900HX delivers up to 14% more single-threaded performance than the Core i9-10980HK in Cinebench R20, as well as 37% and 21% higher scores in Passmark PT10 and Fire Strike Physics, respectively. Assuming that AMD’s results are accurate, the Core i9-11980HK has a lot of catching up to do.
Meanwhile, the Core i7-11800H and Core i5-11400H should be equipped with eight cores and six cores, respectively. In the case of the Core i7-11800H, the operating clocks will be what separates it from the Core i9-11980HK or the rumored Core i9-11900H.
If we look at the Ryzen 5000 lineup, it’s clear that Intel cooked up the Core i7-11800H and Core i5-11400H to confront the Ryzen 7 5800H and Ryzen 5 5600H, respectively. AMD didn’t share any performance figures for the aforementioned SKUs, though.
Tiger Lake versus Ryzen 5000 will be an interesting fight for sure. Intel’s Willow Cove microarchitecture faces AMD’s Zen 3, which has shown strong IPC uplifts over its predecessor. It’s not just a battle of microarchitectures but also of process nodes: while AMD has tapped TSMC’s proven 7nm process node, Intel is bringing its 10nm SuperFin to the rumble.
Courtesy of database detective @Leakbench on Twitter, we now have our first decent look at how Intel’s next-gen Core i9-11900K Rocket Lake CPU will perform in our CPU Benchmark hierarchy. This test is the first clear result from Geekbench 4 for the 11900K, which is nice to see as it can be a more accurate gauge of raw CPU performance than the other benchmark results we’ve seen, like Passmark or Geekbench 5. The latest test results show that Rocket Lake will assuredly climb the gaming ranks, and if the price is right, the new chips could upset our list of Best CPUs.
In a nutshell, you shouldn’t trust Geekbench 5’s overall scores as an accurate measure of Rocket Lake’s performance, and there’s a technical reason why. We’ve encountered strange phenomena with Geekbench 5, where its use of AVX-512 can wildly skew the results in the encryption subtest. In turn, this inflates Rocket Lake’s overall Geekbench 5 scores against all other processors that don’t support AVX-512. This can lead to an inaccurate picture that makes Rocket Lake appear better in relation to AMD’s competing chips, not to mention Intel’s previous-gen models.
Geekbench 4 isn’t perfect either, but its lack of AVX-512 support makes the test much more accurate when gauging per-core performance without using an exotic SIMD instruction (AVX-512) that has no meaningful uptake in mainstream desktop PC software. In fact, Geekbench’s developer has stated that the AVX-512 testing disparity will be addressed in the Geekbench 6 benchmark that’s due out later this year. The big takeaway here — don’t look too deeply into the overall Geekbench 5 test results.
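To illustrate how one skewed subtest can inflate an overall number, here’s a toy composite score built as a weighted geometric mean, a common way benchmark aggregates are constructed. The weights and subtest scores below are invented for illustration; this is not Geekbench’s actual formula:

```python
import math

# Toy composite: weighted geometric mean of subtest scores.
# Weights and scores are invented; not Geekbench's real formula.
def composite(scores, weights):
    assert abs(sum(weights) - 1.0) < 1e-9
    return math.exp(sum(w * math.log(s) for s, w in zip(scores, weights)))

weights = [0.1, 0.6, 0.3]  # hypothetical crypto, integer, floating-point weights

baseline = composite([1000, 1000, 1000], weights)  # all subtests equal
inflated = composite([4000, 1000, 1000], weights)  # crypto subtest 4x via AVX-512

print(round(baseline))  # 1000
print(round(inflated))  # 1149
```

With the crypto subtest inflated 4x, the overall score rises by roughly 15% even though the other subtests are unchanged, which is exactly the kind of distortion described above.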
This particular test submission seems about as close as we’ll get to something solid before the launch, but as usual, we have to take the results with a grain of salt. Notably, the 11900K boosts to 5.3 GHz throughout this test sequence, signaling it’s running at stock core clocks, and the memory appears to be running at the stock DDR4-3200 for Rocket Lake.
This is important because Geekbench 4 is sensitive to memory frequency, especially when it comes to multi-core tests. To compare, we’re using test results that we generated in our own labs for the Core i9-10900K and Ryzen 7 5800X, with both operating at stock memory clocks (DDR4-2933 and DDR4-3200, respectively).
| CPU | Geekbench 4 Single-Core Score | Geekbench 4 Multi-Core Score |
| --- | --- | --- |
| Intel Core i9-11900K | 7,562 | 36,326 |
| Intel Core i9-10900K | 6,592 | 38,704 |
| AMD Ryzen 7 5800X | 7,247 | 42,609 |
Intel claims a 19% increase in IPC for the Rocket Lake chips, and that appears to be roughly accurate in this test. The Core i9-11900K was ~15% faster than its predecessor, the 10900K, in the single-core tests.
However, looking at the multi-core results, the inverse happens and the 10900K is 6.5% faster due to its higher core count. That’s actually pretty impressive, though: The ten-core Core i9-10900K has two more cores than the eight-core Core i9-11900K, so we expected a much larger advantage in favor of the chip with two extra cores. Increased IPC truly floats all boats.
But against the 5800X, the single-core results are much closer, thanks to Zen 3’s much higher IPC. Here the 11900K pulls ahead of the 5800X by a mere 4.4%. The 5800X pulls ahead of the 11900K in the multi-core department by 17%, though, which is a larger delta than we expected given that both are eight-core chips.
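The percentage deltas quoted above follow directly from the table; as a quick sanity check:

```python
# Sanity-check the Geekbench 4 deltas discussed above (scores from the table).
def pct_faster(a, b):
    """How much faster score `a` is than score `b`, in percent."""
    return (a / b - 1) * 100

single = {"11900K": 7562, "10900K": 6592, "5800X": 7247}
multi = {"11900K": 36326, "10900K": 38704, "5800X": 42609}

print(round(pct_faster(single["11900K"], single["10900K"]), 1))  # ~14.7: the ~15% single-core gain
print(round(pct_faster(multi["10900K"], multi["11900K"]), 1))    # ~6.5: the 10900K's multi-core lead
print(round(pct_faster(single["11900K"], single["5800X"]), 1))   # ~4.3: the narrow lead over the 5800X
print(round(pct_faster(multi["5800X"], multi["11900K"]), 1))     # ~17.3: the 5800X's multi-core lead
```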
This is but one benchmark, though, and several factors could influence the score, including early firmware with the Core i9-11900K. We expect more mature BIOS revisions will be headed out before launch. In either case, these results paint a competitive picture for the desktop PC space soon, one in which price (and supply in light of the shortages) will be exceedingly important.
With Rocket Lake’s release date approaching, testers are getting their hands on more and more SKUs from Intel’s future Rocket Lake lineup; this time, we have benchmark results of Intel’s future Core i5-11600K (thanks to @Leakbench). The 11600K was found running the Geekbench 5 benchmark with mediocre performance at best, though, as usual, pricing will determine if it lands on our list of Best CPUs.
According to the spec sheet found on the Geekbench Browser, the Core i5-11600K packs 6 cores and 12 threads with a 3.9GHz base frequency along with a max turbo frequency of 4.9GHz. Nothing is unusual here; this is where we would expect an 11600K to land. Excluding the rare unlocked Core i3 and Pentium parts, the unlocked Core i5s have traditionally been the lowest-clocked chips out of all the “K” SKUs.
That’s not all that will be slowing down Intel’s 11600K, unfortunately. The system configuration for the 11600K shows it being paired with super-slow DDR4-2133 memory. This will noticeably hamper performance, so take the upcoming benchmark results with another dose of salt — they certainly won’t represent what we’ll see in our CPU benchmark hierarchy when these chips come to market.
In the Geekbench 5 results, the Core i5-11600K scores 1565 points in the single-threaded test and 6220 points in the multi-threaded benchmark. These results are quite underwhelming, especially in the multi-core department, where even AMD’s older Ryzen 5 3600 beats the 11600K by 7.6% (or roughly 400 points).
When it comes to single-core performance, the 11600K fares better, but it’s still the slowest CPU out of all known Rocket Lake SKUs and AMD Zen 3 CPUs to date. Luckily, the 11600K does take a major win against Comet Lake-S parts like the 10900K, beating that chip by 11%.
Again though, take these results with a huge grain of salt. Geekbench 5 already has a poor reputation for translating well to real-world results, and adding in slow memory complicates the findings.
Rocket Lake is set to launch next month, so hopefully by then we’ll have a review sample of the 11600K to test for ourselves and give you an in-depth look at how this chip really performs against the best gaming CPUs.
A Chinese overclocker has received what is believed to be the Ryzen 7 Pro 5750G (its support for TSME encryption marks it as a Pro part) and revealed its impressive overclocking capabilities.
Specs-wise, this chip appears to be a Zen 3 APU with 8 cores and 16 threads, presumably with Vega integrated graphics performing on par with AMD’s Zen 3 mobile APUs. Expect this chip to perform similarly to a Ryzen 7 5800X, but with integrated graphics.
But the best part about the chip is its crazy overclocking potential. The overclocker managed to crank the APU all the way to 4.8 GHz on all cores and run DDR4-4133 memory in 1:1 mode, something we’ve never seen before (even on AMD’s flagship Ryzen 9 5950X).
However, he used very high voltages, with 1.47v for the CPU core voltage and an SoC voltage of 1.2v. The core voltage, in particular, could be dangerously high for multi-threaded workloads, or at least high enough that we wouldn’t recommend running it as a daily driver.
He also ran the CPU-Z benchmark on the Ryzen 7 5750G and compared it to Intel’s Core i9-9900KF. The AMD APU scored 660.8 points in the single-threaded test, and 6897.8 points in the multi-threaded portion. Compared to the 9900KF, the 5750G is 27% faster in multi-threaded performance and 21% faster in single-threaded performance. That isn’t too surprising given the 9900KF’s age.
The 2069 MHz FCLK is exciting; the best Zen 3 parts struggle to hit 2000 MHz (though it’s doable), so seeing an APU break that barrier is quite impressive.
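As a quick refresher on how these numbers relate: DDR4’s rating counts transfers per second, two per clock, so DDR4-4133 implies a real memory clock (MEMCLK) of about 2066 MHz, and in AMD’s 1:1 mode the Infinity Fabric clock (FCLK) runs at that same frequency. A minimal sketch:

```python
# DDR4 transfers data on both clock edges, so the "DDR4-XXXX" rating is
# twice the actual memory clock (MEMCLK). In AMD's 1:1 (coupled) mode,
# the Infinity Fabric clock (FCLK) matches MEMCLK.
def fclk_for_1to1(ddr_rating):
    memclk_mhz = ddr_rating / 2
    return memclk_mhz  # FCLK = MEMCLK in 1:1 mode

print(fclk_for_1to1(4133))  # 2066.5 MHz, in line with the ~2069 MHz FCLK reported here
print(fclk_for_1to1(3200))  # 1600.0 MHz, the stock DDR4-3200 1:1 point
```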
Unfortunately, we still don’t know if AMD will release another eight-core APU into the wild that won’t be exclusive to OEM builders. We’ve heard reports that a 5700G might be on its way at some point, but nothing is certain, especially during this major PC part shortage where AMD can’t even keep its current 5000-series chips in stock.
A new member of the Rocket Lake family has been spotted on Geekbench, this time an ultra-low-power variant called the Core i9-11900T. With 8 cores and a 35W TDP, this chip is the most power-efficient Rocket Lake SKU to date, with performance that might surprise you. As these are unverified results, it’s best to take the data with a pinch of salt.
From what we can tell on Geekbench’s spec sheet, the Core i9-11900T features a very low 1.51 GHz base frequency but maintains a surprisingly high 4.9 GHz maximum boost frequency. While 35W may not sound like a lot of power, it seems that Rocket Lake’s cores are power efficient enough to run 1 or maybe 2 cores at a boost frequency typically found on higher wattage SKUs.
Looking at the results, the 11900T managed a score of 1717 points in the single-threaded test and 8349 points in the multi-threaded test. The single-threaded score, in particular, is impressive. For comparison, the 11900T stomps on the soon-to-be previous-gen Core i9-10900K (with a 1402 score) by a whopping 22%.
Switching over to Intel’s main competitor, AMD, the Ryzen 7 5800X got very close to the 11900T, coming in just 2.5% behind with a score of 1674 points.
However, in multi-threaded tests, the 11900T’s 35W TDP really hampers performance. Last-gen’s Core i7-10700K (never mind the 10900K) managed to be 7% faster than the 11900T, and against the Ryzen 7 5800X, the gap stretches to 22%.
Overall, the Core i9-11900T is an impressive chip. Even constrained to just 35W, it can outpace the best Comet Lake-S chips in the single-threaded department and get close to Comet Lake-S’s best eight-core CPU, the 10700K, in multi-threaded performance.
This will really help expand the chip’s appeal for users who require low-power CPUs. Typically, lower-wattage chips like this come with major performance penalties. But if these performance numbers are true, the i9-11900T could legitimately make a nice gaming CPU for ultra-compact/portable gaming systems with its excellent single-threaded numbers.
Hopefully, this kind of performance holds up once the chip goes live and we can benchmark it for ourselves. We still don’t know when the 11900T will be released; Intel usually delays the launch of its ultra-low-power SKUs until well after the launch of its vanilla and overclockable CPUs.
Intel’s 11th Generation Rocket Lake-S processors aren’t in stores yet, but engineering and qualification samples of the chips are evidently going around the black market. Romanian news outlet Lab501 and a Chinese YouTuber have released early reviews of the Core i7-11700K and Core i9-11900K, respectively. Since these are not retail samples, we recommend caution when approaching the results.
By now, Rocket Lake-S shouldn’t require any introductions. The forthcoming chips are still on Intel’s 14nm process node but wield the new Cypress Cove cores, which Intel claims will bring IPC uplifts of up to 19%. AMD’s Ryzen 5000 (codename Vermeer) chips have dethroned Intel’s processors as the best gaming chips on the market, and the Blue Team is keen to recover its title. On the graphics end, Rocket Lake-S comes equipped with Intel’s 12th Generation Xe LP graphics with a maximum configuration of 32 Execution Units (EUs).
The Core i7-11700K and Core i9-11900K are reportedly eight-core, 16-thread processors with a 125W TDP. Intel usually differentiates its Core i7 and i9 lineups by adding more cores (or threads) to the i9 series, but given Rocket Lake’s hard cap of eight cores, it appears that clock rates are the only difference between the two families.
The Core i7-11700K has been rumored to feature a 3.6 GHz base clock, 5 GHz boost clock, and a 4.6 GHz all-core boost clock. Being the flagship part, the Core i9-11900K appears to have a 3.5 GHz base clock, 5.3 GHz boost clock, and a 4.8 GHz all-core boost clock.
Intel Core i7-11700K Benchmarks
| Processor | 3ds Max 2020* | Blender* | DaVinci Resolve 15* | HandBrake 1.2.2* | WinRAR 5.91 | 7-Zip 19 | Cinebench R20 | POV-Ray 3.7 | PCMark 10 | Power Consumption (W)* |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Ryzen 7 5800X | 859 | 575 | 133 | 47 | 32,588 | 94,765 | 6,035 | 5,422 | 8,325 | 224 |
| Core i7-11700K | 917 | 631 | 154 | 48 | 28,072 | 76,816 | 5,615 | 4,505 | 7,927 | 286 |

*Lower is better.
AMD’s Ryzen 7 5800X simply dominated the Core i7-11700K across the board in terms of application performance. In some benchmarks, the margins were less than 10%, while in others, like WinRAR and 7-Zip, the Ryzen 7 5800X delivered 16.1% and 23.3% higher performance, respectively.
The Core i7-11700K’s power consumption also stood out, and not in a good way. With a Prime95-induced load, the Core i7-11700K drew up to 286W. Unfortunately, Lab501 didn’t include the Core i7-10700K to get an idea of the generation-over-generation power consumption. However, the Core i7-11700K pulled up to 27.7% more power than the Ryzen 7 5800X. Therefore, the Core i7-11700K wasn’t just slower than the Ryzen 7 5800X, but it was more power hungry as well.
| Processor | Average | 4K | WQHD | FHD |
| --- | --- | --- | --- | --- |
| Ryzen 7 5800X | 132.76 | 89.20 | 136.80 | 163.15 |
| Core i7-11700K | 131.27 | 89.80 | 133.90 | 161.15 |
According to Lab501’s results, the Ryzen 7 5800X was, on average, 1.1% faster than the Core i7-11700K. Looking at the individual resolutions, the Ryzen 7 5800X was marginally better at WQHD and FHD, with leads of 2.2% and 1.2%, respectively, while the Core i7-11700K edged slightly ahead at 4K.
Obviously, gaming is important for Intel, but the Core i7-11700K failed to help the chipmaker recover the lost ground. However, with such slim performance deltas, pricing could define the winner – we just don’t know pricing for Rocket Lake yet.
Intel Core i9-11900K Benchmarks
| Processor | PCMark 10 | Blender* | X264 FHD Benchmark | V-Ray | Cinebench R15 | CPU-Z Single Thread | CPU-Z Multi Thread |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Core i9-11900K | 14,536 | 142.06 | 72.8 | 17,181 | 2,526 | 719.6 | 7,035.5 |
| Ryzen 7 5800X | 14,062 | 164.49 | 64.2 | 16,317 | 2,354 | 657.0 | 6,366.0 |

*Render time; lower is better.
The Core i9-11900K, on the other hand, had no problems outperforming the Ryzen 7 5800X in application workloads. Intel’s chip pumped out between 3% and 13% more performance than the Ryzen 7 5800X.
| Processor | Wolfenstein: Youngblood | Total War: Three Kingdoms | PlayerUnknown’s Battlegrounds | Cyberpunk 2077 | Hitman 3 | League of Legends | Assassin’s Creed Valhalla |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Ryzen 7 5800X | 366 | 117 | 215 | 113 | 156 | 473 | 123 |
| Core i9-11900K | 353 | 117 | 215 | 110 | 158 | 361 | 122 |
It would seem that even the Core i9-11900K had trouble beating the Ryzen 7 5800X in gaming. Out of the seven titles, the Ryzen 7 5800X outpaced the Core i9-11900K in four of them. Both chips tied in two games, and the Core i9-11900K only managed to defeat the Ryzen 7 5800X in Hitman 3.
From what we’ve seen so far, the Core i7-11700K is no match for the Ryzen 7 5800X in either application or gaming workloads. Intel redeemed itself with the Core i9-11900K, which offers better application performance than the Ryzen 7 5800X.
Gaming, which Intel is big on, still seems to be on the Ryzen 7 5800X’s side though. Of course, we can’t pass judgment until proper reviews come out.
Although it’s hard to find any Zen 3 chips nowadays, the Ryzen 7 5800X retails for $449 when in stock. We can’t be certain of Rocket Lake’s pricing until the processors officially come out. However, if preliminary retailer listings are even remotely accurate, the Core i7-11700K and Core i9-11900K may well end up with official price tags in the $450 and $600 range, respectively.
What’s the best mining GPU, and is it worth getting into the whole cryptocurrency craze? Bitcoin and Ethereum mining are making headlines again; prices and mining profitability are way up compared to the last couple of years. Everyone who didn’t start mining last time is kicking themselves for their lack of foresight. Not surprisingly, the best graphics cards and those chips at the top of our GPU benchmarks hierarchy end up being very good options for mining as well. How good? That’s what we’re here to discuss, as we’ve got hard numbers on hashing performance, prices, power, and more.
We’re not here to encourage people to start mining, and we’re definitely not suggesting you should mortgage your house or take out a big loan to try and become the next big mining sensation. Mostly, we’re looking at the hard data based on current market conditions. Predicting where cryptocurrencies will go next is even more difficult than predicting the weather, politics, or the next big meme. Chances are, if you don’t already have the hardware required to get started on mining today (or really, about two months ago), you’re already late and won’t see the big gains that others are talking about. Like the old gold rush, the ones most likely to strike it rich are those selling equipment to the miners rather than the miners themselves.
If you’ve looked for a new (or used) graphics card lately, the current going prices probably caused at least a raised eyebrow, maybe even two or three! We’ve heard from people who have said, in effect, “I figured with the Ampere and RDNA2 launches, it was finally time to retire my old GTX 1070/1080 or RX Vega 56/64. Then I looked at prices and realized my old card is selling for as much as I paid over three years ago!” They’re not wrong. Pascal and Vega cards from three or four years ago are currently selling at close to their original launch prices — sometimes more. If you’ve got an old graphics card sitting around, you might even consider selling it yourself (though finding a replacement could prove difficult).
Ultimately, we know many gamers and PC enthusiasts are upset at the lack of availability for graphics cards (and Zen 3 CPUs), but we cover all aspects of hardware — not just gaming. We’ve looked at GPU mining many times over the years, including back in 2011, 2014, and 2017. Those are all times when the price of Bitcoin shot up, driving interest and demand. 2021 is just the latest turn of the crypto mining cycle. About the only prediction we’re willing to make is that prices on Bitcoin and Ethereum will change in the months and years ahead — sometimes up, and sometimes down. And just like we’ve seen so many times before, crypto demand will keep affecting graphics card pricing and availability. You should also be aware that, based on past personal experience that some of us have running consumer graphics cards 24/7, it is absolutely possible to burn out the fans, VRMs, or other elements on your card. Proceed at your own risk.
The Best Mining GPUs Benchmarked, Tested and Ranked
With that preamble out of the way, let’s get to the main point: What are the best mining GPUs? This is somewhat on a theoretical level, as you can’t actually buy the cards at retail for the most part, but we have a solution for that as well. We’re going to use eBay pricing — on sold listings — and take the data from the past seven days (for prices). We’ll also provide some charts showing pricing information from the past three months (90 days) from eBay, where most GPUs show a clear upward trend. How much can you make by mining Ethereum with a graphics card, and how long will it take to recover the cost of the card using the currently inflated eBay prices? Let’s take a look.
For this chart, we’ve used the current difficulty and price of Ethereum — because nothing else is coming close to GPU Ethereum for mining profitability right now. We’ve tested all of these GPUs on our standard test PC, which uses a Core i9-9900K, MSI MEG Z390 ACE motherboard, 2x16GB Corsair DDR4-3600 RAM, a 2TB XPG M.2 SSD, and a SeaSonic 850W 80 Plus Platinum certified PSU. We’ve tuned mining performance using either NBminer or PhoenixMiner, depending on the GPU, with an eye toward minimizing power consumption while maximizing hash rates. We’ve used $0.10 per kWh for power costs, which is much lower than some areas of the world but also higher than others. Then we’ve used the approximate eBay price divided by the current daily profits to come up with a time to repay the cost of the graphics card.
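The arithmetic behind the chart can be sketched in a few lines. The hash rate, power draw, and price below are examples drawn from elsewhere in this article; the revenue-per-MH/s rate is a placeholder, since real revenue moves with Ethereum’s price and network difficulty:

```python
# Sketch of the break-even math: (card price) / (daily profit), where
# daily profit = mining revenue minus electricity cost. All inputs are
# illustrative; usd_per_mh_day is a placeholder, not a live rate.
def break_even_days(card_price, hash_rate_mh, power_w,
                    usd_per_mh_day, usd_per_kwh=0.10):
    revenue_per_day = hash_rate_mh * usd_per_mh_day
    power_cost_per_day = power_w / 1000 * 24 * usd_per_kwh
    return card_price / (revenue_per_day - power_cost_per_day)

# Example: a 3060 Ti-like card (60 MH/s, ~120 W) at an inflated $920
# eBay price, assuming a hypothetical $0.07 per MH/s per day.
print(round(break_even_days(920, 60, 120, 0.07), 1))  # roughly 235 days
```

Plugging in a different electricity rate or a higher card price shifts the result accordingly, which is why the rankings are so sensitive to current eBay pricing.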
It’s rather surprising to see older GPUs at the very top of the list, but that’s largely based on the current going prices. GTX 1060 6GB and RX 590 can both hit modest hash rates, and they’re the two least expensive GPUs in the list. Power use isn’t bad either, meaning it’s feasible to potentially run six GPUs off a single PC — though then you’d need PCIe riser cards and other extras that would add to the total cost.
Note that the power figures for all GPUs are before taking PSU efficiency into account. That means actual power use (not counting the CPU, motherboard, and other PC components) will be higher. For the RTX 3080 as an example, total wall outlet power for a single GPU on our test PC is about 60W more than what we’ve listed in the chart. If you’re running multiple GPUs off a single PC, total waste power would be somewhat lower, though it really doesn’t impact things that much. (If you take the worst-case scenario and add 60W to every GPU, the time to break even only increases by 4-5 days.)
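To put that 60W overhead in numbers: at the assumed $0.10 per kWh it costs about $0.14 per day, which only nudges the break-even point. The card price and daily profit below are invented for illustration:

```python
# Daily cost of a constant power draw at a given electricity rate.
def daily_power_cost(watts, usd_per_kwh=0.10):
    return watts / 1000 * 24 * usd_per_kwh

overhead = daily_power_cost(60)  # the ~60 W of wall-outlet overhead
print(round(overhead, 3))        # 0.144 dollars per day

# Hypothetical card: $1,500 price, $7/day profit before the overhead.
baseline = 1500 / 7.0
adjusted = 1500 / (7.0 - overhead)
print(round(adjusted - baseline, 1))  # ~4.5 extra days to break even
```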
It’s also fair to say that our test results are not representative of all graphics cards of a particular model. RTX 3090 and RTX 3080 can run high GDDR6X temperatures without some tweaking, but if you do make the effort, the 3090 can potentially do 120-125MH/s. That would still only put the 3090 at third from the bottom in terms of time to break even, but it’s quite good in terms of power efficiency, and it’s the fastest GPU around. There’s certainly something to be said for mining with fewer higher efficiency GPUs if you can acquire them.
Here’s the real problem: None of the above table has any way of predicting the price of Ethereum or the mining difficulty. Guessing at the price is like guessing at the value of any other commodity: It may go up or down, and Ethereum, Bitcoin, and other cryptocurrencies are generally more volatile than even the most volatile of stocks. On the other hand, mining difficulty tends to increase over time and rarely goes down, as the rate of increased difficulty is directly tied to how many people (PCs, GPUs, ASICs, etc.) are mining.
So, the above is really a best-case scenario for when you’d break even on the cost of a GPU. Actually, that’s not true. The best-case scenario is that the price of Ethereum doubles or triples or whatever, and then everyone holding Ethereum makes a bunch of money. Until people start to cash out and the price drops, triggering panic sells and a plummeting price. That happened in 2018 with Ethereum, and it’s happened at least three times during the history of Bitcoin. Like we said: Volatile. But here we are at record highs, so everyone is happy and nothing could possibly ever go wrong this time. Until it does.
Still, there are obviously plenty of people who believe in the potential of Ethereum, Bitcoin, and blockchain technologies. Even at today’s inflated GPU prices, which are often double the MSRPs for the latest cards, and higher than MSRP for just about everything, the worst cards on the chart (RTX 3090 and RX 6900 XT) would still theoretically pay for themselves in less than seven months. And even if the value of the coins drops, you still have the hardware that’s at least worth something (provided the card doesn’t prematurely die due to heavy mining use). Which means, despite the overall rankings (in terms of time to break even), you’re generally better off buying newer hardware if possible.
Here’s a look at what has happened with GPU pricing during the past 90 days, using tweaked code from:
GeForce RTX 3060 Ti: The newest and least expensive of the Ampere GPUs, it’s just as fast as the RTX 3070 and sometimes costs less. After tuning, it’s also the most efficient GPU for Ethereum right now, using under 120W while breaking 60MH/s.
Radeon RX 5700: AMD’s previous generation Navi GPUs are very good at mining, and can break 50MH/s while using about 135W of power. The vanilla 5700 is as fast as the 5700 XT and costs less, making it a great overall choice.
GeForce RTX 2060 Super: Ethereum mining needs a lot of memory bandwidth, and all of the RTX 20-series GPUs with 8GB end up at around 44MH/s and 130W of power, meaning you should buy whichever is cheapest. That’s usually the RTX 2060 Super.
Radeon RX 590: All the Polaris GPUs with 8GB of GDDR5 memory (including the RX 580 8GB, RX 570 8GB, RX 480 8GB, and RX 470 8GB) end up with relatively similar performance, depending on how well your card’s memory overclocks. The RX 590 is currently the cheapest (theoretically), but all of the Polaris 10/20 GPUs remain viable. Just don’t get the 4GB models!
Radeon RX Vega 56: Overall performance is good, and some cards can perform much better — our reference models used for testing are more of a worst-case choice for most of the GPUs. After tuning, some Vega 56 cards might even hit 45-50MH/s, which would put this at the top of the chart.
Radeon RX 6800: Big Navi is potent when it comes to hashing, and all of the cards we’ve tested hit similar hash rates of around 65MH/s and 170W power use. The RX 6800 is generally several hundred dollars cheaper than the others and uses a bit less power, making it the clear winner. Plus, when you’re not mining, it’s a very capable gaming GPU.
GeForce RTX 3080: This is the second-fastest graphics card right now, for mining and gaming purposes. The time to break even is only slightly worse than the other GPUs, after which profitability ends up being better overall. And if you ever decide to stop mining, this is the best graphics card for gaming — especially if it paid for itself! At around 95MH/s, it will also earn money faster after you recover the cost of the hardware (if you break even, of course).
What About Ethereum ASICs?
One final topic worth discussing is ASIC mining. Bitcoin (SHA256), Litecoin (Scrypt), and many other popular cryptocurrencies have reached the point where companies have put in the time and effort to create dedicated ASICs — Application Specific Integrated Circuits. Just like GPUs were originally ASICs designed for graphics workloads, ASICs designed for mining are generally only good at one specific thing. Bitcoin ASICs do SHA256 hashing really, really fast (some can do around 25TH/s while using 1000W — that’s trillions of hashes per second), Litecoin ASICs do Scrypt hashing fast, and there are X11, Equihash, and even Ethereum ASICs.
The interesting thing with hashing is that many crypto coins and hashing algorithms have been created over the years, some specifically designed to thwart ASIC mining. Usually, that means creating an algorithm that requires more memory, and Ethereum falls into that category. Still, it’s possible to optimize hardware to hash faster while using less power than a GPU. Some of the fastest Ethereum ASICs (e.g. Innosilicon A10 Pro) can reportedly do around 500MH/s while using only 1000W. That’s roughly the hash rate of eight top-end GPUs from a single device, at comparable or better efficiency per watt. Naturally, the cost of such ASICs is prohibitively expensive, and every big miner and their dog wants a bunch of them. They’re all sold out, in other words, just like GPUs.
Ethereum has actually tried to deemphasize mining, but obviously that didn’t quite work out. Ethereum 2.0 was supposed to put an end to proof of work hashing, transitioning to a proof of stake model. We won’t get into the complexities of the situation, other than to note that Ethereum mining very much remains a hot item, and there are other non-Ethereum coins that use the same hashing algorithm (though none are as popular / profitable as ETH). Eventually, the biggest cryptocurrencies inevitably end up being supported by ASICs rather than GPUs — or CPUs or FPGAs. But we’re not at that point for Ethereum yet.
Finding the best graphics card at anything approaching reasonable pricing has become increasingly difficult. Just about everything we’ve tested in our GPU benchmarks hierarchy is sold out unless you go to eBay, but current eBay prices will take your breath away. If you thought things were bad before, they’re apparently getting even worse. No doubt, a lot of this is due to the recent uptick in Ethereum mining’s profitability on GPUs, compounded by component shortages, and it’s not slowing down.
A couple of weeks back, we wrote about Michael Driscoll tracking scalper sales of Zen 3 CPUs. Driscoll also covered other hot ticket items like RTX 30-series GPUs, RDNA2 / Big Navi GPUs, Xbox Series S / X, and PlayStation 5 consoles. Thankfully, he provided the code to his little project, and we’ve taken the opportunity to run the data (with some additional filtering out of junk ‘box/picture only’ sales) to see how things are going in the first six weeks of 2021. Here’s how things stand for the latest AMD and Nvidia graphics cards:
That’s … disheartening. Just in the past month, prices have increased anywhere from 10% to 35% on average. The increase is partly due to the recent graphics card tariffs, and as you’d expect, the jump in prices is more pronounced on the lower-priced GPUs.
For example, the RTX 3060 Ti went from average prices of $690 in the first week of January to $920 in the past week. It also represents nearly 3,000 individual sales on eBay, after filtering out junk listings — and these are actual sales, not just items listed on eBay. RTX 3080 saw the next-biggest jump in pricing, going from $1,290 to $1,593 for the same time periods, with 3,400 listings sold.
Nvidia’s RTX 3070 represents the largest number of any specific GPU sold, with nearly 5,400 units, but prices have only increased 17%, from $804 in January to $940 in February. The February price is interesting because it’s only slightly higher than the RTX 3060 Ti price, which strongly suggests that Ethereum miners are snapping up most of these cards. (After tuning, the 3060 Ti hits roughly the same 60MH/s as the 3070, since both cards use the same 8GB of GDDR6 memory.)
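Viewed from a miner’s perspective, the near-parity makes sense: at the February averages quoted above, the two cards cost almost exactly the same per unit of hash rate. A quick sketch (the ~60MH/s figure is the approximate post-tuning rate mentioned above):

```python
# Cost per MH/s at the February average eBay prices quoted in this article,
# assuming both cards mine Ethereum at roughly 60 MH/s after tuning.
HASHRATE_MHS = 60  # approximate, post-tuning

feb_prices = {"RTX 3060 Ti": 920, "RTX 3070": 940}
for gpu, price in feb_prices.items():
    print(f"{gpu}: ${price / HASHRATE_MHS:.2f} per MH/s")
# RTX 3060 Ti: $15.33 per MH/s
# RTX 3070: $15.67 per MH/s
```

With barely 35 cents per MH/s between them, a miner has little reason to prefer one card over the other, which is consistent with their prices converging.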
Wrapping up Nvidia, the RTX 3090 accounts for 2,291 units sold on eBay, with pricing increasing 14% since January. For the most expensive GPU that already had an extreme price, it’s pretty shocking to see it move up from $2,087 to a new average of $2,379. I suppose it really is the heir to the Titan RTX throne now.
We see a similar pattern on the AMD side, but at far lower volumes in terms of units sold. The RX 6900 XT had 334 listings sold, with average pricing moving up just 8% from $1,458 to $1,570 during the past six weeks. Considering it delivers roughly the same mining performance as the less expensive Big Navi GPUs, that makes sense from the Ethereum mining perspective.
Radeon RX 6800 XT prices increased 11% from $1,179 in January to $1,312 in February. It’s also the largest number of GPUs sold (on eBay) for Team Red, at 448 graphics cards. Not far behind is the RX 6800 vanilla, with 434 units. It saw the biggest jump in pricing over the same period, from $865 to $1,018 (18%). That strongly correlates with expected profits from GPU mining.
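The percentage jumps quoted throughout this section follow directly from the average sale prices; as a quick sanity check (all dollar figures taken from this article):

```python
# Average eBay sale prices (USD) quoted in this article:
# (first week of January, most recent week of February).
prices = {
    "RTX 3060 Ti": (690, 920),
    "RTX 3080": (1290, 1593),
    "RTX 3070": (804, 940),
    "RTX 3090": (2087, 2379),
    "RX 6900 XT": (1458, 1570),
    "RX 6800 XT": (1179, 1312),
    "RX 6800": (865, 1018),
}

for gpu, (jan, feb) in prices.items():
    print(f"{gpu}: +{(feb - jan) / jan:.0%}")
# RTX 3060 Ti: +33%
# RTX 3070: +17%
# RX 6800: +18%
# ... and so on
```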
That’s both good and bad news. The good news is that gamers are most likely being sensible and refusing to pay these exorbitant markups. The bad news is that as long as mining remains this profitable, stock and pricing of graphics cards isn’t likely to recover. It’s 2017 all over again, plus the continuing effects of the pandemic.
Back at the start of last month, the first traces of the AMD Ryzen 7 5700G showed up through entries in the USB-IF, and just a couple of days later, its specifications surfaced. What we didn’t know yet was what performance would look like, but that changes today with yet another leak: the chip is now up for sale for $499 on eBay.
Shared on Twitter by Harakuze, a Ryzen 7 5700G engineering sample has been listed on eBay with results of a Cinebench R23 run hidden among the listing’s images. As with most leaked info, we aren’t able to confirm these performance figures. However, after some digging, we found entries for both the 8-core 5700G APU and the 6-core Ryzen 5 5600G APU. These latter figures come from CPU-Monkey, so we should still take them with a significant grain of salt and as little more than confirmation of the posted scores, though they are all in line with our expectations.
In the test shown by the eBay seller, the 5700G ES posts a single-core score of 1,514 points, with the multi-core score at an impressive 15,456 points. For comparison, Intel’s 11th-Gen Core i7-1165G7 has a single-core score of 1,532 points, but that’s a mobile chip, not a desktop APU.
That being said, we haven’t gotten around to testing with Cinebench R23 ourselves yet, but a look around the web puts the Zen 2-based Ryzen 7 3700X (the most comparable Zen 2 chip) at roughly 1,250 points single-core. The eight-core, Zen 1-based Ryzen 7 1700X scores 959 points in Cinebench. The generational gains are evident.
Compared to the Ryzen 7 5800X, a chip with the same Zen 3 CPU architecture and core count, the 5700G takes a small step back in performance. This Reddit thread shows users’ scores, with the Ryzen 7 5800X posting single-core scores in the low 1,600s and multi-core scores approaching 16,000 points. Of course, this difference comes down to the higher TDP and boost clock of the 5800X, which boosts up to 4.7 GHz according to its spec sheet and will often boost higher than that, too. But if you look carefully, you’ll spot that the alleged 5700G engineering sample is clocked at 4.7 GHz for the test, indicating that it might be slightly overclocked.
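The generational gains are easy to quantify from the scores quoted above (leaked and unverified figures, so treat the percentages as rough):

```python
# Cinebench R23 single-core scores quoted in this article (leaked / unverified).
zen1 = 959   # Ryzen 7 1700X
zen2 = 1250  # Ryzen 7 3700X (approximate)
zen3 = 1514  # Ryzen 7 5700G engineering sample

print(f"Zen 1 -> Zen 2: +{(zen2 - zen1) / zen1:.0%}")  # +30%
print(f"Zen 2 -> Zen 3: +{(zen3 - zen2) / zen2:.0%}")  # +21%
```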
AMD Ryzen 7 5700G Specifications
The Ryzen 7 5700G is rumored to feature 8 cores, 16 threads, and a boost clock of 4.4 GHz, and to operate at a 65 W TDP. But of course, the real star of the show in these APUs is the onboard GPU. Alas, details about the GPU aren’t shared in the eBay listing, nor do we have any available elsewhere.
Processor | Cores / Threads | Base / Boost Clocks | L3 Cache | TDP
Ryzen 7 5700G | 8 / 16 | ? / 4.4 GHz | 16 MB | 65 W
Ryzen 7 4700G | 8 / 16 | 3.6 / 4.4 GHz | 8 MB | 65 W
Ryzen 7 5800X | 8 / 16 | 3.8 / 4.7 GHz | 32 MB | 105 W
Ryzen 7 3700X | 8 / 16 | 3.6 / 4.4 GHz | 32 MB | 65 W
All that being said, we would still take all the information with a pinch of salt. The seller claims that the engineering sample is a 5700G, and the lettering on the CPU (100-000000263-30) appears to suggest it’s the same chip as we saw earlier in January, but as with any pre-release hardware, you never know what you’re really going to get. We would be very hesitant to offer the seller the $500 he wants for the chip, even with all the chip shortages.
Let’s just hope these chips don’t become OEM-only like the 4000-series APUs did.
There is no concrete date for AMD’s Zen 3 APU announcement, but the latest sighting of the Ryzen 7 Pro 5750G indicates that the official launch may not be too far off. Unfortunately, the first sighting of the desktop PC APU comes as a Pro model, which could mean that AMD will reserve its Zen 3 APUs for OEM systems only, just like it did with Renoir. That certainly wouldn’t be of much help to enthusiasts in these times of GPU shortages, but only time will tell us if these chips will also come to the DIY market.
The Ryzen 7 Pro 5750G is basically the direct replacement for the Ryzen 7 Pro 4750G and the Pro version of the Ryzen 7 5700G that showed up last month. A user from the Chiphell forums has shared an alleged CPU-Z screenshot of the Ryzen 7 Pro 5750G that gives us a small taste of what’s to come for AMD’s Zen 3 desktop PC APU chips. As with all leaked information, we should take the info with a grain of salt, but the specifications line up with our expectations.
While not explicitly present in the screenshot, we expect the Ryzen 5000 APUs to share traits with their mobile Ryzen 5000 (Cezanne) counterparts. The Ryzen 5000 desktop APUs are still built on a monolithic die, but they wield the famed Zen 3 cores that brought substantial IPC uplifts to AMD’s lineup. The Ryzen 5000 APUs lack support for PCIe 4.0 and will probably use an improved Vega graphics engine, just like the Ryzen 5000 mobile variants.
Ryzen 7 Pro 5750G Specifications
Processor | Cores / Threads | Base Clock (GHz) | Boost Clock (GHz) | L2 Cache (MB) | L3 Cache (MB) | TDP (W)
Ryzen 7 Pro 5750G* | 8 / 16 | 3.8 | ? | 4 | 16 | 65
Ryzen 7 Pro 4750G | 8 / 16 | 3.6 | 4.4 | 4 | 8 | 65
*Specifications are unconfirmed.
Zen 3 enabled Cezanne to sport up to twice the L3 cache compared to Renoir on the mobile parts. The same treatment goes for the desktop APUs, too. The Ryzen 7 Pro 5750G emerged with 16MB of L3 cache, twice what’s on the existing Ryzen 7 Pro 4750G. As expected, the 4MB of L2 cache remains unmodified.
In the case of the Ryzen 7 Pro 5750G, we’re looking at an eight-core 16-thread setup with a potential 3.8 GHz base clock. According to the forum user, the octa-core APU has a default full-core boost clock up to 4.05 GHz. However, the chip may boost over 4.75 GHz.
The author also claims that the Ryzen 7 Pro 5750G runs a bit cooler than the Ryzen 7 5800X. The Zen 3 APU also supports a higher FCLK than Ryzen 5000 (Vermeer) processors: the Ryzen 7 Pro 5750G allegedly had its FCLK at 2,300 MHz, and there are rumors that engineering samples can even do 2,500 MHz.
The author bought his Ryzen 7 Pro 5750G for 2,750 yuan or $427.33. He seems to think that the retail pricing won’t be much cheaper than Vermeer.
Intel’s 12th-Gen Alder Lake chip will bring the company’s hybrid architecture, which combines a mix of larger high-performance cores paired with smaller high-efficiency cores, to desktop x86 PCs for the first time. That represents a massive strategic shift as Intel looks to regain the uncontested performance lead against AMD’s Ryzen 5000 series processors. AMD’s Zen 3 architecture has taken the lead in our Best CPUs and CPU Benchmarks hierarchy, partly on the strength of their higher core counts. That’s not to mention Apple’s M1 processors that feature a similar hybrid design and come with explosive performance improvements of their own.
Intel’s Alder Lake brings disruptive new architectures and reportedly supports features like PCIe 5.0 and DDR5 that leapfrog AMD and Apple in connectivity technology, but the new chips come with significant risks. It all starts with a new way of thinking, at least as far as x86 chips are concerned, of pairing high-performance and high-efficiency cores within a single chip. That well-traveled design philosophy powers billions of Arm chips, often referred to as big.LITTLE (Intel calls its implementation Big-Bigger), but it’s a first for x86 desktop PCs.
Intel has confirmed that its Golden Cove architecture powers Alder Lake’s ‘big’ high-performance cores, while the ‘small’ Atom efficiency cores come with the Gracemont architecture, making for a dizzying number of possible processor configurations. Intel will etch the cores on its 10nm Enhanced SuperFin process, marking the company’s first truly new node for the desktop since 14nm debuted six long years ago.
As with the launch of any new processor, Intel has a lot riding on Alder Lake. However, the move to a hybrid architecture is unquestionably riskier than prior technology transitions because it requires operating system and software optimizations to achieve maximum performance and efficiency. It’s unclear how unoptimized code will impact performance.
In either case, Intel is going all-in: Intel will reunify its desktop and mobile lines with Alder Lake, and we could even see the design come to the company’s high-end desktop (HEDT) lineup.
Intel might have a few tricks up its sleeve, though. Intel paved the way for hybrid x86 designs with its Lakefield chips, the first such chips to come to market, and established a beachhead in terms of both Windows and software support. Lakefield really wasn’t a performance stunner, though, due to a focus on lower-end mobile devices where power efficiency is key. In contrast, Intel says it will tune Alder Lake for high-performance, a must for desktop PCs and high-end notebooks. There are also signs that some models will come with only the big cores active, which should perform exceedingly well in gaming.
Meanwhile, Apple’s potent M1 processors with their Arm-based design have brought a step function improvement in both performance and power consumption over competing x86 chips. Much of that success comes from Arm’s long-standing support for hybrid architectures and the requisite software optimizations. Comparatively, Intel’s efforts to enable the same tightly-knit level of support are still in the opening stages.
Potent adversaries challenge Intel on both sides. Apple’s M1 processors have set a high bar for hybrid designs, outperforming all other processors in their class with the promise of more powerful designs to come. Meanwhile, AMD’s Ryzen 5000 chips have taken the lead in every metric that matters over Intel’s aging Skylake derivatives.
Intel certainly needs a come-from-behind design to thoroughly unseat its competitors, turning the tables back in its favor like the Conroe chips did back in 2006, when the Core architecture debuted with a ~40% performance advantage that cemented Intel’s dominance for a decade. Intel’s Raja Koduri has already likened the transition to Alder Lake to the debut of Core, suggesting that Alder Lake could indeed be a Conroe-esque moment.
In the meantime, Intel’s Rocket Lake will arrive later this month, and all signs point to the new chips overtaking AMD in single-threaded performance. However, they’ll still trail in multi-core workloads due to Rocket Lake’s maximum of eight cores, while AMD has 16-core models for the mainstream desktop. That makes Alder Lake exceedingly important as Intel looks to regain its performance lead in the desktop PC and laptop markets.
While Intel hasn’t shared many of the details on the new chip, plenty of unofficial details have come to light over the last few months, giving us a broad indication of Intel’s vision for the future. Let’s dive in.
Intel’s 12th-Gen Alder Lake At a Glance
Qualification and production in the second half of 2021
Hybrid x86 design with a mix of big and small cores (Golden Cove/Gracemont)
10nm Enhanced SuperFin process
LGA1700 socket requires new motherboards
PCIe 5.0 and DDR5 support rumored
Four variants: -S for desktop PCs, -P for mobile, -M for low-power devices, -L Atom replacement
Gen12 Xe integrated graphics
New hardware-guided operating system scheduler tuned for high performance
Intel Alder Lake Release Date
Intel hasn’t given a specific date for Alder Lake’s debut, but it has said that the chips will be validated for production for desktop PCs and notebooks with the volume production ramp beginning in the second half of the year. That means the first salvo of chips could land in late 2021, though it might also end up being early 2022. Given the slew of benchmark submissions and operating system patches we’ve seen, early silicon is obviously already in the hands of OEMs and various ecosystem partners.
Intel and its partners also have plenty of incentive to get the new platform and CPUs out as soon as possible, and we could have a similar situation to 2015’s short-lived Broadwell desktop CPUs that were almost immediately replaced by Skylake. Rocket Lake seems competitive on performance, but the existing Comet Lake chips (e.g., the i9-10900K) already use a lot of power, and the i9-11900K doesn’t look to change that. With Enhanced SuperFin, Intel could dramatically cut power requirements while improving performance.
Intel Alder Lake Specifications and Families
Intel hasn’t released the official specifications of the Alder Lake processors, but a recent update to the SiSoft Sandra benchmark software, along with listings to the open-source Coreboot (a lightweight motherboard firmware option), have given us plenty of clues to work with.
The Coreboot listing outlines various combinations of the big and little cores in different chip models, with some models even using only the larger cores (possibly for high-performance gaming models). The information suggests four configurations with -S, -P, and -M designators, and an -L variant has also emerged:
Alder Lake-S: Desktop PCs
Alder Lake-P: High-performance notebooks
Alder Lake-M: Low-power devices
Alder Lake-L: Listed as “Small Core” Processors (Atom)
Intel Alder Lake-S Desktop PC Specifications
Alder Lake-S*

Big + Small Cores | Cores / Threads | GPU
8 + 8 | 16 / 24 | GT1 – Gen12 32EU
8 + 6 | 14 / 22 | GT1 – Gen12 32EU
8 + 4 | 12 / 20 | GT1 – Gen12 32EU
8 + 2 | 10 / 18 | GT1 – Gen12 32EU
8 + 0 | 8 / 16 | GT1 – Gen12 32EU
6 + 8 | 14 / 20 | GT1 – Gen12 32EU
6 + 6 | 12 / 18 | GT1 – Gen12 32EU
6 + 4 | 10 / 16 | GT1 – Gen12 32EU
6 + 2 | 8 / 14 | GT1 – Gen12 32EU
6 + 0 | 6 / 12 | GT1 – Gen12 32EU
4 + 0 | 4 / 8 | GT1 – Gen12 32EU
2 + 0 | 2 / 4 | GT1 – Gen12 32EU

*Intel has not officially confirmed these configurations. Not all models may come to market. Listings assume all models have Hyper-Threading enabled on the large cores.
Intel’s 10nm Alder Lake combines large Golden Cove cores that support Hyper-Threading (Intel’s branded version of SMT, simultaneous multithreading, which allows two threads to run on a single core) with smaller single-threaded Atom cores. That means some models could come with seemingly odd distributions of cores and threads. We’ll jump into the process technology a bit later.
As we can see above, a potential flagship model would come with eight Hyper-Threading enabled ‘big’ cores and eight single-threaded ‘small’ cores, for a total of 24 threads. Logically we could expect the 8 + 8 configuration to fall into the Core i9 classification, while 8 + 4 could land as Core i7, and 6 + 8 and 4 + 0 could fall into Core i5 and i3 families, respectively. Naturally, it’s impossible to know how Intel will carve up its product stack due to the completely new paradigm of the hybrid x86 design.
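The core/thread totals in the table above follow a simple rule, assuming Hyper-Threading on the big cores only: each big core contributes two threads and each small core one. A minimal sketch:

```python
def alder_lake_topology(big: int, small: int) -> tuple[int, int]:
    """Return (cores, threads), assuming Hyper-Threading on big cores only."""
    return big + small, 2 * big + small

# Rumored flagship and a big-core-only configuration:
print(alder_lake_topology(8, 8))  # (16, 24)
print(alder_lake_topology(8, 0))  # (8, 16)
```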
We’re still quite far from knowing particular model names, as recent submissions to public-facing benchmark databases list the chips as “Intel Corporation Alder Lake Client Platform” but use ‘0000’ identifier strings in place of the model name and number. This indicates the silicon is still in the early phases of testing, and newer steppings will eventually progress to production-class processors with identifiable model names.
Given that these engineering samples (ES) chips are still in the qualification stage, we can expect drastic alterations to clock rates and overall performance as Intel dials in the silicon. It’s best to use the test submissions for general information only, as they rarely represent final performance.
The 16-core desktop model has been spotted in benchmarks with a 1.8 GHz base and 4.0 GHz boost clock speed, but we can expect that to increase in the future. For example, a 14-core 20-thread Alder Lake-P model was recently spotted at 4.7 GHz. We would expect clock rates to be even higher for the desktop models, possibly even reaching or exceeding 5.0 GHz on the ‘big’ cores due to a higher thermal budget.
Meanwhile, it’s widely thought that the smaller efficiency cores will come with lower clock rates, but current benchmarks and utilities don’t enumerate the second set of cores with a separate frequency domain, meaning we’ll have to wait for proper software support before we can learn clock rates for the efficiency cores.
We do know from Coreboot patches that Alder Lake-S supports two eight-lane PCIe 5.0 connections and two four-lane PCIe 4.0 connections, for a total of 24 lanes. Conversely, Alder Lake-P dials back connectivity due to its more mobile-centric nature and has a single eight-lane PCIe 5.0 connection along with two four-lane PCIe 4.0 interfaces. There have also been concrete signs of support for DDR5 memory. There are some caveats, though, which you can read about in the motherboard section.
Intel Alder Lake-P and Alder Lake-M Mobile Processor Specifications
Alder Lake-P* / Alder Lake-M*

Big + Small Cores | Cores / Threads | GPU
6 + 8 | 14 / 20 | GT2 Gen12 96EU
6 + 4 | 10 / 14 | GT2 Gen12 96EU
4 + 8 | 12 / 16 | GT2 Gen12 96EU
2 + 8 | 10 / 12 | GT2 Gen12 96EU
2 + 4 | 6 / 8 | GT2 Gen12 96EU
2 + 0 | 2 / 4 | GT2 Gen12 96EU

*Intel has not officially confirmed these configurations. Not all models may come to market. Listings assume all models have Hyper-Threading enabled on the large cores.
The Alder Lake-P processors are listed as laptop chips, so we’ll probably see those debut in a wide range of notebooks that range from thin-and-light form factors up to high-end gaming notebooks. As you’ll notice above, all of these processors purportedly come armed with Intel’s Gen 12 Xe architecture in a GT2 configuration, imparting 96 EUs across the range of chips. That’s a doubling of execution units over the desktop chips and could indicate a focus on reducing the need for discrete graphics chips.
There is precious little information available for the -M variants, but they’re thought to be destined for lower-power devices and serve as a replacement for Lakefield chips. We do know from recent patches that Alder Lake-M comes with reduced I/O support, which we’ll cover below.
Finally, an Alder Lake-L version has been added to the Linux kernel, classifying the chips as ‘”Small Core” Processors (Atom),’ but we haven’t seen other mentions of this configuration elsewhere.
Intel Alder Lake 600-Series Motherboards, LGA 1700 Socket, DDR5 and PCIe 5.0
Intel’s incessant motherboard upgrades, which require new sockets or restrict support within existing sockets, have earned the company plenty of criticism from the enthusiast community, especially given AMD’s long line of AM4-compatible processors. That trend will continue with a new requirement for the LGA 1700 socket and the 600-series chipset for Alder Lake. Still, if rumors hold true, Intel will stick to the new socket for at least the next generation of processors (7nm Meteor Lake) and possibly for an additional generation beyond that, rivaling AMD’s AM4 longevity.
Last year, an Intel document revealed an LGA 1700 interposer for its Alder Lake-S test platform, confirming that the rumored socket will likely house the new chips. Months later, an image surfaced at VideoCardz, showing an Alder Lake-S chip and the 37.5 x 45.0mm socket dimensions. That’s noticeably larger than the current-gen LGA 1200’s 37.5 x 37.5mm.
Because the LGA 1700 socket is bigger than the LGA 1151/LGA 1200 sockets used in current motherboards, existing coolers will be incompatible, but we expect that cooler conversion kits could accommodate the larger socket. Naturally, the larger socket is needed to accommodate 500 more pins than the LGA 1200 socket. Those pins are needed to support newer interfaces, like PCIe 5.0 and DDR5, as well as other purposes, like power delivery.
PCIe 5.0 and DDR5 support are both listed in patch notes, possibly giving Intel a connectivity advantage over competing chips, but there are a lot of considerations involved with these big technology transitions. As we saw with the move from PCIe 3.0 to 4.0, a step up to a faster PCIe interface requires thicker motherboards (more layers) to accommodate wider lane spacing, more robust materials, and retimers due to stricter trace length requirements. All of these factors conspire to increase cost.
We recently spoke with Microchip, which develops PCIe 5.0 switches, and the company tells us that, as a general statement, we can expect those same PCIe 4.0 requirements to become more arduous for motherboards with a PCIe 5.0 interface, particularly because they will require retimers for even shorter lane lengths and even thicker motherboards. That means we could see yet another jump in motherboard pricing over what the industry already absorbed with the move to PCIe 4.0. Additionally, PCIe 5.0 also consumes more power, which will present challenges in mobile form factors.
Both Microchip and the PCI-SIG standards body tell us that PCIe 5.0 adoption is expected to come to the high-performance server market and workstations first, largely because of the increased cost and power consumption. That isn’t a good fit for consumer devices considering the slim performance advantages in lighter workloads. That means that while Alder Lake may support PCIe 5.0, it’s possible that we could see the first implementations run at standard PCIe 4.0 signaling rates.
Intel took a similar tactic with its Tiger Lake processors: while the chips’ internal pathways are designed to accommodate the increased throughput of the DDR5 interface via a dual ring bus, they came to market with DDR4 memory controllers, with the option of swapping in new DDR5 controllers in the future. We could see a similar approach with PCIe 5.0, with the first devices using existing controller tech, or the PCIe 5.0 controllers merely defaulting to PCIe 4.0.
Benchmarks have surfaced indicating that Alder Lake supports DDR5 memory, but as with the PCIe 5.0 interface, it remains to be seen whether Intel will enable it on the leading wave of processors. Notably, every transition to a newer memory interface has resulted in higher up-front DIMM pricing, which is concerning in the price-sensitive desktop PC market.
DDR5 is in the opening stages; some vendors, like Adata, TeamGroup, and Micron, have already begun shipping modules. The inaugural modules are expected to run in the DDR5-4800 to DDR5-6400 range. The JEDEC spec tops out at DDR5-8400, but as with DDR4, it will take some time before we see those peak speeds. Notably, several of these vendors have reported that they don’t expect the transition to DDR5 to happen until early 2022.
While the details are hazy around the separation of the Alder Lake-S, -P, -M, and -L variants, some details have emerged about the I/O allocations via Coreboot patches:
 | Alder Lake-P | Alder Lake-M | Alder Lake-S
CPU PCIe | One PCIe 5.0 x8 / Two PCIe 4.0 x4 | Unknown | Two PCIe 5.0 x8 / Two PCIe 4.0 x4
PCH | ADP_P | ADP_M | ADP_S
PCH PCIe Ports | 12 | 10 | 28
SATA Ports | 6 | 3 | 6
We don’t have any information for the Alder Lake-L configuration, so it remains shrouded in mystery. However, as we can see above, the PCIe, PCH, and SATA allocations vary by the model, based on the target market. Notably, the Alder Lake-P configuration is destined for mobile devices.
Intel 12th-Gen Alder Lake Xe LP Integrated Graphics
A series of Geekbench test submissions have given us a rough outline of the graphics accommodations for a few of the Alder Lake chips. Recent Linux patches indicate the chips feature the same Gen12 Xe LP architecture as Tiger Lake, though there is a distinct possibility of a change to the sub-architecture (12.1, 12.2, etc.). Also, there are listings for a GT0.5 configuration in Intel’s media driver, but that is a new paradigm in Intel’s naming convention so we aren’t sure of the details yet.
The Alder Lake-S processors come armed with 32 EUs (256 shaders) in a GT1 configuration, and the iGPU on early samples runs at 1.5 GHz. We’ve also seen Alder Lake-P benchmarks with the GT2 configuration, which means they come with 96 EUs (768 shaders). The early Xe LP iGPU silicon on the -P model runs at 1.15 GHz, but as with all engineering samples, that could change with shipping models.
Alder Lake’s integrated GPUs support up to five display outputs (eDP, dual HDMI, and Dual DP++), and support the same encoding/decoding features as both Rocket Lake and Tiger Lake, including AV1 8-bit and 10-bit decode, 12-bit VP9, and 12-bit HEVC.
Intel Alder Lake CPU Architecture and 10nm Enhanced SuperFin Process
Intel pioneered the x86 hybrid architecture with its Lakefield chips, with those inaugural models coming with one Sunny Cove core paired with four Atom Tremont cores.
Compared to Lakefield, both the high- and low-performance Alder Lake-S cores take a step forward to newer microarchitectures. Alder Lake-S actually jumps forward two ‘Cove’ generations compared to the ‘big’ Sunny Cove cores found in Lakefield. The big Golden Cove cores come with increased single-threaded performance, AI performance, Network and 5G performance, and improved security features compared to the Willow Cove cores that debuted with Tiger Lake.
Alder Lake’s smaller Gracemont cores jump forward a single Atom generation and offer the benefit of being more power and area efficient (perf/mm^2) than the larger Golden Cove cores. Gracemont also comes with increased vector performance, a nod to an obvious addition of some level of AVX support (likely AVX2). Intel also lists improved single-threaded performance for the Gracemont cores.
It’s unclear whether Intel will use its Foveros 3D packaging for the chips. This 3D chip-stacking technique reduces the footprint of the chip package, as seen with the Lakefield chips. However, given the large LGA 1700 socket, that type of packaging seems unlikely for the desktop PC variants. We could see some Alder Lake-P, -M, or -L chips employ Foveros packaging, but that remains to be seen.
Lakefield served as a proving ground not only for Intel’s 3D Foveros packaging tech but also for the software and operating system ecosystem. At its Architecture Day, Intel outlined the performance gains above for the Lakefield chips to highlight the promise of hybrid design. Still, the results come with an important caveat: These types of performance improvements are only available through both hardware and operating system optimizations.
Due to the use of both faster and slower cores that are both optimized for different voltage/frequency profiles, unlocking the maximum performance and efficiency requires the operating system and applications to have an awareness of the chip topology to ensure workloads (threads) land in the correct core based upon the type of application.
For instance, if a latency-sensitive workload like web browsing lands in a slower core, performance will suffer. Likewise, if a background task is scheduled into the fast core, some of the potential power efficiency gains are lost. There’s already work underway in both Windows and various applications to support that technique via a hardware-guided OS scheduler.
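Conceptually, the scheduling decision looks something like the toy sketch below. This is purely illustrative: Intel’s hardware-guided scheduler is far more sophisticated, and the core names and task classes here are invented for the example.

```python
# Toy illustration of hybrid-aware thread placement. Core names and task
# classes are made up; the real hardware-guided scheduler is far more
# sophisticated and considers runtime telemetry, not a fixed label.
BIG_CORES = ["big0", "big1"]        # high-performance (Golden Cove-style)
SMALL_CORES = ["small0", "small1"]  # high-efficiency (Gracemont-style)

def place_thread(task_class: str) -> str:
    """Route latency-sensitive work to a big core, background work to a small one."""
    if task_class == "latency_sensitive":
        return BIG_CORES[0]
    return SMALL_CORES[0]

print(place_thread("latency_sensitive"))  # big0
print(place_thread("background"))         # small0
```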
The current format for Intel’s Lakefield relies upon both cores supporting the same instruction set. Alder Lake’s larger Golden Cove cores support AVX-512, but it appears that those instructions will be disabled to accommodate the fact that the Atom Gracemont cores do not support the instructions. There is a notable caveat that any of the SKUs that come with only big cores might still support the instructions.
Intel Chief Architect Raja Koduri mentioned that a new “next-generation” hardware-guided OS scheduler that’s optimized for performance would debut with Alder Lake, but didn’t provide further details. This next-gen OS scheduler could add in support for targeting cores with specific instruction sets to support a split implementation, but that remains to be seen.
Intel fabs Alder Lake on its Enhanced 10nm SuperFin process. This is the second-generation of Intel’s SuperFin process, which you can learn more about in our deep-dive coverage.
Intel says the first 10nm SuperFin process provides the largest single intra-node performance improvement in the company’s history, unlocking higher frequencies and lower power consumption than the first version of its 10nm node. The net effect, Intel says, is the same amount of performance uplift the company would normally expect from a whole series of intra-node “+” revisions, delivered in one shot.
The 10nm SuperFin transistors have what Intel calls breakthrough technology that includes a new thin barrier that reduces interconnect resistance by 30%, improved gate pitch so the transistor can drive higher current, and enhanced source/drain elements that lower resistance and improve strain. Intel also added a Super MIM capacitor that drives a 5X increase in capacitance, reducing vDroop. That’s important, particularly to avoid localized brownouts during heavy vectorized workloads and also to maintain higher clock speeds.
During its Architecture Day, Intel teased the next-gen variant of SuperFin, dubbed ’10nm Enhanced SuperFin,’ saying that this new process was tweaked to increase interconnect and general performance, particularly for data center parts (technically, this is 10nm+++, but we won’t quibble over an arguably clearer naming convention). This is the process used for Alder Lake, but unfortunately, Intel’s descriptions were vague, so we’ll have to wait to learn more.
We know that the 16-core models come armed with 30MB of L3 cache, while the 14-core / 24 thread chip has 24MB of L3 cache and 2.5 MB of L2 cache. However, it is unclear how this cache is partitioned between the two types of cores, which leaves many questions unanswered.
Alder Lake also supports new instructions, like Architectural LBRs, HLAT, and SERIALIZE commands, which you can read more about here. Alder Lake also purportedly supports AVX2 VNNI, which “replicates existing AVX512 computational SP (FP32) instructions using FP16 instead of FP32 for ~2X performance gain.” This rapid math support could be part of Intel’s solution for the lack of AVX-512 support for chips with both big and small cores, but it hasn’t been officially confirmed.
Intel 12th-Generation Alder Lake Price
Intel’s Alder Lake is at least ten months away, so pricing is the wild card. Intel has boosted its 10nm production capacity tremendously over the course of 2020 and hasn’t suffered any recent shortages of its 10nm processors. That means that Intel should have enough production capacity to keep costs within reasonable expectations, but predicting Intel’s 10nm supply simply isn’t reasonable given the complete lack of substantive information on the matter.
However, Intel has proven with its Comet Lake, Ice Lake, and Cooper Lake processors that it is willing to lose margin in order to preserve its market share, and surprisingly, Intel’s recent price adjustments have given Comet Lake a solid value proposition compared to AMD’s Ryzen 5000 chips.
We can only hope that trend continues, but if Alder Lake brings forth both PCIe 5.0 and DDR5 support as expected, we could be looking at exceptionally pricey memory and motherboard accommodations.
The third-party Ryzen overclocking tool ClockTuner has just been updated to version 2.0, adding support for Zen 3-based Ryzen CPUs (Ryzen 5000 series) and a few new features to optimize the performance of Zen 2 and Zen 3 CPUs, including those that made our list of best CPUs. The application is designed to make overclocking easy for Ryzen users and to wring the highest possible efficiency out of your Ryzen chip, no matter the core count.
You could always just plug in a static core frequency and core voltage, but doing that with today’s high-core-count CPUs can yield negative results, such as higher power consumption and reduced single-core turbo frequencies.
ClockTuner automatically figures out what each CCX on your Ryzen chip is fully capable of. You have full control over the frequency and voltage targets you want to hit, and the program will try to achieve those targets as best it can on every CCX.
An important new feature of ClockTuner 2.0, besides Ryzen 5000 support, is the addition of “CTR Hybrid OC” mode, which works like a turbo-boosting algorithm. ClockTuner will produce several profiles that are responsible for running the CPU at different frequencies and voltages. Similar to turbo boost, ClockTuner will keep voltages and frequencies very low, at idle conditions, run at a “medium” configuration at normal loads, then at very high loads, increase voltage and core frequencies as high as possible.
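As a rough sketch of how a load-dependent profile switch like this could work (the thresholds, clocks, and voltages below are hypothetical illustrations, not ClockTuner's actual values):

```python
from dataclasses import dataclass

@dataclass
class Profile:
    name: str
    frequency_mhz: int
    voltage_v: float

# Hypothetical tiers in the spirit of CTR Hybrid OC: low at idle,
# medium at normal loads, aggressive at heavy loads.
PROFILES = [
    (20.0,  Profile("idle",   2200, 0.900)),
    (80.0,  Profile("medium", 4000, 1.150)),
    (100.0, Profile("boost",  4650, 1.300)),
]

def select_profile(cpu_load_percent: float) -> Profile:
    """Pick the frequency/voltage profile for the current CPU load."""
    for threshold, profile in PROFILES:
        if cpu_load_percent <= threshold:
            return profile
    return PROFILES[-1][1]

print(select_profile(5.0).name)    # idle
print(select_profile(50.0).name)   # medium
print(select_profile(95.0).name)   # boost
```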
Just beware that this is a third-party application, and using it will void your warranty with AMD. Also, there’s no guarantee that this software will work 100% of the time, so make sure you know what you are getting into. As with anything related to overclocking, there is some risk involved.
The Mercury Research CPU market share results are in for the fourth quarter of 2020, with the headline news being that Intel has clawed back share from AMD in the desktop PC market for the first time in three years. Intel also stopped its slide in notebook PCs, gaining share for the first time since we began collecting data for that segment in early 2018. AMD also lost share in the overall x86 market during the quarter but notched a solid gain for the year. Meanwhile, AMD continued to make slow but steady gains in the server market.
It’s noteworthy that the fourth quarter of 2020 was anything but typical: The PC market continued its pandemic-fueled surge, seeing its fastest growth in a decade. For example, while AMD lost share in the overall x86 market (less IoT) during the quarter, Mercury Research pegs the overall x86 market growth rate at an explosive 20.1%.
Intel obviously captured more of that growth than AMD, but it’s important to remember that a slight loss of share in the midst of an explosive growth environment doesn’t equate to declining sales – AMD grew its processor revenue by 50% last year and posted record financial results for the year.
Shortages have plagued AMD due to ongoing supply chain issues. Given the lack of AMD products on shelves, the company is obviously selling all of the silicon it can punch out, signaling strong demand. AMD expects to see ‘tightness’ throughout the first half of 2021 until added production capacity comes online, meaning we could see a limited supply of AMD’s PC and console chips until the middle of the year (you can see AMD CEO Lisa Su’s take on the situation in our recent interview).
Those shortages led to a scarcity of AMD’s chips during the critical holiday shopping season in the fourth quarter, while Intel’s chips were widely available and often selling at a discount. That obviously helped Intel recoup some share. During its recent earnings call, Intel also cited improving supply of lower-end processors, like those destined for Chromebooks, as a contributing factor. Intel CEO Bob Swan noted the company increased its PC CPU units by 33% during the fourth quarter.
Intel has also expanded its chip production by leaps and bounds over the last several years as it recovered from its own shortage of production capacity. The advantages of its IDM model are on clear display during the pandemic – the company’s tight control of its supply chain and production facilities has allowed it to better weather disruptions. That’s an important consideration as the company faces intense pressure to spin off its fabs while it weighs how much of its own production it should outsource (you can see Bob Swan’s take on the situation in our recent interview).
That said, given the dynamic nature of the market, it’s hard to draw firm conclusions on several of the categories below without more information. Dean McCarron of Mercury Research will provide us with detailed breakdowns for each segment in the morning, and we’ll add his analysis as soon as it is available. For now, here’s our analysis of the raw numbers.
| Quarter | AMD Desktop Unit Share | QoQ / YoY (pp) |
|---------|------------------------|----------------|
| 4Q20 | 19.3% | -0.8 / +1.0 |
| 3Q20 | 20.1% | +0.9 / +2.1 |
| 2Q20 | 19.2% | +0.6 / +2.1 |
| 1Q20 | 18.6% | +0.3 / +1.5 |
| 4Q19 | 18.3% | +0.3 / +2.4 |
| 3Q19 | 18.0% | +0.9 / +5.0 |
| 2Q19 | 17.1% | Flat / +4.8 |
| 1Q19 | 17.1% | +1.3 / +4.9 |
| 4Q18 | 15.8% | +2.8 / +3.8 |
| 3Q18 | 13.0% | +0.7 / +2.1 |
| 2Q18 | 12.3% | +0.1 / +1.2 |
| 1Q18 | 12.2% | +0.2 / +0.8 |
| 4Q17 | 12.0% | +1.1 / +2.1 |
| 3Q17 | 10.9% | -0.2 / +1.8 |
| 2Q17 | 11.1% | -0.3 / – |
| 1Q17 | 11.4% | +1.5 / – |
| 4Q16 | 9.9% | +0.8 / – |
| 3Q16 | 9.1% | – |
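The quarter-over-quarter and year-over-year figures are simple percentage-point differences against the prior quarter and the year-ago quarter; a quick sanity check on the desktop share series:

```python
# AMD desktop unit share by quarter, newest first (4Q20 back to 3Q16).
share = [19.3, 20.1, 19.2, 18.6, 18.3, 18.0, 17.1, 17.1, 15.8,
         13.0, 12.3, 12.2, 12.0, 10.9, 11.1, 11.4, 9.9, 9.1]

def qoq(i):  # change vs the previous quarter, in percentage points
    return round(share[i] - share[i + 1], 1)

def yoy(i):  # change vs the same quarter a year earlier
    return round(share[i] - share[i + 4], 1)

print(qoq(0), yoy(0))  # 4Q20: -0.8 QoQ, +1.0 YoY
print(qoq(1), yoy(1))  # 3Q20: +0.9 QoQ, +2.1 YoY
```

Minor mismatches elsewhere in the series come from Mercury Research rounding the underlying unrounded data.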
AMD recently introduced its Ryzen 5000 processors that take the lead in every meaningful metric from Intel’s Comet Lake chips, but a lack of supply could have hindered the company’s gains in this fast-growing segment. Intel’s Rocket Lake lands in Q1 2021, which could present more competition for AMD’s Ryzen 5000.
While AMD lost some share here during the quarter, it gained 1 percentage point for the year. However, AMD recently noted that its Ryzen 5000 chips doubled the launch sales of any other previous Ryzen generation, and annual processor revenue grew 50% even though the PC market only grew 13%. It’s logical to expect that AMD will prioritize the production of these higher-margin desktop processors to maximize its profitability.
AMD has noted that its shortages are most acute in the lower end of the PC market, while Intel says it has improved its own shipments of small-core (lower-end) CPUs.
| Quarter | AMD Mobile Unit Share | QoQ / YoY (pp) |
|---------|-----------------------|----------------|
| 4Q20 | 19.0% | -1.2 / +2.8 |
| 3Q20 | 20.2% | +0.3 / +5.5 |
| 2Q20 | 19.9% | +2.9 / +5.8 |
| 1Q20 | 17.1% | +0.9 / +3.2 |
| 4Q19 | 16.2% | +1.5 / +4.0 |
| 3Q19 | 14.7% | +0.7 / +3.8 |
| 2Q19 | 14.1% | +1.0 / +5.3 |
| 1Q19 | 13.1% | +0.9 / ? |
| 4Q18 | 12.2% | – |
| 3Q18 | 10.9% | – |
| 2Q18 | 8.8% | – |
Recently, this has been AMD’s fastest-growing market. The mobile segment comprises roughly 60% of the client processor market, meaning any gains are very important in terms of overall volume and revenue.
Intel has cited its increasing penetration into the lower-end of the market, like Chromebooks, which likely contributed to its strong gains here. Again, AMD has said that its shortages are most pressing in the lower-end of the market.
Notably, AMD remained in the black here for the year, with a 2.8 percentage point gain. AMD recently launched its Ryzen 5000 Mobile processors, which bring the powerful Zen 3 microarchitecture to laptops for the first time. AMD has 50% more designs coming to market than the previous-gen Ryzen 4000 lineup, but supply could be tight.
AMD bases its server share projections on IDC’s forecasts but only accounts for the single- and dual-socket market, which eliminates four-socket (and beyond) servers, networking infrastructure and Xeon D’s (edge). As such, Mercury’s numbers differ from the numbers cited by AMD, which predict a higher market share. Here is AMD’s comment on the matter: “Mercury Research captures all x86 server class processors in their server unit estimate, regardless of device (server, network or storage), whereas the estimated 1P [single-socket] and 2P [two-socket] TAM [Total Addressable Market] provided by IDC only includes traditional servers.”
| Quarter | AMD Server Unit Share | QoQ / YoY (pp) |
|---------|-----------------------|----------------|
| 4Q20 | 7.1% | +0.5 / +2.6 |
| 3Q20 | 6.6% | +0.8 / +2.3 |
| 2Q20 | 5.8% | +0.7 / +2.4 |
| 1Q20 | 5.1% | +0.6 / +2.2 |
| 4Q19 | 4.5% | +0.2 / +1.4 |
| 3Q19 | 4.3% | +0.9 / +2.7 |
| 2Q19 | 3.4% | +0.5 / +2.0 |
| 1Q19 | 2.9% | -0.3 / – |
| 4Q18 | 3.2% | +1.6 / +2.4 |
| 3Q18 | 1.6% | +0.2 / – |
| 2Q18 | 1.4% | – |
| 4Q17 | 0.8% | – |
AMD continues to chip away at Intel’s server share at a steady rate. These gains come just ahead of the company’s highly anticipated EPYC Milan launch in March. It’s logical to expect that some customers have paused purchases of current-gen EPYC Rome processors in anticipation of the looming Milan launch, and the resulting pent-up demand could increase AMD’s server penetration next quarter. Also, given the importance of this lucrative segment, AMD will likely prioritize server chip production.
| Quarter | AMD Overall x86 Share | QoQ / YoY (pp) |
|---------|-----------------------|----------------|
| 4Q20 | 21.7% | -0.7 / +6.2 |
| 3Q20 | 22.4% | +4.1 / +6.3 |
| 2Q20 | 18.3% | +3.5 / +1.2 (+3.7?) |
| 1Q20 | 14.8% | -0.7 / ? |
| 4Q19 | 15.5% | +0.9 / +3.2 |
| 3Q19 | 14.6% | +0.7 / +4.0 |
| 2Q19 | 13.9% | ? |
| 4Q18 | 12.3% | ? |
| 3Q18 | 10.6% | – |
The overall x86 market grew at an explosive 20.1% rate during the quarter, reflecting that a growing TAM benefits both players. AMD lost a minor amount of overall share during the quarter, but gained 6.2 percentage points for the year.
We’ll add McCarron’s segment analysis as soon as it is available.
Valve has just updated its Steam Hardware Survey with results from January 2021. If you believe the numbers for video cards, Nvidia’s RTX 30-series GPUs now account for over 1% of the total gaming market on Steam. These are some of the best graphics cards, but demand is so high (and supply is so low) that finding one for sale is virtually impossible, and prices are much higher than the launch MSRPs.
I’ve followed the Steam Hardware Survey for a long time, wondering at the statistics behind the data. The past few months give me (even more) reason to suspect it isn’t a proper random sampling of users, which means no one should attempt to draw any meaningful conclusions. Valve has never revealed any details of how the survey gets conducted, but I suspect there’s a higher chance for it to ask for someone’s hardware details if it doesn’t recognize the graphics card. This means new cards like the RTX 30-series are much more likely to get included. However, that’s just a guess, and it’s possible Valve is actually doing a proper random sampling and simply hasn’t made that fact public. (But I doubt it.) In other words, don’t take these figures as any true indication of the distribution of various GPU models, even among Steam users. But the numbers are still interesting and fun to gawk at, wherever they come from.
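To illustrate why a non-uniform sampling probability would skew the results, here's a back-of-the-envelope model (the weights are purely hypothetical, since Valve hasn't disclosed its methodology):

```python
def surveyed_share(true_share: float, sample_weight: float,
                   baseline_weight: float = 1.0) -> float:
    """Share a GPU would show in the survey if its owners are polled
    with a different probability than owners of every other GPU."""
    weighted = true_share * sample_weight
    rest = (1.0 - true_share) * baseline_weight
    return weighted / (weighted + rest)

# Suppose a new GPU is really in 1% of machines, but unrecognized
# hardware is twice as likely to be asked to submit survey details.
print(f"{surveyed_share(0.01, 2.0):.4f}")  # ~0.0198: reported share nearly doubles
```

Even a modest sampling skew toward new hardware would be enough to overstate the RTX 30-series numbers, which is the concern raised above.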
The biggest news is that the GeForce RTX 3080 now sits at 0.66% of PCs surveyed. That’s up 0.18% from December, which was up 0.25% from November. The RTX 3080 has been on the charts for three months now, steadily gaining share, and is closing in on the GTX 1660 Ti. The other two Ampere GPUs to show up on the charts are the GeForce RTX 3060 Ti, sitting at 0.27%, and the GeForce RTX 3090, which has 0.27% and theoretically has more hits on the survey than AMD’s previous-gen Radeon RX 5700 (the non-XT model).
AMD’s RDNA2 Radeon RX 6000-series GPUs are not yet part of the list, though there’s still the nebulous ‘Other’ category with 9.29% of all GPUs. Presumably, the Radeon RX 6900 XT, Radeon RX 6800 XT, and RX 6800 all fall into that category, each with less than 0.15% of the total. Interestingly, Nvidia’s GeForce RTX 3070 also fails to show up on the chart, so for now, at least, the 3090 appears to be ahead of it in terms of Steam use.
The top of the charts is also interesting. The GeForce GTX 1060 remains the most popular card, but its total share dropped 1.61%, falling below 10% for the first time in I don’t know how long. It could be all the users with Pascal are finally upgrading, or maybe just the Internet cafes have finally decided to move on from the 1060. Or it could simply be a normal variation in the sampling, as the GTX 1050 Ti use is up 0.61%, and GTX 1050 is up 0.44%.
Also of note is that several other reasonably popular GPUs have shown large dips this past month. The RTX 2060 is down 1.16%, 2070 Super is down 0.53%, and RTX 2070 is down 1.1%. The generic label of “Nvidia Graphics Device” is also down 0.68% (perhaps because formerly ‘unknown’ GPUs like the 3060 Ti and 3090 are now listed separately).
We’re perhaps getting too far into the weeds, as we don’t know the actual collection policy and statistical accuracy of any of the data. At best, this could be a random sample of Steam users from the past several months. At worst, it’s a biased sample of Steam users. Either way, it doesn’t account for any hardware that’s not used with Steam. Still, it does bear at least some semblance to what we’d expect to see in the market.
For instance, AMD CPU usage is up 3% last month relative to Intel usage, which is down 3%. AMD’s total for CPU use is now 28%, the highest it’s ever been. That makes perfect sense, as Intel’s desktop CPUs, in particular, have been stagnating on 14nm, while AMD’s Zen 3 (Ryzen 5000) series CPUs are now at the top of our best CPUs list and lead our CPU Benchmarks hierarchy. Meanwhile, 1920×1080 still claims the lion’s share of resolution usage, at 67% of the total, with the second most popular resolution being 1366×768 (yuck), and 2560×1440 usage represents just 7.5% of the users surveyed.
Let me close by once again calling on Valve to do the right thing and provide a clear explanation of the statistics behind the survey. If it’s a random sampling, tell us so we know we can put more confidence in the numbers. Tell us how many PCs were surveyed so we know the sample size. And if it’s not a proper statistical analysis, then fix the code. The ‘Other’ category in GPUs also continues to be quite large, and it would be great to allow numbers nerds like me to get the full list of GPUs, even for those with only a 0.01% share. And as long as I’m making wishes that are unlikely to be fulfilled, please fix all the PC component shortages, especially on the new video cards.
It seems that Dell’s PowerEdge R6525 rack server is already available with AMD’s EPYC Milan chips. Dell Canada (via momomo_us) has already listed a couple of Zen 3 parts as processor options for the dual-socket 1U system, inadvertently exposing their specifications and pricing.
In terms of similarities, Milan preserves the same configuration as Rome, meaning the chips will arrive with eight compute dies and one I/O die. Once again, the server processors will max out at 64 cores, but with significant upgrades: for Milan, AMD has switched over to the Zen 3 microarchitecture and TSMC’s reported 7nm+ process node. Zen 3 alone should give Milan a big performance lift, given the wonders the microarchitecture has worked for AMD’s consumer-focused Ryzen 5000 lineup.
Milan seamlessly fits into the SP3 socket, so compatibility won’t be an issue even for previous-generation motherboards. We expect Milan to operate within 120W to 280W thermal limits and provide the same features as Rome, such as eight-channel support for DDR4 and PCIe 4.0.
AMD EPYC Milan Specifications and Pricing
| Processor | Pricing (Converted from CAD) | Cores / Threads | Base Clock (GHz) | L3 Cache (MB) | TDP (W) |
|-----------|------------------------------|-----------------|------------------|---------------|---------|
| EPYC 7763 | $8,184.99 | 64 / 128 | 2.45 | 256 | 280 |
| EPYC 7H12 | $7,703.91 | 64 / 128 | 2.60 | 256 | 280 |
| EPYC 7713 | $7,215.34 | 64 / 128 | 2.00 | 256 | 225 |
| EPYC 7662 | $6,183.26 | 64 / 128 | 2.00 | 256 | 225 |
| EPYC 7543 | $2,709.32 | 32 / 64 | 2.80 | 256 | 225 |
| EPYC 7542 | $2,426.08 | 32 / 64 | 2.90 | 128 | 225 |
The EPYC 7763 is one of many expected 64-core EPYC Zen 3 chips from AMD. This model in particular has a 2.45 GHz base clock, a 256MB L3 cache and a 280W TDP. In comparison to the existing EPYC 7H12, the EPYC 7763 will only be 6.2% more expensive, according to Dell Canada’s pricing.
The EPYC 7713 shows identical specifications to the current EPYC 7662, but bear in mind that the first comes wielding Zen 3 cores. The EPYC 7713 could carry a price tag that’s 16.7% higher than that of the EPYC 7662.
The EPYC 7543, on the other hand, seems to have a 100 MHz lower base clock speed than the EPYC 7542, but that shouldn’t hurt the EPYC 7543’s performance since Zen 3 ushers in significant IPC gains. Furthermore, the EPYC 7543 has double the L3 cache of the EPYC 7542. Pricing-wise, the EPYC 7543 might only cost 11.7% more than the EPYC 7542.
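Those percentages fall straight out of the Dell Canada prices listed above; a quick check:

```python
# Dell Canada listing prices, converted from CAD.
prices = {
    "EPYC 7763": 8184.99, "EPYC 7H12": 7703.91,
    "EPYC 7713": 7215.34, "EPYC 7662": 6183.26,
    "EPYC 7543": 2709.32, "EPYC 7542": 2426.08,
}

def premium(new: str, old: str) -> float:
    """Percent price increase of the Zen 3 part over its Zen 2 counterpart."""
    return round((prices[new] / prices[old] - 1) * 100, 1)

print(premium("EPYC 7763", "EPYC 7H12"))  # 6.2
print(premium("EPYC 7713", "EPYC 7662"))  # 16.7
print(premium("EPYC 7543", "EPYC 7542"))  # 11.7
```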
Overall, Milan doesn’t appear to be dramatically more costly than Rome. Across the processors that Dell has listed, we’re looking at price increases ranging from 6.2% to 16.7%. Considering the performance gains that should accompany those price changes, we’re sure AMD won’t have much trouble finding customers.