Dominic Moass 3 days ago CPU, Featured Announcement, Graphics
KitGuru was recently able to sit down with Frank Azor, Chief Architect of Gaming Solutions at AMD. In a wide-ranging discussion, we spoke about the unique position AMD is in as both a CPU and GPU manufacturer and how the company can use that to its advantage, with features like Smart Access Memory and SmartShift. We also got the latest update on the company’s highly anticipated FidelityFX Super Resolution feature.
Watch via our Vimeo channel (below) or over on YouTube at 2160p HERE
Timestamps
00:00 Start
00:37 What is Frank’s official title and what has he been responsible for at AMD?
02:38 How has Frank’s time at Alienware informed what he does at AMD?
06:20 Smart Access Memory – is it just resizable BAR, or is there more to the story?
10:28 Why can Smart Access Memory result in negative scaling?
13:40 Was Assassin’s Creed Valhalla developed with SAM in mind?
17:34 Why haven’t we seen a unified CPU+GPU strategy from AMD before?
20:21 SmartShift – what is it and how does it work?
25:21 Why haven’t we seen more SmartShift-enabled machines?
28:00 Is Ryzen 5000 a turning point for AMD mobile CPUs?
30:28 What’s the latest with AMD Radeon software?
36:56 FidelityFX Super Resolution – latest update
38:46 Wrapping up
Discuss on our Facebook page HERE.
KitGuru says: Thanks to Frank for taking the time to answer our questions – and here’s hoping we will see FidelityFX Super Resolution in action soon!
Check Also
RX 7900 XT RDNA 3 GPU reportedly brings at least 40 percent more performance
AMD’s flagship RDNA 2 GPUs have been out for a while now, which means it is time to start looking ahead to the next generation. AMD is currently working on its RDNA 3 GPU architecture, and according to early leaks, we can expect at least a 40% performance improvement.
The Mercury Research CPU market share results are in for the first quarter of 2021, which finds AMD scoring its highest single-quarter market share increase in the server market since 2006, leading to record revenue as it steals more sockets from Intel. Those share gains are isolated, though, as AMD lost share in the notebook segment and overall market share while remaining flat in desktop PC chips. Those regressions might not be as problematic as they appear on the surface, though, due to AMD’s shift to producing pricier chips that generate more profit. (We have the full breakdown for each segment at the end of the article).
“While we don’t often discuss average selling prices, we note that this quarter saw unusually strong price moves for AMD — as AMD shipped fewer low-end parts and more high-end parts, as well as shipping many more server parts, the company’s average selling price increased significantly,” said Dean McCarron of Mercury Research.
It’s clear that AMD has prioritized its highest-end desktop PC models and its server chips during the pandemic-induced supply chain shortages. These moves come as the CPU market continues to move at a record pace: Last quarter marked the second-highest CPU shipment volume in history, second only to the prior quarter. Also, the first quarter usually suffers from lower sales volume as we exit the holiday season, but the first quarter of 2021 set yet another record – the 41% year-on-year gain was the highest for the CPU market in 25 years.
These developments benefit both companies, but AMD has clearly suffered more than Intel from the crushing combination of supply shortages and overwhelming demand. AMD recently lost share to Intel in both notebooks and desktop PCs for the first time in three years, but this quarter it remained at a flat 19.3% of the desktop PC market, meaning it stopped the slide despite supply challenges.
However, Intel’s Rocket Lake processors landed right at the end of the quarter, and they’re particularly competitive against AMD’s Ryzen 5000 in the lower end that tends to move the most volume. Additionally, these chips are widely available at retail at very competitive pricing while AMD’s chips are still a rarity on shelves at anywhere near the suggested selling price. That will make the results of the next quarter all the more interesting.
Both Intel and AMD set records for the number of mobile chips shipped and the revenue generated during the quarter. AMD couldn’t stop the slide in notebook PC chips, but as McCarron points out, the company has prioritized higher-priced Ryzen 5 and Ryzen 7 “Renoir” models while Intel has grown in its lower-margin, lower-priced Celeron chips. AMD slipped 1 percentage point to 18% of the notebook PC unit share.
Most concerning for Intel? It lost a significant amount of share to AMD in the profitable server market. AMD notched its highest single-quarter gain in server CPU share since 2006, growing 1.8 percentage points to reach 8.9% (a few caveats apply, listed below).
While a 1.8-percentage-point swing might not sound severe, it is significant given the typically small quarter-to-quarter changes in server market share. Intel’s data center revenue absolutely plummeted in the first quarter of the year, dropping 20% year-over-year while units shipped dropped 13%, but Intel chalked that up to its customers pausing orders while ‘digesting’ their existing inventory. However, AMD’s financial results, in which the company’s server and semi-custom revenue jumped 286%, imply that Intel’s customers were actually digesting AMD’s chips instead.
AMD’s strong gains in server CPU share during the quarter occurred before its newest AMD EPYC Milan chips and Intel’s newest Ice Lake chips had their official launch, but both companies began shipping chips to their biggest customers early this year/late last year. Additionally, samples of these chips are in customers’ hands long before general availability, so large volume purchases are often decided long before server CPUs hit the shelves.
AMD’s big supercomputer wins with its EPYC Milan chips foretold strong buy-in from those seeking the highest performance possible, and it appears that momentum has carried over to the broader server CPU market. Given that most of these customers already know which company they’ll use for their long-term deployments, it is rational to expect that AMD’s server charge could continue into the next quarter.
Finally, AMD lost 1 percentage point in the overall x86 CPU market share, receding to 20.7%. Again, this comes as the company struggles from pandemic-induced supply chain shortages that it is minimizing by prioritizing high end chips. Meanwhile, Intel is leveraging its production scale to flood the lower-end of the market and gain share, but that comes at the expense of profitability.
Below you’ll find the specific numbers for each segment, complete with historical data.
| Quarter | AMD Desktop Unit Share | QoQ / YoY (pp) |
|---------|------------------------|----------------|
| 1Q21 | 19.3% | +0.1 / +0.7 |
| 4Q20 | 19.3% | -0.8 / +1.0 |
| 3Q20 | 20.1% | +0.9 / +2.1 |
| 2Q20 | 19.2% | +0.6 / +2.1 |
| 1Q20 | 18.6% | +0.3 / +1.5 |
| 4Q19 | 18.3% | +0.3 / +2.4 |
| 3Q19 | 18.0% | +0.9 / +5.0 |
| 2Q19 | 17.1% | Flat / +4.8 |
| 1Q19 | 17.1% | +1.3 / +4.9 |
| 4Q18 | 15.8% | +2.8 / +3.8 |
| 3Q18 | 13.0% | +0.7 / +2.1 |
| 2Q18 | 12.3% | +0.1 / +1.2 |
| 1Q18 | 12.2% | +0.2 / +0.8 |
| 4Q17 | 12.0% | +1.1 / +2.1 |
| 3Q17 | 10.9% | -0.2 / +1.8 |
| 2Q17 | 11.1% | -0.3 / – |
| 1Q17 | 11.4% | +1.5 / – |
| 4Q16 | 9.9% | +0.8 / – |
| 3Q16 | 9.1% | – |
| Quarter | AMD Mobile Unit Share | QoQ / YoY (pp) |
|---------|-----------------------|----------------|
| 1Q21 | 18.0% | -1.0 / +1.1 |
| 4Q20 | 19.0% | -1.2 / +2.8 |
| 3Q20 | 20.2% | +0.3 / +5.5 |
| 2Q20 | 19.9% | +2.9 / +5.8 |
| 1Q20 | 17.1% | +0.9 / +3.2 |
| 4Q19 | 16.2% | +1.5 / +4.0 |
| 3Q19 | 14.7% | +0.7 / +3.8 |
| 2Q19 | 14.1% | +1.0 / +5.3 |
| 1Q19 | 13.1% | +0.9 / ? |
| 4Q18 | 12.2% | – |
| 3Q18 | 10.9% | – |
| 2Q18 | 8.8% | – |
AMD bases its server share projections on IDC’s forecasts but only accounts for the single- and dual-socket market, which eliminates four-socket (and beyond) servers, networking infrastructure and Xeon D’s (edge). As such, Mercury’s numbers differ from the numbers cited by AMD, which predict a higher market share. Here is AMD’s comment on the matter: “Mercury Research captures all x86 server-class processors in their server unit estimate, regardless of device (server, network or storage), whereas the estimated 1P [single-socket] and 2P [two-socket] TAM [Total Addressable Market] provided by IDC only includes traditional servers.”
The new Chia cryptocurrency has already started making waves in the storage industry, as we reported back in April. With Chia trading now live, it looks set to become even more interesting in the coming months. The total netspace for Chia has already eclipsed 2 exabytes, and it’s well on its way to double- and probably even triple-digit EiB levels if current trends continue. If you’re looking to join the latest crypto-bandwagon, here’s how to get started farming Chia coin.
First, if you’ve dabbled in other cryptocurrencies before, Chia is a very different beast. Some of the fundamental blockchain concepts aren’t radically different from what’s gone before, but Chia coin ditches the Proof of Work algorithm for securing the blockchain and instead implements Proof of Space — technically Proof of Time and Space, but the latter appears to be the more pertinent factor. Rather than mining coins by dedicating large amounts of processing power to the task, Chia simply requires storage plots — but these plots need to be filled with the correct data.
The analogies with real-world farming are intentional. First you need to clear a field (i.e., delete any files on your storage devices that are taking up space), then you plough and seed the field (compute a plot for Chia), and then… well, you wait for the crops to grow, which can take quite a long time when those crops are Chia blocks.
Your chances of solving a Chia coin block are basically equal to your portion of the total network space (netspace). Right now, Chia’s netspace sits at roughly 2.7 EiB (Exbibytes — the binary SI unit, so 1 EiB equals 2^60 bytes, or 1,152,921,504,606,846,976 bytes decimal). That means if you dedicate a complete 10TB (10 trillion bytes) of storage to Chia plots, your odds of winning are 0.00035%, or 0.0000035 if we drop the percentage part. Those might sound like terrible odds — they’re not great — but the catch is that there are approximately 4,608 Chia blocks created every day (a rate of 32 blocks per 10 minutes, or 18.75 seconds per block) and any one of them could match your plot.
Simple math can then give you the average time to win, though Chia calculators make estimating this far easier than doing the math yourself. A completely full 10TB HDD can store 91 standard Chia plots (101.4 GiB each). Don’t get lazy and forget to convert between tebibytes and terabytes, as the units definitely matter. Anyway, 91 plots on a single 10TB HDD should win a block every two months or so (once every 68 days).
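The back-of-the-envelope math above can be sketched in a few lines of Python, using the snapshot figures quoted in the text (2.7 EiB netspace, 4,608 blocks per day, 101.4 GiB per standard plot):

```python
# Odds and expected time-to-win for a single 10TB drive of Chia plots.
PLOT_BYTES = 101.4 * 2**30      # one standard plot, in bytes
NETSPACE_BYTES = 2.7 * 2**60    # 2.7 EiB netspace snapshot
BLOCKS_PER_DAY = 4608           # 32 blocks every 10 minutes

drive_bytes = 10e12                              # 10TB drive (decimal TB)
plots = int(drive_bytes // PLOT_BYTES)           # plots that fit on the drive
share = plots * PLOT_BYTES / NETSPACE_BYTES      # your slice of total netspace
days_per_win = 1 / (share * BLOCKS_PER_DAY)      # average days between wins

print(plots)                   # 91
print(f"{share:.4%}")          # 0.0003%
print(round(days_per_win))     # 68
```

Swap in the current netspace figure to refresh the estimate; since netspace is growing rapidly, the 68-day number gets worse by the week.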
Each Chia plot ends up being sort of like a massive, complex Bingo card. There’s lots of math behind it, but that analogy should suffice. Each time a block challenge comes up, the Chia network determines a winner based on various rules. If your plot matches and ‘wins’ the block, you get the block reward (currently 2 XCH, Chia’s coin abbreviation). That block reward is set to decrease every three years, for the first 12 years, after which the block reward will be static ad infinitum. The official FAQ lists the reward rate as 64 XCH per 10 minutes, and it will get cut in half every three years until it’s at 4 XCH per 10 minutes with a block reward of 0.125 XCH.
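That emission schedule can be sketched as a small function; this is our own simplification (halvings at exact three-year marks), built only from the FAQ figures above (64 XCH per 10 minutes at launch, 32 blocks per 10 minutes, four halvings over 12 years):

```python
# Chia block reward emission: the per-10-minute payout halves every
# three years for the first 12 years, then stays constant forever.
BLOCKS_PER_10_MIN = 32

def reward_per_block(years_since_launch: float) -> float:
    """XCH paid per winning block at a given point in Chia's life."""
    halvings = min(int(years_since_launch // 3), 4)  # halves at years 3, 6, 9, 12
    xch_per_10_min = 64 / 2**halvings                # 64 -> 32 -> 16 -> 8 -> 4
    return xch_per_10_min / BLOCKS_PER_10_MIN

print(reward_per_block(0))     # 2.0   (the current block reward)
print(reward_per_block(13))    # 0.125 (the permanent post-year-12 reward)
```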
Of course, luck comes into play. It’s theoretically possible (though highly unlikely) to have just a few plots and win a block solution immediately. It’s also possible to have hundreds of plots and go for a couple of months without a single solution. The law of averages should equalize over time, though. Which means to better your chances, you’ll need more storage storing more Chia plots. Also, just because a plot wins once doesn’t mean it can’t win again, so don’t delete your plots after they win.
This is the standard cryptocurrency arms race that we’ve seen repeated over the past decade with hundreds of popular coins. The big miners — farmers in this case — want more of the total Chia pie, and rush out to buy more hardware and increase their odds of winning. Except, this time it’s not just a matter of buying more SSDs or HDDs. This time farmers need to fill each of those with plots, and based on our testing, that is neither a simple task nor something that can be done quickly.
Hardware Requirements for Chia Coin Farming
With Ethereum, once you have the requisite GPUs in hand, perhaps some of the best mining GPUs, all you have to do is get them running in a PC. Chia requires that whole ploughing and plotting business, and that takes time. How much time? Tentatively, about six or seven hours seems typical per plot, with a very fast Optane 905P SSD, though it’s possible to do multiple plots at once with the right hardware. You could plot directly to hard drive storage, but then it might take twice as long, and the number of concurrent plots you can do drops to basically one.
The best solution is to have a fast SSD — probably an enterprise grade U.2 drive with plenty of capacity — and then use that for the plotting and transfer the finished plots to a large HDD. Chia’s app will let you do that, but it can be a bit finicky, and if something goes wrong like exceeding the temp storage space, the plotting will crash and you’ll lose all that work. Don’t over schedule your plotting, in other words.
Each 101.4 GiB plot officially requires up to 350 GiB of temporary storage, though we’ve managed to do a single plot multiple times on a 260 GiB SSD. Average write speed during the plotting process varies: sometimes it reaches over 100MB/s, while at other times it can drop closer to zero. When it drops, that usually means more computational work and memory are being used. Plotting also requires 4 GiB of RAM per plot, so high-capacity memory sticks are par for the course.
Ultimately, for fast SSDs, the main limiting factor will likely be storage capacity. If we use the official 350 GiB temp space requirement, a 2TB SSD (1863 GiB) can handle at most five concurrent plots. Our own testing suggests it can probably do six just fine, maybe even seven, but we’d stick with six to be safe. If you want to do more than that (and you probably will if you’re serious about farming Chia), you’ll need either a higher-capacity SSD or multiple SSDs. Each plot your PC is creating also needs 4GB of memory and two CPU threads, and there appear to be scaling limits.
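As a quick sanity check on that capacity math, here’s the conversion from marketing (decimal) terabytes to binary GiB, divided by the official 350 GiB temp-space requirement:

```python
# How many plots can run concurrently on one SSD, using the official
# 350 GiB temporary-space requirement per plot.
TEMP_GIB_PER_PLOT = 350

def max_concurrent_plots(ssd_tb: float) -> int:
    usable_gib = ssd_tb * 1e12 / 2**30     # decimal TB -> binary GiB
    return int(usable_gib // TEMP_GIB_PER_PLOT)

print(max_concurrent_plots(2.0))     # 5  (a 2TB drive is ~1863 GiB)
print(max_concurrent_plots(3.84))    # 10 (a 3.84TB enterprise drive)
```

Note this uses the conservative official requirement; as mentioned above, real-world usage fluctuates and six concurrent plots on a 2TB drive worked in our testing.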
Based on the requirements, here are two recommended builds — one for faster plotting (more concurrent plots) and one for slower plotting.
Our baseline Chia plotting PC uses a 6-core/12-thread CPU, and we’ve elected to go with Intel’s latest Core i5-11400 simply because it’s affordable, comes with a cooler, and should prove sufficiently fast. AMD’s Ryzen 5 5600X would be a good alternative, were it readily available — right now it tends to cost about twice as much as the i5-11400, plus it also needs a dedicated graphics card, and we all know how difficult it can be to find those right now.
For storage, we’ve selected a Sabrent Rocket 4 Plus 2TB that’s rated for 1400 TBW. That’s enough to create around 800–900 plots, at which point your Chia farm should be doing quite nicely and you’ll be able to afford a replacement SSD. Mass storage comes via a 10TB HDD, because that’s the most economical option — 12TB, 14TB, 16TB, and 18TB drives exist, but they all cost quite a bit more per GB of storage. Plus, you’ll probably want to move your stored plots to a separate machine when a drive is filled, but more on that below.
The other components are basically whatever seems like a reasonably priced option, with an eye toward decent quality. You could probably use a smaller case and motherboard, or a different PSU as well. You’ll also need to add more HDDs — probably a lot more — as you go. This PC should support up to six internal SATA HDDs, though finding space in the case for all the drives might be difficult.
At a rate of 18 plots per day, it would take about 30 days of solid plotting time to fill six 10TB HDDs. Meanwhile, the potential profit from 60TB of Chia plots (546 101.4 GiB plots) is currently… wow. Okay, we don’t really want to get your hopes up, because things are definitely going to change. There will be more netspace, the price could drop, etc. But right now, at this snapshot in time, you’d potentially solve a Chia block every 11 days and earn around $5,900 per month.
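Running the same share math at farm scale shows where that estimate comes from; this is a sketch at the 2.7 EiB netspace snapshot, and we compute XCH only since the dollar figure additionally depends on the volatile XCH price:

```python
# Expected win rate for six full 10TB drives (6 x 91 = 546 plots),
# at the 2.7 EiB netspace snapshot used earlier in the article.
PLOT_BYTES = 101.4 * 2**30
NETSPACE_BYTES = 2.7 * 2**60
BLOCKS_PER_DAY = 4608
XCH_PER_BLOCK = 2

plots = 6 * 91                                   # 546 plots across six drives
share = plots * PLOT_BYTES / NETSPACE_BYTES
days_per_win = 1 / (share * BLOCKS_PER_DAY)
xch_per_month = 30 / days_per_win * XCH_PER_BLOCK

print(round(days_per_win))       # 11 days per block, on average
print(round(xch_per_month, 1))   # ~5.3 XCH per month at this netspace
```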
What’s better than a PC that can do six plots at a time? Naturally, it’s a PC that can do even more concurrent plots! This particular setup has a 10-core CPU, again from Intel because of pricing considerations. We’ve doubled the memory and opted for an enterprise-class 3.84TB SSD this time. That’s sufficient for the desired ten concurrent plots, which will require nearly all of its 3.57 TiB of capacity. We’ve also added a second 10TB HDD, with the idea being that you run two sets of five plots at the same time, with the resulting plots going to different HDDs (so that HDD write speed doesn’t cause a massive delay when plotting finishes for each batch).
Most of the remaining components are the same as before, though we swapped to a larger case for those who want to do all the farming and plotting on one PC. You should be able to put at least 10 HDDs into this case (using the external 5.25-inch bays). At a rate of 30 plots per day, it should take around 30 days again to fill ten 10TB drives (which aren’t included in the price, though we did put in two). As before, no promises on the profitability since it’s virtually guaranteed to be a lot lower than this, but theoretically such a setup should solve a Chia block every seven days and earn up to $9,800 per month.
Chia farming rig from https://t.co/IPJadpARFa 96 terabytes running off a RockPi4 Model C pic.twitter.com/F6iKOMIdIy (January 15, 2021)
Long-term Efficient Chia Farming
So far we’ve focused on the hardware needed to get plotting, which is the more difficult part of Chia farming. Once you’re finished building your farm, though, you’ll probably want to look at ways to efficiently keep the farm online. While it’s possible to build out PCs with dozens of HDDs using PCIe SATA cards and extra power supplies, it’s likely far easier and more efficient to skip all that and go with Raspberry Pi. That’s actually the recommended long-term farming solution from the Chia creators.
It’s not possible to directly connect dozens of SATA drives to a Raspberry Pi, but using USB-to-SATA adapters and USB hubs overcomes that limitation. There’s the added benefit of not overloading the 5V rail on a PSU, since the enclosures should have their own power — or the USB hubs will. And once you’re finished building out a farm, the power costs to keep dozens of hard drives connected and running are relatively trivial — you could probably run 50 HDDs for the same amount of power as a single RTX 3080 mining Ethereum.
How to Create Chia Plots
We’ve mostly glossed over the plot creation process so far. It’s not terribly complicated, but there are some potential pitfalls. One is that the plotting process can’t be stopped and restarted. You don’t want to plot on a laptop that may power off, though theoretically it should be possible to put a system to sleep, wake it back up, and let it pick up where it left off. But if you overfill the temp storage, Chia will crash and you’ll lose all progress on any in-flight plots; since each plot can take six or seven hours, that’s a painful loss.
The first step, naturally, is to install Chia. We’re using Windows, though it’s available on macOS and can be compiled from source code for various Linux platforms. Once installed, you’ll need to let the blockchain sync up before you can get to work on farming. However, you can still create plots before the blockchain gets fully synced — that takes perhaps 10 hours, in our experience, but it will inevitably start to take longer as more blocks get added.
You’ll need to create a new private key to get started — don’t use the above key, as anyone else on the ‘net can just steal any coins you farm. Screenshot and write down your 24 word mnemonic, as that’s the only way you can regain access to your wallet should your PC die. Store this in a safe and secure place!
Next, you’ll see the main page. As noted above, it can take quite a while to sync up, and any information displayed on this screen prior to having the full blockchain won’t be current. For example, the above screenshot was taken when the total netspace was only 1.51 EiB (sometime earlier this week). The Wallets and Farm tabs on the left won’t have anything useful right now, so head over to Plots and get started on the plotting process.
If you’ve previously generated plots, you can import the folder here, but your key has to match the key used to generate them. If you were to somehow gain access to someone else’s plot files, they’d do you no good without the key. Again, don’t lose your key, and don’t share it online! When you’re ready, hit the Add a Plot button.
Here’s where the ‘magic’ happens. We’ve specified six concurrent plots, with a ten minute delay between each plot starting. That should result in roughly a ten minute delay between plots finishing, which should be enough time for the program to move a finished plot to the final directory.
The Temporary Directory will be your big and fast SSD drive. You could try for a smaller delay between plots starting, but six concurrent plots will certainly put a decent load on most SSDs. Note also that Chia says it needs 239 GiB of temporary storage per plot — it’s not clear (to us) if that’s in addition to the 101.4 GiB for the final plot, but the amount of used space definitely fluctuates during the course of plot creation.
Once everything is set, click the Create Plot button at the bottom, and walk away for the next 6–8 hours. If you come back in eight hours, hopefully everything will have finished without incident and you’ll now see active plots on your Chia farm. Queue up another set of six plots (or however many plots your PC can handle concurrently), and done properly you should be able to get around three cycles in per day.
Then you just leave everything online (or migrate full drives to a separate system that uses the same key), and eventually you should manage to solve a block, earn some XCH coin, and then you can hoard that and hope the price goes up, or exchange it for some other cryptocurrency. Happy farming!
Chia Farming: The Bottom Line
Just looking at that income potential should tell you one thing: more people are going to start farming Chia than we’re currently seeing, or the price is going to implode. For the cost of an RTX 3080 off eBay right now, you could break even in just a couple of weeks. Our short take: anyone looking for new hard drives or large SSDs could be in for a world of hurt as Chia causes a storage shortage.
During its first week of trading, Chia started with a price of around $1,600, climbed up to a peak of around $1,900, and then dropped to a minimum value of around $560. But then it started going up again and reached a relatively stable (which isn’t really stable at all) $1,000 or so on Friday. A couple more exchanges have joined the initial trio, with OKex accounting for around 67% of trades right now.
More important than price alone is trading volume. The first day saw only $11 million in trades, but Thursday and Friday each chalked up over 10X as much action. It might be market manipulation, as cryptocurrencies are full of such shenanigans, but anyone who claimed Chia was going to fade away after the first 12 hours of trading clearly missed the boat.
Unlike other cryptocurrencies, Chia will take a lot more effort to bring more plots online, but we’re still seeing an incredibly fast ramp in allocated netspace. It’s currently at 2.7 EiB, which is a 55% increase just in the past four days. We’ll probably see that fast rate of acceleration for at least a few weeks, before things start to calm down and become more linear in nature.
There are still concerns with e-waste and other aspects of any cryptocurrency, but Chia at least does drastically cut back on the power requirements. Maybe that’s only temporary as well, though. 50 HDDs use as much power as a single high-end GPU, but if we end up with 50X as many HDDs farming Chia, we’ll be right back to square one. For the sake of the environment, let’s hope that doesn’t happen.
Valve’s Steam hardware survey was just updated, and one of Nvidia’s GeForce RTX 3060, one of the best graphics cards, showed up on the charts for the first time. Sure, it has a very minute 0.17% market share at the time of this writing, but the card only launched in late February. Every Ampere GPU now at least shows up, which makes us wonder where AMD’s RX 6000 series GPUs are hiding.
It’s no secret that Nvidia is pumping out a far greater volume of Ampere graphics cards than AMD is producing of its competing RDNA2 products. AMD has to split its allotment of TSMC 7nm wafers between the PlayStation 5, Xbox Series X/S, Ryzen 5000 CPUs, and Radeon 6000 GPUs. AMD is contractually obligated to provide a lot of console chips relative to the others, so guess which product line gets the short end of the stick.
AMD CEO Lisa Su is aware of the problem as she has promised to ramp up production significantly for the Radeon RX 6700 XT after it launched. And to be fair, it’s only been out a bit less than two months. However, it seems RX 6700 XT production still isn’t competitive with even the lowest stock of Nvidia’s GPUs.
Looking at eBay’s sold listings for the RTX 3060 and RX 6700 XT, which is what we do in our GPU pricing index, scalpers are selling off literally double the number of 3060s compared to RX 6700 XTs. Just this week alone, 382 RTX 3060s and 182 RX 6700 XTs were sold off to buyers. There were also 734 RTX 3070 cards sold, at an average price of $1,371.
These numbers from eBay should give us a good guess as to the production numbers of AMD and Nvidia worldwide. If these numbers are at all accurate, it helps explain why Steam hasn’t put any RX 6000 series GPU on its hardware survey list.
Steam has never told us how it operates the hardware survey program and what requirements hardware models have to meet to be on the chart. However, looking at the charts, we believe there’s a market share limit in place that all products on the charts have to meet. It may change a bit month to month, but right now the minimum value to show up as a line item is 0.15%. Considering Nvidia’s RTX 3060 has 0.17% and appears to be selling at roughly twice the rate of the RX 6700 XT, that would put AMD’s best RDNA2 share at less than 0.10%.
Again, there’s some conjecture on our part, but this suggests Steam simply needs to see more of AMD’s RX 6000 series graphics cards before any of them breaches the 0.15% market share threshold. The RX 6700 XT has also been selling at nearly triple the rate of the other RX 6000 series cards — since launch, on eBay at least, the RX 6700 XT alone outsold the combined RX 6900 XT, RX 6800 XT, and RX 6800.
This certainly isn’t good news for AMD, though it’s regrettably expected. There were rumors that 80% of the wafers AMD uses at TSMC right now are for the latest consoles, leaving Ryzen and Radeon to share the remaining 20%. Nvidia, meanwhile, only has to produce GPU wafers at Samsung, and while it can’t keep up with demand, it appears to be doing a much better job of shipping cards than AMD. In general, Nvidia looks like it’s outselling AMD GPUs by at least a 5-to-1 ratio.
Hopefully, something will change so that AMD’s more budget-friendly RDNA2 products can be more competitive in production volume with Nvidia’s Ampere GPUs. But while we wait for RX 6700, RX 6600, and RX 6500 products to launch, there are strong indications Nvidia will have RTX 3050 Ti and RTX 3050 laptops this month, and likely RTX 3080 Ti desktop cards as well.
In the past few weeks we saw a number of indicators that Razer is getting ready to launch an AMD-based Blade gaming notebook. In a rather odd-looking Twitter conversation on Thursday night that went into Friday morning, Razer CEO Min-Liang Tan and Frank Azor of AMD suggested the possibility of a collaboration.
“What do you think about making an AMD equipped Razer Blade laptop, @minglintan,” asked Frank Azor, AMD’s gaming chief in a Twitter post.
I dunno guys. What do you all think? Do you all want to see an @AMD equipped @Razer Blade? https://t.co/u6hpVPWFj3 (May 7, 2021)
“I get a ton of requests all the time to make an AMD gaming laptop,” Min-Liang Tan, a co-founder and CEO of Razer, responded shortly after. “What do you guys think?” […] “FWIW I think we could design/engineer a pretty awesome AMD gaming laptop. The current laptops out there don’t really push the limits of what can be done. What would you guys like to see in a Razer Blade with AMD?”
FWIW I think we could design/engineer a pretty awesome @AMD gaming laptop. The current laptops out there don’t really push the limits of what can be done. What would you guys like to see in a @Razer Blade with AMD? (May 7, 2021)
Razer is the last major notebook brand to exclusively offer Intel-based machines, which has frustrated some gamers looking for more choice. Back in 2019, Tan told Tom’s Hardware that “we do have quite a number of customers reaching out to us asking us about AMD[.]”
Last month someone submitted benchmark results of two Razer machines, codenamed PI411, featuring AMD’s unlocked Ryzen 9 5900HX processor accompanied by Nvidia’s GeForce RTX 3060 or RTX 3070 GPU with an 80W TGP. That’s a clear indicator that the PC maker was at least experimenting with AMD’s CPU.
The 3DMark submission itself does not mean that a product is coming to the market as some products do not meet certain goals that manufacturers set. But when high-ranking executives start to talk about new products publicly, it certainly suggests that some plans are being set internally.
Tan is a self-described loose cannon on social media, once telling Tom’s Hardware that “my PR team and my legal team lives on tenterhooks that I’m gonna say something stupid.” But with a partner involved, this seems like it could potentially lead to something real.
Reviews for Capcom’s Resident Evil Village have gone live, and we’re taking the opportunity to look at how the game runs on the best graphics cards. We’re running the PC version on Steam, and while patches and future driver updates could change things a bit, both AMD and Nvidia have provided Game Ready drivers for REV.
This installment in the Resident Evil series adds DirectX Raytracing (DXR) support for AMD’s RX 6000 RDNA2 architecture and Nvidia’s RTX cards (both the Ampere and Turing architectures). AMD is promoting Resident Evil Village, and it’s on the latest-gen consoles as well, so there’s no support for Nvidia’s DLSS technology. We’ll look at image quality in a moment, but first let’s hit the official system requirements.
Capcom notes that in either case, the game targets 1080p at 60 fps, using the “Prioritize Performance” and presumably “Recommended” presets. Capcom does state that the framerate “might drop in graphics-intensive scenes,” but most mid-range and higher GPUs should be okay. We didn’t check lower settings, but we can confirm that 60 fps at 1080p will certainly be within reach of a lot of graphics cards.
The main pain point for anyone running a lesser graphics card will be VRAM, particularly at higher resolutions. With AMD pushing 12GB and 16GB on its latest RX 6000-series cards, it’s not too surprising that the Max preset uses 12GB VRAM. It’s possible to run 1080p Max on a 6GB card, and 1440p Max on an 8GB card, but 4K Max definitely wants more than 8GB VRAM — we experienced inconsistent frametimes in our testing. We’ve omitted results on cards where performance wasn’t reliable in the charts.
Anyway, let’s hit the benchmarks. Due to time constraints, we’re not going to run every GPU under the sun in these benchmarks, but will instead focus on the latest gen GPUs, plus the top and bottom RTX 20-series GPUs and a few others as we see fit. We used the ‘Max’ preset, with and without ray tracing, and most of the cards we tested broke 60 fps. Turning on ray tracing disables Ambient Occlusion, because that’s handled by the ray-traced GI and Reflection options, but every other setting is on the highest quality option (which means variable-rate shading is off for our testing).
Our test system consists of a Core i9-9900K CPU, 32GB of RAM and a 2TB SSD — the same PC we've been using for our graphics card and gaming benchmarks for about two years now, because it continues to work well. With the current graphics card shortages, acquiring a new high-end GPU will be difficult — our GPU pricing index covers the details. Hopefully, you already have a capable GPU from pre-2021, back in the halcyon days when graphics cards were available at and often below MSRP. [Wistful sigh]
Granted, these are mostly high-end cards, but even the RTX 2060 still posted an impressive 114 fps in our test sequence — and it also nearly managed 60 fps with ray tracing enabled (see below). Everything else runs more than fast enough as well, with the old GTX 1070 bringing up the caboose with a still more than acceptable 85 fps. Based on what we've seen with these GPUs and other games, it's a safe bet that cards like the GTX 1660, RX 5600 XT, and anything faster than those will do just fine in Resident Evil Village.
AMD’s RDNA2 cards all run smack into an apparent CPU limit at around 195 fps for our test sequence, while Nvidia’s fastest GPUs (2080 Ti and above) end up with a lower 177 fps limit. At 1080p, VRAM doesn’t appear to matter too much, provided your GPU has at least 6GB.
Turning on ray tracing drops performance, but the drop isn’t too painful on many of the cards. Actually, that’s not quite true — the penalty for DXR depends greatly on your GPU. The RTX 3090 only lost about 13% of its performance, and the RTX 3080 performance dropped by 20%. AMD’s RX 6900 XT and RX 6800 XT both lost about 30-35% of their non-RT performance, while the RTX 2080 Ti, RX 6800, RTX 3070, RTX 3060 Ti, and RTX 3060 plummeted by 40–45%. Meanwhile, the RX 6700 XT ended up running at less than half its non-DXR rate, and the RTX 2060 also saw performance chopped in half.
Memory and memory bandwidth seem to be major factors with ray tracing enabled, and the 8GB and lower cards were hit particularly hard. Turning down a few settings should help a lot, but for these initial results we wanted to focus on maxed-out graphics quality. Let us know in the comments what other tests you’d like to see us run.
The performance trends we saw at 1080p become more pronounced at higher resolutions. At 1440p Max, more VRAM and memory bandwidth definitely helped. The RX 6900 XT, RX 6800 XT, RTX 3090, and RTX 3080 only lost a few fps in performance compared to 1080p when running without DXR enabled, and the RX 6800 dipped by 10%. All of the other GPUs drop by around 20–30%, but the 6GB RTX 2060 plummeted by 55%. Only the RTX 2060 and GTX 1070 failed to average 60 fps or more.
1440p and ray tracing with max settings really needs more than 8GB VRAM — which probably explains why the Ray Tracing preset (which we didn’t use) opts for modest settings everywhere else. Anyway, the RTX 2060, 3060 Ti, and 3070 all started having problems at 1440p with DXR, which you can see in the numbers. Some runs were much better than we show here, others much worse, and after repeating each test a bunch of times, we still aren’t confident those three cards will consistently deliver a good experience without further tweaking the graphics settings.
On the other hand, cards with 10GB or more VRAM don’t show nearly the drop that we saw without ray tracing when moving from 1080p to 1440p. The RTX 3060 only lost 18% of its 1080p performance, and chugs along happily at just shy of 60 fps. The higher-end AMD and Nvidia cards were all around the 15% drop mark as well.
But enough dawdling. Let’s just kill everything with some 4K testing…
Well, ‘kill’ is probably too strong of a word. Without ray tracing, most of the GPUs we tested still broke 60 fps, and those that came up short fell well short. The RTX 3060 is still generally playable, but Resident Evil Village appears to expect 30 fps or more, as dropping below that tends to cause the game to slow down. The RX 5700 XT should suffice in a pinch, even though it lost 67% of its 1440p performance, but the 1070 and 2060 would need lower settings to even take a crack at 4K.
Even with DXR, the RTX 2080 Ti and RX 6800 and above continue to deliver 60 fps or more. The RTX 3060 also still manages a playable 41 fps — this isn’t a twitch action game, so sub-60 frame rates aren’t the end of the world. Of course, we’re not showing the cards that dropped into the teens or worse — which is basically all the RTX cards with 8GB or less VRAM.
The point isn’t how badly some of the cards did at 4K Max (with or without DXR), but rather how fast a lot of the cards still remained. The DXR switch often imposed a massive performance hit at 1080p, but at 4K the Nvidia cards with at least 10GB VRAM only lost about 15% of their non-DXR performance. AMD’s GPUs took a larger 25% hit, but it was very consistent across all four GPUs.
Resident Evil Village Graphics Settings
[Gallery: 8 images]
You can see the various advanced settings available in the above gallery. Besides the usual resolution, refresh rate, vsync, and scaling options, there are 18 individual graphics settings, plus two more settings for ray tracing. Screen space reflections, volumetric lighting, and shadow quality are likely to have the biggest impact on performance, though the sum of the others can add up as well. For anyone with a reasonably high-end GPU, though, you should be able to play at close to max quality (minus ray tracing if you don't have an appropriate GPU, naturally).
But how does the game look? Capturing screenshots with the various settings on and off is a pain, since there are only scattered save points (typewriters), and some settings appear to require a restart to take effect. Instead of worrying about all of the settings, let’s just look at how ray tracing improves things.
Resident Evil Village Image Quality: Ray Tracing On / Off
[Gallery: 18 images]
Or doesn’t, I guess. Seriously, the effect is subtle at the best of times, and in many scenes, I couldn’t even tell you whether RT was on or off. If there’s a strong light source, it can make a difference. Sometimes a window or glass surface will change with RT enabled, but even then (e.g., in the images of the truck and van) it’s not always clearly better.
The above gallery should be ordered with RT off and RT on for each pair of images. You can click (on a PC) to get the full images, which I’ve compressed to JPGs (and they look visually almost the same as the original PNG files). Indoor areas tend to show the subtle lighting effects more than outside, but unless a patch dramatically changes the way RT looks, Resident Evil Village will be another entry in the growing list of ray tracing games where you could skip it and not really miss anything.
Resident Evil Village will release to the public on May 7. So far, reviews are quite favorable, and if you enjoyed Resident Evil 7, it’s an easy recommendation. Just don’t go in expecting ray tracing to make a big difference in the way the game looks or feels.
Dr Lisa Su, CEO of AMD, will deliver a keynote address at an all virtual Computex 2021. The keynote will be titled “AMD Accelerating – The High-Performance Computing Ecosystem” and will cover AMD’s recent consumer innovations, including CPUs and GPUs for PC enthusiasts and gamers.
“The past year has shown us the important role high-performance computing plays in our daily lives — from the way we work to the way we learn and play,” said Dr Lisa Su. “At this year’s Computex, AMD will share how we accelerate innovation with our ecosystem partners to deliver a leadership product portfolio.”
During the keynote, AMD will share its vision for the future of computing, including the company's high-performance computing and graphics solutions aimed at enthusiasts and gamers. Since Computex is a trade show that mostly covers consumer technology, AMD does not promise to discuss its datacenter and supercomputer HPC innovations, which include EPYC processors as well as Instinct compute GPUs, at least not in depth, despite the title of the keynote.
It remains to be seen whether AMD formally unveils its new Radeon RX 6000-series for notebooks and desktops at Computex, but there is certainly an outside chance. Furthermore, it is likely to give a glimpse of what to expect from its Ryzen Threadripper processors for high-end desktops.
The Computex 2021 keynote will be delivered on Tuesday, June 1, at 10:00 AM Taipei time (May 31, 10:00 PM Eastern Time). Both Taitra, the organizer of the trade show, and AMD have yet to announce where to watch Dr. Su's keynote address.
AMD’s Ryzen 5000 (Cezanne) desktop APUs will make their debut in OEM and pre-built systems before hitting the retail market by the end of this year. However, the hexa-core Zen 3 APU (via Tum_Apisak) is already showing up in multiple benchmarks around the Internet.
The Ryzen 5 5600G comes equipped with six Zen 3 cores with simultaneous multithreading (SMT) and 16MB of L3 cache. The 7nm APU operates at a 3.9 GHz base clock and a 4.4 GHz boost clock within a 65W TDP limit. The chip also leverages seven Vega Compute Units (CUs) that are clocked at 1,900 MHz.
The Core i5-11400, on the other hand, is part of Intel’s latest 11th Generation Rocket Lake lineup. Intel’s 14nm chip features six Cypress Cove cores with Hyper-Threading and 12MB of L3 cache. The hexa-core processor, which also conforms to a 65W TDP, sports a 2.6 GHz base clock and 4.4 GHz boost clock. On the graphics side, the Core i5-11400 rolls with the Intel UHD Graphics 730 engine with 24 Execution Units (EUs) with clock speeds between 350 MHz and 1.3 GHz.
The results were mixed, which didn't come as a surprise. They probably originated from different systems with different hardware, so one result might have an edge over another that we don't know about. Furthermore, the available benchmarks aren't on our preferred list, so we should take the results with a pinch of salt.
AMD Ryzen 5 5600G Benchmarks
| Processor | Geekbench 5 Single-Core | Geekbench 5 Multi-Core | UserBenchmark 1-Core | UserBenchmark 8-Core | CPU-Z Single-Thread | CPU-Z Multi-Thread |
|---|---|---|---|---|---|---|
| Ryzen 5 5600G | 1,508 | 7,455 | 149 | 889 | 596 | 4,537 |
| Core i5-11400 | 1,593* | 7,704* | 161 | 941 | 544 | 4,012 |

*Our own results.
Starting with Geekbench 5, the Core i5-11400 outperformed the Ryzen 5 5600G by up to 5.6% in the single-core test and 3.3% in the multi-core test. The Core i5-11400 also prevailed over the Ryzen 5 5600G in UserBenchmark, where the Rocket Lake part delivered up to 8.1% and 5.8% higher single- and multi-core performance, respectively.
The Ryzen 5 5600G didn't go home empty-handed, either. The Zen 3 APU offered up to 9.7% and 13.1% higher single- and multi-core performance, respectively, in comparison to the Core i5-11400 in CPU-Z.
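For reference, the percentage leads quoted above fall straight out of the table's scores. This small sketch is our own illustration of the arithmetic, not part of any benchmark tooling:

```python
# Sketch of how the percentage leads fall out of the raw scores:
# lead = faster score / slower score - 1.

def lead_pct(winner: float, loser: float) -> float:
    """Winner's advantage over loser, as a percentage."""
    return (winner / loser - 1) * 100

# Core i5-11400 leads in Geekbench 5 and UserBenchmark...
print(f"GB5 single: {lead_pct(1593, 1508):.1f}%")   # ~5.6%
print(f"GB5 multi:  {lead_pct(7704, 7455):.1f}%")   # ~3.3%
print(f"UB 1-core:  {lead_pct(161, 149):.1f}%")     # ~8.1%
print(f"UB 8-core:  {lead_pct(941, 889):.1f}%")     # ~5.8%

# ...while the Ryzen 5 5600G leads in CPU-Z.
print(f"CPU-Z single: {lead_pct(596, 544):.1f}%")   # ~9.6%
print(f"CPU-Z multi:  {lead_pct(4537, 4012):.1f}%") # ~13.1%
```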
It goes to show that while Zen 3 is a solid microarchitecture, Intel's Cypress Cove isn't a pushover, either. The Ryzen 5 5600G has a 1.3 GHz higher base clock than the Core i5-11400, but the latter still managed to overcome the Zen 3 APU.
So far, the benchmarks only show the processors' computing performance. It's unlikely that the Core i5-11400 will beat the Ryzen 5 5600G in iGPU gaming performance, which is where the 7nm APU excels. After all, consumers pick up APUs for their brawny integrated graphics. The Ryzen 5 5600G will make its way to the DIY market later this year, so we'll get our chance to put the Zen 3 chip through its paces in a proper review. The Core i5-11400, which retails for $188.99, is the interim winner until then.
Corsair’s K70 RGB TKL is an unashamedly gaming-focused keyboard. For one thing, it’s only available with the kinds of switches people normally recommend for playing games, while other features like low input latency and a dedicated “tournament switch” for esports clearly have gamers in mind.
At a price of $139.99 (£139.99), these features are coming at a premium, and they’re overkill for anyone not planning on competing at the next Dota 2 International. But the K70 RGB TKL is a complete package with lots of quality-of-life features for non-gamers. It attaches to your PC with a detachable USB-C cable (a first for Corsair) and features customizable per-key RGB backlighting, dedicated media keys, and a roller wheel for volume control.
At this price, the result is a great gaming-focused keyboard but only a good general purpose keyboard.
In case the specs didn’t tip you off beforehand, just looking at the Corsair K70 RGB TKL should tell you everything you need to know about its target audience. The bold, squared-off font on its keycaps is peak gamer, and the case itself has a minimalist, angular design. The only branding you get is a small Corsair logo on the keyboard’s forehead, which illuminates along with the rest of the keyboard’s lighting.
Its design might not be for everyone, but construction quality is good here. The K70’s keycaps are made of hard-wearing PBT plastic, and their legends are double-shot, meaning they let each switch’s lighting shine through and will never rub off. Corsair uses a standard keyboard layout, so you shouldn’t have any problems finding replacement keycaps in the right sizes.
As its name implies, the K70 RGB TKL is a tenkeyless board (hence the “TKL” in its name), meaning you don’t get a numpad to the right of the arrow keys. This makes perfect sense on a gaming keyboard, where you’ll typically spend most of your time with your left hand on the WASD keys and your right hand on a mouse. Unless you really need it for data entry, a numpad just gets in the way. Available layouts include US ANSI, UK ISO (which I’m using), and other European layouts, but there are no Mac-specific keys available.
Although it’s not particularly wide, Corsair’s keyboard has a bit of a forehead to house its media keys and volume roller. I generally prefer this simple approach, rather than having to access media controls through a combination of keypresses, even if it adds a little more bulk to the board. Build quality is otherwise solid; the keyboard wouldn’t flex, no matter how much I tried to bend it.
The keyboard’s configuration options are aimed squarely at gamers. There’s no option for tactile Cherry MX Browns or clicky Cherry MX Blue switches here. Instead, your options are classic gamer Cherry MX Reds, competitive gamer Cherry MX Speed Silvers, or, if you’re in Korea, considerate gamer Cherry MX Silent Reds. My review board came equipped with standard Red switches. The switches aren’t hot-swappable, so you’re going to have to use desoldering tools and then a soldering iron if you want to try out any other switch types.
The nice thing about buying from an established company like Corsair is that its companion software for configuring the keyboard’s layout and lighting effects is slick and polished. iCue is available for Mac and Windows and offers a truly dizzying amount of control over the K70 RGB TKL. You can remap the keyboard’s keys however you like and get access to a plethora of additional lighting effects. The controls are granular and get complicated fast, so I ended up ignoring them and just controlled the keyboard’s lighting from the board itself.
As well as handling lighting controls, iCue can also handle key remapping if you want to swap the layout of your keyboard around. It’s not as necessary a feature on a TKL board as on a smaller board with a more limited selection of keys, but it’s a useful inclusion if you want to tinker.
All of these are useful features regardless of what you want to use the Corsair K70 RGB TKL for. But its more unique features are gaming-focused. First up is a “tournament switch” on the top of the board, which disables any custom macros you’ve set up and switches the backlighting to a single less-distracting color. (You can customize which color using iCue.) It’s the kind of feature I could see being helpful if you’re simultaneously big into online gaming and also use a ton of macros. That’s a pretty slim Venn diagram of users, but thankfully, the switch is completely out of the way otherwise.
The other gaming feature here is an advertised polling rate of 8,000Hz, which is eight times higher than the 1,000Hz rate used by most keyboards. In theory, this means the keyboard's input lag, or the time between you pressing a key and the signal being transmitted to your PC, is as minimal as possible, presumably making all the difference in a high-speed gaming situation. Corsair tells me this brings down median latency to under a quarter of a millisecond, compared to 2 milliseconds and up with a 1,000Hz keyboard. You enable the 8,000Hz polling rate from within Corsair's software. It'll warn you that the higher polling rate uses more system resources, but I didn't notice any impact on performance on my Ryzen 5 3600-equipped gaming PC, and Corsair tells me this should be the same for anyone using a gaming machine built in the last three years.
We’ve seen a similar trend with gaming mice, and Linus Tech Tips did a great analysis of what that actually means for performance. But the real-world difference it makes is minor, and I struggled to feel any difference in responsiveness when switching between playing Overwatch on the Corsair K70 RGB TKL and a regular 1,000Hz Filco office keyboard when playing on a 100Hz monitor. I have no reason to doubt Corsair’s low latency claims, but I think it’s the kind of improvement that only a small number of players will actually be able to notice.
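For a sense of scale, the latency difference between polling rates is simple arithmetic. This sketch is our own simplified model (ignoring switch debounce, matrix scanning, and USB overhead), showing why 8,000Hz only saves fractions of a millisecond:

```python
# Rough model: a key state change waits, on average, half a polling
# interval before the host sees it. Real latency adds debounce and
# USB overhead on top of this.

def polling_interval_ms(rate_hz: float) -> float:
    return 1000.0 / rate_hz

def mean_wait_ms(rate_hz: float) -> float:
    return polling_interval_ms(rate_hz) / 2

for rate in (1000, 8000):
    print(f"{rate:>5} Hz: interval {polling_interval_ms(rate):.3f} ms, "
          f"mean wait {mean_wait_ms(rate):.4f} ms")
```

At 8,000Hz the interval is 0.125 ms, so even a worst-case full-interval wait sits under Corsair's quoted quarter-millisecond median; at 1,000Hz the same worst case is a full millisecond.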
The Corsair K70 RGB TKL is being sold by a gaming-oriented brand as a gaming-oriented keyboard with gaming-oriented switches, so it shouldn’t come as a surprise that it doesn’t offer the best typing experience. The typing feel just doesn’t match the crispness of a board like the similarly priced Filco Majestouch 2. Instead, bottoming out each keypress feels slightly dulled or softened, and since this keyboard is only available with linear switches, you’re all but guaranteed to bottom out each keypress while you’re typing.
I’ll give credit to Corsair for the K70’s spacebar stabilizer (the mechanism installed under the larger keycap to stop it from wobbling). While this can sometimes sound rattly on other keyboards, there’s no such problem here. But if you listen to the typing sample above, you’ll hear that other stabilized keys like Backspace and Enter have more rattle. Ultimately, the overall typing experience on the K70 RGB TKL is only good, never great.
At this point, Corsair knows what it’s doing when it comes to mechanical keyboards for gaming. The K70 RGB TKL comes equipped with all of the quality-of-life features that are expected out of a mainstream keyboard at this point: nice durable keycaps, media keys and volume dial, and a detachable USB-C cable. Some of its more gaming-focused features are borderline overkill, but they don’t get in the way.
At its core, though, the K70 RGB TKL is a keyboard designed for gamers, and there are better keyboards out there if you’re only an occasional gamer. You can get a better range of switches elsewhere, as well as a more satisfying typing experience overall. That makes the Corsair K70 RGB TKL a great option for a gaming keyboard, but only a good keyboard overall.
After about a month of preparation, following the initial mainnet launch, cryptocurrency Chia coin (XCH) has officially started trading — which means it’s possibly preparing to suck up all of the best SSDs like Ethereum (see how to mine Ethereum) has been gobbling up the best graphics cards. Early Chia calculators suggested an estimated starting price of $20 per XCH. That was way off, but with the initial fervor and hype subsiding, we’re ready to look at where things stand and where they might stabilize.
To recap, Chia is a novel approach to cryptocurrencies, ditching the Proof of Work hashing used by most coins (i.e., Bitcoin, Ethereum, Litecoin, Dogecoin, and others) and instead opting for a new Proof of Time and Space algorithm. Using storage capacity helps reduce the potential power footprint, obviously at the cost of storage. And let's be clear: The amount of storage space (aka netspace) already used by the Chia network is astonishing. It passed 1 EiB (exbibyte, or 2^60 bytes) of storage on April 28, and just a few days later it's approaching the 2 EiB mark. Where will it stop? That's the $21 billion question.
All of that space goes to storing plots of Chia, which are basically massive 101.4GiB Bingo cards. Each online plot has an equal chance, based on the total netspace, of ‘winning’ the block solution. This occurs at a rate of approximately 32 blocks per 10 minutes, with 2 XCH as the reward per block. Right now, assuming every Chia plot was stored on a 10TB HDD (which obviously isn’t accurate, but roll with it for a moment), that would require about 200,000 HDDs worth of Chia farms.
Assuming 5W per HDD, since they’re just sitting idle for the most part, that’s potentially 1 MW of power use. That might sound like a lot, and it is — about 8.8 GWh per year — but it pales in comparison to the amount of power going into Bitcoin and Ethereum. Ethereum, as an example, currently uses an estimated 41.3 TWh per year of power because it relies primarily on the best mining GPUs, while Bitcoin uses 109.7 TWh per year. That’s around 4,700 and 12,500 times more power than Chia at present, respectively. Of course, Ethereum and Bitcoin are also far more valuable than Chia at current exchange rates, and Chia has a long way to go to prove itself a viable cryptocoin.
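The power comparison above is back-of-the-envelope arithmetic, and a short sketch makes the ratios easy to verify. The HDD count and per-drive wattage are the article's rough assumptions, not measured figures:

```python
# Back-of-the-envelope check of the Chia vs. Proof-of-Work power figures.
# Inputs are assumptions: ~200,000 idle 10TB HDDs at ~5W each.
HDD_COUNT = 200_000
WATTS_PER_HDD = 5
HOURS_PER_YEAR = 24 * 365

chia_mw = HDD_COUNT * WATTS_PER_HDD / 1e6    # total draw in megawatts
chia_gwh = chia_mw * HOURS_PER_YEAR / 1000   # annual use in GWh (~8.8)

ETH_TWH = 41.3    # estimated annual consumption (external estimates)
BTC_TWH = 109.7

print(f"Chia: {chia_mw:.0f} MW, ~{chia_gwh:.1f} GWh/year")
print(f"Ethereum: ~{ETH_TWH * 1000 / chia_gwh:,.0f}x Chia")
print(f"Bitcoin:  ~{BTC_TWH * 1000 / chia_gwh:,.0f}x Chia")
```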
Back to the launch, though. Only a few cryptocurrency exchanges have picked up XCH trading so far, and none of them are what we would call major exchanges. Considering how many things have gone wrong in the past (like the Turkish exchange where the founder appears to have walked off with $2 billion in Bitcoins), discretion is definitely the best approach. Initially, according to Coinmarketcap, Gate.io accounted for around 65% of transactions, MXC.com was around 34.5%, and Bibox made up the remaining 0.5%. Since then, MXC and Gate.io swapped places, with MXC now sitting at 64% of all transactions.
By way of reference, Gate.io only accounts for around 0.21% of all Bitcoin transactions, and MXC doesn’t even show up on Coinmarketcap’s list of the top 500 BTC exchange pairs. So, we’re talking about small-time trading right now, on riskier platforms, with a total trading volume of around $27 million in the first day. That might sound like a lot, but it’s only a fraction of Bitcoin’s $60 billion or so in daily trade volume.
Chia started at an initial trading price of nearly $1,600 per XCH, peaked in early trading at around $1,800, and has been on a steady downward slope since then. At present, the price seems to mostly have flattened out (at least temporarily) at around $700. It could certainly end up going a lot lower, however, so we wouldn't recommend betting the farm on Chia, but even at $100 per XCH a lot of miners/crypto-farmers are likely to jump on the bandwagon.
As with many cryptocoins, Chia is searching for equilibrium right now. 10TB of storage dedicated to Chia plots would be enough for a farm of 100 plots and should in theory account for 0.0005% of the netspace. That would mean about 0.046 XCH per day of potential farming, except you’re flying solo (proper Chia pools don’t exist yet), so it would take on average 43 days to farm a block — and that’s assuming netspace doesn’t continue to increase, which it will. But if you could bring in a steady stream of 0.04 XCH per day, even if we lowball things with a value of $100, that’s $4-$5 per day, from a 10TB HDD that only costs about $250. Scale that up to ten drives and you’d be looking at $45 per day, albeit with returns trending downward over time.
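The solo-farming expectations above can be sketched in a few lines. Plot size, block rate, and block reward are Chia's published parameters; the 2 EiB netspace is an assumed snapshot that will be stale quickly, so treat the output as illustrative only:

```python
# Illustrative sketch of expected solo-farming returns for a 100-plot farm.
PLOT_GIB = 101.4              # size of a standard k=32 plot, in GiB
GIB_PER_EIB = 2**30           # 1 EiB = 2^30 GiB
BLOCKS_PER_DAY = 32 * 6 * 24  # ~32 blocks per 10 minutes
XCH_PER_BLOCK = 2

plots = 100                   # roughly one 10TB HDD's worth
netspace_eib = 2              # assumed netspace snapshot

share = plots * PLOT_GIB / (netspace_eib * GIB_PER_EIB)
xch_per_day = share * BLOCKS_PER_DAY * XCH_PER_BLOCK
days_per_block = 1 / (share * BLOCKS_PER_DAY)

print(f"Netspace share: {share:.6%}")            # ~0.0005%
print(f"Expected XCH/day: {xch_per_day:.3f}")    # ~0.04 XCH
print(f"Avg days per block win: {days_per_block:.0f}")
```

With these inputs the average wait works out to roughly 45 days; the 43-day figure above implies a slightly smaller netspace snapshot.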
GPU miners have paid a lot more than that for similar returns, and the power and complexity of running lots of GPUs (or ASICs) ends up being far higher than running a Chia farm. In fact, the recommended approach to Chia farming is to get the plots set up using a high-end PC, and then connect all the storage to a Raspberry Pi afterwards for low-power farming. You could run around 50 10TB HDDs for the same amount of power as a single RTX 3080 mining Ethereum.
It's important to note that it takes a decent amount of time to get a Chia farm up and running. If you have a server with a 64-core EPYC processor, 256GB of RAM, and at least 16TB of fast SSD storage, you could potentially create up to 64 plots at a time, at a rate of around six (give or take) hours per group of plots. That's enough to create 256 plots per day, filling over 2.5 10TB HDDs with data. For a more typical PC, with an 8-core CPU (e.g., Ryzen 7 5800X or Core i9-11900K), 32GB of RAM, and an enterprise SSD with at least 2.4TB of storage, doing eight concurrent plots should be feasible. The higher clocks on consumer CPUs probably mean you could do a group of plots in four hours, which means 48 plots per day occupying about half of a 10TB HDD. That's still a relatively fast ramp to a bunch of drives running a Chia farm, though.
In either case, the potential returns even with a price of $100 per XCH amount to hundreds of dollars per month. Obviously, that’s way too high of a return rate, so things will continue to change. Keep in mind that where a GPU can cost $15-$20 in power per month (depending on the price of electricity), a hard drive running 24/7 will only cost $0.35. So what’s a reasonable rate of return for filling up a hard drive or SSD and letting it sit, farming Chia? If we target $20 per month for a $250 10TB HDD, then either Chia’s netspace needs to balloon to around 60EiB, or the price needs to drop to around $16 per XCH — or more likely some combination of more netspace and lower prices.
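To put a number on that equilibrium, here's a hedged sketch of the break-even price implied by a $20/month target, holding netspace fixed at an assumed 2 EiB snapshot. All inputs are assumptions, not measured data:

```python
# Hypothetical break-even estimate: what XCH price yields ~$20/month
# from a 100-plot (~10TB) farm at a fixed netspace?
PLOT_GIB = 101.4
GIB_PER_EIB = 2**30
NETWORK_XCH_PER_DAY = 32 * 6 * 24 * 2   # blocks per day * 2 XCH reward

def monthly_usd(plots: int, netspace_eib: float, price_usd: float) -> float:
    share = plots * PLOT_GIB / (netspace_eib * GIB_PER_EIB)
    return share * NETWORK_XCH_PER_DAY * 30 * price_usd

target = 20.0
breakeven = target / monthly_usd(100, 2, 1.0)
print(f"Break-even XCH price: ${breakeven:.2f}")
```

The result lands in the mid-teens, in the same ballpark as the ~$16 figure above; equivalently, at a fixed price, netspace has to grow many times over before returns fall to that target.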
In the meantime, don't be surprised if storage prices shoot up. It was already starting to happen, but as with the GPU and other component shortages, it might be set to get a lot worse.
Gigabyte has announced a pair of pre-built desktop gaming PCs, the Aorus Model X and Aorus Model S, both featuring top-of-the-range Intel and AMD CPUs alongside Nvidia RTX GPUs. What’s more, while the Model X is a standard-looking PC tower, the Model S comes in a 14L low-profile case that bears a distinct resemblance to Microsoft’s Xbox Series X.
[Gallery: 20 images]
Across the board, specs are high, with the Intel models sporting Rocket Lake i9 CPUs with eight cores and 16 threads that turbo up to 5.3 GHz. AMD fans get the 12-core/24-thread Ryzen 9 5900X, which boosts up to 4.8 GHz and has 64MB of L3 cache, compared to 16MB on the Intel chip. RAM is also fast, with the Model X fitted with 16GB of 4400 MHz DDR4 (3600 MHz on the AMD model), while the Model S gets 32GB of 4000 MHz chips (again 3600 MHz if you choose AMD). To back all this up, the GPUs in both models are RTX 3080s.
Built on the Intel Z590 and AMD X570 / B550 chipsets, there's also plenty of networking and I/O available, with Wi-Fi 6 available on all models. The Ethernet ports are both fast models – with 10GbE LAN on the Intel Model X (plus a secondary 2.5GbE port), 2.5GbE on the AMD Model X (with a secondary 1GbE port) and 2.5GbE on both flavours of Model S. USB ports are plentiful – especially on the 58L Model X, which supports the Thunderbolt 4 standard in its Intel incarnation – and SSDs are fast, with each tower featuring a 1TB PCIe Gen 4 model and a 2TB PCIe 3.0 drive.
And while the X is cooled by a 360mm AIO liquid cooler putting out 40 decibels (dB), the Model S features an Xbox Series X-like cooling system that draws air in at the bottom of the tower and vents it from the top across a thermal fin. Gigabyte claims this system puts out less than 37 dB while gaming. That's equivalent to, according to the American Academy of Audiology, something between a whisper and a quiet library.
At the time of writing, neither system seemed to be available for purchase.
The first benchmark results of Intel's yet-to-be-announced eight-core Core i9-11950H ‘Tiger Lake-H' processor for gaming notebooks have been published in Primate Labs' Geekbench 5 database. The new unit expectedly beats Intel's own quad-core Core i7-1185G7 CPU in both single- and multi-thread workloads, but when it comes to comparison with other rivals, its results are not that obvious.
Intel's Core i9-11950H processor had never been revealed in leaks, so it was surprising to see benchmark results of HP's ZBook Studio 15.6-inch G8 laptop based on this CPU in Geekbench 5. The chip has eight cores based on the Willow Cove microarchitecture running at 2.60 GHz – 4.90 GHz, and it is equipped with a 24MB cache, a dual-channel DDR4-3200 memory controller, and a basic UHD Graphics core featuring the Xe architecture.
In Geekbench 5, the ZBook Studio 15.6-inch G8 powered by the Core i9-11950H scored 1,365 points in the single-thread benchmark and 6,266 points in the multi-thread benchmark. The system operated in the ‘HP Optimized (Modern Standby)' power plan, though we do not know the maximum TDP that is supported in this mode.
| CPU | Single-Core | Multi-Core | Cores/Threads, uArch | Cache | Clocks | TDP | Link |
|---|---|---|---|---|---|---|---|
| AMD Ryzen 9 5980HS | 1,540 | 8,225 | 8C/16T, Zen 3 | 16MB | 3.30 ~ 4.53 GHz | 35W | https://browser.geekbench.com/v5/cpu/6027200 |
| AMD Ryzen 9 4900H | 1,230 | 7,125 | 8C/16T, Zen 2 | 8MB | 3.30 ~ 4.44 GHz | 35~54W | https://browser.geekbench.com/v5/cpu/6028856 |
| Intel Core i9-11900 | 1,715 | 10,565 | 8C/16T, Cypress Cove | 16MB | 2.50 ~ 5.20 GHz | 65W | https://browser.geekbench.com/v5/cpu/7485886 |
| Intel Core i9-11950H | 1,365 | 6,266 | 8C/16T, Willow Cove | 24MB | 2.60 ~ 4.90 GHz | ? | https://browser.geekbench.com/v5/cpu/7670672 |
| Intel Core i9-10885H | 1,335 | 7,900 | 8C/16T, Skylake | 16MB | 2.40 ~ 5.08 GHz | 45W | https://browser.geekbench.com/v5/cpu/6006773 |
| Intel Core i7-1185G7 | 1,550 | 5,600 | 4C/8T, Willow Cove | 12MB | 3.00 ~ 4.80 GHz | 28W | https://browser.geekbench.com/v5/cpu/5644005 |
| Apple M1 | 1,710 | 7,660 | 4C Firestorm + 4C Icestorm | 12MB + 4MB | 3.20 GHz | 20~24W | https://browser.geekbench.com/v5/cpu/6038094 |
The upcoming Core i9-11950H processor easily defeats its quad-core Core i7-1185G7 sibling for mainstream and thin-and-light laptops in both single-thread and multi-thread workloads. This is not particularly surprising, as the Core i7-1185G7 has a TDP of just 28W. Meanwhile, the Core i9-11950H is behind AMD's Ryzen 9 5980HS as well as Apple's M1 in all kinds of workloads. Furthermore, its multi-thread score is behind that of its predecessor, the Core i9-10885H.
Perhaps the unimpressive results of the Core i9-11950H in Geekbench 5 are due to a preliminary BIOS, early drivers, wrong settings, or some other anomalies. In short, since the CPU does not officially exist, its test results should be taken with a grain of salt. Yet, at this point, the product does not look too good in this benchmark.
AMD Zen 5 ‘Strix Point' APU to be based on 3nm node and hybrid core architecture
João Silva 2 days ago APU, Featured Tech News
We’re still some ways off from seeing AMD launch its Zen 5 architecture, but nonetheless, the rumour mill is churning out some early information. Apparently, AMD’s Zen 5 APUs, codenamed ‘Strix Point’, will be based on 3nm process technology and feature the emerging hybrid core architecture.
According to MoePC (via @Avery78), the Zen 5 APUs will reportedly belong to the Ryzen 8000 series and feature a hybrid architecture with up to 8x big (high-performance) cores and 4x small (high-efficiency) cores, which should total 20 threads (assuming SMT on the big cores only).
The Strix Point APUs are scheduled to release in 2024, and their iGPU performance targets have already been set, though specific details on this were not shared. Besides the jump to a hybrid core architecture, the Zen 5-based APUs may also bring a new memory subsystem with significant changes. It's unclear if these changes will also be seen in Zen 5-based CPUs.
The report also notes that AMD is no longer going forward with plans for the recently rumoured ‘Warhol’ series of CPUs, possibly due to the ongoing chip shortage. If Warhol is actually out of the picture, then a Zen 3 refresh would be the Ryzen 6000 series, Zen 4 would become the Ryzen 7000 series, and Zen 5 the Ryzen 8000 series.
KitGuru says: With Strix Point APUs allegedly releasing in 2024, we are still far from seeing something official from AMD. In a 3-year span, much can change, especially with the current chip situation that we are facing. What do you expect from Zen 5-based chips?
There are numerous causes of the ongoing computer component shortage, one of which is the limited availability of materials called ABF substrates. The good news is that companies like AMD and Intel are taking the problem seriously and are investing in packaging facilities and substrate production.
A wide variety of chips from inexpensive entry-level processors for client PCs to complex high-end CPUs for servers use laminated packaging. Usually, chips that use laminated packaging also use IC substrates featuring insulating Ajinomoto build-up films (ABF), which are made by just one company, Ajinomoto Fine-Techno Co.
While there are dozens of companies that package chips using ABF substrates, there is only one ABF supplier to serve them all. But as it transpired this year, that Japanese company is not the bottleneck; OSAT (outsourced assembly and test) houses like ASE Technology are.
Earlier this year, numerous top packaging houses vowed to increase their production capacities. But for large companies, a tangible capacity increase is complicated, as equipment vendors can't simply raise their output overnight. Now even second-tier OSAT players have announced plans to expand their capacity. For example, Kinsus plans to raise its ABF substrate capacity by 30% this year, DigiTimes reports. But packaging houses are not the only companies that can address issues with chip packaging.
“I would say overall, the demand if we look at coming into this year, the demand has been sort of higher than our expectations,” said Lisa Su, chief executive of AMD, at this week’s conference call, reports SeekingAlpha. “There are sort of industry-wide types of things that are going on. We work very closely with our supply chain partners. So, whether it’s wafers or back-end assembly test, capacity or substrate capacity, we work it on a product line by product line level.”
AMD used to own assembly, test, mark and pack facilities, but sold them in 2016 when it was in dire need of money. Apparently, AMD wants to address its chip shortages by investing in OSAT and substrate partners to gain capacity that is dedicated to AMD.
“We continue — on the substrate side in particular, I think, there has been underinvestment in the industry,” said Su. “So, we have taken the opportunity to invest in some substrate capacity dedicated to AMD, and that’ll be something that we continue to do going forward.”
Intel has its own chip production as well as test and assembly facilities in multiple countries. But apparently the in-house packaging capacities are not enough for Intel, which has significantly increased its chip output in recent years following the shortages it faced in 2018 – 2019. In a bid to meet demand for its products, Intel is working with its third-party substrate partners.
“By partnering closely with our suppliers, we are creatively utilizing our internal assembly factory network to remove a major constraint in our substrate supply,” said Pat Gelsinger, CEO of Intel, during a recent call. “Coming online in Q2, this capability will increase the availability of millions of units in 2021. It is a great example where the IDM model gives us flexibility to address the dynamic market.”
AMD is perhaps among the companies that have suffered the most from the chip production crisis. In the second half of 2020, the company had to supply its partners Microsoft and Sony with over 10 million SoCs for the Xbox Series X, Xbox Series S and PlayStation 5, which launched last November. Around the same time, AMD introduced its Ryzen 5000-series CPUs based on the Zen 3 microarchitecture as well as the Radeon RX 6000-series GPUs running the RDNA 2 architecture.
Eventually, AMD admitted that it could not meet demand for its products because it could not procure enough chips from its manufacturing partners, and because its OSAT partners did not have enough capacity to test and pack its chips either.
Intel will soon release its full lineup of Tiger Lake CPUs beyond the H35 quad-core parts (via ComputerBase), meaning we'll finally have mobile Tiger Lake CPUs with an eight-core configuration instead of four cores.
Intel's H45 series chips will represent all of Intel's Tiger Lake SKUs with eight cores; there's no word on six-core parts just yet. We still don't know the exact details of any specific SKUs, like core frequencies, cache sizes, and integrated graphics, but a recent tweet from ASUS's ROG Global Twitter account leaves little doubt that an announcement will arrive on May 11, so stay tuned for more details in the upcoming weeks.
Get ready! May 11, 2 PM (CEST) #ROG #IntelGaming #UnleashTheTiger (April 27, 2021)
It's reasonable to expect that Intel's H45 parts will take on all the features of the H35 quad-core parts while packing much more performance; features like Resizable BAR, four PCIe lanes dedicated to an NVMe SSD, the latest connectivity like Wi-Fi 6E, and Intel's large 96 EU and 80 EU Xe graphics will be a nice addition to Intel's new high-core-count parts.
As for performance, current H35 chips already come very close to AMD's latest Ryzen 5000 mobile processors in single-core work, so we have no reason to doubt that Intel's H45 chips will reach the same level of performance, since previous Comet Lake-H parts like the Core i9-10980HK and the current Core i7-11375H can already hit 5.0GHz boost frequencies on a single core.
It remains to be seen how well Intel's H45 parts will perform in multi-core workloads that use more than four cores, but we will know very soon: the announcement for Intel's H45 chips is expected on May 11th, just a few weeks from now.