Apple’s new Mac mini with the M1 chip is on sale at select retailers. If you are looking to buy the base configuration, which has 8GB of memory and 256GB of SSD storage, you can grab it for as low as $600 at Costco, but please note membership is required. If you don’t want a Costco membership, you can buy that same configuration for $664 at Amazon or $669 at B&H Photo.
If you need a bit more storage, B&H Photo also has the model with 512GB of SSD (with 8GB of RAM) for $849.
If you prefer a laptop, the M1-powered MacBook Pro is one of the best-performing laptops on the market. Both Amazon and B&H Photo have marked the late-2020 MacBook Pro base model, which includes 256GB of storage and 8GB of RAM, down to $1,199, which is the best price on this laptop yet.
Sony’s PlayStation Plus online subscription service includes many perks outside of the ability to play games online with friends. An active PS Plus membership also nets you access to exclusive discounts from the PS Store, along with free games that rotate out on a monthly cadence. Eneba is offering Verge readers based in the United States the opportunity to stock up on the service by purchasing two one-year subscriptions for only $54 when you enter code VERGE27AYEAR at checkout, while supplies last. If you want to buy one year only, it’s $29, no promotional code required.
If you are an early adopter of either the PS5 or the Xbox Series X and you’re looking to find a TV that can take full advantage of the next-gen hardware, LG’s CX OLED TVs are a good option. Right now, you can save $600 on the CX series 55-inch model at Amazon and Best Buy, bringing the price down to $1,350 at both retailers. Please note that if you are interested in buying the TV at Best Buy, you will need to be signed in (or sign up; it’s free) to your My Best Buy account.
LG CX 55-inch OLED: $1,350 ($2,000 list price, 33% off). Prices taken at time of publishing.
LG’s CX series OLED is basically the ultimate TV for next-gen gaming consoles — and it delivers gorgeous image quality for everything else, too. Available in 48-, 55-, 65-, and 77-inch sizes, the CX is one of those TVs you’ll get enjoyment from every time you power it on.
Samsung has temporarily halted chip production at its facilities in Austin, Texas, in response to the region’s power outages brought on by Winter Storm Uri, the Austin American-Statesman reports. “With prior notice, appropriate measures have safely been taken for the facilities and wafers in production,” Samsung said in a statement. “We will resume production as soon as power is restored.” On Tuesday, Austin Energy confirmed it had ordered its biggest customers to shut down, although it’s unknown how long they were without power.
The shutdowns were ordered as some 200,000 Austin homes were without power.
What’s unclear at the moment is whether production of Apple’s Mac Pro, which is manufactured in Austin, has also been affected. A spokesperson from Apple did not immediately return a request for comment. Other big Austin manufacturers, including NXP Semiconductors and Infineon, were also reportedly shut down.
The Statesman reports that the shutdown has the potential to cost Samsung millions, especially if manufacturing processes were suddenly interrupted. Tom’s Hardware notes that in March 2018 an unplanned 30-minute outage at one of Samsung’s plants in South Korea resulted in damage to tens of thousands of wafers, equivalent to 11 percent of its NAND flash output for the month. However, given Samsung had prior notice of the Austin shutdown, it presumably avoided any damage.
Samsung’s Austin factory started mass manufacturing memory chips in the late 90s, and over the years has produced DRAM, NAND, and mobile processors. Samsung’s website notes it’s primarily focused on producing chips with a 14nm process. A recent report said the company is considering building a new chipmaking plant in the region, capable of producing processors as advanced as 3nm.
Austin Energy General Manager Jackie Sargent said in comments reported by the Statesman that the energy company had initially asked industrial users to try to conserve energy, and had also tried using backup generators to help the situation. However, eventually Sargent says the manufacturers had to be asked to shut down completely. “We reached out to our largest customers, and in partnership with them, they shut down their facilities,” Sargent said.
Five years into its manufacture of high-end headphones, Focal has now launched an enhanced version of its Clear circumaural open-back headphones geared more towards music creators. The Clear Mg Professional sport similar 40mm drivers with ‘M’-shaped inverted domes, but here the domes are made of magnesium rather than an aluminium/magnesium alloy. The French audio brand claims this new cone is lighter, more rigid and better damped, delivering a sound that’s “even more precise”.
That new cone is complemented by a 25mm-diameter, 5.5mm-high copper voice coil, while the new honeycomb grille inside the earcups works to extend the higher frequencies and follow the cone’s ‘M’ profile more closely to reduce any distortion or other adverse effects.
Those familiar with Focal’s excellent home headphone line-up – featuring the Utopia, Stellia, Elear, Elegia, Radiance and newly announced Celestee – will recognise the Clear Mg Professional’s aesthetic design, centred around 20mm memory foam, perforated fabric-coated earpads, a matching microfibre and leather headband and a black-painted aluminium yoke. Bundled accessories include 6.35mm and 3.5mm cables and a carrying case (pictured above).
The Focal Clear Mg Professional will be available from this month, priced £1299 ($1490).
Today, Samsung announced that its new HBM2-based memory has an integrated AI processor that can push out (up to) 1.2 TFLOPS of embedded computing power, allowing the memory chip itself to perform operations that are usually reserved for CPUs, GPUs, ASICs, or FPGAs.
The new HBM-PIM (processing-in-memory) chips inject an AI engine inside each memory bank, thus offloading processing operations to the HBM itself. The new class of memory is designed to alleviate the burden of moving data between memory and processors, which is often more expensive in terms of power consumption and time than the actual compute operations.
Samsung says that, when applied to its existing HBM2 Aquabolt memory, the tech can deliver twice the system performance while reducing energy consumption by more than 70%. The company also claims that the new memory doesn’t require any software or hardware changes (including to the memory controllers), thus enabling a faster time to market for early adopters.
Samsung says the memory is already under trials in AI accelerators with leading AI solutions providers. The company expects all validations to be completed in the first half of this year, marking a speedy path to market.
Inside Samsung’s HBM-PIM Memory
Samsung presented the finer details of its new memory architecture during the International Solid-State Circuits Virtual Conference (ISSCC) this week.
As you can see in the slides above, each memory bank has an embedded Programmable Computing Unit (PCU) that runs at 300 MHz. This unit is controlled via conventional memory commands from the host to enable in-DRAM processing, and it can execute various FP16 computations. The memory can also operate in either standard mode, meaning it operates as normal HBM2, or in FIM mode for in-memory data processing.
Naturally, making room for the PCU units reduces memory capacity: each PCU-equipped memory die has half the capacity (4Gb) of a standard 8Gb HBM2 die. To help offset that loss, Samsung builds 6GB stacks by combining four 4Gb dies that have PCUs with four 8Gb dies that don’t (as opposed to an 8GB stack of normal HBM2).
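To spell out the stack arithmetic (our back-of-the-envelope math based on the die counts above, not Samsung’s own breakdown):

4 × 4Gb (PIM dies) + 4 × 8Gb (standard dies) = 48Gb = 6GB per stack, versus 8 × 8Gb = 64Gb = 8GB for a conventional HBM2 stack.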
Notably, the paper and slides above refer to the tech as Function-In Memory DRAM (FIMDRAM), but that was an internal codename for the technology that now carries the HBM-PIM brand name. Samsung’s examples are based on a 20nm prototype chip that achieves 2.4 Gbps of throughput per pin without increasing power consumption.
The paper describes the underlying tech as “Function-In Memory DRAM (FIMDRAM) that integrates a 16-wide single-instruction multiple-data engine within the memory banks and that exploits bank-level parallelism to provide 4× higher processing bandwidth than an off-chip memory solution. Second, we show techniques that do not require any modification to conventional memory controllers and their command protocols, which make FIMDRAM more practical for quick industry adoption.”
Unfortunately, we won’t see these capabilities in the latest gaming GPUs, at least for now. Samsung notes that the new memory is destined to satisfy large-scale processing requirements in data centers, HPC systems, and AI-enabled mobile applications.
As with most in-memory processing techniques, we expect this tech will press the boundaries of the memory chips’ cooling limitations, especially given that HBM chips are typically deployed in stacks that aren’t exactly conducive to easy cooling. Samsung’s presentation did not cover how HBM-PIM addresses those challenges.
Kwangil Park, senior vice president of Memory Product Planning at Samsung Electronics, stated: “Our groundbreaking HBM-PIM is the industry’s first programmable PIM solution tailored for diverse AI-driven workloads such as HPC, training and inference. We plan to build upon this breakthrough by further collaborating with AI solution providers for even more advanced PIM-powered applications.”
As one of the world leaders in digital technology, Samsung pretty much makes any type of electronic device you can think of. Their products are used by millions of people around the world.
As a leader in DRAM and flash memory production, Samsung is unsurprisingly also a huge player in the SSD business. Their EVO and PRO Series SSDs are highly popular among upgraders, system builders, and enthusiasts.
Today, we’re reviewing the Samsung 980 Pro SSD, which is a high-end M.2 NVMe drive that introduces support for the PCI-Express 4.0 interface. Internally, the Samsung 980 Pro uses Samsung’s new eight-channel “Elpis” controller. Everything on the 980 Pro is produced by Samsung—the flash chips are their V-NAND v6, which uses between 110 and 136 layers of TLC. A DRAM chip is also included—it provides 1 GB of storage for the mapping tables of the SSD.
Samsung offers the 980 Pro in capacities of 250 GB ($90), 500 GB ($135), 1 TB ($230), and 2 TB ($460). Endurance for these models is set to 150 TBW, 300 TBW, 600 TBW, and 1200 TBW respectively. Samsung provides a five-year warranty for the 980 Pro.
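For a sense of what those endurance ratings mean in practice, here is the drive-writes-per-day math (our arithmetic, derived from the TBW figures and the five-year warranty above):

600 TBW ÷ (1 TB × 5 years × 365 days) ≈ 0.33 drive writes per day for the 1 TB model, and the same ratio (roughly 0.33 DWPD) holds for the other capacities.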
Nvidia’s RTX 3070 is a great GPU, but the RTX 3060 Ti edges so close to it in performance that it has become the more popular card. Yet between the MSRPs of the RTX 3070 and RTX 3080, which are supposed to retail at $499 and $699, respectively, there is a big enough gap to squeeze in another card: the alleged RTX 3070 Ti. We’ve heard of it before, and now it’s popping up again.
German retailer Alternate briefly published a product page listing a Lenovo gaming PC containing this graphics card, as spotted by GameStar. Of course, the product page is now down, and Alternate told GameStar that the information was just a bug. A bug, huh? Sure. Pull the other one.
In earlier rumors, the RTX 3070 Ti was said to feature 16 GB of memory, and this new listing shows the same — there’s no GDDR6X here, just GDDR6. In fact, we’ve seen this exact PC before.
A frame buffer this large would help the GPU stand the test of time, as there are multiple games where a lack of VRAM on 8GB cards has been shown to limit performance (if you run at 4K and don’t turn certain settings down).
Limited Information on RTX 3070 Ti Specs
So far, we don’t have a lot of information on the specifications for the RTX 3070 Ti, nor has there been any official word from Nvidia on its existence. We know from multiple leaks now that there’s a good chance that it will feature 16GB of GDDR6 memory, but we don’t know anything about its CUDA core count, clock speeds, memory bus sizes, or the lot.
We also don’t know when the card will come out. Alternate listed delivery for the Legion T7 34IMZ5 PC as ‘Within 2021,’ but that leaves quite a big window. We can also reasonably guess that a 3070 Ti will have at least as many GPU cores as the 3070, but fewer cores than the 3080. Anything more is just speculation.
A Chinese overclocker has received what is believed to be the Ryzen 7 Pro 5750G (its support for TSME memory encryption marks it as a Pro part) and revealed its impressive overclocking capabilities.
Specs-wise, this appears to be a Zen 3 APU with 8 cores and 16 threads, presumably paired with integrated Vega graphics comparable to those of AMD’s Zen 3 mobile APUs. In other words, expect this chip to perform similarly to a Ryzen 7 5800X with integrated graphics.
But the best part about the chip is its crazy overclocking potential. The overclocker managed to crank the APU all the way to 4.8 GHz on all cores and hit a memory clock of 4133 MHz in 1:1 mode, something we’ve never seen before, even on AMD’s flagship Ryzen 9 5950X.
However, he used very high voltages, with 1.47v for the CPU core voltage and an SoC voltage of 1.2v. The core voltage, in particular, could be dangerously high for multi-threaded workloads, or at least high enough that we wouldn’t recommend running it for a daily driver.
He also ran the CPU-Z benchmark on the Ryzen 7 5750G and compared it to Intel’s Core i9-9900KF. The AMD APU scored 660.8 points in the single-threaded test, and 6897.8 points in the multi-threaded portion. Compared to the 9900KF, the 5750G is 27% faster in multi-threaded performance and 21% faster in single-threaded performance. That isn’t too surprising given the 9900KF’s age.
The 2069 MHz FCLK frequency is exciting; the best Zen 3 parts already struggle to hit 2000 MHz (though it’s doable), so seeing an APU break that barrier is quite impressive.
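To put that figure in context (our arithmetic): DDR4-4133 means a transfer rate of 4133 MT/s, which corresponds to a memory clock of roughly 2066 MHz, so keeping the Infinity Fabric in 1:1 mode requires an FCLK of about that same 2066-2069 MHz, comfortably past the ~2000 MHz ceiling that the best desktop Zen 3 chips typically manage.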
Unfortunately, we still don’t know if AMD will release another eight-core APU into the wild that won’t be exclusive to OEM builders. We’ve heard reports that a 5700G might be on its way at some point, but nothing is certain, especially during this major PC part shortage where AMD can’t even keep its current 5000-series chips in stock.
What’s the best mining GPU, and is it worth getting into the whole cryptocurrency craze? Bitcoin and Ethereum mining are making headlines again; prices and mining profitability are way up compared to the last couple of years. Everyone who didn’t start mining last time is kicking themselves for their lack of foresight. Not surprisingly, the best graphics cards and those chips at the top of our GPU benchmarks hierarchy end up being very good options for mining as well. How good? That’s what we’re here to discuss, as we’ve got hard numbers on hashing performance, prices, power, and more.
We’re not here to encourage people to start mining, and we’re definitely not suggesting you should mortgage your house or take out a big loan to try and become the next big mining sensation. Mostly, we’re looking at the hard data based on current market conditions. Predicting where cryptocurrencies will go next is even more difficult than predicting the weather, politics, or the next big meme. Chances are, if you don’t already have the hardware required to get started on mining today (or really, about two months ago), you’re already late and won’t see the big gains that others are talking about. Like the old gold rush, the ones most likely to strike it rich are those selling equipment to the miners rather than the miners themselves.
If you’ve looked for a new (or used) graphics card lately, the current going prices probably caused at least a raised eyebrow, maybe even two or three! We’ve heard from people who have said, in effect, “I figured with the Ampere and RDNA2 launches, it was finally time to retire my old GTX 1070/1080 or RX Vega 56/64. Then I looked at prices and realized my old card is selling for as much as I paid over three years ago!” They’re not wrong. Pascal and Vega cards from three or four years ago are currently selling at close to their original launch prices — sometimes more. If you’ve got an old graphics card sitting around, you might even consider selling it yourself (though finding a replacement could prove difficult).
Ultimately, we know many gamers and PC enthusiasts are upset at the lack of availability for graphics cards (and Zen 3 CPUs), but we cover all aspects of hardware — not just gaming. We’ve looked at GPU mining many times over the years, including back in 2011, 2014, and 2017. Those are all times when the price of Bitcoin shot up, driving interest and demand. 2021 is just the latest in the crypto coin mining cycle. About the only prediction we’re willing to make is that prices on Bitcoin and Ethereum will change in the months and years ahead — sometimes up, and sometimes down. And just like we’ve seen so many times before, the impact on graphics card pricing and availability will continue to exist. You should also be aware that, based on past personal experience that some of us have running consumer graphics cards 24/7, it is absolutely possible to burn out the fans, VRMs, or other elements on your card. Proceed at your own risk.
The Best Mining GPUs Benchmarked, Tested and Ranked
With that preamble out of the way, let’s get to the main point: What are the best mining GPUs? This is somewhat on a theoretical level, as you can’t actually buy the cards at retail for the most part, but we have a solution for that as well. We’re going to use eBay pricing — on sold listings — and take the data from the past seven days (for prices). We’ll also provide some charts showing pricing information from the past three months (90 days) from eBay, where most GPUs show a clear upward trend. How much can you make by mining Ethereum with a graphics card, and how long will it take to recover the cost of the card using the currently inflated eBay prices? Let’s take a look.
For this chart, we’ve used the current difficulty and price of Ethereum — because nothing else is coming close to GPU Ethereum for mining profitability right now. We’ve tested all of these GPUs on our standard test PC, which uses a Core i9-9900K, MSI MEG Z390 ACE motherboard, 2x16GB Corsair DDR4-3600 RAM, a 2TB XPG M.2 SSD, and a SeaSonic 850W 80 Plus Platinum certified PSU. We’ve tuned mining performance using either NBminer or PhoenixMiner, depending on the GPU, with an eye toward minimizing power consumption while maximizing hash rates. We’ve used $0.10 per kWh for power costs, which is much lower than some areas of the world but also higher than others. Then we’ve used the approximate eBay price divided by the current daily profits to come up with a time to repay the cost of the graphics card.
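As a rough illustration of that methodology, here’s a minimal Python sketch of the payback calculation. The revenue rate, the overhead figure, and the example card are placeholders we’ve picked for illustration, not live data; plug in current numbers from a mining calculator and current eBay prices before trusting any output.

```python
# Rough payback-time estimate mirroring the method above: approximate eBay price
# divided by current daily mining profit. The revenue rate below is an
# illustrative placeholder, not a live figure; pull the current value from a
# mining calculator before relying on the result.

USD_PER_MHS_PER_DAY = 0.13   # placeholder Ethereum revenue per MH/s per day
POWER_COST_PER_KWH = 0.10    # same $0.10/kWh assumption used in the article

def payback_days(ebay_price_usd, hashrate_mhs, gpu_power_w, overhead_w=60):
    """Days for mining profit to cover a GPU's (inflated) purchase price."""
    revenue_per_day = hashrate_mhs * USD_PER_MHS_PER_DAY
    # Wall power: add PSU losses and the rest of the system (roughly 60W on the
    # RTX 3080 test rig, per the note below).
    kwh_per_day = (gpu_power_w + overhead_w) * 24 / 1000
    power_cost_per_day = kwh_per_day * POWER_COST_PER_KWH
    return ebay_price_usd / (revenue_per_day - power_cost_per_day)

# Example: a tuned RTX 3060 Ti at ~60 MH/s and ~120W, bought for ~$900 on eBay.
print(round(payback_days(900, 60, 120)))  # ~120 days with these placeholder values
```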
It’s rather surprising to see older GPUs at the very top of the list, but that’s largely based on the current going prices. GTX 1060 6GB and RX 590 can both hit modest hash rates, and they’re the two least expensive GPUs in the list. Power use isn’t bad either, meaning it’s feasible to potentially run six GPUs off a single PC — though then you’d need PCIe riser cards and other extras that would add to the total cost.
Note that the power figures for all GPUs are before taking PSU efficiency into account. That means actual power use (not counting the CPU, motherboard, and other PC components) will be higher. For the RTX 3080 as an example, total wall outlet power for a single GPU on our test PC is about 60W more than what we’ve listed in the chart. If you’re running multiple GPUs off a single PC, total waste power would be somewhat lower, though it really doesn’t impact things that much. (If you take the worst-case scenario and add 60W to every GPU, the time to break even only increases by 4-5 days.)
It’s also fair to say that our test results are not representative of all graphics cards of a particular model. The RTX 3090 and RTX 3080 run into high GDDR6X temperatures without some tweaking, but if you do make the effort, the 3090 can potentially do 120-125MH/s. That would still only put the 3090 at third from the bottom in terms of time to break even, but it’s quite good in terms of power efficiency, and it’s the fastest GPU around. There’s certainly something to be said for mining with fewer higher efficiency GPUs if you can acquire them.
Here’s the real problem: None of the above table has any way of predicting the price of Ethereum or the mining difficulty. Guessing at the price is like guessing at the value of any other commodity: It may go up or down, and Ethereum, Bitcoin, and other cryptocurrencies are generally more volatile than even the most volatile of stocks. On the other hand, mining difficulty tends to increase over time and rarely goes down, as the rate of increased difficulty is directly tied to how many people (PCs, GPUs, ASICs, etc.) are mining.
So, the above is really a best-case scenario for when you’d break even on the cost of a GPU. Actually, that’s not true. The best-case scenario is that the price of Ethereum doubles or triples or whatever, and then everyone holding Ethereum makes a bunch of money. Until people start to cash out and the price drops, triggering panic sells and a plummeting price. That happened in 2018 with Ethereum, and it’s happened at least three times during the history of Bitcoin. Like we said: Volatile. But here we are at record highs, so everyone is happy and nothing could possibly ever go wrong this time. Until it does.
Still, there are obviously plenty of people who believe in the potential of Ethereum, Bitcoin, and blockchain technologies. Even at today’s inflated GPU prices, which are often double the MSRPs for the latest cards, and higher than MSRP for just about everything, the worst cards on the chart (RTX 3090 and RX 6900 XT) would still theoretically pay for themselves in less than seven months. And even if the value of the coins drops, you still have the hardware that’s at least worth something (provided the card doesn’t prematurely die due to heavy mining use). Which means, despite the overall rankings (in terms of time to break even), you’re generally better off buying newer hardware if possible.
Here’s a look at what has happened with GPU pricing during the past 90 days, based on eBay sold-listing data:
GeForce RTX 3060 Ti: The newest and least expensive of the Ampere GPUs, it’s just as fast as the RTX 3070 and sometimes costs less. After tuning, it’s also the most efficient GPU for Ethereum right now, using under 120W while breaking 60MH/s.
Radeon RX 5700: AMD’s previous generation Navi GPUs are very good at mining, and can break 50MH/s while using about 135W of power. The vanilla 5700 is as fast as the 5700 XT and costs less, making it a great overall choice.
GeForce RTX 2060 Super: Ethereum mining needs a lot of memory bandwidth, and all of the RTX 20-series GPUs with 8GB end up at around 44MH/s and 130W of power, meaning you should buy whichever is cheapest. That’s usually the RTX 2060 Super.
Radeon RX 590: All the Polaris GPUs with 8GB of GDDR5 memory (including the RX 580 8GB, RX 570 8GB, RX 480 8GB, and RX 470 8GB) end up with relatively similar performance, depending on how well your card’s memory overclocks. The RX 590 is currently the cheapest (theoretically), but all of the Polaris 10/20 GPUs remain viable. Just don’t get the 4GB models!
Radeon RX Vega 56: Overall performance is good, and some cards can perform much better — our reference models used for testing are more of a worst-case choice for most of the GPUs. After tuning, some Vega 56 cards might even hit 45-50MH/s, which would put this at the top of the chart.
Radeon RX 6800: Big Navi is potent when it comes to hashing, and all of the cards we’ve tested hit similar hash rates of around 65MH/s and 170W power use. The RX 6800 is generally several hundred dollars cheaper than the others and uses a bit less power, making it the clear winner. Plus, when you’re not mining, it’s a very capable gaming GPU.
GeForce RTX 3080: This is the second-fastest graphics card right now, for mining and gaming purposes. The time to break even is only slightly worse than the other GPUs, after which profitability ends up being better overall. And if you ever decide to stop mining, this is the best graphics card for gaming — especially if it paid for itself! At around 95MH/s, it will also earn money faster after you recover the cost of the hardware (if you break even, of course).
What About Ethereum ASICs?
One final topic worth discussing is ASIC mining. Bitcoin (SHA256), Litecoin (Scrypt), and many other popular cryptocurrencies have reached the point where companies have put in the time and effort to create dedicated ASICs — Application Specific Integrated Circuits. Just like GPUs were originally ASICs designed for graphics workloads, ASICs designed for mining are generally only good at one specific thing. Bitcoin ASICs do SHA256 hashing really, really fast (some can do around 25TH/s while using 1000W — that’s trillions of hashes per second), Litecoin ASICs do Scrypt hashing fast, and there are X11, Equihash, and even Ethereum ASICs.
The interesting thing with hashing is that many crypto coins and hashing algorithms have been created over the years, some specifically designed to thwart ASIC mining. Usually, that means creating an algorithm that requires more memory, and Ethereum falls into that category. Still, it’s possible to optimize hardware to hash faster while using less power than a GPU. Some of the fastest Ethereum ASICs (e.g. Innosilicon A10 Pro) can reportedly do around 500MH/s while using only 1000W. That’s about ten times more efficient than the best GPUs. Naturally, the cost of such ASICs is prohibitively expensive, and every big miner and their dog wants a bunch of them. They’re all sold out, in other words, just like GPUs.
Ethereum has actually tried to deemphasize mining, but obviously that didn’t quite work out. Ethereum 2.0 was supposed to put an end to proof of work hashing, transitioning to a proof of stake model. We won’t get into the complexities of the situation, other than to note that Ethereum mining very much remains a hot item, and there are other non-Ethereum coins that use the same hashing algorithm (though none are as popular / profitable as ETH). Eventually, the biggest cryptocurrencies inevitably end up being supported by ASICs rather than GPUs — or CPUs or FPGAs. But we’re not at that point for Ethereum yet.
PowerColor has filed a list of model names for the as-yet-unannounced RX 6700 XT and RX 6700 with the Eurasian Economic Union. Interestingly, the listing shows potential memory configurations for both GPUs: 12GB of VRAM for the RX 6700 XT and 6GB for the RX 6700. This information comes just a week after ASRock shared product names with the EEU, listing a 6GB RX 6700 and a 12GB RX 6600 XT. This makes PowerColor the second company so far to suggest an RX 6700 6GB could be on its way.
Even with PowerColor and ASRock backing each other up, the only evidence for a 6GB RX 6700 comes from the EEC and EEU. We’ve seen plenty of product names enter the Eurasian Economic Commission/Union that turned out to be false. So take this info with a grain of salt. Still, there’s plenty of reason to suspect it’s viable.
Will the RX 6700 be bottlenecked due to the low VRAM capacity? We’ve discussed how low VRAM capacity cards will be affected in the past, and we’re reaching the point where 6GB is becoming a minimum requirement to run the latest titles at high or ultra detail settings. Perhaps if AMD does a ton of optimization for this card, and you run games without ray tracing (which is a VRAM hog), that might make 6GB okay for a mid-range RDNA2 card.
This could also be a temporary measure. GDDR6 — along with most computer parts — is experiencing massive shortages, forcing graphics card production to slow down for Nvidia and AMD. AMD could be decreasing VRAM capacity to the absolute minimum to keep production going as much as possible. Plus, there’s nothing stopping AMD from making a 12GB RX 6700 in the future by using higher capacity memory chips.
The good news is that it seems like the RX 6700 XT will feature 12GB of VRAM, which should be perfectly adequate for 1440p and 4K gaming without encountering memory capacity issues. Plus, given these will be mid-range (or lower high-end) cards, 4K isn’t really a major concern.
Let’s hope AMD can create product volume for these future mid-range cards once they arrive. The RX 6800, 6800 XT, and 6900 XT are already super hard to come by. If more SKUs are coming soon, AMD will need to figure out a way to produce these cards in large enough quantities to maintain at least some adequate supply. Considering everything else happening right now, including Ethereum mining, delivering adequate supply on any graphics card seems unlikely.
Finding the best graphics card at anything approaching reasonable pricing has become increasingly difficult. Just about everything we’ve tested in our GPU benchmarks hierarchy is sold out unless you go to eBay, but current eBay prices will take your breath away. If you thought things were bad before, they’re apparently getting even worse. No doubt, a lot of this is due to the recent uptick in Ethereum mining’s profitability on GPUs, compounded by component shortages, and it’s not slowing down.
A couple of weeks back, we wrote about Michael Driscoll tracking scalper sales of Zen 3 CPUs. Driscoll also covered other hot ticket items like RTX 30-series GPUs, RDNA2 / Big Navi GPUs, Xbox Series S / X, and PlayStation 5 consoles. Thankfully, he provided the code to his little project, and we’ve taken the opportunity to run the data (with some additional filtering out of junk ‘box/picture only’ sales) to see how things are going in the first six weeks of 2021. Here’s how things stand for the latest AMD and Nvidia graphics cards:
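To give a sense of the post-processing involved, here’s a minimal sketch of how one might filter out junk listings and compute weekly average prices from eBay sold-listing data. This is our own illustrative snippet, not Driscoll’s actual code, and the field names and input format are assumptions.

```python
# A minimal illustration of the post-processing described above: drop obvious
# junk listings ("box only", "picture only"), then average sold prices per ISO
# week. The field names and input format here are assumptions.
from collections import defaultdict
from datetime import date
from statistics import mean

JUNK_KEYWORDS = ("box only", "picture only", "photo only", "empty box")

def is_junk(title: str) -> bool:
    title = title.lower()
    return any(keyword in title for keyword in JUNK_KEYWORDS)

def weekly_average_prices(sold_listings):
    """sold_listings: iterable of dicts with 'title', 'price' and 'date' keys."""
    buckets = defaultdict(list)
    for listing in sold_listings:
        if is_junk(listing["title"]):
            continue
        year, week, _ = listing["date"].isocalendar()
        buckets[(year, week)].append(listing["price"])
    return {wk: round(mean(prices), 2) for wk, prices in sorted(buckets.items())}

listings = [
    {"title": "NVIDIA GeForce RTX 3080 Founders Edition", "price": 1550.0, "date": date(2021, 2, 10)},
    {"title": "RTX 3080 BOX ONLY - no card included", "price": 250.0, "date": date(2021, 2, 11)},
]
print(weekly_average_prices(listings))  # the junk 'box only' listing is excluded
```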
That’s … disheartening. Just in the past month, prices have increased anywhere from 10% to 35% on average. The increase is partly due to the recent graphics card tariffs, and as you’d expect, the jump in prices is more pronounced on the lower-priced GPUs.
For example, the RTX 3060 Ti went from average prices of $690 in the first week of January to $920 in the past week. It also represents nearly 3,000 individual sales on eBay, after filtering out junk listings — and these are actual sales, not just items listed on eBay. RTX 3080 saw the next-biggest jump in pricing, going from $1,290 to $1,593 for the same time periods, with 3,400 listings sold.
Nvidia’s RTX 3070 represents the largest number of any specific GPU sold, with nearly 5,400 units, but prices have only increased 17% — from $804 in January to $940 in February. The February price is interesting because it’s only slightly higher than the RTX 3060 Ti price, which suggests strongly that it’s Ethereum miners snapping up most of these cards. (The 3060 Ti hits roughly the same 60MH/s as the 3070 after tuning since they both have the same 8GB of GDDR6 memory.)
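For what it’s worth, the two cards also share the same memory subsystem, which is what matters most for Ethereum: a 256-bit bus running 14 Gbps GDDR6 works out to 256 / 8 × 14 = 448 GB/s on both the RTX 3060 Ti and the RTX 3070, which is why their tuned hash rates land in the same place.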
Wrapping up Nvidia, the RTX 3090 accounts for 2,291 units sold on eBay, with pricing increasing 14% since January. For the most expensive GPU that already had an extreme price, it’s pretty shocking to see it move up from $2,087 to a new average of $2,379. I suppose it really is the heir to the Titan RTX throne now.
We see a similar pattern on the AMD side, but at far lower volumes in terms of units sold. The RX 6900 XT had 334 listings sold, with average pricing moving up just 8% from $1,458 to $1,570 during the past six weeks. Considering it delivers roughly the same mining performance as the less expensive Big Navi GPUs, that makes sense from the Ethereum mining perspective.
Radeon RX 6800 XT prices increased 11% from $1,179 in January to $1,312 in February. It’s also the largest number of GPUs sold (on eBay) for Team Red, at 448 graphics cards. Not far behind is the RX 6800 vanilla, with 434 units. It saw the biggest jump in pricing over the same period, from $865 to $1,018 (18%). That strongly correlates with expected profits from GPU mining.
That’s both good and bad news. The good news is that gamers are most likely being sensible and refusing to pay these exorbitant markups. The bad news is that as long as mining remains this profitable, stock and pricing of graphics cards isn’t likely to recover. It’s 2017 all over again, plus the continuing effects of the pandemic.
We already knew that Nvidia was making a $329 RTX 3060 graphics card. And today, Nvidia announced that it will make its most affordable GPU in the RTX 3000 series available at retailers beginning February 25th. Retailers will open orders for the RTX 3060 starting at 9AM PT / 12PM ET. Nvidia tells The Verge it will not be creating a Founders Edition of the RTX 3060 graphics card.
Designed to succeed the aging GeForce GTX 1060 Pascal cards, the RTX 3060 features 12GB of GDDR6 memory. Like the other products featured in the RTX 3000 series, the RTX 3060 will also support DLSS and Nvidia’s suite of RTX applications.
With a new RTX 30 card coming later this month, we anticipate that the RTX 3060 GPU will sell out quickly. These cards have been difficult to purchase since last fall when the RTX 30 series first debuted. In January, Nvidia said that it anticipates supply for its GPUs to both consumers and partners “will likely remain lean through Q1,” which doesn’t end until late April. To alleviate the short supply and sky-high prices, Nvidia confirmed yesterday that it plans to bring back its older RTX 2060 and GTX 1050 Ti graphics cards.
As usual, Intel’s poised for a busy year. The company has already launched its new 11th Generation Tiger Lake H35 mobile chips, and 11th Gen Rocket Lake should blast into the market this year to take on the likes of AMD Ryzen 5000. This week during The Tom’s Hardware Show, Intel also discussed the role resizable BAR is playing in its efforts to boost performance for gamers opting for those chips.
Resizable BAR is an optional PCIe feature that lets the CPU address the GPU’s entire frame buffer at once, so data like shaders and textures can be transferred as needed and, if there are multiple requests, concurrently. This should boost gaming performance by allowing the CPU to “efficiently access the entire frame buffer,” as Nvidia put it. AMD already tackles this with its Smart Access Memory (SAM) feature available with Radeon RX 6000 graphics cards, while Nvidia added support for RTX 30-series mobile cards in January, with desktop graphics card support beginning in March.
Intel’s GM of premium and gaming notebook segments, Fredrik Hamberger, got into support for resizable BAR on The Tom’s Hardware Show, saying Intel collaborated with graphics card makers, namely Nvidia and AMD, for implementation. The goal, he said, was a “standard solution” that could be compatible with multiple vendors.
Intel’s H35-series mobile chips, which target ultraportable gaming laptops, already support resizable BAR, as do all of Intel’s Comet Lake-H series chips and upcoming H45 series, Hamberger said. It’s just up to the laptop and graphics card makers to make the feature usable.
“The final drivers, from our side, it’s already there,” Hamberger told Tom’s Hardware. “Some of the OEMs are working on finalizing exact timing on when they have the driver from the graphics vendors, so I think you’d have to ask them on the exact timing.”
The exec also pointed to some games seeing performance gains of 5-10%.
“It is a pretty nice boost by just turning on this pipeline and, again, standard implementation versus trying to do something custom and proprietary,” Hamberger said.
Of course, the more games that support resizable BAR, the better. But Hamberger has confidence that we’ll see a growing number of game developers make that possible.
“It’s a pretty late feature that … is being turned on, but since it’s following a standard, I think that the nice thing is if you’re a developer you don’t have to worry about it being like, ‘Hey, [only] these three systems have it.’ It’s gonna be available both on notebooks … it’s part of our Rocket Lake platform as well on the desktop side,” Hamberger said.
“Our expectation is that you’ll see more and more developers turn on the ability to use this, and we’ll continue to scale it.”
You can enjoy this week’s episode of The Tom’s Hardware Show via the video above, on YouTube, Facebook, or wherever you get your podcasts.
A Chinese tweaker has come close to breaking records on the notebook 3DMark leaderboard by modifying an RTX 3080-equipped ROG Zephyrus Duo 15 to run at 155W instead of the GPU’s default TDP of 130W.
The mod was made by flashing the VBIOS from another RTX 3080-equipped laptop, the MSI GE76, onto the Zephyrus Duo 15. Since the GE76’s RTX 3080 comes with a 155W power limit, that higher limit carries over to the Zephyrus Duo 15 with the VBIOS swap.
The Chinese overclocker managed a 3DMark TimeSpy graphics score of 13,691 points and an overall score of 13,174. Compared to the desktop sector, this modified RTX 3080 managed to beat the best RTX 3060 Ti TimeSpy Graphics score by roughly 100 points. But it loses in the overall score by around 700 points due to the CPU differences. Overall, this means the RTX 3080 mobile is 1% faster than the desktop RTX 3060 Ti.
Compared to other RTX 3080 laptops, the modified Zephyrus Duo 15 sits in 2nd place at the time of this writing, with an average graphics-score lead of 200-300 points over other 3080 mobile chips. Most RTX 3080 laptops average around 13,200 points, with some hitting the 13,400 mark, which makes the modified RTX 3080 mobile roughly 2-4 percent faster than its peers.
Unfortunately, we don’t know if the modified RTX 3080 was overclocked or not. The author mentions nothing in terms of a core or memory overclock, so we believe his score was from the increased power limit alone. If true, the modified RTX 3080 has more potential headroom for an even higher score through conventional overclocking.
While the performance results from this mod are quite impressive, keep in mind that doing this yourself is very risky. Increasing the power limit on your GPU will put more strain on your cooling system and, more seriously, your power delivery system (which can destroy your laptop if overloaded). So make sure you know the risks if you want to attempt the same mod on your notebook.
However, it is cool to see what a mobile RTX 3080 can do with some extra power headroom. Who knows, maybe we’ll one day see laptops ship with these crazy-high power limits out of the box.
While Intel’s NUCs come pre-built and ready to work out of the box, some of the higher-end NUCs can get an aftermarket treatment from fanless chassis makers. Due to higher TDPs, that will apparently not be the case with the latest NUC 11 systems (codenamed Phantom Canyon). The new NUCs come powered by Intel’s quad-core 11th-Gen Core ‘Tiger Lake’ processor as well as Nvidia’s GeForce RTX 2060, but the powerful combination pushes TDPs up to 150W, making passive cooling impractical.
As it turns out, Intel’s NUC 11 for enthusiasts has a combined TDP of 150W, as noticed by FanlessTech in a Twitter post. Such a high TDP makes it impossible to build a small passively-cooled chassis for the unit. By default, the system (which measures 221 × 142 × 42 mm) has a rather sophisticated cooling system featuring five heat pipes and a fan. A third-party fanless cooling solution would be too bulky, and therefore impractical for a small form-factor PC.
In fairness, Intel’s NUC 11 Enthusiast is a rather capable system. The PC packs the quad-core Intel Core i7-1165G7 (up to 4.70GHz, 12 MB cache, 28W TDP-up) processor that is accompanied by 16 GB of DDR4-3200 memory (upgradeable to 64 GB), Nvidia’s GeForce RTX 2060 discrete GPU with 6 GB of GDDR6 memory, and Intel’s Optane Memory H10 (32 GB + 512 GB) or H20 SSD.
The CPU can be configured for a 15W or a 28W TDP, but the mobile GeForce RTX 2060 GPU can consume 80W or more (a desktop card consumes 160W). Considering that there are other components too, a 150W combined TDP seems fair for a Phantom Canyon system (especially considering various burst modes modern CPUs and GPUs have).
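A quick budget (our estimate, not Intel’s official breakdown) shows how the numbers add up: 28W for the CPU at TDP-up, plus the 80W-or-more figure for the mobile RTX 2060 cited above, plus perhaps 10-20W for memory, the SSDs, and the rest of the platform already lands in the 120-150W range, which leaves little margin for a passive design.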
Without a doubt, it is disappointing for enthusiasts of ultra-quiet computing and passive cooling that it will be impossible to build a fanless PC using components of Intel’s NUC 11 Enthusiast. Meanwhile, for those who want to have a very small PC that can run modern games, this system still makes a lot of sense.
Kingdom Hearts, Square Enix’s action roleplaying mashup of Square Enix, Disney, and Pixar characters, is coming to PC for the first time. The series will be available as an Epic Store exclusive on March 30th, the company announced today.
Titles include Kingdom Hearts 1.5 + 2.5 Remix — enhanced versions of Kingdom Hearts 1 and 2 — Kingdom Hearts III Re Mind, Kingdom Hearts 2.8 Final Chapter Prologue, and Kingdom Hearts: Melody of Memory. Although the series has spanned a variety of platforms, from its beginnings as a PlayStation title to its arrival on Xbox consoles and handhelds for many of the series’ spinoffs, it has never made the leap to PC.
The series follows Sora as he travels with companions Donald Duck and Goofy through worlds based on Disney classics. Kingdom Hearts III, which acts as the conclusion to Sora’s adventure, launched for PlayStation 4 and Xbox One in 2019.
Kingdom Hearts 1.5 + 2.5 Remix is available for $49.99, while the other three games are $59.99.