João Silva 2 days ago Featured Tech News, Graphics
It looks like retailers are beginning to receive stock for Nvidia’s upcoming CMP 30HX mining graphics card, but early indications of pricing seem unrealistic. A Palit Nvidia CMP 30HX appeared at one retailer last night, with a hefty $700 price tag.
The Palit CMP 30HX listing appeared at Microless (via @momomo_us). The listing details a few unannounced specifications, including the fact that the card is based on the TU116-100 GPU, the same GPU found on the GTX 1660 Super. Other details include 6GB of GDDR6 memory and base/boost clock speeds of 1530MHz/1795MHz. The card has a 125W TDP and is powered with a single 8-pin connector.
The oddest part about the listing is the price tag, which works out to be around $723 USD. For reference, the original GTX 1660 Super launched with a $229 price tag. Given that this is an early listing though, it is possible that the price tag was just a placeholder.
The Nvidia CMP 30HX is expected to be capable of mining Ethereum at 26 MH/s, which could be improved on with some overclocking. The CMP 30HX is going to be Nvidia’s entry-level mining graphics card, so there will be other options available with higher hash rates. Since this is a mining card, it does not include any video outputs.
Nvidia is expected to release the CMP 30HX soon, followed by the CMP 40HX in Q2 2021. Discuss on our Facebook page, HERE.
KitGuru says: There is still a lot of mystery surrounding Nvidia’s CMP plans, but hopefully official news is coming soon. If retailers are receiving stock, then announcements shouldn’t be too far behind. Are any of you interested in Nvidia’s CMP cards at all?
Become a Patron!
Check Also
Intel NUC 11 Extreme Compute Element to feature up to Intel Core i9-11980HK
Intel’s next generation NUC is coming soon and recent leaks have given us a good …
Matthew Wilson 2 days ago Featured Tech News, Software & Gaming
One of Microsoft’s big features for backwards compatible games on Xbox Series X/S consoles has been Auto HDR, enabling High Dynamic Range across a number of SDR-only games. Now, PC gamers are also going to benefit, with Microsoft preparing to enable Auto HDR for over 1,000 PC games.
Auto HDR will be enabled in both DirectX 11 and DirectX 12 games. DirectX Program Manager, Hannah Fisher, explained the benefits of Auto HDR in a developer blog post:
“While some game studios develop for HDR gaming PCs by mastering their game natively for HDR, Auto HDR for PC will take DirectX 11 or DirectX 12 SDR-only games and intelligently expand the colour/brightness range up to HDR. It’s a seamless platform feature that will give you an amazing new gaming experience that takes full advantage of your HDR monitor’s capabilities.”
In an example image (seen above), we can see how Auto HDR impacts the luminance in a scene from Gears 5. Of course, Gears 5 already has native HDR support, so while Auto HDR doesn’t bring the same level of colour detail, it gets quite close. In games that don’t support HDR at all, Auto HDR can make an impressive difference.
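Microsoft hasn’t published the exact expansion algorithm, but the general idea of inverse tone mapping can be sketched as follows. The curve shape, SDR white level and peak brightness below are illustrative assumptions, not the actual Auto HDR implementation:

```python
def sdr_to_hdr_nits(sdr, sdr_white=200.0, hdr_peak=1000.0, exponent=2.0):
    """Naively expand a normalized SDR value (0..1) into HDR luminance (nits).

    Midtones stay near the assumed SDR white level; only the brightest
    values get stretched toward the display's HDR peak. Illustrative
    curve only, not Microsoft's Auto HDR algorithm.
    """
    base = sdr * sdr_white                              # linear SDR luminance
    boost = (sdr ** exponent) * (hdr_peak - sdr_white)  # extra headroom for highlights
    return base + boost

# A mid-gray pixel barely moves; a near-white pixel is pushed toward peak.
print(sdr_to_hdr_nits(0.5))  # 300.0
print(sdr_to_hdr_nits(1.0))  # 1000.0
```

This is why Auto HDR gets “quite close” to native HDR in highlights while leaving most of the image largely untouched.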
Currently, Auto HDR is in preview, available to Windows Insider build testers. Since the feature is still in testing, there are some bugs to work out and there will be additional optimisation, as Auto HDR does use some GPU compute power. Just a few games support the feature for now, but as testing continues, more games will be added, with plans to enable Auto HDR across the top 1,000 DX 11 and DX 12 titles.
KitGuru Says: If you have an HDR-capable monitor and happen to be a Windows Insider, then this is worth checking out. Auto HDR works well on the Xbox Series X, so it will be interesting to compare that experience to the same feature on PC.
Mustafa Mahmoud 2 days ago Console, Featured Tech News, Software & Gaming, Virtual Reality
Sony recently confirmed that it was working on a next-generation version of its popular PlayStation VR headset. At the time, the console manufacturer confirmed that not only would the headset get upgraded, but the controllers too. Now, Sony has revealed these controllers, and they appear to be a major leap in almost every way.
Making the announcement on its blog, PlayStation revealed that “Our new VR controller speaks to our mission of achieving a much deeper sense of presence and stronger feeling of immersion in VR experiences. It will build upon the innovation we introduced with the DualSense wireless controller, which changed how games ‘feel’ on PS5 by unlocking a new way to tap into the sense of touch. Now we’re bringing that innovation to VR gaming.”
These innovations include a new design, which “takes on an ‘orb’ shape that allows you to hold the controller naturally, while playing with a high degree of freedom”. The controllers will also utilise adaptive triggers, just like the DualSense; Sony says that “when you take that kind of mechanic and apply it to VR, the experience is amplified to the next level”. For VR shooters in particular, this feature will likely prove to be very immersive.
Likewise, the controller will feature new haptics similar to those of the DualSense, but will be “optimized for its form factor, making every sensation in the game world more impactful, textured and nuanced”. The controllers will also feature finger touch detection which “enables you to make more natural gestures with your hands during gameplay.”
Unlike the first PSVR, these controllers will be “tracked by the new VR headset through a tracking ring across the bottom of the controller” which should help with accuracy and maintaining their position within the virtual space.
Last but not least, both controllers will feature analogue sticks, something which the original controllers sorely lacked. The PSVR 2 controllers look to be a massive leap over the original in every single way. Hopefully this will serve to further the immersion that can be achieved from VR.
KitGuru says: Are you excited for PSVR 2? Will you buy it? What do you think of the controllers? Let us know down below.
After almost a decade of total market dominance, Intel has spent the past few years on the defensive. AMD’s Ryzen processors continue to show improvement year over year, with the most recent Ryzen 5000 series taking the crown of best gaming processor: Intel’s last bastion of superiority.
Now, with a booming hardware market, Intel is preparing to make up some lost ground with the new 11th Gen Intel Core Processors. Intel is claiming these new 11th Gen CPUs offer double-digit IPC improvements despite remaining on a 14 nm process. The top-end 8-core Intel Core i9-11900K may not be able to compete against its AMD rival Ryzen 9 5900X in heavily multi-threaded scenarios, but the higher clock speeds and alleged IPC improvements could be enough to take back the gaming crown. Along with the new CPUs, there is a new chipset to match, the Intel Z590. Last year’s Z490 chipset motherboards are also compatible with the new 11th Gen Intel Core Processors, but Z590 introduces some key advantages.
First, Z590 offers native PCIe 4.0 support from the CPU, which means the PCIe and M.2 slots powered off the CPU will offer PCIe 4.0 connectivity when an 11th Gen CPU is installed. The PCIe and M.2 slots controlled by the Z590 chipset are still PCIe 3.0. While many high-end Z490 motherboards advertised this capability, it was not a standard feature for the platform. In addition to PCIe 4.0 support, Z590 offers USB 3.2 Gen 2×2 from the chipset. The USB 3.2 Gen 2×2 standard offers speeds of up to 20 Gb/s. Finally, Z590 boasts native support for 3200 MHz DDR4 memory. With these upgrades, Intel’s Z series platform has feature parity with AMD’s B550. On paper, Intel is catching up to AMD, but only testing will tell if these new Z590 motherboards are up to the challenge.
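The 20 Gb/s figure for USB 3.2 Gen 2×2 is a raw signaling rate; a quick conversion shows the theoretical throughput ceiling (real-world transfers will come in lower due to encoding and protocol overhead):

```python
def gbps_to_gb_per_s(gbps):
    # 8 bits per byte; ignores encoding and protocol overhead
    return gbps / 8

peak = gbps_to_gb_per_s(20)  # USB 3.2 Gen 2x2
print(peak)                  # 2.5 GB/s theoretical ceiling
print(100 / peak)            # 40.0 seconds to move 100 GB at that ceiling
```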
The MSI Enthusiast Gaming, or MEG for short, line of motherboards represents the best of the best MSI has to offer. Last year’s Z490 MEG line offered some of the best overclocking available on an Intel platform. Memory overclocking was particularly noteworthy due to such innovations as MSI’s tabbed memory trace layout. Those same innovations return on MSI’s new Z590 lineup with even more refinement. The MSI MEG Z590 ACE features a massive 19-phase VRM with top-of-the-line 90 A power stages and a robust VRM cooling solution, four M.2 slots, Thunderbolt 4, and a plethora of overclocking features. The MSI MEG Z590 ACE has a premium spec sheet; let’s see if there is premium performance to match!
Rear I/O:
1x BIOS Flashback button
1x Clear CMOS button
2x SMA antenna connectors
1x HDMI port
2x USB Type-C® Thunderbolt ports
2x Mini DisplayPort inputs
2x USB 3.2 Gen 2 Type-A ports (red)
4x USB 3.2 Gen 1 ports
2x USB 2.0 ports
1x RJ-45 port
1x optical S/PDIF Out connector
5x audio jacks
Audio:
1x Realtek ALC4082 Codec
Fan Headers:
8x 4-pin
Form Factor:
ATX: 12.0 x 9.6 in. (30.5 x 24.4 cm)
Exclusive Features:
8 layer PCB
AudioBoost 5 HD
DDR4 Boost with steel Armor
Thunderbolt 4
Mystic Light
Quad M.2 with M.2 Shield Frozr
Testing for this review was conducted using a 10th Gen Intel Core i9-10900K. Stay tuned for an 11th Gen update when the new processors launch!
In an odd disclosure that comes after Intel recently released the details of its 11th-Generation Core Rocket Lake-S processors, the company has unveiled a “new” Adaptive Boost Technology that allows the chip to operate at up to 100C during normal operation. This new tech will feel decidedly familiar to AMD fans, as it operates in a very similar fashion to AMD’s existing boost mechanism that’s present in newer Ryzen processors. This marks the fourth boost technology to come standard with some Intel chips, but in true Intel style, the company only offers the new feature on its pricey Core i9 K and KF processors, giving it a new way to segment its product stack.
In a nutshell, the new Adaptive Boost Technology (ABT) feature allows Core i9 processors to dynamically boost to higher all-core frequencies based upon available thermal headroom and electrical conditions, so the peak frequencies can vary. It also allows the chip to operate at 100C during normal operation.
In contrast, Intel’s other boost technologies boost to pre-defined limits (defined in a frequency lookup table) based on the number of active cores, and you’re guaranteed that the chip can hit those frequencies if it is below a certain temperature and the motherboard can supply enough power. Even though Intel has defined a 5.1 GHz peak for ABT if three or more cores are active, it doesn’t come with a guaranteed frequency – peak frequencies will vary based upon the quality of your chip, cooler, PSU, and motherboard power circuitry.
Think of ABT much like a dynamic auto-overclocking feature. Still, because the chip stays within Intel’s spec of a 100C temperature limit, it is a supported feature that doesn’t fall into the same classification as overclocking. That means the chip stays fully within warranty if you choose to enable the feature (it’s disabled by default in the motherboard BIOS).
Intel does have another boost tech, Thermal Velocity Boost, that allows the processor to shift into slightly higher frequencies if the processor remains under a certain temperature threshold (70C for desktop chips). However, like Intel’s other approaches, it also relies upon a standard set of pre-defined values and you’re guaranteed that your chip can hit the assigned frequency.
In contrast, ABT uplift will vary by chip — much of the frequency uplift depends upon the quality of your chip. Hence, the silicon lottery comes into play, along with cooling and power delivery capabilities. We’ve included a breakdown of the various Intel boost technologies a bit further below.
Intel’s approach will often result in higher operating temperatures during intense work, but that doesn’t differ too much from AMD’s current approach because ABT is very similar to AMD’s Precision Boost 2 technology. AMD pioneered this boosting technique for desktop PCs with its Ryzen 3000 series, allowing the chip to boost higher based upon available thermal and electrical headroom, and not based on a lookup table. Still, the company dialed up the temperature limits with its Ryzen 5000 processors to extract the utmost performance within the chips’ maximum thermal specification.
As you can see in AMD’s official guidelines above, that means the processor can run at much higher temperatures than what we would previously perceive as normal; 95C is common with stock coolers, which triggered some surprise from the enthusiast community. However, the higher temperatures are fully within AMD’s specifications, just as Intel’s upper limit of 100C will fall within its own boundaries.
Here’s the breakdown of Intel’s various boost mechanisms:
Turbo Boost 2.0: Increased frequency if chip operates below power, current, and temperature specifications.
Turbo Boost Max 3.0: Fastest cores are identified during binning, then the Windows scheduler targets the fastest two active cores (favored cores) with lightly-threaded applications. Chip must be below power, current, and temperature specifications.
Single-Core Thermal Velocity Boost: Fastest active favored core can boost higher than Turbo Boost Max 3.0 if below a pre-defined temperature threshold (70C) and all other factors adhere to TB 3.0 conditions.
All-Core Thermal Velocity Boost: Increases all-core frequency when all cores are active and the chip is under 70C.
Adaptive Boost Technology: Allows dynamic adjustment of all-core turbo frequencies when three or more cores are active. This feature doesn’t have a guaranteed boost threshold — it will vary based on chip quality, your cooler, and power delivery.
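The difference between the table-driven boosts and ABT can be sketched as a simple frequency selector. The frequency values below come from the i9-11900K figures quoted in this article, and the selection logic is our simplification, not Intel’s actual firmware behaviour:

```python
def boost_frequency_ghz(active_cores, temp_c, abt_enabled=False):
    """Illustrative model of Rocket Lake Core i9 boost selection.

    Table-driven boosts (TB 2.0 / TBM 3.0 / TVB) guarantee a frequency
    when their conditions are met; ABT instead sets an opportunistic
    ceiling that varies with chip quality, cooling and power delivery.
    Values are assumptions based on i9-11900K figures in this article.
    """
    if active_cores <= 2:
        # Favored cores: TBM 3.0 gives 5.2 GHz; single-core TVB adds
        # 0.1 GHz when the chip is under 70C
        return 5.3 if temp_c < 70 else 5.2
    if abt_enabled and temp_c < 100:
        # ABT: opportunistic all-core boost up to 5.1 GHz, not guaranteed
        return 5.1
    # All-core TVB under 70C, otherwise the standard all-core turbo
    return 4.8 if temp_c < 70 else 4.7

print(boost_frequency_ghz(1, 65))                    # 5.3 (single-core TVB)
print(boost_frequency_ghz(8, 85, abt_enabled=True))  # 5.1 (ABT ceiling)
print(boost_frequency_ghz(8, 85))                    # 4.7 (standard all-core)
```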
Overall, AMD’s Precision Boost 2 and Intel’s Adaptive Boost Technology represent both companies’ attempts to extract the maximum performance possible within the confines of their respective TDP limits. In its traditional style, AMD offers the feature as a standard on all of its newer Ryzen processors, while Intel positions it as a premium feature for its highest-end Core i9 K and KF processors. As you would imagine, we’ll have full testing of the feature in our coming review.
NUC 11 Extreme Compute Element (Image credit: Chiphell)
The Intel NUC 11 Extreme Compute Element (codename Driver Bay) might be right around the corner. A user from the Chiphell forums has shared a screenshot of the alleged specifications for the device, which appears to leverage Intel’s forthcoming 11th Generation Tiger Lake-H 45W chips.
If the information is legit, the NUC 11 Extreme Compute Element will be available with three processor options that may come in the shape of the Core i9-11980HK, Core i7-11800H or Core i5-11400H. The Core i9 and Core i7 Tiger Lake-H 45W chips will arrive with eight Willow Cove cores, while the Core i5 will stick to six cores. All three have Hyper-Threading technology, of course.
The NUC 11 Extreme Compute Element can be outfitted with up to 64GB of DDR4-3200 dual-channel memory. It also comes equipped with three PCIe 4.0 x4 M.2 slots with support for M.2 drives up to 80mm in length. One of the M.2 slots communicates directly with the Tiger Lake-H processor, while the remaining two are attached to the PCH itself. There is support for RAID 0 and RAID 1 arrays. The unit is compatible with Intel’s Optane Memory as well.
If the Tiger Lake-H chip isn’t paired with a discrete graphics option, then the Xe LP graphics engine will do all the heavy lifting. The NUC 11 Extreme Compute Element provides one HDMI 2.0b port and two Thunderbolt 4 ports so you can connect up to three 4K monitors to the device.
Depending on the SKU, the NUC 11 Extreme Compute Element may sport 2.5 Gigabit Ethernet and/or 10 Gigabit Ethernet ports. It also offers Wi-Fi 6 and Bluetooth 5 connectivity.
The NUC 11 Extreme Compute Element may be a tiny device, but it supplies plenty of USB ports. There are a total of six USB 3.1 Gen 1 Type-A ports as well as two USB 3.1 headers and two USB 2.0 headers. Its audio capabilities include 7.1 multichannel audio that’s made possible through the HDMI or DisplayPort signals.
Since the NUC 11 Extreme Compute Element is based on Tiger Lake-H 45W processors, it’s reasonable to expect the device to hit the market once Intel officially launches the aforementioned chips. Tiger Lake-H 45W laptops are expected to land in the second quarter of this year so the NUC 11 Extreme Compute Element shouldn’t be far behind.
Intel has demonstrated a laptop based on its upcoming eight-core Tiger Lake-H processor running at up to 5.0 GHz, essentially revealing some of the main selling points of its flagship CPU for notebooks. Mobile PCs based on the chip will hit the market in the second quarter, Intel said.
As a part of its GDC 2021 showcase (via VideoCardz), Intel demonstrated a pre-production enthusiast-grade notebook running a yet-to-be-announced 11th-Generation Core i9 ‘Tiger Lake-H’ processor with eight cores and Hyper-Threading technology running at 5.0 GHz ‘across multiple cores.’
The demo CPU is likely the Core i9-11980HK, which Lenovo has already listed, but without disclosing its specifications. This time around, Intel also did not reveal the base clocks of the processor and how many cores can run at 5.0 GHz, but it’s obvious that we’re talking about more than one core, implying 5.0 GHz is not its maximum single-core turbo clock.
Intel’s Tiger Lake-H processors are powered by up to eight cores featuring the Willow Cove microarchitecture equipped with up to 24 MB of L3 cache and a new DDR4 memory controller. The new CPUs also have numerous improvements over processors on the platform level, including 20 PCIe 4.0 lanes to connect to the latest GPUs and high-end SSDs, as well as built-in Thunderbolt 4 support.
To demonstrate the capabilities of the 8-core/16-thread Core i9 ‘Tiger Lake-H’ CPU, Intel used a Total War real-time strategy title, a series known for its heavy CPU usage. Unfortunately, Intel did not disclose which GPU powered the demonstration: a discrete high-end notebook graphics processor or its integrated Xe-LP GPU. Since the laptop featured at least a 15.6-inch display, common sense suggests this was a discrete graphics solution.
During the presentation, Intel said that the first notebooks based on the Tiger Lake-H processor would arrive in Q2 2021 but did not disclose whether they will show up in early April or late June.
Intel’s new Rocket Lake processors are its big answer to AMD’s Ryzen 5000 chips, and on the face of it, this is going to be a very interesting face-off once the embargo is lifted on reviews. That’s because, rather than follow the trend of more cores and a denser architecture, Intel has actually reduced the number of cores.
Intel claims that this change will lead to a 19% improvement in instructions per cycle (IPC) throughput and can lead to max speeds of 5.3GHz.
Alongside improved performance (at least in applications that aren’t heavily threaded), Rocket Lake will also include PCIe 4.0 interface adoption, AVX-512 support and a claimed 50% increase in Xe-powered integrated graphics performance.
For more in-depth coverage, check out our Intel Rocket Lake CPU news article from our CPU expert Paul Alcorn.
If you’re convinced, though, we’ve set up this page to collect all the different places where you can buy or pre-order a Rocket Lake CPU.
Intel Core i5-11600K: Where to Buy
US Intel Core i5-11600K retailers at a glance: Amazon | Best Buy | Micro Center | Newegg
Pricing across pre-order pages has been pretty inconsistent, which follows reports that some MSRPs are being jacked up for Intel’s 11th Gen CPUs. The best price comes from Best Buy, which is offering the i5-11600K for $269.99, whereas the most expensive is Micro Center at $319.
Intel Core i7-11700K: Where to Buy
US Intel Core i7-11700K retailers at a glance: Amazon | Best Buy | Micro Center | Newegg
When it comes to the 11th Gen i7 CPU, Newegg has the best price at $399 ($30 off), but once again there is a wide swath of pricing. The most expensive is Micro Center at $519, with Amazon and Best Buy in the middle at $418 and $419 respectively.
Also, if you don’t necessarily need an unlocked processor, you can get the standard 11700 from Newegg for $32 less.
Intel Core i9-11900K: Where to Buy
US Intel Core i9-11900K retailers at a glance: Amazon | Best Buy | Newegg
The 11th Gen i9 is where pre-ordering starts to get a little tricky, as it hasn’t actually started yet! The product pages are live, so keep checking back as you may get lucky.
As far as pricing goes, Amazon’s cost is not available yet, but the cheapest is Best Buy at $549. Newegg offers the CPU at a far pricier $613.
Intel’s next-generation desktop chips are finally here: after a brief preview at CES, the company is fully unveiling its 11th Gen Core desktop chips (better known by their codename, Rocket Lake-S).
Leading the pack is Intel’s new flagship chip, the Core i9-11900K, with eight cores, 16 threads, boosted clock speeds up to 5.3GHz, support for DDR4 RAM at 3,200MHz, a total of 20 PCIe 4.0 lanes, and backwards compatibility with Intel’s 400 Series chipsets.
Eagle-eyed Intel fans might notice that the new chip is, on paper, actually a downgrade from last year’s top model, the Core i9-10900K, which offered 10 cores and 20 threads (and a similar boosted clock speed of 5.3GHz).
That’s because, with its 11th Gen Rocket Lake-S chips, Intel is debuting its first new desktop core architecture in over half a decade: Cypress Cove. Cypress Cove finally replaces the Skylake microarchitecture, which the company has been using since its 6th Gen chips in 2015.
But the Cypress Cove design isn’t a whole new microarchitecture: it’s based on the Willow Cove designs and technologies that the company has been using in its 11th Gen 10nm Tiger Lake chips, which Intel is backporting to its 14nm production process.
Since those designs were meant for 10nm chips, though, Intel is limited in the number of cores it can fit when scaling them up to 14nm; hence the reduction in core count year over year. But Intel still says that the new chips will offer better performance (at least, in some cases) than the 10th Gen, with the new core architecture enabling up to 19 percent higher IPC (instructions per cycle) than the previous generation.
Intel’s argument here is effectively that sheer core count isn’t enough on its own — frequency and per-core performance matter, too, and thanks to the maturity of the 14nm production process, Intel is very good at cranking out every last drop of performance from these chips.
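A crude throughput model shows the tradeoff: under the (simplistic) assumption of perfectly parallel, IPC- and frequency-bound work at equal clocks, eight faster cores land slightly behind ten slower ones in aggregate while winning clearly per core:

```python
def relative_throughput(cores, ipc_factor, freq_ghz=1.0):
    # Naive model: perfectly parallel work, throughput proportional
    # to cores x frequency x IPC
    return cores * freq_ghz * ipc_factor

old = relative_throughput(10, 1.00)  # i9-10900K: 10 cores, baseline IPC
new = relative_throughput(8, 1.19)   # i9-11900K: 8 cores, +19% IPC
print(round(new / old, 3))           # 0.952: aggregate throughput slightly down on paper
```

In practice clock speeds differ and few workloads scale perfectly, which is why Intel’s gaming benchmarks can still favour the 8-core part.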
Intel 11th Gen Desktop Chips

| Model | Cores/Threads | Base clock (GHz) | Boost clock (GHz) | Turbo Boost Max 3.0 (GHz) | Thermal Velocity Boost, single / all cores (GHz) | Smart Cache | TDP (W) | Graphics | Recommended Price |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| i9-11900K | 8/16 | 3.5 | Up to 5.1 | Up to 5.2 | Up to 5.3 / 4.8 | 16M | 125 | Intel UHD Graphics 750 | $539 |
| i9-11900 | 8/16 | 2.5 | Up to 5.0 | Up to 5.1 | Up to 5.2 / 4.7 | 16M | 65 | Intel UHD Graphics 750 | $439 |
| i7-11700K | 8/16 | 3.6 | Up to 4.9 | Up to 5.0 | NA | 16M | 125 | Intel UHD Graphics 750 | $399 |
| i7-11700 | 8/16 | 2.5 | Up to 4.8 | Up to 4.9 | NA | 16M | 65 | Intel UHD Graphics 750 | $323 |
| i5-11600K | 6/12 | 3.9 | Up to 4.9 | NA | NA | 12M | 125 | Intel UHD Graphics 750 | $262 |
| i5-11600 | 6/12 | 2.8 | Up to 4.8 | NA | NA | 12M | 65 | Intel UHD Graphics 750 | $213 |
| i5-11500 | 6/12 | 2.7 | Up to 4.6 | NA | NA | 12M | 65 | Intel UHD Graphics 750 | $192 |
| i5-11400 | 6/12 | 2.6 | Up to 4.4 | NA | NA | 12M | 65 | Intel UHD Graphics 730 | $182 |
And Intel’s benchmarks (obviously) support that argument: head to head with last year’s Core i9-10900K, the i9-11900K offered between 8 and 14 percent better performance in games like Gears 5, Grid 2019, Microsoft Flight Simulator, and Total War: Three Kingdoms. Intel also says that its top chip outperforms AMD’s flagship Ryzen 9 5900X processor in those titles, although by slightly smaller margins (between 3 and 11 percent better, according to Intel’s benchmarks).
That said, Intel’s tests were all running at 1080p, so we’ll have to stay tuned for more comprehensive benchmarking down the line on a wider range of titles — and particularly, at 4K resolution.
The new architecture also brings other improvements, including up to 50 percent better integrated graphics performance thanks to the company’s new Xe graphics, which pack one-third more EUs than the outgoing Gen9 design.
Given that these are desktop chips that will almost certainly be paired with a high-end discrete graphics card, that’s not the most groundbreaking improvement, however. And while Intel will be offering several F-series models of the new chips without GPUs, the overall design is still the same on those models. That means that Intel isn’t going to be offering any niche models that ditch integrated GPUs to try to fit in more cores, at least for now.
The new chips also feature other improvements. The 11th Gen chips add Resizable BAR, for a frame rate boost on compatible Nvidia and AMD graphics cards. There’s built-in support for both USB 3.2 Gen 2×2 at 20Gbps as well as Intel’s own Thunderbolt 4, along with DDR4-3200 RAM. And Intel has added four additional Gen 4 PCIe lanes, for a total of 20.
As is traditional for a major new chip launch, Intel is also introducing its 500 series motherboards alongside the new processors, but the Rocket Lake-S CPUs will also be backwards compatible with 400 series motherboards.
Additionally, there are some new overclocking options with the new chips for users looking to squeeze out even more performance. Specifically, Intel’s Extreme Tuning Utility software is getting refreshed with a new UI and some updated features alongside the 11th Gen chips.
The new 11th Gen Intel desktop processors are available starting today.
Intel’s upcoming Rocket Lake CPUs are almost upon us, and yet again we have more leaked benchmarks pertaining to the Core i9-11900K, Core i7-11700K, and Core i5-11400. Tweeted by legendary benchmark database detective APISAK, we have CPU-Z benchmark results for these three chips, with the Core i9 and Core i7 pumping out some amazing single-threaded scores.
While these results are highly favorable to Intel, keep in mind that CPU-Z, like most benchmarks, can favor one CPU architecture over another, so be careful in trusting these results. We also aren’t sure if these tests were run at standard stock settings. In either case, the results paint a promising picture for Rocket Lake’s single-threaded performance.
CPU-Z Benchmark Results

| CPU | CPU-Z Single-Threaded Test | CPU-Z Multi-Threaded Test |
| --- | --- | --- |
| Core i9-11900K | 716 | 6539 |
| Core i7-11700K | 719 | N/A |
| Core i5-11400 | 544 | 4012 |
| Ryzen 9 5950X | 658 | 12366 |
| Ryzen 9 5900X | 633 | 8841 |
| Ryzen 7 5800X | 650 | 6593 |
| Ryzen 5 5600X | 643 | 4814 |
Intel’s Core i9 and Core i7 Rocket Lake chips dominate in the single-threaded CPU-Z test — both chips sit comfortably above the 700 mark. Compared to AMD’s best offering, the 5950X, the Rocket Lake chips are roughly 9% faster.
Of course, Rocket Lake’s IPC gains won’t make up for reduced core counts, so it’s no surprise that the Ryzen 9 5950X and 5900X win in the multi-threading department.
But, if we limit our comparisons to just the eight-core parts, the Ryzen 7 5800X makes up a lot of ground against the 11900K, and is just 0.8% quicker. This is within the margin of error, so we can safely say both chips are equal in this test. Unfortunately, the 11700K has no multi-threaded score, so that chip is out of the picture for now.
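The percentage comparisons in this section come straight from the CPU-Z scores above; for instance:

```python
def pct_faster(a, b):
    # How much faster score a is than score b, in percent
    return (a - b) / b * 100

print(round(pct_faster(716, 658), 1))    # 8.8: 11900K vs 5950X, single-thread
print(round(pct_faster(719, 658), 1))    # 9.3: 11700K vs 5950X, single-thread
print(round(pct_faster(6593, 6539), 1))  # 0.8: 5800X vs 11900K, multi-thread
```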
We don’t know why the 5800X makes up all its performance losses from the single-threaded test in the multi-threaded test, but it could be due to reduced turbo frequencies on the Core i9 part, as well as architectural differences between the two chips.
Intel’s upcoming mid-range SKU, the Core i5-11400, is the weakest of the bunch, coming in 15-17% slower than the 5600X in the single- and multi-threaded tests. However, like previous xx400 Core i5s, we can expect the 11400 to have reduced clock speeds to help drive costs down.
We’ll have to wait for a Core i5-11600K result to have a fair comparison against AMD’s Ryzen 5 5600X.
If the CPU-Z benchmarks are to be trusted, Intel’s Core i9-11900K and i7-11700K could make our list of best CPUs and climb the ranks in our CPU Benchmark hierarchy for single-threaded workloads.
Although Intel has not yet officially launched its 11th-Gen Core processors for desktops codenamed Rocket Lake, these CPUs were available from a single retailer for a brief period of time, so enthusiasts have already begun experimenting. Recently, one experimenter decided to remove the Core i7-11700K’s lid (delid) to reveal the die underneath.
This week MoeBen, an enthusiast from Overclock.net forums, delidded Intel’s Core i7-11700K processor. Even though he used special tools for delidding, the CPU died as a result of his manipulations.
The main thing that strikes the eye about Intel’s Rocket Lake is its rather massive die size. A quick comparison of Rocket Lake’s silicon to delidded Intel’s previous-generation processors reveals that the die of Intel’s eight-core Core i7-11700K is both ‘taller’ and ‘wider’ than the die of Intel’s 10-core Core i9-10900K. Also, the new CPU uses a slightly different packaging with resistors placed differently.
Rough napkin math based on the size of Intel’s LGA115x/1200 packaging (38 mm × 38 mm) puts the Rocket Lake die at around 11.78 mm × 24.58 mm, or 289.5 mm². Such a large die area puts Rocket Lake into the league of the company’s LCC high-end desktop and server processors. For example, Intel’s 10-core Skylake-SP with a massive cache is around 322 mm².
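That kind of estimate works by scaling pixel measurements from the delid photo against the known 38 mm package width. The pixel values below are hypothetical placeholders chosen to reproduce the article’s numbers; only the method matters:

```python
def die_size_mm(pkg_px, die_px_w, die_px_h, pkg_mm=38.0):
    # Scale die pixel measurements against the known 38 mm LGA1200 package width
    scale = pkg_mm / pkg_px  # mm per pixel
    return round(die_px_w * scale, 2), round(die_px_h * scale, 2)

# Hypothetical pixel measurements, chosen to reproduce the article's estimate
w, h = die_size_mm(pkg_px=1000, die_px_w=310.0, die_px_h=646.8)
print(w, h)          # 11.78 24.58
print(round(w * h))  # 290 (~289.5 mm^2)
```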
Intel’s Rocket Lake processors pack eight cores based on the Cypress Cove microarchitecture (which is a derivative of the company’s Willow Cove microarchitecture), an integrated GPU featuring the Xe architecture, a new media encoding/decoding engine, a revamped display pipeline, and a new memory controller.
Essentially, Rocket Lake uses CPU and GPU IP designed for Intel’s 10 nm SuperFin process technology, yet since it is made on one of Intel’s 14 nm nodes, it is natural that this IP consumes more silicon area. Consequently, it is not surprising that the new CPU is substantially bigger than its predecessor despite having fewer cores. Obviously, since these cores are larger (and faster), they take up more die space.
Intel is projected to officially launch its Rocket Lake processors on March 30, 2021.
A surge in crypto mining interest has led not only to users seeking out the best mining GPUs, but since graphics cards are so hard to find in stock and the GPU price index for cards on eBay is just crazy, mining with laptops is becoming a thing. In fact, we have even seen mining farms that only use notebooks. Now, MSI is trying to advertise its latest GE76 Raider notebook as a mobile mining machine.
In an official blog post, MSI describes how it plans to use one of its latest gaming notebooks, the 17.3-inch GE76 Raider with Intel’s Core i9-10980HK processor and Nvidia’s GeForce RTX 3080 GPU inside, to mine for one month. To mine, MSI will use the NiceHash platform (see how to mine Ethereum) as well as the Excavator miner with the DaggerHashimoto algorithm.
MSI admits that one of its top-of-the-range gaming notebooks is hardly the most cost-efficient mining option, but since it is hard to get a higher-end graphics card, miners may still want to try it.
MSI claims that its GE76 Raider has a hash rate of around 52.8 MH/s, which is just a little below that of a desktop GeForce RTX 3070 graphics card. Assuming that the laptop consumes 240W of power and the cost of power is $0.12 per kWh, then the machine will bring about $134.08 in profits per month, according to CryptoCompare.
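The profit estimate can be reproduced with simple arithmetic. In this sketch, the gross revenue figure is back-derived from the quoted $134.08 profit, since actual mining revenue depends on the ETH price and network difficulty at any given moment:

```python
# Sketch of the mining profitability arithmetic from the figures above.
power_w = 240                 # assumed laptop power draw, watts
price_per_kwh = 0.12          # electricity cost, USD per kWh
hours_per_month = 24 * 30
electricity_cost = power_w / 1000 * hours_per_month * price_per_kwh
gross_revenue = 154.82        # assumed, so that profit matches the article
profit = gross_revenue - electricity_cost
print(f"Electricity: ${electricity_cost:.2f}, profit: ${profit:.2f}/month")
```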
MSI does not talk about the long-term effects of using its laptop for mining or whether the components are built to endure years of 24/7 use. Yet it sends a clear signal to potential buyers that its gaming notebooks could be used for mining.
Puget Systems recently shared sales data showing that its sales of AMD-powered systems have now passed the 50% mark, unseating Intel from what was once a 100% share of the company’s sales. This win comes even though Puget Systems stopped selling AMD systems back in 2015 because Team Red was no longer competitive on the performance front. Since resuming AMD builds in 2017, Puget has seen explosive growth in the number of AMD systems it ships.
Puget Systems is a boutique system vendor specializing in higher-performance systems that range from smaller system builds to the highest-end workstations. Hence, the data strongly implies that AMD’s gains in the higher-end desktop PC and workstation markets are accelerating.
Puget’s sales of AMD-powered systems took quite some time to accelerate, with the lion’s share of its sales growth occurring after November 2019, by which point AMD’s full Ryzen 3000 series lineup was shipping. That makes a lot of sense, as the Ryzen 3000 processors marked the debut of 12- and 16-core processors for mainstream desktop platforms. Those platforms lend themselves to lower price points than competing Intel HEDT processors but come with enough horsepower to match Intel’s expensive HEDT chips in many workloads.
The debut of AMD’s Threadripper 3000 series in early 2020 obviously helped accelerate AMD’s gains in the workstation market, and Puget’s sales of AMD-powered systems started a long uptick at the beginning of 2020. Naturally, those processors are a good fit for the highest-performance PCs, falling right into Puget’s target audience. Intel’s Cascade Lake processors, like the Core i9-10980XE, aren’t in the same class as the Threadripper processors, and Intel hasn’t refreshed its HEDT lineup in more than a year.
Fast-forward to the November 2020 debut of AMD’s beastly Ryzen 5000 processors, like the Ryzen 9 5950X and 5900X, and AMD began its final push to unseat Intel as Puget’s top-selling CPU brand. That makes plenty of sense, as the Ryzen 5000 processors have overtaken Intel’s competing processors in pretty much every metric that matters, and you can see the explosive gains in Ryzen sales right at the 5000-series launch. This isn’t surprising: the Ryzen 5000 chips have taken over the top spots on our CPU Benchmarks hierarchy by substantial margins, giving AMD a commanding performance lead. As you can see, there’s currently a pretty even split between Puget’s Ryzen 5000 and Threadripper sales, but that could change in the coming months now that Threadripper Pro has finally come to market.
AMD’s return to prominence in the desktop PC market is well known, but despite its remarkable turnaround, the company still only holds 19.3% of the overall desktop PC market share. As you can see in the chart above, which plots data from industry analyst firm Mercury Research, AMD still has plenty of room to grow.
Intel still holds a huge advantage in the mainstream OEM markets with the types of systems we see sold at big box stores, but given the trends we see at Puget Systems, we can expect AMD to begin taking over that segment too, at least once it works out its supply issues. Ultimately, supply is AMD’s biggest current constraint, but recent trends suggest the company is working out the kinks, as more Ryzen 5000 series processors become readily available at retail. Of course, that leads one to wonder just how much more lopsided the situation at Puget would be if Ryzen chips were fully available.
After almost a decade of total market dominance, Intel has spent the past few years on the defensive. AMD’s Ryzen processors continue to show improvement year over year, with the most recent Ryzen 5000 series taking the crown of best gaming processor: Intel’s last bastion of superiority.
Now, with a booming hardware market, Intel is preparing to make up some of that lost ground with the new 11th Gen Intel Core Processors. Intel is claiming these new 11th Gen CPUs offer double-digit IPC improvements despite remaining on a 14 nm process. The top-end 8-core Intel Core i9-11900K may not be able to compete against its AMD rival, the Ryzen 9 5900X, in heavily multi-threaded scenarios, but the higher clock speeds and alleged IPC improvements could be enough to take back the gaming crown. Along with the new CPUs, there is a new chipset to match, the Intel Z590. Last year’s Z490 chipset motherboards are also compatible with the new 11th Gen Intel Core Processors, but Z590 introduces some key advantages.
First, Z590 offers native PCIe 4.0 support from the CPU, which means the PCIe and M.2 slots powered off the CPU will offer PCIe 4.0 connectivity when an 11th Gen CPU is installed. The PCIe and M.2 slots controlled by the Z590 chipset are still PCIe 3.0. While many high-end Z490 motherboards advertised this capability, it was not a standard feature for the platform. In addition to PCIe 4.0 support, Z590 offers USB 3.2 Gen 2×2 from the chipset. The USB 3.2 Gen 2×2 standard offers speeds of up to 20 Gb/s. Finally, Z590 boasts native support for 3200 MHz DDR4 memory. With these upgrades, Intel’s Z series platform has feature parity with AMD’s B550. On paper, Intel is catching up to AMD, but only testing will tell if these new Z590 motherboards are up to the challenge.
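To put the 20 Gb/s figure in perspective, a quick raw conversion to bytes per second; real-world throughput will be lower once encoding and protocol overhead are taken into account:

```python
# Raw line-rate conversion for USB 3.2 Gen 2x2 (20 Gb/s).
# This ignores encoding and protocol overhead, so it is an upper bound.
line_rate_gbit_s = 20
raw_gbyte_s = line_rate_gbit_s / 8
print(f"{raw_gbyte_s} GB/s raw, before overhead")  # 2.5 GB/s
```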
The ASRock Z590 Steel Legend WiFi 6E aims to be a durable, dependable platform for the mainstream market. It features a respectable 14-phase VRM that takes advantage of 50 A power stages from Vishay. Additionally, ASRock has included a 2.5 Gb/s LAN controller from Realtek as well as the latest Wi-Fi 6E connectivity. The board packs all the mainstream features most users need at a reasonable price. All that is left is to see how the ASRock Z590 Steel Legend WiFi 6E stacks up against the competition!
Rear Panel I/O:
2x Antenna Ports
1x PS/2 Mouse/Keyboard Port
1x HDMI Port
1x DisplayPort 1.4
1x Optical SPDIF Out Port
1x USB 3.2 Gen2 Type-A Port
1x USB 3.2 Gen2 Type-C Port
2x USB 3.2 Gen1 Ports
2x USB 2.0 Ports
1x RJ-45 LAN Port
5x HD Audio Jacks
Audio:
1x Realtek ALC897 Codec
Fan Headers:
7x 4-pin
Form Factor:
ATX Form Factor: 12.0 x 9.6 in.; 30.5 x 24.4 cm
Exclusive Features:
ASRock Super Alloy
XXL Aluminium Alloy Heatsink
Premium Power Choke
50A Dr.MOS
Nichicon 12K Black Caps
I/O Armor
Shaped PCB Design
Matte Black PCB
High Density Glass Fabric PCB
2oz copper PCB
2.5G LAN
Intel® 802.11ax Wi-Fi 6E
ASRock Steel Slot
ASRock Full Coverage M.2 Heatsink
ASRock Hyper M.2 (PCIe Gen4x4)
ASRock Ultra USB Power
ASRock Full Spike Protection
ASRock Live Update & APP Shop
Testing for this review was conducted using a 10th Gen Intel Core i9-10900K. Stay tuned for an 11th Gen update when the new processors launch!
TechPowerUp is one of the most highly cited graphics card review sources on the web, and we strive to keep our testing methods, game selection, and, most importantly, test bench up to date. Today, I am pleased to announce our newest March 2021 VGA test system, which brings several firsts for TechPowerUp. This is our first graphics card test bed powered by an AMD CPU: we are using the Ryzen 7 5800X 8-core processor based on the “Zen 3” architecture. The new test setup fully supports the PCI-Express 4.0 x16 bus interface to maximize performance of the latest generation of graphics cards from both NVIDIA and AMD. The platform also enables the Resizable BAR feature from PCI-SIG, allowing the processor to see the whole video memory as a single addressable block, which could potentially improve performance.
A new test system entails completely retesting every single graphics card used in our performance graphs. It allows us to retire some of the older graphics cards and game tests to make room for newer cards and games. It also lets us refresh our OS and testing tools, update games to their latest versions, and explore new game settings, such as real-time raytracing and newer APIs.
A VGA rebench is a monumental task for TechPowerUp. This time, I’m testing 26 graphics cards in 22 games at 3 resolutions, or 66 game tests per card, which works out to 1,716 benchmark runs in total. In addition, we have doubled our raytracing testing from two to four titles. We also made some changes to our power consumption testing, which is now more detailed and more in-depth than ever.
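The tally above works out as plain arithmetic:

```python
# Benchmark-run tally for the rebench described above.
cards = 26
games = 22
resolutions = 3
tests_per_card = games * resolutions
total_runs = cards * tests_per_card
print(tests_per_card, total_runs)  # 66 tests per card, 1716 runs in total
```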
In this article, I’ll share some thoughts on what was changed and why, while giving you a first look at the performance numbers obtained on the new test system.
Hardware
Below are the hardware specifications of the new March 2021 VGA test system.
Windows 10 Professional 64-bit Version 20H2 (October 2020 Update)
Drivers:
AMD: 21.2.3 Beta NVIDIA: 461.72 WHQL
The AMD Ryzen 7 5800X has emerged as the fastest processor we can recommend to gamers for play at any resolution. We could have gone with the 12-core Ryzen 9 5900X or even maxed out this platform with the 16-core 5950X, but neither would be faster at gaming, and both would be significantly more expensive. AMD certainly wants to sell you the more expensive (overpriced?) CPU, but the Ryzen 7 5800X is actually the fastest option because of its single CCD architecture. Our goal with GPU test systems over the past decade has consistently been to use the fastest mainstream-desktop processor. Over the years, this meant a $300-something Core i7 K-series LGA115x chip making room for the $500 i9-9900K. The 5900X doesn’t sell for anywhere close to this mark, and we’d rather not use an overpriced processor just because we can. You’ll also notice that we skipped upgrading to the 10-core “Comet Lake” Core i9-10900K from the older i9-9900K because we saw only negligible gaming performance gains, especially considering the large overclock on the i9-9900K. The additional two cores do squat for nearly all gaming situations, which is the second reason, besides pricing, that we decided against the Ryzen 9 5900X.
We continue using our trusted Thermaltake TOUGHRAM 16 GB dual-channel memory kit that served us well for many years. 32 GB isn’t anywhere close to needed for gaming, so I didn’t want to hint at that, especially to less experienced readers checking out the test system. We’re running at the most desirable memory configuration for Zen 3 to reduce latencies inside the processor: Infinity Fabric at 2000 MHz, memory clocked at DDR4-4000, in 1:1 sync with the Infinity Fabric clock. Timings are at a standard CL19 configuration that’s easily found on affordable memory modules—spending extra for super-tight timings usually is overkill and not worth it for the added performance.
The MSI B550-A PRO was an easy choice for a motherboard. We wanted a cost-effective motherboard for the Ryzen 7 5800X and don’t care at all about RGB or other bling. The board can handle the CPU and memory settings we wanted for this test bed, and the VRM barely gets warm. It also doesn’t come with any PCIe gymnastics: a simple PCI-Express 4.0 x16 slot is wired to the CPU without any lane switches along the way. The slot is metal-reinforced and looks like it can take quite some abuse over time. Even though I admittedly swap cards hundreds of times each year, probably even 1000+ times, it has never been an issue; insertion force just gets a bit softer, which I actually find nice.
Software and Games
Windows 10 was updated to 20H2
The AMD graphics driver used for all testing is now 21.2.3 Beta
All NVIDIA cards use 461.72 WHQL
All existing games have been updated to their latest available version
The following titles were removed:
Anno 1800: old, not that popular, CPU limited
Assassin’s Creed Odyssey: old, DX11, replaced by Assassin’s Creed Valhalla
Hitman 2: old, replaced by Hitman 3
Project Cars 3: not very popular, DX11
Star Wars: Jedi Fallen Order: horrible EA Denuvo makes hardware changes a major pain, DX11 only, Unreal Engine 4, of which we have several other titles
Strange Brigade: old, not popular at all
The following titles were added:
Assassin’s Creed Valhalla
Cyberpunk 2077
Hitman 3
Star Wars Squadrons
Watch Dogs: Legion
I considered Horizon Zero Dawn, but rejected it because it uses the same game engine as Death Stranding. World of Warcraft or Call of Duty won’t be tested because of their always-online nature, which enforces game patches that mess with performance—at any time. Godfall is a bad game, Epic exclusive, and commercial flop.
The full list of games now consists of Assassin’s Creed Valhalla, Battlefield V, Borderlands 3, Civilization VI, Control, Cyberpunk 2077, Death Stranding, Detroit Become Human, Devil May Cry 5, Divinity Original Sin 2, DOOM Eternal, F1 2020, Far Cry 5, Gears 5, Hitman 3, Metro Exodus, Red Dead Redemption 2, Sekiro, Shadow of the Tomb Raider, Star Wars Squadrons, The Witcher 3, and Watch Dogs: Legion.
Raytracing
We previously tested raytracing using Metro Exodus and Control. For this round of retesting, I added Cyberpunk 2077 and Watch Dogs: Legion. While Cyberpunk 2077 does not support raytracing on AMD, I still felt it’s one of the most important titles to test raytracing with.
While Godfall and DIRT 5 support raytracing, too, neither has had sufficient commercial success to warrant inclusion in the test suite.
Power Consumption Testing
The power consumption testing changes have been live for a couple of reviews already, but I still wanted to detail them a bit more in this article.
After our first Big Navi reviews I realized that something was odd about the power consumption testing method I’ve been using for years without issue. It seemed the Radeon RX 6800 XT was just SO much more energy efficient than NVIDIA’s RTX 3080. It definitely is more efficient because of the 7 nm process and AMD’s monumental improvements in the architecture, but the lead just didn’t look right. After further investigation, I realized that the RX 6800 XT was getting CPU bottlenecked in Metro: Last Light at even the higher resolutions, whereas the NVIDIA card ran without a bottleneck. This of course meant NVIDIA’s card consumed more power in this test because it could run faster.
The problem here is that I used the power consumption numbers from Metro for the “Performance per Watt” results under the assumption that the test loaded the card to the max. The underlying reason for the discrepancy is AMD’s higher DirectX 11 overhead, which only manifested itself enough to make a difference once AMD actually had cards able to compete in the high-end segment.
While our previous physical measurement setup was better than what most other reviewers use, I always wanted something with a higher sampling rate, better data recording, and a more flexible analysis pipeline. Previously, we recorded at 12 samples per second, but could only store minimum, maximum, and average. Starting and stopping the measurement process was a manual operation, too.
The new data acquisition system also uses professional lab equipment and collects data at 40 samples per second, which is four times faster than even NVIDIA’s PCAT. Every single data point is recorded digitally and stashed away for analysis. Just like before, all our graphics card power measurement is “card only”, not the “whole system” or “GPU chip only” (the number displayed in the AMD Radeon Settings control panel).
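With every data point recorded, reducing a trace to the summary statistics we report becomes trivial. A minimal sketch, with made-up sample values:

```python
# Reducing a recorded card-only power trace to summary statistics.
# The sample values below are made up for illustration.
samples_w = [212.4, 305.1, 318.7, 299.8, 310.2]  # card-only power, watts
avg_w = sum(samples_w) / len(samples_w)
print(f"min {min(samples_w)} W, max {max(samples_w)} W, avg {avg_w:.1f} W")
```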
Having all data recorded means we can finally chart power consumption over time, which makes for a nice overview. Below is an example data set for the RTX 3080.
The “Performance per Watt” chart has been simplified to “Energy Efficiency” and is now based on the actual power and FPS achieved during our “Gaming” power consumption testing run (Cyberpunk 2077 at 1440p, see below).
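The metric itself is straightforward: average FPS divided by average card power over the same run. Both numbers in this sketch are hypothetical, not measured results:

```python
# Energy efficiency as described above: average FPS divided by average
# card-only power for the same gaming run. Values are hypothetical.
avg_fps = 98.5        # hypothetical Cyberpunk 2077 1440p result
avg_power_w = 320.0   # hypothetical average card power for that run
efficiency = avg_fps / avg_power_w
print(f"{efficiency:.3f} FPS per watt")
```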
The individual power tests have also been refined:
“Idle” testing is now measuring at 1440p, whereas it used 1080p previously. This is to follow the increasing adoption rates of high-res monitors.
“Multi-monitor” is now 2560×1440 over DP + 1920×1080 over HDMI—to test how well power management works with mixed resolutions over mixed outputs.
“Video Playback” records power usage of a 4K30 FPS video that’s encoded with H.264 AVC at 64 Mbps bitrate—similar enough to most streaming services. I considered using something like madVR to further improve video quality, but rejected it because I felt it to be too niche.
“Gaming” power consumption is now using Cyberpunk 2077 at 1440p with Ultra settings—this definitely won’t be CPU bottlenecked. Raytracing is off, and we made sure to heat up the card properly before taking data. This is very important for all GPU benchmarking—in the first seconds, you will get unrealistic boost rates, and the lower temperature has the silicon operating at higher efficiency, which screws with the power consumption numbers.
“Maximum” uses Furmark at 1080p, which pushes all cards into its power limiter—another important data point.
Somewhat as a bonus, though I wasn’t sure how useful it would be, I added another run of Cyberpunk at 1080p, capped to 60 FPS, to simulate a “V-Sync” usage scenario. Running at V-Sync not only removes tearing, but also reduces the power consumption of the graphics card, which is perfect for slower single-player titles where you don’t need the highest FPS and would rather conserve some energy and have less heat dumped into your room. Just to clarify, we’re technically running a 60 FPS soft cap so that weaker cards that can’t hit 60 FPS (GTX 1650S and GTX 1660) won’t drop to 30 or 20 FPS as hard V-Sync would force, but run as fast as they can.
Last but not least, a “Spikes” measurement was added, which reports the highest 20 ms spike recorded in this whole test sequence. This spike usually appears at the start of Furmark, before the card’s power limiting circuitry can react to the new conditions. On RX 6900 XT, I measured well above 600 W, which can trigger the protections of certain power supplies, resulting in the machine suddenly turning off. This happened to me several times with a different PSU than the Seasonic, so it’s not a theoretical test.
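Conceptually, the spike search slides a 20 ms window over the trace and keeps the largest windowed average. This is a hedged sketch, not our actual analysis code; the 1 kHz sample rate and the trace values are assumptions for illustration only:

```python
# Sketch of a "highest 20 ms spike" search over a power trace.
# Sample rate and values are assumptions for illustration only.
sample_rate_hz = 1000
window = int(0.020 * sample_rate_hz)  # 20 samples = 20 ms
# Synthetic trace: steady load, a short transient (e.g. Furmark start), steady load.
trace_w = [300.0] * 100 + [650.0] * 25 + [310.0] * 100

spike_w = max(
    sum(trace_w[i:i + window]) / window
    for i in range(len(trace_w) - window + 1)
)
print(f"Highest 20 ms spike: {spike_w:.0f} W")  # 650 W for this synthetic trace
```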
Radeon VII Fail
Since we’re running with Resizable BAR enabled, we also have to boot with UEFI instead of CSM. When it was time to retest the Radeon VII, I got no POST, and it seemed the card was dead. Since there’s plenty of drama around Radeon VII cards suddenly dying, I already started looking for a replacement, but wanted to give it another chance in another machine, which had it working perfectly fine. WTF?
After some googling, I found our article detailing the lack of UEFI support on the Radeon VII. So that was the problem, the card simply didn’t have the BIOS update AMD released after our article. Well, FML, the page with the BIOS update no longer exists on AMD’s website.
Really? Someone on their web team made the decision to just delete the pages that contain an important fix to get the product working, a product that’s not even two years old? (launched Feb 7 2019, page was removed no later than Nov 8 2020).
Luckily, I found the updated BIOS in our VGA BIOS collection, and the card is working perfectly now.
Performance results are on the next page. If you have more questions, please do let us know in the comments section of this article.