The first benchmark results of Intel’s yet-to-be-announced eight-core Core i9-11950H ‘Tiger Lake-H’ processor for gaming notebooks have been published in Primate Labs’ Geekbench 5 database. The new unit expectedly beats Intel’s own quad-core Core i7-1185G7 CPU in multi-threaded workloads, but when it comes to comparisons with other rivals, its results are less clear-cut.
Intel’s Core i9-11950H processor had never appeared in leaks before, so it was surprising to see benchmark results of HP’s ZBook Studio 15.6-inch G8 laptop based on this CPU in Geekbench 5. The chip has eight cores based on the Willow Cove microarchitecture running at 2.60 GHz – 4.90 GHz, and it is equipped with a 24MB cache, a dual-channel DDR4-3200 memory controller, and a basic UHD Graphics core featuring the Xe architecture.
In Geekbench 5, the ZBook Studio 15.6-inch G8 powered by the Core i9-11950H scored 1,365 points in the single-thread benchmark and 6,266 points in the multi-thread benchmark. The system operated in the ‘HP Optimized (Modern Standby)’ power plan, though we do not know the maximum TDP supported in this mode.
| CPU | Single-Core | Multi-Core | Cores/Threads, uArch | Cache | Clocks | TDP | Link |
| --- | --- | --- | --- | --- | --- | --- | --- |
| AMD Ryzen 9 5980HS | 1,540 | 8,225 | 8C/16T, Zen 3 | 16MB | 3.30 ~ 4.53 GHz | 35W | https://browser.geekbench.com/v5/cpu/6027200 |
| AMD Ryzen 9 4900H | 1,230 | 7,125 | 8C/16T, Zen 2 | 8MB | 3.30 ~ 4.44 GHz | 35~54W | https://browser.geekbench.com/v5/cpu/6028856 |
| Intel Core i9-11900 | 1,715 | 10,565 | 8C/16T, Cypress Cove | 16MB | 2.50 ~ 5.20 GHz | 65W | https://browser.geekbench.com/v5/cpu/7485886 |
| Intel Core i9-11950H | 1,365 | 6,266 | 8C/16T, Willow Cove | 24MB | 2.60 ~ 4.90 GHz | ? | https://browser.geekbench.com/v5/cpu/7670672 |
| Intel Core i9-10885H | 1,335 | 7,900 | 8C/16T, Skylake | 16MB | 2.40 ~ 5.08 GHz | 45W | https://browser.geekbench.com/v5/cpu/6006773 |
| Intel Core i7-1185G7 | 1,550 | 5,600 | 4C/8T, Willow Cove | 12MB | 3.00 ~ 4.80 GHz | 28W | https://browser.geekbench.com/v5/cpu/5644005 |
| Apple M1 | 1,710 | 7,660 | 4C Firestorm + 4C Icestorm | 12MB + 4MB | 3.20 GHz | 20~24W | https://browser.geekbench.com/v5/cpu/6038094 |
The upcoming Core i9-11950H processor easily defeats its quad-core Core i7-1185G7 sibling for mainstream and thin-and-light laptops in multi-threaded workloads, which is not particularly surprising given that the i7-1185G7 has a 28W TDP, though the higher-clocked quad-core actually posts the better single-thread score in these particular runs. Meanwhile, the Core i9-11950H is behind AMD’s Ryzen 9 5980HS as well as Apple’s M1 in all kinds of workloads. Furthermore, its multi-thread score is behind that of its predecessor, the Core i9-10885H.
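To make those comparisons concrete, here is a minimal sketch (our own arithmetic, not part of the original report) that computes the percentage gaps directly from the Geekbench 5 scores in the table above.

```python
# Percentage gaps computed from the Geekbench 5 scores listed in the table above.
scores = {  # (single-core, multi-core)
    "Core i9-11950H": (1365, 6266),
    "Core i7-1185G7": (1550, 5600),
    "Ryzen 9 5980HS": (1540, 8225),
    "Core i9-10885H": (1335, 7900),
    "Apple M1":       (1710, 7660),
}

def gap(a, b, idx):
    """How far chip a is ahead of (+) or behind (-) chip b, in percent."""
    return (scores[a][idx] / scores[b][idx] - 1) * 100

print(f"MT vs Core i7-1185G7: {gap('Core i9-11950H', 'Core i7-1185G7', 1):+.1f}%")  # ~+11.9%
print(f"MT vs Ryzen 9 5980HS: {gap('Core i9-11950H', 'Ryzen 9 5980HS', 1):+.1f}%")  # ~-23.8%
print(f"MT vs Core i9-10885H: {gap('Core i9-11950H', 'Core i9-10885H', 1):+.1f}%")  # ~-20.7%
print(f"ST vs Apple M1:       {gap('Core i9-11950H', 'Apple M1', 0):+.1f}%")        # ~-20.2%
```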
Perhaps, the unimpressive results of the Core i9-11950H in Geekbench 5 are due to a preliminary BIOS, early drivers, wrong settings, or some other anomalies. In short, since the CPU does not officially exist, its test results should be taken with a grain of salt. Yet, at this point, the product does not look too good in this benchmark.
ASRock’s Z590 PG Velocita is a full-featured Z590 motherboard that includes three M.2 sockets, Killer based networking (including Wi-Fi 6E), capable power delivery, premium audio, and more. It’s a well-rounded mid-ranger for Intel’s Z590 platform.
For
Killer-based 2.5 GbE and Wi-Fi 6E Networking
10 USB ports on rear IO
Capable power delivery
Against
Last-gen audio codec
No USB 3.2 Gen2x2 Type-C on rear IO
Features and Specifications
Next up out of the ASRock stable is the Z590 PG Velocita. The Z590 version of this board comes with an improved appearance, enhanced power delivery, PCIe 4.0 capability for your GPU and M.2 device, fast Killer-based networking and more. Priced around $300, the PG Velocita lands as a feature-rich mid-range option in the Z590 landscape.
ASRock’s Z590 lineup is similar to the previous-generation Z490 product stack. At the time of writing, ASRock has 12 Z590 motherboards listed. At the top is the Z590 Taichi, followed by the PG Velocita we’re looking at here and three Phantom Gaming boards, including a Micro-ATX option. Two professional boards (the Z590 Pro4 and Z590M Pro4), two Steel Legend boards, two Extreme boards (also more on the budget end), and a Mini-ITX board round out the product stack. Between price, size, looks, and features, ASRock should have a board that works for everyone looking to dive headlong into Rocket Lake.
Performance testing on the PG Velocita went well and produced scores that are as fast as or faster than the other Z590 boards we’ve tested so far. The PG Velocita eschews Intel’s stock power specifications, allowing the Intel Core i9-11900K to stretch its legs versus boards that follow those specs more closely. Overclocking went well, with the board able to run our CPU at both stock speeds and the 5.1 GHz overclock we’ve settled on. Memory overclocking also went well, with this board running our DDR4 3600 sticks at 1:1, and DDR4 4000 was nice and stable after a few voltage tweaks.
The Z590 PG Velocita is an iterative update, just like most other Z590-based motherboards. The latest version uses a Killer-based 2.5 GbE and Wi-Fi 6E network stack, adds a front panel USB 3.2 Gen2x2 Type-C port, premium Realtek audio codec (though it is last generation’s flagship), three M.2 sockets and more. We’ll dig into these details and other features below. But first, here are the full specs from ASRock.
Along with the motherboard, the box includes several accessories ranging from cables to graphic card holders and an additional VRM fan. The included accessories should get you started without a trip to the store. Below is a complete list of all included extras.
Support DVD / Quick installation Guide
Graphics card holder
Wi-Fi Antenna
(4) SATA cables
(3) Screw package for M.2 sockets
(3) Standoffs for M.2 sockets
Wireless dongle USB bracket
3010 Cooling Fan with bracket
4010 Cooling Fan bracket
Once you remove the Z590 PG Velocita from the box, one of the first things you’ll notice (if you’re familiar with the previous model) are the design changes. ASRock sticks with the black and red theme but forgoes the red stenciling on the black PCB from the last generation. The VRM heatsinks are large, connected via heatpipe and actively cooled out of the box by a small fan hidden in the left heatsink. ASRock includes an additional small fan and brackets for the top VRM heatsink (we did not use this in any test). The rear IO cover also sports the black and red Phantom Gaming design theme, along with the ASRock branding lit up with RGB lighting. The heatsinks on the bottom half of the board cover the three M.2 sockets and the chipset heatsink. The latter sports a PCB and chip under clear plastic for a unique look. Overall, I like the changes ASRock made to the appearance of this board, and it should fit in well with more build themes.
As we look closer at the top half of the board, we start by focusing on the VRM area. These aren’t the most robust parts below the heatsink, so additional cooling is welcomed. Just above the VRM heatsinks are two 8-pin EPS connectors (one required) to power the processor. To the right of the socket area are four unreinforced DRAM slots with latches on one side. ASRock lists supported speeds up to DDR4 4800(OC) with a maximum capacity of 128GB. As always, your mileage may vary as support depends on the CPU’s IMC and the kit you use to reach those speeds.
Located above the DRAM slots, we find the first two (of seven) 4-pin fan headers. The CPU/Water Pump and Chassis/Water Pump headers both support 2A/24W, with the remainder of the fan headers supporting 1A/12W. There are plenty of fan/pump headers on this board, so you can run all of your fans and pumps from the motherboard without a separate controller if you choose. A third 4-pin header is located in this area, while a fourth is in an odd spot, just below the left VRM heatsink. All of the headers auto-sense whether a 3- or 4-pin connector is attached.
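As a quick sanity check on those header ratings, here’s a minimal sketch of the watts-to-amps conversion, assuming the standard 12V fan rail (an assumption on our part; the spec sheet only quotes the combined figures).

```python
# Fan/pump header ratings sanity check, assuming a standard 12V fan rail.
def max_current(power_w, voltage_v=12.0):
    """Maximum current (A) a header can supply at the given power rating."""
    return power_w / voltage_v

print(max_current(24))  # CPU/Water Pump and Chassis/Water Pump headers: 24W -> 2.0A
print(max_current(12))  # remaining fan headers: 12W -> 1.0A
```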
Just to the right of the fan headers up top are an ARGB (3-pin) and RGB header (4-pin). You’ll find the other two on the bottom edge of the board. The Polychrome Sync application controls these LEDs and any attached to the headers.
On the right edge are power and reset buttons, while just below those is the 24-pin ATX connector that powers the board. Below that are the first USB 3.2 Gen1 front-panel header and the USB 3.2 Gen2x2 Type-C front-panel header.
ASRock uses a 12-phase configuration for the CPU. Power goes through the 8-pin EPS connector(s) to the Renesas ISL69269 (X+Y+Z=12) controller, which feeds six Renesas ISL6617A phase doublers and, finally, the 12 Vishay 50A SIC654 DrMOS power stages. This provides 600A in total to the CPU. While that’s not the highest value we’ve seen, the VRMs easily handled our CPU at stock and overclocked settings, with some help from the active cooling fan. This board comes with another fan, but we chose not to use it and, after testing, found there wasn’t a need for it.
Moving down to the bottom half of the board, we’ll start on the left side with audio. Hidden below the plastic shroud is the premium Realtek ALC1220 codec. ASRock chose to go with last generation’s flagship solution instead of jumping up to the latest 4000-series Realtek codec, likely to cut costs. We also spy a few Nichicon Fine Gold audio capacitors poking through the shroud. This board doesn’t have a fancy DAC as more expensive boards tend to, but this solution will still be satisfactory for the overwhelming majority of users.
In the middle of the board, we see three full-length reinforced PCIe slots (and an x1 slot) as well as the heatsinks that cover the three M.2 sockets. Starting with the PCIe configuration, when using an 11th Gen CPU, the top two slots are PCIe 4.0 capable with the slot breakdown as follows: x16/x0, x8/x8, or x8/x8/x4 (PCIe 3.0). ASRock says the PG Velocita supports Quad CrossFireX, 3-Way CrossFireX and CrossFireX. As is increasingly common, there’s no mention of SLI support. The x1 slot is connected via the chipset and runs at PCIe 3.0 x1 speeds.
Looking at M.2 storage, the top socket, M2_1, is connected directly to the CPU and offers the fastest speeds (PCIe 4.0 x4 @ 64 Gbps), supporting up to 80mm devices. The second socket down, M2_2, is chipset-connected, supporting PCIe 3.0 x4 speeds and accepting SATA-based modules. The bottom socket, M2_3, is also fed from the chipset and runs both SATA-based and PCIe drives at 3.0 x4 speeds. If M2_2 is occupied, SATA ports 0/1 will be disabled. If M2_3 has a SATA-type drive installed, SATA 3 will be disabled. In the worst-case scenario, when all M.2 sockets are populated (one with a SATA drive), you’ll still have three SATA ports available. The top two sockets hold up to 80mm modules while the bottom supports up to 110mm drives.
To the right of the PCIe slots sits the chipset heatsink and its PCB-under-plexi look. Continuing to the right edge, we spot another 4-pin fan/pump header, the second USB 3.2 Gen1 header and six SATA ports. Below that is another 4-pin fan header and, finally, a clear CMOS button to reset your BIOS. Around the SATA ports are the mounting holes for the included GPU support bar. Including this in the box is a great value-add, especially with graphics cards seemingly getting larger and heavier as time goes on.
Across the board’s bottom are several headers, including more USB ports, fan headers and more. Below is the complete list, from left to right:
Front-panel audio
Thunderbolt header
UART header
RGB and ARGB headers
USB 2.0 header
TPM header
(2) Chassis/WP headers
Dr. Debug LED
Temperature sensor, water flow headers
Speaker
Front panel header
Flipping the board around to the rear IO area, there’s a pre-installed IO plate that matches the colors and design of the rest of the board. There are 10 USB ports: you get two USB 3.2 Gen 2 ports (Type-A and Type-C), six USB 3.2 Gen 1 ports, and two USB 2.0 ports, all of which have ESD protection. Two of these ports, outlined in red, are the Lightning ports. They are sourced from two different controller interfaces, allowing gamers to connect their mouse and keyboard with the lowest jitter latency, according to ASRock. On the video front, the PG Velocita includes an HDMI port and a DisplayPort for use with the processor’s integrated graphics.
On the networking front, the Intel (black) and Killer (blue) Ethernet ports are also here. The Killer LAN can communicate directly with the CPU, yielding lower latency than chipset-connected LAN, again according to ASRock. Next up are the antenna ports for Wi-Fi 6E and, finally, the gold-plated 5-plug audio stack plus SPDIF.
Chinese motherboard manufacturer Onda (via ZOL) has launched the brand’s new Chia-D32H-D4 motherboard. The model name alone is enough to tell you that this motherboard is aimed at farming Chia cryptocurrency, which has already caused hard drive price spikes in Asia.
Designed for mining, rather than to compete with the best motherboards for gaming, the Chia-D32H-D4 is most likely a rebranded version of Onda’s existing B365 D32-D4 motherboard. It measures 530 x 310mm, so the Chia-D32H-D4 isn’t your typical motherboard. In fact, Onda has produced a special case with an included power supply for this specific model. The unspecified 800W power supply arrives with the 80Plus Gold certification, while the case features five cooling fans.
The Chia-D32H-D4’s selling point is obviously the motherboard’s 32 SATA ports, allowing you to leverage up to 32 hard drives. The B365 chipset can only provide a limited number of SATA ports, so the Chia-D32H-D4 depends on third-party SATA controllers, such as those from Marvell, to get the count up to 32. We counted seven SATA controllers in the render of the motherboard. Assuming that each controller delivers up to four SATA ports, the remaining four should come from the B365 chipset itself.
At 18TB per drive, the motherboard can accommodate up to 576TB of storage for all your Chia farming activities — enough for around 5,760 101.4GiB plots. Based on the current Chia network stats, that would be enough for about 0.05% of the total Chia netspace, though that share is likely to decrease rapidly in the coming days if current trends continue, never mind the time required to actually generate that many plots.
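As a back-of-the-envelope check on those figures (our arithmetic, not Onda’s), the sketch below works out the total capacity and plot count; note that the ~5,760 figure corresponds to treating a plot as roughly 100GB, while using the exact 101.4GiB plot size gives a slightly lower number.

```python
# Back-of-the-envelope Chia capacity math for a fully populated Chia-D32H-D4.
DRIVES = 32
DRIVE_TB = 18                 # 18TB per drive
PLOT_BYTES = 101.4 * 2**30    # a standard k=32 plot occupies ~101.4GiB

total_bytes = DRIVES * DRIVE_TB * 1e12
print(f"Total capacity: {total_bytes / 1e12:.0f} TB")              # 576 TB
print(f"Plots (exact 101.4GiB): {total_bytes / PLOT_BYTES:,.0f}")  # ~5,290
print(f"Plots (~100GB approx.): {total_bytes / 1e11:,.0f}")        # ~5,760, as quoted above
```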
In terms of power connectors, the Chia-D32H-D4 comes equipped with a standard 24-pin power connector, one 8-pin EPS connector and up to two 6-pin PCIe power connectors. The latter are designed exclusively to power the hard drives.
Based on the LGA1151 socket and B365 chipset, the Chia-D32H-D4 is very flexible with regard to processor support. It’s compatible with Intel’s Skylake, Kaby Lake, Coffee Lake and Coffee Lake Refresh processors. The motherboard utilizes a modest six-phase power delivery subsystem, but it should be sufficient to support processors up to the Core i9 tier.
Besides the deep storage requirements, Chia farming relies heavily on memory as well. Creating a single Chia plot requires around 4GB of memory. The Chia-D32H-D4 offers four DDR4 memory slots, providing the opportunity to have up to 128GB of memory in the system. On paper, that’s enough to create up to 32 plots in parallel.
Expansion options on the Chia-D32H-D4 are limited to one PCIe x16 slot, one PCIe x1 slot and one M.2 slot. Connectivity, however, is pretty generous. For connecting displays, you can choose between the HDMI port or VGA port. There are also four USB 3.0 ports and two Gigabit Ethernet ports. A power button is located on both ends of the motherboard.
Onda hasn’t listed the Chia-D32H-D4 motherboard on its website, nor has it announced pricing. However, word on the street is that the motherboards are already in the hands of Chia farmers.
The best graphics cards should let you play your favorite games with stunning visual effects, including life-like reflections and shadows. Sure, ray tracing may not radically improve the look of some games, but you should be the one to decide whether or not to enable it. Getting locked out just because it doesn’t run well is no fun. But which graphics cards perform best in ray tracing, and what sort of performance should you expect from Nvidia and AMD? To find out, we tested all the ray-tracing capable GPUs from the two major graphics brands.
Ray Tracing Test Hardware
We’ve gathered all of the latest AMD RDNA2 and Nvidia Ampere GPUs into one place and commenced benchmarking. We’ve also included the fastest and slowest Nvidia Turing RTX GPUs from the previous generation, to show the full spectrum of performance. You can see the complete list of GPUs we’ve benchmarked along with specs for our test PC, which uses a Core i9-9900K paired with 32GB of DDR4-3600 memory. All of the graphics cards are reference models from AMD and Nvidia, with the exception of the RTX 3060 12GB — Nvidia doesn’t make a reference card, but the EVGA card we used does run reference clocks.
The premise sounds simple enough: Run a bunch of ray tracing benchmarks on all the GPUs. Things aren’t quite so simple, however, as not every ray tracing enabled game will run on every GPU. Most will, but Wolfenstein Youngblood unfortunately uses pre-VulkanRT extensions to the Vulkan API and thus requires an Nvidia RTX card. Maybe the game will get a patch to VulkanRT at some point, but probably not. There are likely other pre-existing games that supported RTX cards back before AMD’s RX 6000 series launched that don’t properly work, but most of the games we’ve tried are now working okay.
We’ve selected ten of the current DirectX Raytracing (DXR) games that work on both AMD and Nvidia GPUs for these ray tracing benchmarks. Given Nvidia’s pole position in the RT hardware world — its RTX 20-series cards launched in the fall of 2018, over two years before AMD’s RX 6000-series parts — it’s no surprise that most DXR games were focused on Nvidia hardware. However, we did select two of the current four AMD-promoted games with DXR, just to see how things might change. Targeted developer optimizations are certainly possible.
The ten games are: Bright Memory Infinite, Control, Cyberpunk 2077, Dirt 5, Fortnite, Godfall, Metro Exodus, Minecraft, Shadow of the Tomb Raider, and Watch Dogs Legion. Dirt 5 and Godfall are the AMD-promoted games, while most of the others are Nvidia-promoted, the exception being Bright Memory Infinite — it’s currently a standalone benchmark of the upcoming expanded version of Bright Memory. We’ve tested at ‘reasonable’ quality levels for ray tracing, which mostly means maxed out settings, though we did step down a notch or two in Cyberpunk 2077 and Fortnite.
Besides DXR, eight of the games also support Nvidia’s DLSS (Deep Learning Super Sampling) technology, which uses an AI trained network to upscale and anti-alias frames in order to boost performance while delivering similar image quality. DLSS has proven to be a critical factor in Nvidia’s ray tracing push, as rendering at a lower resolution and then upscaling can result in far better framerates. Metro Exodus and Shadow of the Tomb Raider currently use DLSS 1.0, which wasn’t quite as nice looking and had some other oddities (Metro is slated to get a DLSS 2.0 update in the near future), so we’ve confined our DLSS testing to the six remaining games that implement DLSS 2.0/2.1, and we’ve tested all of these with DLSS in Quality mode — the best image quality mode with 2X resolution upscaling, which tends to result in similar image fidelity as native rendering with temporal AA.
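As a rough illustration of why rendering at a lower resolution helps so much, here’s a minimal sketch of the render-resolution math for DLSS Quality mode. The ~2/3 per-axis render scale is the commonly cited figure for Quality mode and is an assumption on our part, not something specified in this article.

```python
# Approximate internal render resolution for DLSS Quality mode,
# assuming a ~2/3 per-axis render scale (an assumption, not from the article).
def dlss_quality_internal(width, height, axis_scale=2/3):
    return round(width * axis_scale), round(height * axis_scale)

for w, h in [(1920, 1080), (2560, 1440)]:
    iw, ih = dlss_quality_internal(w, h)
    savings = 1 - (iw * ih) / (w * h)
    print(f"{w}x{h} -> renders ~{iw}x{ih} ({savings:.0%} fewer pixels before upscaling)")
# 1920x1080 -> renders ~1280x720 (56% fewer pixels before upscaling)
# 2560x1440 -> renders ~1707x960 (56% fewer pixels before upscaling)
```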
Because ray tracing tends to be extremely demanding, we’ve opted to stick with testing at only 1080p and 1440p. Nvidia’s cards may be able to manage playable framerates at 4K with DLSS in some cases, but most of the cards simply aren’t cut out to handle games at 4K native with DXR. We’ll start with the native benchmarks at each resolution and then move on to DLSS 2.0 Quality testing.
Ray Tracing Benchmarks at 1080p
Running 1080p ultra with DXR enabled already pushes several of the cards well below a steady 60 fps. Only the RTX 3060 Ti (which is about the same performance as an RTX 2080 Super) and the RX 6800 and above average more than 60 fps across our test suite. Even then, there are games where performance dips well below that mark.
Fortnite ends up as the most demanding ray tracing game right now, followed closely by Cyberpunk 2077 and Bright Memory Infinite. All of those use DXR for multiple effects, including shadows, lighting, reflections, and more, which is why they’re so demanding. Control and Minecraft also use plenty of ray tracing effects, and Minecraft actually implements what Nvidia calls “full path tracing” — the simple block graphics make it easier to do more ray tracing calculations.
Most of the games that only implement one ray tracing effect — Dirt 5, Godfall, and Shadow of the Tomb Raider only use RT for shadows, while Metro uses it for global illumination and Watch Dogs Legion uses it for reflections — perform better, though the RTX 2060 still struggles to hit 30 fps in several games. Godfall is an interesting case as well, as not only is it AMD promoted, but it appears to use more VRAM, which can tank performance on cards with less than 12GB VRAM at times.
Overall, the 3090 and 3080 take top honors, followed by the RX 6900 XT and RX 6800 XT. The RTX 2080 Ti and RTX 3070 are effectively tied, as are the RX 6800 and RTX 3060 Ti, with the RX 6700 XT and RTX 3060 12GB also landing close together. Only the RTX 2060 really falls off the pace set by the other cards. Without the two AMD-promoted games, the RX 6900 XT would have ended up closer to the RTX 3070, though it’s still interesting to see how performance varies by game — AMD’s GPUs did reasonably well in Dirt 5, Fortnite, Godfall, Metro Exodus, and Shadow of the Tomb Raider.
As you’d expect, enabling DLSS 2x upscaling via the Quality mode changes the rankings a lot. By restricting the benchmarks to the six games with DLSS 2.0 support, suddenly AMD’s best only manages to rank at about the same level as the RTX 3060 Ti and RTX 3070 — and that’s before turning on DLSS! With DLSS Quality mode enabled, only the RTX 2060 falls behind AMD’s 6900 XT and 6800 XT in the overall rankings. Of course, as noted earlier, all of these games are inherently more Nvidia-promoted, though the level of promotion varies quite a bit.
It’s also interesting to see that the RTX 2080 Ti falls a bit further behind the RTX 3070 now. That makes sense, as Ampere’s Tensor cores have up to four times the throughput of Turing’s Tensor cores (2X for raw throughput, and another 2X for sparsity). Even with more memory and memory bandwidth, the 2080 Ti is only moderately faster than the RTX 3060 Ti.
Looking at the individual charts, even at 1080p — a resolution that tends to be more CPU limited — there’s still plenty of differentiation between the various GPUs. Enabling DLSS also results in impressive performance improvements even at the top of the product stack with the RTX 3090 and 3080. Cyberpunk 2077 looks to be the most CPU-limited, topping out at just under 80 fps regardless of settings on our test system, and Watch Dogs Legion also appears to encounter a bit of CPU bottlenecking. Both have lots of NPC characters roaming around, which helps explain why they hit the CPU harder.
Bright Memory Infinite and Fortnite end up as the two biggest beneficiaries of DLSS Quality mode. The 3060 Ti with DLSS nearly matches the RTX 3090 at native in BMI, while in Fortnite the 3060 Ti and above with DLSS all beat the 3090 at native. Control also shows some significant performance gains, and even the RTX 2060 manages to clear 60 fps now.
Despite the lack of VRAM, the RTX 2060 with DLSS actually turns in better overall performance than the RX 6700 XT and RX 6800. It may not be significantly faster than the 6800, but that it’s even mentioned in the same breath shows just how much DLSS 2.0 helps, and how badly AMD needs to get its FidelityFX Super Resolution (FSR) into the hands of game developers.
There are now over 30 shipping games with DLSS 2.0 support, and you don’t need to have ray tracing enabled to see performance benefits from DLSS — there are significantly more games with DLSS support than there are games with ray tracing support right now. Sixteen shipping games have DLSS 2.0/2.1 support that don’t utilize ray tracing, for example. Plus, Unreal Engine and Unity both have built-in DLSS 2.0 support, meaning developers using either of those engines can easily enable DLSS in their games.
Ray Tracing Benchmarks at 1440p
Bumping the resolution up to 1440p doesn’t change the overall rankings at all at native resolution, though the margin of victory does increase quite a bit in some cases. The RTX 3080 and 3090 are the main beneficiaries of the higher resolution, while the RTX 2060 takes a pretty hard hit to performance — it’s the only GPU that couldn’t average 30 fps or more across our test suite.
Not surprisingly, multiple games do fall below 30 fps on multiple GPUs at 1440p. Only the 3080 and 3090 break 30 fps in Bright Memory Infinite and Cyberpunk 2077, and only the 3090 manages to do so in Fortnite. Godfall meanwhile clearly punishes the 2060’s lack of VRAM, where it’s about one third the performance of the RTX 3060. Several GPUs also struggled in Control, Minecraft, and Watch Dogs Legion.
It should be pretty obvious that, of the potential ray tracing effects, reflections tend to be the most demanding, with shadows being the least demanding. Not coincidentally, RT reflections often have the most noticeable effect on image fidelity. Ray traced shadows can be nice, but the various shadow mapping techniques have gotten quite good at ‘faking’ it.
Where DLSS was a potential nice extra at 1080p, it’s almost required to get good performance on most GPUs at 1440p. Without DLSS, none of the GPUs we tested can break 60 fps in the overall average performance chart, but with DLSS even the RTX 3070 (barely) gets there. Memory bandwidth clearly becomes a differentiating factor as well, with the 3080 and 3090 really pulling ahead in the most demanding titles — which is actually all six games in our DLSS test suite.
Again, Nvidia dominates the performance charts once DLSS Quality mode gets turned on. Only the RTX 2060 fails to beat the RX 6900 XT in our overall results, and it’s basically tied with the RX 6800 XT. That applies to the individual results as well, though the RX 6900 XT does tie the RTX 3060 in Fortnite. In Minecraft, meanwhile, even the RTX 2060 comes out ahead of the RX 6900 XT — along with the non-DLSS 3070 and below.
There’s only so much DLSS can accomplish, of course. Bright Memory Infinite, Cyberpunk 2077, Fortnite, and Watch Dogs Legion still fail to break 30 fps with the RTX 2060. That’s probably the main reason why we’re not seeing an RTX 3060 6GB card — though it exists on laptops and may eventually show up on desktops. (Sigh.) If you care about image quality enough to want ray tracing, you’d be well advised to get a card with more VRAM rather than less. It’s too bad that Nvidia’s cards (outside of the 3060 and 3090) generally aren’t as generous with VRAM as AMD’s cards.
Ray Tracing Winner: Nvidia, by a lot
Considering Nvidia was the first company to begin shipping ray tracing capable GPUs, over two years ago, it’s not too surprising that it comes out ahead in the ray tracing benchmarks. Technologies like DLSS prove Nvidia wasn’t just whipping something up as quickly as possible, either. It knew how demanding ray tracing would be, and looked at how movie studios were optimizing performance for inspiration. Denoising of path traced images, which is at least somewhat similar to upscaling via DLSS, can dramatically improve performance.
Today, Nvidia has second-generation ray tracing hardware and third-generation Tensor cores in the RTX 30-series GPUs. AMD meanwhile has first-generation ray accelerators, and no direct equivalent of Nvidia’s Tensor cores or DLSS. Perhaps AMD’s FSR will eventually show up and prove that the Tensor cores aren’t strictly necessary, but after nearly six months since first hearing about FidelityFX Super Resolution, we’re becoming increasingly skeptical.
As it stands now, even without DLSS, Nvidia clearly leads in the majority of games that use DirectX Raytracing. Look at our rasterization-only GPU benchmarks and you’ll find the RX 6900 XT and RX 6800 XT in spots two and three, with the RX 6800 in fifth place. With DXR, the 3080 goes from being just barely behind the 6800 XT to leading by over 30%. The same goes for the RTX 3070 and RX 6800: Without DXR, the 6800 is about 12% faster than the 3070; with DXR, the 3070 turns the tables and leads by 15%.
Turn on DLSS Quality mode and things go from bad to worse for Team Red. The RTX 3080 more than doubles the performance of the RX 6900 XT, never mind the 6800 XT. The RTX 3070 also more than doubles the performance of the RX 6800. Heck, even the RTX 3060 12GB beats the 6900 XT by 16% at 1080p and 23% at 1440p. Bottom line: AMD needs FSR, and it really should have had a working solution before the RDNA2 GPUs and consoles even launched. Better late than never, hopefully.
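For readers who want to see how such "X% faster" figures are derived, here’s a small sketch; the fps values below are purely hypothetical placeholders chosen to mirror the 12%/15% flip described above, not measurements from our testing.

```python
# Hypothetical illustration of how "X% faster" figures are derived from average fps.
# The fps values are placeholders, not measured results from this article.
def percent_faster(fps_a, fps_b):
    """How much faster card A is than card B, in percent."""
    return (fps_a / fps_b - 1) * 100

raster = {"RX 6800": 112.0, "RTX 3070": 100.0}  # hypothetical rasterization averages
dxr    = {"RX 6800": 100.0, "RTX 3070": 115.0}  # hypothetical DXR averages

print(percent_faster(raster["RX 6800"], raster["RTX 3070"]))  # 12.0 -> the 6800 leads by 12%
print(percent_faster(dxr["RTX 3070"], dxr["RX 6800"]))        # 15.0 -> with DXR, the 3070 leads by 15%
```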
Of course there’s still a bigger question of how much ray tracing really benefits the player in most games. The best ray tracing games like Control, Cyberpunk 2077, Fortnite, and Minecraft show substantial visual improvements with ray tracing, to the point where we’d much rather have it on than off. (Okay, not in Fortnite where fps matters more than visuals, though it can be nice in creative mode.) But for each of those games, there are at least five other games where ray tracing merely tanks performance without a major visual benefit.
It took half a decade or more for programmable shaders to really make a difference in the way games looked, and games of the future will eventually reach that point with ray tracing. But we’re not there yet. Bottom line: Nvidia reigns as the king of ray tracing GPUs for games. Now we just need more games where the visual benefits are worth the performance hit.
Earlier this month, HyperX and MSI were able to set a new DDR4 memory overclocking world record, reaching speeds of 7156MHz. Now, just a few weeks later, this record has been broken, with MSI and HyperX hitting 7.2GHz speeds.
HyperX is of course the gaming division of Kingston (soon to be acquired by HP) and has served the memory market for decades now. This particular record-breaking overclock was achieved by the MSI OC team in Taiwan, using an 8GB HyperX 4600MHz Predator memory stick, an MSI MEG Z590 UNIFY-X motherboard and an 11th Gen Intel Core i9-11900KF running at 3.5GHz.
The hardware setup is similar to what was used to set the 7156MHz record a few weeks ago, but the motherboard has been swapped out for a different one. This paved the way for the MSI OC team to reach 7200MHz this time around.
As you would expect, HyperX is very pleased with the result, with the company’s DRAM business manager, Kristy Ernt, saying: “HyperX is thrilled to be part of this breakthrough in DDR4 overclocking history, with HyperX Predator memory used to set two world records within the past month. Our HyperX engineers continue to focus on improving high-speed yields to get faster products in the hands of our customers and push previously unattainable performance records.”
While you are unlikely to achieve an overclock this high at home using standard cooling methods, HyperX does sell a number of validated high-speed memory kits. The HyperX Predator DDR4 kit used here is available in speeds up to 4800MHz with latencies between CL12 and CL19. Single DIMMs can be found in capacities of up to 32GB, and if you get a kit with multiple DIMMs, you could install as much as 256GB of memory in a system.
KitGuru Says: HyperX is leading the overclocking race at this point and with DDR5 on the way, we have to wonder if this record will be broken again before we shift away from DDR4.
Despite Apple’s focus on developing its own chips, it looks like the company still needs AMD’s help for higher-power workstation GPUs. That’s according to new entries in the Geekbench 5 database, showing an unannounced ‘Radeon Pro W6900X’ SKU powering an Apple Mac Pro 7.1.
With the launch of macOS Big Sur 11.4 Beta, Apple introduced support for Radeon consumer-grade cards on its OS. Professional Radeon cards are not yet supported, but that might change soon with the new Mac Pro 7.1.
Initially found by Benchleaks, nine entries for a Mac Pro 7.1 equipped with an AMD Radeon Pro W6900X and running macOS 11.4 have appeared in the Geekbench 5 database. All of the entries seem to belong to the same system, which featured a 12C/24T Intel Core i9-10920X CPU and 192GB of DDR4-2933 memory.
The entries do not show the card’s specifications, but performance-wise, it scored slightly above the Radeon RX 6900 XT. It’s unclear if the card will be exclusive to Mac systems like the Radeon Pro Vega II, but compared to that card, the AMD Radeon Pro W6900X scored about 66% higher.
These entries coincide with the appearance of a photo showing an undisclosed AMD graphics card. The uploader didn’t share any information about the card, but we believe it might be the AMD Radeon Pro W6900X, the OEM variant of the Radeon Pro card we have previously shared, or a combination of both.
KitGuru says: Apple plans to become more independent from CPU and GPU manufacturers, but for now, it still depends heavily on the likes of Intel and AMD for high-powered solutions. Will Apple eventually release its own workstation-class CPUs and GPUs?
The Patriot Viper Steel RGB DDR4-3600 C20 is only worthy of consideration if you’re willing to invest your time to optimize its timings and if you can find the memory on sale with a big discount.
For
+ Runs at C16 with fine-tuning
+ Balanced design with RGB lighting
+ RGB compatibility with most motherboards
Against
– Very loose timings
– Overpriced
– Low overclocking headroom
Patriot, which is no stranger to our list of Best RAM, has many interesting product lines in its broad repertoire. However, the memory specialist recently revamped one of its emblematic lineups to keep up with the current RGB trend. As the name conveys, the Viper Steel RGB series arrives with a redesigned heat spreader and RGB illumination.
The new series marks the second time that Patriot has incorporated RGB lighting onto its DDR4 offerings, with the first being the Viper RGB series that debuted as far back as 2018. While looks may be important, performance also plays a big role, and the Viper Steel RGB DDR4-3600 memory kit is here to show us what it is or isn’t made of.
Viper Steel RGB memory modules come with the standard black PCB and a matching matte-black heat spreader. It was nice on Patriot’s part to keep the aluminum heat spreader as clutter-free as possible. Only the golden Viper logo and the typical specification sticker are present on the heat spreader, and the latter is removable.
At 44mm (1.73 inches), the Viper Steel RGB isn’t excessively tall, so we expect it to fit under the majority of CPU air coolers on the market. Nevertheless, we recommend you double-check that you have enough clearance for the memory modules. The RGB light bar features five customizable lighting zones. Patriot doesn’t provide a program to control the illumination, so you’ll have to rely on your motherboard’s software. The compatibility list includes Asus Aura Sync, Gigabyte RGB Fusion, MSI Mystic Light Sync, and ASRock Polychrome Sync.
The Viper Steel RGB is a dual-channel 32GB memory kit, so you receive two 16GB memory modules with an eight-layer PCB and dual-rank design. Although Thaiphoon Burner picked up the integrated circuits (ICs) as Hynix chips, the software failed to identify the exact model. However, these should be AFR (A-die) ICs, more specifically H5AN8G8NAFR-VKC.
You’ll find the Viper Steel RGB defaulting to DDR4-2666 and 19-19-19-43 timings at stock operation. Enabling the XMP profile on the memory modules will get them to DDR4-3600 at 20-26-26-46. The DRAM voltage required for DDR4-3600 is 1.35V. For more on timings and frequency considerations, see our PC Memory 101 feature, as well as our How to Shop for RAM story.
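To put those timings in perspective, here’s a small sketch of the standard first-word-latency conversion from CAS latency and data rate; it’s the usual way of expressing, in nanoseconds, what looser timings cost at a given speed.

```python
# First-word latency in nanoseconds: CAS cycles divided by the memory clock (data rate / 2).
def first_word_latency_ns(cas_latency, data_rate_mts):
    return cas_latency / (data_rate_mts / 2) * 1000

print(f"{first_word_latency_ns(20, 3600):.2f} ns")  # Viper Steel RGB at XMP (DDR4-3600 C20): ~11.11 ns
print(f"{first_word_latency_ns(16, 3600):.2f} ns")  # a typical DDR4-3600 C16 kit: ~8.89 ns
print(f"{first_word_latency_ns(19, 2666):.2f} ns")  # stock DDR4-2666 C19 defaults: ~14.25 ns
```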
Comparison Hardware
| Memory Kit | Part Number | Capacity | Data Rate | Primary Timings | Voltage | Warranty |
| --- | --- | --- | --- | --- | --- | --- |
| G.Skill Trident Z Royal | F4-4000C17D-32GTRGB | 2 x 16GB | DDR4-4000 (XMP) | 17-18-18-38 (2T) | 1.40 Volts | Lifetime |
| Crucial Ballistix Max RGB | BLM2K16G40C18U4BL | 2 x 16GB | DDR4-4000 (XMP) | 18-19-19-39 (2T) | 1.35 Volts | Lifetime |
| G.Skill Trident Z Neo | F4-3600C16D-32GTZN | 2 x 16GB | DDR4-3600 (XMP) | 16-16-16-36 (2T) | 1.35 Volts | Lifetime |
| Klevv Bolt XR | KD4AGU880-36A180C | 2 x 16GB | DDR4-3600 (XMP) | 18-22-22-42 (2T) | 1.35 Volts | Lifetime |
| Patriot Viper Steel RGB | PVSR432G360C0K | 2 x 16GB | DDR4-3600 (XMP) | 20-26-26-46 (2T) | 1.35 Volts | Lifetime |
Our Intel test system consists of an Intel Core i9-10900K and Asus ROG Maximus XII Apex on the 0901 firmware. On the opposite side, the AMD testbed leverages an AMD Ryzen 5 3600 and ASRock B550 Taichi with the 1.30 firmware. The MSI GeForce RTX 2080 Ti Gaming Trio handles the graphical duties on both platforms.
Intel Performance
Things didn’t go well for the Viper Steel RGB on the Intel platform. The memory ranked at the bottom of our application RAM benchmarks and came in last place on the gaming tests. Our results didn’t reveal any particular workloads where the Viper Steel RGB stood out.
AMD Performance
The loose timings didn’t substantially hinder the Viper Steel RGB’s performance. Logically, it lagged behind its DDR4-3600 rivals that have tighter timings. The Viper Steel RGB’s data rate allowed it to run in a 1:1 ratio with our Ryzen 5 3600’s FCLK so it didn’t take any performance hits, unlike the DDR4-4000 offerings. With a capable Zen 3 processor that can operate with a 2,000 MHz FCLK, the Viper Steel RGB will probably not outperform the high-frequency kits.
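Here’s a minimal sketch of the 1:1 (FCLK:MCLK) check described above; the 1,800 MHz FCLK ceiling is an assumption for a typical Zen 2 chip like our Ryzen 5 3600, not a figure taken from this review.

```python
# Ryzen 1:1 (FCLK:MCLK) check. DDR4 memory clock is half the data rate;
# staying 1:1 requires the Infinity Fabric clock (FCLK) to match it.
def runs_1_to_1(data_rate_mts, max_fclk_mhz=1800):
    mclk = data_rate_mts / 2
    return mclk, mclk <= max_fclk_mhz

print(runs_1_to_1(3600))  # (1800.0, True)  -> 1:1, no added latency penalty
print(runs_1_to_1(4000))  # (2000.0, False) -> drops to 2:1 on this CPU
```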
Overclocking and Latency Tuning
Overclocking potential isn’t the Viper Steel RGB’s strongest trait. Upping the DRAM voltage from 1.35V to 1.45V only got us to DDR4-3800. Although we had to maintain the tRCD, tRP, and tRAS at their XMP values, we could drop the CAS Latency down to 17.
Lowest Stable Timings
| Memory Kit | DDR4-3600 (1.45V) | DDR4-3800 (1.45V) | DDR4-4000 (1.45V) | DDR4-4133 (1.45V) | DDR4-4200 (1.45V) |
| --- | --- | --- | --- | --- | --- |
| G.Skill Trident Z Neo DDR4-3600 C16 | 13-14-14-35 (2T) | N/A | N/A | N/A | 19-19-19-39 (2T) |
| Crucial Ballistix Max RGB DDR4-4000 C18 | N/A | N/A | 16-19-19-39 (2T) | N/A | 20-20-20-40 (2T) |
| G.Skill Trident Z Royal DDR4-4000 C17 | N/A | N/A | 15-16-16-36 (2T) | 18-19-19-39 (2T) | N/A |
| Klevv Bolt XR DDR4-3600 C18 | 16-19-19-39 (2T) | N/A | N/A | 18-22-22-42 (2T) | N/A |
| Patriot Viper Steel RGB DDR4-3600 C20 | 16-20-20-40 (2T) | 17-26-26-46 (2T) | N/A | N/A | N/A |
As we’ve seen before, you won’t be able to run Hynix ICs at very tight timings. That’s not to say that the Viper Steel RGB doesn’t have any wiggle room though. With a 1.45V DRAM voltage, we optimized the memory to run at 16-20-20-40 as opposed to the XMP profile’s 20-26-26-46 timings.
Bottom Line
It comes as no surprise that the Viper Steel RGB DDR4-3600 C20 will not beat competing memory kits that have more optimized timings. The problem is that C20 is basically at the bottom of the barrel by DDR4-3600 standards.
The Viper Steel RGB won’t match or surpass the competition without serious manual tweaking. The memory kit’s hefty $199.99 price tag doesn’t do it any favors, either. To put it into perspective, the cheapest DDR4-3600 2x16GB memory kit on the market starts at $154.99, and it checks in with C18. Unless Patriot rethinks the pricing for the Viper Steel RGB DDR4-3600 C20, the memory kit will likely not be on anyone’s radar.
Sometimes a tiny little thing can drastically improve performance and the user experience. Atlast! has developed just this kind of thing for one of its latest designs — it now uses a heat-pipe cooling system to passively cool both an SSD and the motherboard chipset.
The performance of modern high-end SSDs depends heavily on their cooling as high-end controllers tend to throttle when they overheat under high loads. Normally, SSD makers equip their products with heat spreaders that can do the job well, assuming the drives are not installed adjacent to a high-performance graphics card, and there’s sufficient airflow inside the case.
Fanless systems by definition do not have airflow from an active cooling solution (like a fan), instead relying on the air brought in naturally from the outside. As such, higher-end SSDs can easily overheat in passive PCs, which causes performance loss and frustration.
Atlast!, a fanless PC specialist, this week has announced (via FanlessTech) that it now equips its Sigao Model B desktop with a special cooling solution that uses a heat pipe to cool the motherboard chipset and an M.2 2280 SSD. The solution is basically a specially-machined aluminum plate that covers the chipset and the SSD.
Also, the Sigao Model B now uses an Asus B560 Mini-ITX motherboard and can be powered by Intel’s 10-core Core i9-10900T ‘Comet Lake’ or 8-core Core i9-11900T ‘Rocket Lake’ processor.
According to a test conducted by the manufacturer, the tiny device works quite well. At a room temperature of 21C, Samsung’s 980 Pro SSD idled at 36C. After a few minutes of running an ‘intense read/write test at 1GB/s,’ the temperature rose by 3C. Meanwhile, after using the drive for two hours under ‘a constant heavy load,’ its temperature only rose to 45C, which is well below the level at which an SSD starts to throttle. Unfortunately, Atlast! didn’t disclose the temperature of Samsung’s 980 Pro in the Sigao Model B when the drive relies only on its graphene-based heat spreader.
Note that since the Asus B560I motherboard used by Atlast! supports two M.2 2280 SSDs, the new Sigao Model B can be equipped with two drives. The second SSD, located on the underside of the motherboard, is also thermally connected to the case for cooling.
The SSD cooling plate is now installed into Atlast!’s Sigao Model B desktops by default without an upcharge.
Intel’s latest low-power eight-core Core i9-11900T and Core i7-11700T ‘Rocket Lake’ desktop processors with a 35W TDP for LGA1200 motherboards are already available in Europe and Japan. But there’s no sign of them in the U.S. yet.
Most performance enthusiasts are eager to get Intel’s unlocked K-series processors with a 95W – 125W TDP that can boost their clocks sky-high and support all the latest technologies. But there are also enthusiasts who prefer small form-factor, low-power builds yet would still like CPUs with eight or ten cores and all the latest technologies. Intel typically addresses these users with its T-series processors featuring a 35W TDP, but sometimes these chips are hard to get.
Intel formally introduced its low-power eight-core Core i9-11900T and Core i7-11700T ‘Rocket Lake’ CPUs with a 35W TDP along with their high-performance i9-11900K and i7-11700K brethren on March 30, 2021. But unlike the ‘unlocked and unleashed’ K-series processors, the new T-series products were not immediately available at launch. Fortunately, the situation is starting to change.
Akiba PC Hotline and Hermitage Akihabara report that the new 35W Core i9-11900T and Core i7-11700T CPUs in bulk and boxed versions are readily available in at least four stores in Tokyo, Japan. The higher-end i9-11900T model is sold for ¥60,478 – ¥62,700 with VAT, whereas the i7-11700T SKU is priced at ¥45,078 – ¥47,300 including tax.
Geizhals.EU, a price search engine in Europe, finds that Intel’s Core i9-11900T is available in dozens of stores in Austria, Germany, and Poland starting at €455 with VAT ($462 without taxes). Meanwhile, there are no offers for the cheaper Core i7-11700T at this point.
But the new Rocket Lake-T CPUs are not currently available in the U.S. at Amazon or Newegg. In fact, the stores are not even taking pre-orders on these parts. The situation has already prompted enthusiasts of low-power SFF builds to start a thread on Reddit to monitor their availability.
Intel’s latest Core i9-11900T and Core i7-11700T processors indeed look quite attractive. The CPUs feature eight cores with Hyper-Threading, 16MB of cache, a modern Xe-based integrated GPU and support up to 128GB of memory. Intel’s i9-11900T and i7-11700T CPUs feature relatively low base frequencies of 2.00 GHz and 1.50 GHz (respectively), but rather high all-core boost clocks of 3.60 GHz and 3.70 GHz (respectively). When installed into compatible motherboards, they can hit high frequencies and pretty much guarantee great system responsiveness and decent performance in mainstream applications (assuming adequate cooling). So while these are definitely niche chips, it’s not surprising that demand for these CPUs is fairly high.
(Pocket-lint) – Apple has revealed its new iMac – available in a single 24-inch size, it brings Apple’s own M1 processors to the iMac lineup as well as a new, thin-bezel design and seven colour finishes.
Here we’re pitching it against the 2020 27-inch model featuring Intel processors. We’re expecting that version to be replaced by a new-style, Apple M1-powered model in due course, perhaps with a 32-inch screen – certainly, we believe it’s set to be bigger than the current 27-inch size. That model will probably have an upgraded Apple Silicon processor, maybe the M2.
The old 2019 21.5-inch iMac model seems to still be available, but we suspect Apple will just be selling off old stock.
Design
2020 iMac: Familiar aluminium design with a black display surround
2021 iMac: New thinner design, seven different colour finishes
The iMac 2021 takes the iMac design up a level. It’s still very recognisable as an iMac and has the same ‘strip’ under the display, but is significantly thinner, without the bulge around the stand. There are also much thinner bezels with a white surround instead of black.
Crucially the 2021 iMac is now available in seven different colour finishes, however, not all are available to all buyers. There are two different models with very small differences. Primarily this is in the graphics, which we’ll come onto shortly, and two additional USB-C ports on the higher-end model. But whereas the ‘two ports’ model is available in four finishes, the more expensive ‘four ports’ model is available in all seven.
The 2019/2020 iMac retains the familiar aluminium design with a black display surround.
Displays
2020 iMac: 5K 27-inch display
2021 iMac: 24-inch 4.5K display
The older 2020 iMac features a 5K 27-inch display which has been in use for several years – as we’ve said above we expect it to be replaced by a larger model at some point soon, perhaps 32-inches. The new 2021 iMac introduces a 24-inch 4.5K display with smaller bezels than the 27-inch.
The 2019 21.5-inch iMac still appears to be available, though expect it to go end-of-life soon.
Processor and graphics
2020 iMac: Various Core i5/i7 options topped out by the 3.6GHz 10-core Core i9-10900K, AMD Radeon Pro graphics
2021 iMac: 8-core Apple M1 processor with 7 or 8 core graphics
The 2020 iMac is available with Intel’s 10th Generation Core i processors (Comet Lake) in 6- and 8-core variants of the Core i5 and i7. You can also upgrade to the range-topping 3.6GHz 10-core Core i9-10900K that’ll Turbo Boost to 5GHz. We had this in our review model and, as you’d expect, it absolutely flies.
For the 2021 iMac, both the two-port and four-port models have an 8-core Apple M1 processor under the hood. The graphics are where things differ slightly, with 7- or 8-core graphics respectively. The graphics options on the 2020 Intel iMac are varied, with several AMD Radeon Pro options, maxing out at the AMD Radeon Pro 5700 XT with 16GB of GDDR6 memory.
Storage and peripherals
2020 iMac: dual USB-C/Thunderbolt 3, four USB-A ports and an SD card slot
2021 iMac: Dual Thunderbolt/USB 4 ports, extra pair of USB-C ports on four-port model
All iMacs come with a Magic Keyboard and Magic Mouse, 2021 iMac available with Touch ID version of Magic Keyboard
The two-port 2021 iMac gives you dual Thunderbolt/USB 4 ports, while the four-port version gives you two additional USB-C ports.
There are stacks of storage options on the 2020 Intel iMacs and you can specify up to a huge 8TB of storage. On the 2021 M1 iMacs, though, things are a little more limited – we know the M1 chip is currently limited to 2TB of storage, and you can specify this on the four-port version. On the two-port version you can only get up to 1TB of storage.
The 2020 Intel iMac has dual USB-C/Thunderbolt 3 ports, four USB-A ports and an SD card slot. So the USB-A and SD slots are gone on the 2021 version. The headphone jack moves from the rear of the 2020 model to the side on the 2021 iMac, and the Ethernet port moves to the power brick (yes, really), as part of the magnetically attached power cable.
All iMacs come with a Magic Keyboard and Magic Mouse, but the high-end four-port 2021 iMac has a special Magic Keyboard with Touch ID. You can also upgrade the standard Magic Keyboard on the two-port version to the Touch ID model.
Verdict
The 2021 24-inch iMac is a clear step forward, but while it clearly supersedes the 2019 21.5-inch iMac, it’s not a complete replacement for the 2020 27-inch model. That’s because of the storage, processor and graphics options available on that model – and the power of the high-end Core i7 and Core i9 options.
We expect there to be a new larger iMac this year to replace the 27-inch model as well, probably with a new M2 processor.
There’s no doubt that the Predator Apollo RGB DDR4-4500 is a speedy memory kit. Unfortunately, the hefty price tag will probably scare off potential buyers.
It’s hard not to know Acer – it’s one of the more prominent mainstream brands in the computer industry. However, the company’s Predator sub-brand might not ring a bell for the typical computer user that’s not into gaming. Nonetheless, the Predator label is home to Acer’s premium gaming PCs, laptops, monitors, and chairs. To further expand its reach, Acer has created Predator Storage, a new family of high-performance storage and memory products that target enthusiasts and gamers alike.
Acer won’t actually manage Predator Storage, though. Following in HP’s footsteps, Acer has handed the reins over to Chinese OEM Biwin Storage to manufacture and commercialize Predator-branded memory and SSDs on its behalf in the United States and Canadian markets. Today marks Predator Storage’s first venture into the memory market. The sub-brand debuts with its Apollo RGB series of gaming memory that offers frequencies ranging from DDR4-3200 up to DDR4-5000.
The Predator Apollo RGB memory modules sport an aluminum heat spreader for effective heat dissipation. According to the brand, the design takes after a cyberpunk theme. It features a two-tone paint job with a mixture of black and silver colors and is carved in such a way that it exposes the majority of the LED diffuser. However, one thing to consider is that the Predator Apollo RGB measures 51.4mm (2.02 inches) tall, so you’ll need to make sure you have the necessary clearance space for the memory modules, especially if you’re using a large CPU air cooler.
As with any modern-day gaming memory, the Predator Apollo RGB is equipped with RGB lighting that you can configure to your heart’s content. Software isn’t provided for such purposes, but the memory is compatible with all the major RGB ecosystems, including Asus Aura Sync, Gigabyte RGB Fusion 2.0, MSI Mystic Light Sync, and ASRock Polychrome Sync.
Our Predator Apollo RGB memory kit checks in at an unorthodox data rate of DDR4-4500. There are so few DDR4-4500 memory kits on the market that we can count them on the fingers of one hand. As you can tell by now, the Predator Apollo RGB is a dual-channel 16GB memory kit, so it consists of two DDR4 memory modules with a density of 8GB each. The memory modules are based on a single-rank design and are manufactured with a 10-layer PCB and 15μm gold-plated contacts.
Leveraging Samsung’s K4A8G085WB-BCPB (B-die) ICs, the Predator Apollo RGB is rated for DDR4-4500 at 19-19-19-39 timings with a 1.45V DRAM voltage requirement. When the XMP 2.0 profile for the advertised speed isn’t active, the memory modules default to DDR4-2133 with automatic timings at 15-15-15-36. For more on timings and frequency considerations, see our PC Memory 101 feature, as well as our How to Shop for RAM story.
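For context on what DDR4-4500 offers on paper, here’s a small sketch of the theoretical peak bandwidth and first-word latency at the rated settings; the 64-bit-per-channel bus width is the standard DDR4 figure, and the DDR4-4600 line anticipates the overclock discussed later in this review.

```python
# Theoretical peak bandwidth and first-word latency at the Apollo RGB's rated settings.
def peak_bandwidth_gbs(data_rate_mts, channels=2, bus_bits=64):
    return data_rate_mts * 1e6 * (bus_bits / 8) * channels / 1e9

def first_word_latency_ns(cas, data_rate_mts):
    return cas / (data_rate_mts / 2) * 1000

print(f"{peak_bandwidth_gbs(4500):.0f} GB/s")      # ~72 GB/s dual-channel theoretical peak
print(f"{first_word_latency_ns(19, 4500):.2f} ns") # ~8.44 ns at XMP 19-19-19-39
print(f"{first_word_latency_ns(18, 4600):.2f} ns") # ~7.83 ns at a DDR4-4600 C18 overclock
```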
Comparison Hardware
| Memory Kit | Part Number | Capacity | Data Rate | Primary Timings | Voltage | Warranty |
| --- | --- | --- | --- | --- | --- | --- |
| Thermaltake ToughRAM RGB | R009D408GX2-4600C19A | 2 x 8GB | DDR4-4600 (XMP) | 19-26-26-45 (2T) | 1.50 | Lifetime |
| Predator Apollo RGB | BL.9BWWR.255 | 2 x 8GB | DDR4-4500 (XMP) | 19-19-19-39 (2T) | 1.45 | Lifetime |
| Patriot Viper 4 Blackout | PVB416G440C8K | 2 x 8GB | DDR4-4400 (XMP) | 18-26-26-46 (2T) | 1.45 | Lifetime |
| Klevv Cras XR RGB | KD48GU880-40B190Z | 2 x 8GB | DDR4-4000 (XMP) | 19-25-25-45 (2T) | 1.40 | Lifetime |
| TeamGroup T-Force Xtreem ARGB | TF10D416G3600HC14CDC01 | 2 x 8GB | DDR4-3600 (XMP) | 14-15-15-35 (2T) | 1.45 | Lifetime |
Our Intel test system consists of an Intel Core i9-10900K and Asus ROG Maximus XII Apex on the 0901 firmware. On the opposite side, the AMD testbed leverages an AMD Ryzen 5 3600 and ASRock B550 Taichi with the 1.30 firmware. The MSI GeForce RTX 2080 Ti Gaming Trio is the main graphics card in our RAM benchmarks.
Intel Performance
The Apollo RGB kit performed as expected on the Intel platform: the memory struggled against rivals with lower frequencies and optimized timings. However, it was surprising to see that the Apollo RGB even bested the T-Force Xtreem ARGB DDR4-3600 C14 kit, if only by a couple of points. The Apollo RGB ranked second place in the gaming chart.
AMD Performance
In contrast, the Apollo RGB performed best on the AMD platform. The memory kit managed to defeat all of its rivals except for the T-Force Dark Z FPS DDR4-4000 C16 kit. Gaming on the AMD platform also favored Predator Storage’s kit, as it jumped up to the top of the gaming chart.
Overclocking and Latency Tuning
Despite leveraging Samsung B-die ICs, increasing the DRAM voltage to 1.5V didn’t get us anywhere. Bumping it to 1.55V, however, allowed us to overclock the memory to DDR4-4600. In the process, we also dropped the timings from 19-19-19-39 to 18-18-18-38.
Lowest Stable Timings
| Memory Kit | DDR4-3600 (1.46V) | DDR4-4000 (1.45V) | DDR4-4200 (1.45V) | DDR4-4400 (1.45V) | DDR4-4500 (1.55V) | DDR4-4600 (1.55V) | DDR4-4666 (1.56V) |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Thermaltake ToughRAM RGB DDR4-4600 C19 | N/A | N/A | N/A | N/A | N/A | 18-24-24-44 (2T) | 20-26-26-45 (2T) |
| Predator Apollo RGB DDR4-4500 C19 | N/A | N/A | N/A | N/A | 18-18-18-38 (2T) | 18-18-18-38 (2T) | N/A |
| Patriot Viper 4 Blackout DDR4-4400 C18 | N/A | N/A | N/A | 17-25-25-45 (2T) | 21-26-26-46 (2T) | N/A | N/A |
| Klevv Cras XR RGB DDR4-4000 C19 | N/A | 18-22-22-42 (2T) | N/A | 19-25-25-45 (2T) | N/A | N/A | N/A |
| TeamGroup T-Force Xtreem ARGB DDR4-3600 C14 | 13-14-14-35 (2T) | N/A | 19-19-19-39 (2T) | N/A | N/A | N/A | N/A |
If you’re perfectly satisfied with DDR4-4500, the Apollo RGB kit is very happy with a 1.5V DRAM voltage and tight timings of 18-18-18-38. That was the lowest we could push the memory before instability kicked in.
Bottom Line
The first time is always the hardest, and despite this being the company's first foray into the memory market, Predator Storage did a good job with the Apollo RGB DDR4-4500 C19 kit. We won't delve into the memory's aesthetics since that's a subjective matter. Performance-wise, the Apollo RGB will not disappoint, but it will have a hard time contending with some DDR4-4000 and faster memory kits with tight timings, more specifically on Intel platforms. In its favor, the Apollo RGB does feature high-quality Samsung B-die ICs, so overclocking and tweaking are definitely on the menu, but your mileage will vary.
The Apollo RGB DDR4-4500 C19 kit's price tag will be the hardest thing to swallow for most consumers. The MSRP for the memory kit is $299.99, so it's on the more expensive end of the spectrum. It's hard to consider the Apollo RGB DDR4-4500 C19 at current pricing, especially when you have tough competitors like Patriot's Viper 4 Blackout DDR4-4400 C18, which only sets you back $134.99. However, hardware doesn't always retail at the manufacturer's established MSRP, especially when it comes to products like memory that tend to have volatile pricing, so it remains to be seen if the Apollo RGB DDR4-4500 C19 will maintain the $299.99 price tag when it lands at retailers this month.
With a Ryzen 9 5900X and an RTX 3080, both liquid-cooled for quiet operation in a compact case, Corsair's One a200 is easy to recommend, if you can afford it and find it in stock. Just know that your upgrade options are more limited than with larger gaming rigs.
For
+ Top-end performance
+ Space-saving, quiet shell
+ Liquid-cooled GPU and CPU
Against
– Expensive
– Limited upgrade options
For a whole host of reasons, AMD's Ryzen 9 5900X and Nvidia's RTX 3080 have been two of the hardest-to-find PC components since late last year. But Corsair has combined them both in a handy, compact, liquid-cooled bundle it calls the Corsair One a200.
The company's vertically-oriented One desktop debuted in 2018 and has since been regularly updated to accommodate current high-end components. This time around, the options include either AMD or Intel's latest processors (the latter called the One i200), and Nvidia's second-fastest consumer GPU, the RTX 3080.
Not much has changed in terms of the system's design, other than the addition of a USB Type-C port up front (where an HDMI port was on previous models). But with liquid cooling handling thermals for both the CPU and graphics in a still-impressively compact package, there's really little reason to change what was already one of the best gaming PCs for those who want something small.
The only real concern is pricing. At $3,799 as tested (including 32GB of RAM, a 1TB SSD and a 2TB HDD), you're definitely paying a premium for the compact design and slick, quiet cooling. But with the scarcity of these core components and the RTX 3080 regularly selling for well over $2,000 on its own on eBay, it's tough to discern what constitutes 'value' in the gaming desktop world at the moment. You may be able to find a system with similar components for less, but it won't likely be this small or slick.
Design of the Corsair One a200
Just like the One i160 model we looked at in 2019, the Corsair One a200 is a quite compact (14.96 x 7.87 x 6.93 inches) tower of matte-black metal with RGB LED lines running down its front. To get some sense of how small this system is compared to more traditional gaming rigs, we called Alienware's Aurora R11 "fairly compact" when we reviewed it, and it's 18.9 x 17 x 8.8 inches, taking up more than twice the desk space of Corsair's One a200.
The 750-watt SFX power supply in the a200 is mounted at the bottom, pulling in air that’s expelled at the top with the help of a fan. And the heat from the CPU and GPU will mostly be expelled out either side, as both are liquid cooled, with radiators mounted against the side panels.
The primary external difference between the updated a200 and previous models is up front, where the HDMI port that used to live next to the headphone/mic combo jack and the pair of USB-A ports has been replaced with a USB-C port. That makes for three front-facing USB ports, a surprising amount of front-panel connectivity for a system this compact. But there are only six more USB ports around back (more on that shortly).
Overall, while the design of the One a200 is pretty familiar at this point, it still looks and feels great, with all the external panels made out of metal. Just note that the matte finish does easily pick up finger smudges.
| Specification | Corsair One a200 |
| --- | --- |
| Ports | Front: 2x USB 3.2 Gen 1 (5 Gbps) Type-A, 1x USB 3.2 Gen 2 (10 Gbps) Type-C, combination mic/headphone jack; Rear: 4x USB 3.2 Gen 1 (5 Gbps) Type-A, 2x USB 3.2 Gen 2 (Type-A, Type-C), Ethernet, HD audio, 3x DisplayPort, 1x HDMI |
| Video Output | (3) DisplayPort 1.4a, (1) HDMI 2.1 |
| Power Supply | 750W Corsair SFX 80 Plus Platinum |
| Case | Corsair One Aluminum/Steel |
| Operating System | Windows 10 Home 64-Bit |
| Dimensions | 14.96 x 7.87 x 6.93 inches (380 x 200 x 176 mm) |
| Price As Configured | $3,799 |
Ports and Upgradability of the Corsair One a200
Since the Corsair One a200 is built around a compact Mini-ITX motherboard (specifically the ASRock B550 Phantom Gaming-ITX/ax), you won’t quite get the same amount of ports that you would expect with a larger desktop. Since we already covered the three USB ports and audio jack up front, let’s take a look at the back.
Here you'll find four USB 3.2 Gen 1 (5 Gbps) Type-A ports, plus two USB 3.2 Gen 2 ports (one Type-A and one Type-C). Also here are a 2.5 Gb Ethernet jack, three analog audio connections and connectors for the small antennae. The ASRock board also includes a pair of video connectors, but since you'll want to use the ports on the RTX 3080 instead, Corsair has blocked them off behind the I/O plate, so most people wouldn't even know they're there.
The video connections from the RTX 3080 graphics card live next to the Corsair SF750 power supply, and come in the form of three DisplayPort 1.4a ports and a single HDMI 2.1 connector.
As for internal upgradability, you can get at most of the parts if you're comfortable dismantling expensive PC hardware. But you can't add any RAM or storage without swapping out what's already there (or at least without removing the whole motherboard, more on that soon). That said, the 32GB of Corsair Vengeance LPX DDR4-3200 RAM, the 1TB PCIe 4.0 Force MP600 SSD and the 2TB Seagate 2.5-inch hard drive that are already here make for a potent cadre of components. If you need more RAM and storage (as well as more CPU cores), there's a $4,199 configuration we'll detail later.
To get inside the Corsair One a200, you don’t need any tools, but you’ll want to be a bit careful. Press a button at the rear top of the case (you have to press it quite hard) and the top, which also houses a fan, will pop up. But before you go yanking it away in haste, note that it’s attached via a fan cable that you can disconnect after first fishing the plug out from a hole inside the case.
To access the rest of the system you’ll have to remove two screws from each side. But again, don’t be careless, as radiators are attached to both side panels via short tubes, so the sides are a bit like upside-down gull-wing doors. You can’t really remove them without disconnecting the cooling plates from the CPU and GPU.
It’s fairly easy to remove the RAM, although the 32GB of Corsair Vengeance LPX DDR4-3200 occupies both of the slots. The 2TB Seagate 2.5-inch hard drive is also accessible from the left side, wedged under the PCIe riser cable that’s routed to the GPU on the other side.
At least the 1TB Force MP600 SSD on this model is mounted on the front of the motherboard under a heatsink, rather than behind the board on the i160 version we looked at a couple years ago.
You can open the right panel as well, though there’s not much to do here as the space is taken up by the GPU, a large radiator and a pair of fans mounted on the heatsink to move the RTX 3080’s heat through the radiator and out the vents on the side.
As with previous models, you should be able to replace the RTX 3080 with an air-cooled graphics card at some point, provided it has axial rather than blower-style cooling, and that it fits within the physical constraints of the chassis. But given that the RTX 3080 is the best graphics card you can buy, you may be ready for a whole new system by the time you start thinking about swapping out the graphics card here.
Aside from wishing there were more USB ports on the motherboard, I have no real complaints about the hardware here. If I were spending this much, I’d prefer a 2TB SSD, but at least the 1TB model Corsair has included is a PCIe 4.0 drive for the best speed possible. Technically the ASRock motherboard here has a second PCIe 3.0 M.2 slot, where you could install a second SSD. But it’s housed on the back of the motherboard, which would mean fairly major disassembly in cramped quarters, and remember that you’d have to disconnect the pump/cooling plate from the CPU before even attempting to do that.
Gaming Performance on the Corsair One a200
With AMD’s 12-core Ryzen 9 5900X and Nvidia’s RTX 3080 running the gaming show inside Corsair’s One a200 — and both of them liquid-cooled — we expected Corsair’s compact power tower to spit out impressive frame rates.
We pitted the a200 against MSI's Aegis RS 11th, which also has an RTX 3080 but an 8-core Intel Rocket Lake Core i7-11700K, and a couple other recent gaming rigs we've tested. Alienware's Aurora Ryzen Edition R10 sports a stepped-down Ryzen 7 5800X and a Radeon RX 6800 XT. And HP's Omen 30L, which we looked at near the end of 2020, was outfitted with a last-generation Intel Core i9-10900K and an RTX 3080 to call its own.
While the Corsair One a200 didn’t walk away from the impressive competition, it was almost always in the lead in our gaming tests. And that’s all the more impressive given most of the systems it competes with are much larger.
On the Shadow of the Tomb Raider benchmark (highest settings), the game ran at 147 fps at 1080p on the One a200, and 57 fps at 4K. The former ties it with the Aegis for first place here, and the latter beats both the Aegis and the Omen 30L, just slightly, giving Corsair’s system an uncontested win.
In Grand Theft Auto V (very high settings), the Corsair system basically repeated its previous performance, tying the MSI machine at 1080p and pulling one frame ahead of both the Omen and the MSI at 4K.
On the Far Cry New Dawn benchmark, the MSI Aegis pulled ahead at 1080p by 11 fps, but the One a200 still managed to tie the MSI and HP systems at 4K.
After trailing a bit in Far Cry at 1080p, the One a200 pulled ahead in Red Dead Redemption 2 (medium settings) at the same resolution, with its score of 117 fps beating everything else. And at 4K, the Corsair system’s 51 fps was again one frame ahead of both the MSI and Alienware systems.
Last up in Borderlands 3 (badass settings), the Corsair system stayed true to its impressive form. Its score of 137 fps at 1080p was a frame ahead of the MSI (and ahead of everything else). And at 4K, its score of 59 fps was only tied by the HP Omen.
Aside from the One a200’s gaming performance being impressive for its size, this is also one of the quietest high-end gaming rigs I’ve tested in a long time. Lots of heat shot out of the top of the tower while I played the Ancient Gods expansion of Doom Eternal, but fan noise was a constant low-end whirr. The large fan at the top does its job without doing much to make itself known, and the radiators on either side help move heat out of the case without adding to the impressively quiet noise floor.
We also subjected the Corsair One a200 to our Metro Exodus stress test gauntlet, in which we run the benchmark at the Extreme preset 15 times to simulate roughly half an hour of gaming. The Corsair tower ran the game at an average of 71.13 fps, with very little variation. The system started out the test at 71.37 fps on the first run, and dipped just to 71.05 fps on the final run. That’s a change of just a third of a frame per second throughout our stress test. It’s clear both in terms of consistent performance and low noise levels that the One a200’s cooling system is excelling at its job.
During the Metro Exodus runs, the CPU ran at an average clock speed of 4.2 GHz and an average temperature of 74.9 degrees Celsius (166.8 degrees Fahrenheit). The GPU’s average clock speed was 1.81 GHz, with an average temperature of 68.7 degrees Celsius (155.6 degrees Fahrenheit).
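If you want to double-check the consistency and thermal figures above, the arithmetic is trivial; the sketch below simply reproduces it in Python (the Fahrenheit values in the text are rounded).

```python
# Metro Exodus stress-test consistency, using the fps figures quoted above.
first_run, last_run = 71.37, 71.05
drop = first_run - last_run
print(f"Run-to-run drop: {drop:.2f} fps ({drop / first_run * 100:.2f}%)")

# Celsius-to-Fahrenheit conversion for the average CPU and GPU temperatures.
def c_to_f(celsius: float) -> float:
    return celsius * 9 / 5 + 32

print(f"CPU: 74.9 C = {c_to_f(74.9):.2f} F, GPU: 68.7 C = {c_to_f(68.7):.2f} F")
```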
Productivity Performance
While the Ryzen 9 5900X isn’t quite as potentially speedy on paper as the top-end 5950X (thanks to a slightly lower top boost clock and four fewer cores), it’s still a very powerful 12-core CPU. And paired with Nvidia’s RTX 3080, along with 32GB of RAM and a fast PCIe 4.0 SSD, the Corsair One a200 is just as potent in productivity and workstation tasks as it is playing games.
On Geekbench 5, an overall performance benchmark, the Corsair system was just behind the leading systems in the single-core test, with its score of 1,652. But on the multi-core test, its 11,968 was well ahead of everything else.
The Corsair PCIe Gen 4 SSD in the a200 blew past competing systems, transferring our 25GB of files at a rate of 1.27 GBps, with only the HP Omen’s WD SSD also managing to get close to the 1GBps mark.
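As a rough illustration of what that rate means in practice, and assuming the 25GB figure is decimal gigabytes and the rate is sustained for the whole copy, the sketch below converts throughput into elapsed time.

```python
# Elapsed time for the 25GB file-copy test at a given sustained rate.
# Assumes decimal units (1 GB = 10^9 bytes) and a constant transfer rate.
def copy_time_seconds(size_gb: float, rate_gbps: float) -> float:
    return size_gb / rate_gbps

for label, rate in [("Corsair One a200 (MP600)", 1.27), ("A ~1.0 GBps competitor", 1.00)]:
    print(f"{label}: {copy_time_seconds(25, rate):.1f} seconds")
```

At 1.27 GBps the copy wraps up in roughly 20 seconds, versus about 25 seconds for a drive hovering around the 1 GBps mark.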
And on our Handbrake video editing test, the Corsair One a200 transcoded a 4K video to 1080p in an impressive 4 minutes and 44 seconds, while all the other systems took well more than 5 minutes to complete the same task. Video editors in particular will be able to make good use of this system’s 12 cores and 24 threads of CPU might.
Software and Warranty for the Corsair One a200
The Corsair One a200 ships with a two-year warranty (plus lifetime customer support) and very little pre-installed software. Aside from Windows 10 Home, you get the company’s iCue software, which can be used to control both the lights as well as the system fans. The company even seems to have avoided the usual bloat of streaming apps and casual games like Candy Crush, which ship with almost all Windows machines these days.
Configuration Options for the Corsair One a200
If you’re after the AMD-powered Corsair a200 specifically, you have two configuration options. There’s the model we tested (Corsair One a200 CS-90200212), with a 12-core Ryzen 9 5900X, 32GB of RAM, a 1TB PCIe Gen 4 SSD, 2TB hard drive, and an RTX 3080 for $3,799. Or you can pay $400 more ($4,199) to step up to the 16-core Ryzen 5950X and double the RAM and SSD to 64GB and 2TB respectively (Corsair One Pro a200 CS-9040010). The latter configuration is overkill for gaming, but the extra storage, RAM and four more CPU cores are well worth the extra money if you can actually make use of them.
For those who aren't wedded to AMD, there's also the Intel-based Corsair One i200, which now includes 11th Gen "Rocket Lake" CPU options, with up to a Core i9-11900K and an RTX 3080, albeit running on a last-gen Z490 platform. It starts a little lower at $3,599. But that model is currently out of stock in any configuration with current-generation Intel and Nvidia components, leaving exact pricing up in the air as of publication.
We tried to do some comparison pricing, and were able to find a similarly equipped HP Omen 30L, as HP often sells gaming rigs on the more-affordable side of the spectrum. But when we wrote this, all Omen 30L systems with current-generation graphics cards were sold out on HP's site. We were able to find an Omen 30L on Amazon with an RTX 3080 and an Intel Core i9-10850K, along with similar RAM and storage as our Corsair a200, for $3,459. That's about $340 less than the a200, but the Omen 30L is also much larger than the a200 and has a now last-generation CPU with fewer cores, plus a slower SSD.
Bottom Line
With one of the best CPUs and graphics cards, both liquid cooled and quiet, in an attractive, compact package, Corsair's One a200 offers a whole lot to like. The $3,799 asking price is certainly daunting, but in these times, when that graphics card alone regularly sells on eBay for more than $2,000, the Ryzen 9 5900X often sells for close to $800, and most desktops with current-gen graphics cards are sold out, it's tough to say which high-end gaming rig is more or less of a bargain than another.
If you spend some time looking you can probably find a system with similar specs as the Corsair One a200 for a bit less. But unless and until the ongoing mining craze subsides, that system probably won’t cost substantially less than Corsair’s pricing. And with its impressively compact shell, quiet operation, and top-end performance in both gaming and productivity, the a200 is easy to recommend for those who can afford it. Just know that upgrading will be a bit more difficult and limiting than with a larger desktop, and if you need lots of USB ports, you may want to invest in a hub.
11th Generation Rocket Lake Processor (Image credit: Intel)
As with every generation of Intel processor, Silicon Lottery has started selling pre-binned Intel 11th Generation Rocket Lake chips. These processors are perfect for consumers who don’t want to play the silicon lottery and are willing to pay a small premium to get a guaranteed overclock.
Silicon Lottery currently offers pre-binned Core i5-11600K and Core i9-11900K processors. The company has also listed the Core i9-11900KF, but it’s seemingly sold out. Silicon Lottery backs its pre-binned parts with a limited one-year warranty that’s eligible for a one-time replacement.
The highest-binned Core i9-11900K sells for $879.99, 63.3% over Intel’s MSRP. This particular chip offers a 5.1 GHz boost clock across all eight Cypress Cove cores. In comparison to the Core i9-11900K’s default specifications, Silicon Lottery’s version offers a 6.2% higher all-core boost clock at the expense of a 63.3% premium.
On a different note, the fastest Core i5-11600K in Silicon Lottery's portfolio operates with a 5 GHz all-core boost clock. That represents an 8.7% upgrade, but with a 29.8% higher price tag.
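If you want to reproduce the percentages quoted above, the short Python sketch below shows the math, using the list prices and clocks from this article.

```python
# Premium and uplift math for Silicon Lottery's two top bins (figures from above).
def pct_over(new: float, base: float) -> float:
    return (new / base - 1) * 100

bins = [
    # (label, binned price, Intel MSRP, binned all-core GHz, stock all-core GHz)
    ("Core i9-11900K @ 5.1 GHz", 879.99, 539, 5.1, 4.8),
    ("Core i5-11600K @ 5.0 GHz", 339.99, 262, 5.0, 4.6),
]
for label, price, msrp, binned, stock in bins:
    print(f"{label}: +{pct_over(price, msrp):.1f}% over MSRP, "
          f"+{pct_over(binned, stock):.1f}% all-core boost")
```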
Intel 11th Generation Rocket Lake CPU Specifications
| Processor | Price | MSRP | Cores / Threads | Binned All-Core Boost (GHz) | Default All-Core Boost (GHz) |
| --- | --- | --- | --- | --- | --- |
| Core i9-11900K 5.1 GHz | $879.99 | $539 | 8 / 16 | 5.1 | 4.8 |
| Core i9-11900K 5.0 GHz | $699.99 | $539 | 8 / 16 | 5.0 | 4.8 |
| Core i9-11900K 4.9 GHz | $619.99 | $539 | 8 / 16 | 4.9 | 4.8 |
| Core i5-11600K 5.0 GHz | $339.99 | $262 | 6 / 12 | 5.0 | 4.6 |
| Core i5-11600K 4.9 GHz | $259.99 | $262 | 6 / 12 | 4.9 | 4.6 |
| Core i5-11600K 4.8 GHz | $249.99 | $262 | 6 / 12 | 4.8 | 4.6 |
However, the odds might not be too bad for consumers who want to take their chances at the silicon lottery. According to Silicon Lottery, 100% of Core i9-11900K samples can hit 4.9 GHz across all cores. Even 73% of the samples got to 5 GHz without hiccups. However, only 29% could do 5.1 GHz.
As for the Core i5-11600K, a 4.8 GHz all-core boost clock was possible on 100% of the samples, while 4.9 GHz was achievable on 81% of the chips. Only the top 17% of Core i5-11600K samples managed to peak at 5 GHz, though.
There's one missing detail in Silicon Lottery's statistics: the sample size. Without that value, you can't really assess the precision of the company's results, even if the odds look favorable at first glance.
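To show why the sample size matters, here's a hedged sketch: it takes the quoted 73% hit rate for 5.0 GHz on the Core i9-11900K and computes a normal-approximation 95% confidence interval for a few purely hypothetical batch sizes, since Silicon Lottery hasn't published the real ones.

```python
# Hypothetical illustration only: Silicon Lottery has not published its sample
# sizes, so the batch sizes below are assumptions, not real figures.
import math

def binomial_ci(p_hat: float, n: int, z: float = 1.96):
    """Normal-approximation 95% confidence interval for a proportion."""
    margin = z * math.sqrt(p_hat * (1 - p_hat) / n)
    return max(0.0, p_hat - margin), min(1.0, p_hat + margin)

for n in (30, 100, 1000):                  # assumed batch sizes
    low, high = binomial_ci(0.73, n)       # 73% of i9-11900Ks reached 5.0 GHz
    print(f"n = {n:4d}: 95% CI roughly {low * 100:.1f}% to {high * 100:.1f}%")
```

With a batch of 30 chips the true rate could plausibly sit anywhere from the high 50s to the high 80s, while a batch of 1,000 pins it down to within a few points.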
Unlike with previous generations, Silicon Lottery has no plans to offer its delidding service for Rocket Lake processors. Given the risks involved with delidding Rocket Lake chips, it's understandable why the company is hesitant to put them under the knife.
The Intel Core i5-11600K vs AMD Ryzen 5 5600X rivalry is a heated battle for supremacy right in the heart of the mid-range CPU market. AMD's Ryzen 5000 processors took the desktop PC lead from Intel's competing Comet Lake processors last year, upsetting our Best CPU for gaming recommendations and our CPU Benchmarks hierarchy. Intel's response comes in the form of its Rocket Lake processors, which dial up the power to extreme levels and bring the new Cypress Cove architecture to the company's 14nm process as Intel looks to upset AMD's powerful Zen 3-powered Ryzen 5000 chips.
Intel has pushed its 14nm silicon to the limits as it attempts to unseat the AMD competition, and that has paid off in the mid-range where Intel’s six-core Core i5-11600K weighs in with surprisingly good performance given its $232 to $262 price point.
Intel's aggressive pricing, and the fact that the potent Ryzen 5 5600X remains perpetually out of stock and price-gouged, have shifted the conversation entirely. For Intel, all it has to do is serve up solid pricing, deliver competitive performance, and make sure it has enough chips at retail to snatch away the win.
We put the Core i5-11600K up against the Ryzen 5 5600X in a six-round faceoff to see which chip takes the crown in our gaming and application benchmarks, along with other key criteria like power consumption and pricing. Let’s see how the chips stack up.
Features and Specifications of AMD Ryzen 5 5600X vs Intel Core i5-11600K
Rocket Lake Core i5-11600K vs AMD Zen 3 Ryzen 5 5600X Specifications and Pricing
| Processor | Suggested Price | Cores / Threads | Base (GHz) | Peak Boost (Dual/All Core) | TDP | iGPU | L3 |
| --- | --- | --- | --- | --- | --- | --- | --- |
| AMD Ryzen 5 5600X | $299 (and much higher) | 6 / 12 | 3.7 | 4.6 | 65W | None | 32MB (1×32) |
| Intel Core i5-11600K (KF) | $262 (K) – $237 (KF) | 6 / 12 | 3.9 | 4.9 (TB2) / 4.6 | 125W | UHD Graphics 750 (Xe, 32EU) | 12MB |
The 7nm Ryzen 5 5600X set a new bar for the mid-range with six Zen 3 cores and twelve threads that operate at a 3.7-GHz base and 4.6-GHz boost frequency. Despite AMD’s decision to hike gen-on-gen pricing, the 5600X delivered class-leading performance at its launch, not to mention a solid price-to-performance ratio. Things have changed since then, though, due to overwhelming demand coupled with pandemic-spurred supply chain disruptions, both of which have combined to make finding the Ryzen 5 5600X a rarity at retail, let alone at the suggested $299 pricing.
Intel’s Core i5-11600K also comes with six cores and twelve threads, but Team Blue’s chips come with the new Cypress Cove architecture paired with the aging 14nm process. Intel has tuned this chip for performance; it weighs in with a 3.9-GHz base, 4.9-GHz Turbo Boost 2.0, and 4.6-GHz all-core clock rates. All of these things come at the expense of power consumption and heat generation.
Intel specs the 14nm 11600K at a 125W TDP rating, but that jumps to 182W under heavy loads, while AMD’s denser and more efficient 7nm process grants the 5600X a much-friendlier 65W TDP rating that coincides with a peak of 88W. We’ll dive deeper into power consumption a bit later, but this is important because the Core i5-11600K comes without a cooler. You’ll need a capable cooler, preferably a 280mm liquid AIO or equivalent air cooler, to unlock the best of the 11600K.
Meanwhile, the AMD Ryzen 5 5600X comes with a bundled cooler that is sufficient for most users, though you would definitely need to upgrade to a better cooler if you plan on overclocking. Additionally, a more robust cooler will unlock slightly higher performance in heavy work, like rendering or encoding. Still, you’d need to do that type of work quite regularly to see a worthwhile benefit, so most users will be fine with the bundled cooler.
Both the Core i5-11600K and Ryzen 5 5600X support PCIe 4.0, though it is noteworthy that Intel's chipset doesn't support the speedier interface. Instead, devices connected to Intel's chipset operate at PCIe 3.0 speeds. That means you'll only have support for one PCIe 4.0 M.2 SSD slot on your motherboard, whereas AMD's chipset is fully enabled for PCIe 4.0, giving you more options for a plethora of faster devices.
Both chips also support two channels of DDR4-3200 memory, but Intel’s new Gear memory feature takes a bit of the shine off Intel’s memory support. At stock settings, the 11600K supports DDR4-2933 in Gear 1 mode, which provides the best latency and performance for most tasks, like gaming. You’ll have to operate the chip in Gear 2 mode for warrantied DDR4-3200 support, but that results in performance penalties in some latency-sensitive apps, like gaming, which you can read about here.
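For a concrete sense of what Gear 1 and Gear 2 imply, the sketch below uses a simplified model of the behavior described above: the memory clock is half the DDR4 data rate, the memory controller matches that clock in Gear 1, and runs at half of it in Gear 2.

```python
# Simplified model of Rocket Lake's memory gears:
#   memory clock (MHz)       = DDR4 data rate (MT/s) / 2
#   Gear 1 controller clock  = memory clock (1:1)
#   Gear 2 controller clock  = memory clock / 2 (1:2)
def gear_clocks(data_rate_mts: int) -> dict:
    mem_clk = data_rate_mts / 2
    return {"memory_clock_mhz": mem_clk,
            "gear1_imc_mhz": mem_clk,
            "gear2_imc_mhz": mem_clk / 2}

for rate in (2933, 3200, 3600):
    print(f"DDR4-{rate}: {gear_clocks(rate)}")
```

The halved controller clock in Gear 2 is where the latency penalty comes from, which is why DDR4-2933 in Gear 1 can outperform DDR4-3200 in Gear 2 in latency-sensitive workloads.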
For some users, the 11600K does have a big, insurmountable advantage over the Ryzen 5 5600X: The chip comes with the new UHD Graphics 750 engine, armed with 32 EUs based on the Xe architecture, while all Ryzen 5000 processors come without integrated graphics. That means Intel wins by default if you don't plan on using a discrete GPU.
Notably, you could also buy Intel’s i5-11600KF, which comes with a disabled graphics engine, for $25 less. At $237, the 11600KF looks incredibly tempting, which we’ll get to a bit later.
Winner: AMD
The Ryzen 5 5600X and the Core i5-11600K are close with six cores and twelve threads (and each of those cores has comparable performance), but the 5600X gets the nod here due to its bundled cooler and native support for DDR4-3200 memory. Meanwhile, the Core i5-11600K comes without a cooler, and you’ll have to operate the memory in sub-optimal Gear 2 mode to access DDR4-3200 speeds, at least if you want to stay within the warranty.
The Core i5-11600K comes with integrated graphics, so it wins by default if you don’t plan on using a discrete GPU. Conversely, you can sacrifice the graphics for a lower price point. AMD has no high-end chips that come with integrated graphics, though that will change by the end of the year when the Ryzen 5000 Cezanne APUs arrive.
Gaming Performance on AMD Ryzen 5 5600X vs Core i5-11600K
The Ryzen 5 and Core i5 families tend to be the most popular gaming chips, and given the big architectural advances we’ve seen with both the Zen 3 and Cypress Cove architectures, these mid-range processors can push fast GPUs along quite nicely.
That said, as per usual, we’re testing with an Nvidia GeForce RTX 3090 to reduce GPU-imposed bottlenecks as much as possible, and differences between test subjects will shrink with lesser cards, which you’ll see most often with this class of chip, or higher resolutions. Below you can see the geometric mean of our gaming tests at 1080p and 1440p, with each resolution split into its own chart. PBO indicates an overclocked Ryzen configuration. You can find our test system details here.
At stock settings at 1080p, the Core i5-11600K notches an impressive boost over its predecessor, the 10600K, but the Ryzen 5 5600X is 7.8% faster over the full span of our test suite. Overclocking the 11600K brings it up to snuff with the stock Ryzen 5 5600X, but the overclocked 5600X configuration is still 3.6% faster.
As you would expect, those deltas will shrink tremendously with lesser graphics cards or with higher resolutions. At 1440p, the stock 5600X is 3.3% faster than the 11600K, and the two tie after overclocking.
Flipping through the individual games shows that the leader can change quite dramatically, with different titles responding better to either Intel or AMD. Our geometric mean of the entire test suite helps smooth that out to one digestible number, but bear in mind – the faster chip will vary based on the game you play.
Notably, the 5600X's suggested price is 14% higher than the 11600K's, and that's if (a huge if) you can find the 5600X at recommended pricing. Opt for the graphics-less 11600KF and the gap grows to 26%, again, assuming you can find the 5600X at recommended pricing.
Winner: AMD
Overall, the Ryzen 5 5600X is the faster gaming chip throughout our test suite, but be aware that performance will vary based on the title you play. This class of chips is often paired with lesser graphics cards, and most serious gamers play at higher resolutions. In both of those situations, you could be hard-pressed to notice the difference between the processors. However, it's rational to expect that the Ryzen 5 5600X will leave a bit more gas in the tank for future GPU upgrades.
Pricing is the wild card, though, and the Core i5-11600K wins that category easily — even if you could find the Ryzen 5 5600X at suggested pricing. We’ll dive into that in the pricing section.
Application Performance of Intel Core i5-11600K vs Ryzen 5 5600X
We can boil down productivity application performance into two broad categories: single- and multi-threaded. The first slide in the above album has a geometric mean of performance in several of our single-threaded tests, but as with all cumulative measurements, use this as a general guide and be aware that performance will vary based on workload.
The Core i5-11600K takes the lead, at both stock and overclocked settings, by 3.8% and 1%, respectively. These are rather slim deltas, but it’s clear that the Rocket Lake chip holds the edge in lightly threaded work, particularly in our browser tests, which are a good indicator of general snappiness in a standard desktop PC operating system. We also see a bruising performance advantage in the single-threaded AVX-512-enabled y-cruncher.
The Core i5-11600K is impressive in single-threaded work, but the Ryzen 5 5600X isn’t far behind. It’s too bad that the 11600K’s lead in these types of tests doesn’t equate to leading performance in gaming, which has historically been the case with processors that excel at single-threaded tasks.
Here we take a closer look at performance in heavily-threaded applications, which has long been the stomping grounds of AMD’s core-heavy Ryzen processors. Surprisingly, in our cumulative measurement, the Core i5-11600K is actually 2.5% faster than the 5600X at stock settings and is 1.8% faster after we overclocked both chips.
These are, again, slim deltas, and the difference between the chips will vary based on workload. However, the Core i5-11600K is very competitive in threaded work against the 5600X, which is an accomplishment in its own right. The substantially lower pricing is even more impressive.
Winner: Intel
Based on our cumulative measurement, Intel’s Core i5-11600K comes out on top in both single- and multi-threaded workloads, but by slim margins in both categories of workloads, and that can vary based on the application. However, given that the Core i5-11600K has significantly lower pricing and pulls out a few hard-earned wins on the application front, this category of the Core i5-11600K vs Ryzen 5 5600X competition goes to Intel.
Overclocking of Ryzen 5 5600X vs Core i5-11600K
We have reached the land of diminishing returns for overclocking the highest-end chips from both AMD and Intel, largely because both companies are engaged in a heated dogfight for performance superiority. As a result, much of the overclocking frequency headroom is rolled into standard stock performance, leaving little room for tuners and making memory and fabric overclocking all the more important. There are still plenty of advantages to overclocking the mid-range models in today's Ryzen 5 5600X vs Core i5-11600K battle, though, but be aware that your mileage may vary.
Intel benefits from higher attainable clock rates, especially if you focus on overclocking a few cores instead of the standard all-core overclock, and exposes a wealth of tunable parameters with its Rocket Lake chips. That includes separate AVX offsets for all three flavors of AVX, and the ability to set voltage guardbands. Intel also added an option to completely disable AVX, though that feature is primarily geared for professional overclockers. Rocket Lake also supports per-core frequency and hyper-threading control (enable/disable) to help eke out more overclocking headroom.
The Core i5-11600K supports real-time memory frequency adjustments, though motherboard support will vary. For example, this feature allows you to shift from DDR4-2933 to DDR4-3200 from within Windows 10 without rebooting (or any other attainable memory frequency). Intel also supports live memory timing adjustments from within the operating system.
Intel has long locked overclocking to its pricey K-series models, while AMD freely allows overclocking with all SKUs on almost any platform. However, we see signs of some improvement here from Intel, as it has now enabled memory overclocking on its B560 and H570 chipsets across the board. That said, Intel’s new paradigm of Gear 1 and Gear 2 modes does reduce the value of memory overclocking, which you can read more about in our review.
AMD’s Ryzen 5000 chips come with innovative boost technology that largely consumes most of the available frequency headroom, so there is precious little room for bleeding-edge all-core overclocks. In fact, all-core overclocking with AMD’s chips is lackluster; you’re often better off using its auto-overclocking Precision Boost Overdrive 2 (PBO2) feature that boosts multi-threaded performance. AMD also has plenty of Curve Optimization features that leverage undervolting to increase boost activity.
Much of the benefit of the Ryzen 5000 series comes from its improved fabric overclocking, which then allows you to tune in higher memory overclocks. We hit a 1900-MHz fabric clock on our chip, allowing us to run the memory in 1:1 mode at a higher DDR4-3800 speed than we could pull off with the 11600K at the same 1:1 ratio. It also isn't uncommon to see enthusiasts hit DDR4-4000 in 1:1 mode with Ryzen 5000 processors. There's no doubt that Intel's new Gear 1 and Gear 2 memory setup isn't as refined: you can adjust the 5600X's fabric ratio to expand the 1:1 window to higher frequencies, while Intel does not have a comparable adjustable parameter.
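As a quick check on those fabric numbers, here's a minimal sketch of the usual Ryzen rule of thumb: a coupled 1:1 configuration needs the Infinity Fabric clock to match the memory clock, which is half the DDR4 data rate.

```python
# Ryzen 1:1 ("coupled") rule of thumb: FCLK should match the memory clock,
# and the memory clock is the DDR4 data rate divided by two.
def fclk_for_1to1(data_rate_mts: int) -> float:
    return data_rate_mts / 2

for rate in (3600, 3800, 4000):
    print(f"DDR4-{rate}: needs a ~{fclk_for_1to1(rate):.0f} MHz fabric clock for 1:1")
```

That's why the 1900-MHz fabric clock we reached lines up with DDR4-3800, and why DDR4-4000 in 1:1 mode implies a 2000-MHz fabric.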
Winner: Tie
Both the Ryzen 5 5600X and the Core i5-11600K have a bit more overclocking headroom than their higher-end counterparts, meaning that there is still some room for gains in the mid-range. Both platforms have their respective overclocking advantages and a suite of both auto-overclocking and software utilities, meaning this contest will often boil down to personal preference.
Power Consumption, Efficiency, and Cooling of Intel Core i5-11600K vs AMD Ryzen 5 5600X
The Core i5-11600K comes with the same 125W TDP rating as its predecessor, but that rating is a rough approximation of power consumption during long-duration workloads. To improve performance in shorter-term workloads, Intel increased the PL2 rating (boost) to 251W, a whopping 69W increase over the previous-gen 10600K that also came with six cores.
Power consumption and heat go hand in hand, so you’ll have to accommodate that power consumption with a robust cooler. We didn’t have any issues with the Core i5-11600K and a 280mm liquid cooler (you could get away with less), but we did log up to 176W of power consumption at stock settings during our Handbrake benchmark.
In contrast, the Ryzen 5 5600X sips power, reaching a maximum of 76W at stock settings during a Blender benchmark. In fact, a quick look at the renders-per-day charts reveals that AMD’s Ryzen 5 5600X is in another league in terms of power efficiency — you get far more performance per watt consumed, which results in lower power consumption and heat generation.
The 5600X’s refined power consumption comes via TSMC’s 7nm process, while Intel’s 14nm process has obviously reached the end of the road in terms of absolute performance and efficiency.
Winner: AMD
AMD wins this round easily with lower power consumption, higher efficiency, and less thermal output. Intel has turned the power up to the extreme to stay competitive with AMD’s 7nm Ryzen 5000 chips, and as a result, the Core i5-11600K pulls more power and generates more heat than the Ryzen 5 5600X. Additionally, the Core i5-11600K doesn’t come with a bundled cooler, so you’ll need to budget in a capable model to unlock the best the chip has to offer, while the Ryzen 5 5600X comes with a bundled cooler that is good enough for the majority of users.
Pricing and Value of AMD Ryzen 5 5600X vs Intel Core i5-11600K
AMD was already riding the pricing line with the Ryzen 5 5600X’s suggested $299 price tag, but supply of this chip is volatile as of the time of writing, to put it lightly, leading to price gouging. This high pricing comes as a byproduct of a combination of unprecedented demand and pandemic-spurred supply chain issues, but it certainly destroys the value proposition of the Ryzen 5 5600X, at least for now.
The Ryzen 5 5600X currently retails for $370 at Microcenter, which is usually the most price-friendly vendor, a $69 markup over suggested pricing. The 5600X is also $450 from Amazon (not a third-party seller). Be aware that the pricing and availability of these chips can change drastically in very short periods of time, and they go in and out of stock frequently, reducing the accuracy of many price tracking tools.
In contrast, the Core i5-11600K can be found for $264 at Amazon, and $260 at Microcenter, which is surprisingly close to the $262 suggested tray pricing. Additionally, you could opt for the graphics-less Core i5-11600KF if you don’t need a discrete GPU. That chip is a bit harder to find than the widely-available 11600K, but we did find it for $240 at Adorama (near suggested pricing).
Here’s the breakdown (naturally, this will vary):
| Processor | Suggested Price | Current (volatile for 5600X) | Price Per Core |
| --- | --- | --- | --- |
| Core i5-11600K | $262 | $262 to $264 | ~$43.67 |
| Ryzen 5 5600X | $299 | $370 to $450 | ~$61.67 to $75.00 |
| Core i5-11600KF | $237 | $240 (spotty availability) | ~$39.50 |
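The per-core figures in the table work out as follows; all three are six-core parts, the Intel numbers use suggested pricing, and the 5600X uses the street-price range quoted above.

```python
# Price-per-core math behind the table above (all three are 6-core chips).
CORES = 6
print(f"Core i5-11600K:  ${262 / CORES:.2f} per core (suggested)")
print(f"Ryzen 5 5600X:   ${370 / CORES:.2f} to ${450 / CORES:.2f} per core (street)")
print(f"Core i5-11600KF: ${237 / CORES:.2f} per core (suggested)")
```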
The Core i5-11600K doesn’t come with a cooler, so you’ll have to budget that into your purchasing decision.
Winner: Intel
Even at recommended pricing for both chips, Intel’s aggressive pricing makes the Core i5-11600K a tempting proposition, but the company wins this stage of the battle convincingly based on one almost insurmountable advantage: You can actually find the chip readily available at retail for very close to its suggested tray pricing. With much cheaper pricing both on a per-core and absolute basis, the Core i5-11600K is the better buy, and if you’re looking for an even lower cost of entry, the Core i5-11600KF is plenty attractive if you don’t need integrated graphics.
AMD’s premium pricing for the Ryzen 5 5600X was a bit of a disappointment for AMD fans at launch, but the chip did offer enough advantages to justify the price tag. However, the arrival of the Core i5-11600K with its disruptive pricing and good-enough performance would probably merit a slight pricing adjustment from AMD, or the release of a non-X model, if these were normal times. These aren’t normal times, though, and instead of improving its value proposition, AMD is facing crippling supply challenges.
Bottom Line
| Round | Intel Core i5-11600K | AMD Ryzen 5 5600X |
| --- | --- | --- |
| Features and Specifications | | X |
| Gaming | | X |
| Application Performance | X | |
| Overclocking | X | X |
| Power Consumption, Efficiency, and Cooling | | X |
| Pricing and Value Proposition | X | |
| Total | 3 | 4 |
Here’s the tale of the tape: AMD wins this Ryzen 5 5600X vs Intel Core i5-11600K battle with a tie in one category and a win in three others, marking a four to three victory in favor of Team Red. Overall, the Ryzen 5 5600X offers up a superior blend of gaming performance, power consumption and efficiency, and a bundled cooler to help offset the higher suggested retail pricing, remaining our go-to chip recommendation for the mid-range. That is if you can find it at or near suggested pricing.
Unfortunately, in these times of almost unimaginably bad chip shortages, the chip that you can actually buy, or even find anywhere even near recommended pricing, is going to win the war at the checkout lane. For now, Intel appears to be winning the supply battle, though that could change in the coming months. As a result, the six-core twelve-thread Core i5-11600K lands with a friendly $262 price point, making it much more competitive with AMD’s $300 Ryzen 5 5600X that currently sells far over suggested pricing due to shortages.
The Core i5-11600K has a very competitive price-to-performance ratio compared to the Ryzen 5 5600X in a broad swath of games and applications. The 11600K serves up quite a bit of performance for a ~$262 chip, and the graphics-less 11600KF is an absolute steal if you can find it near the $237 tray pricing. If you don’t need an integrated GPU, the KF model is your chip.
Even if we compare the chips at AMD’s and Intel’s standard pricing, the Core i5-11600K is a potent challenger with a solid value proposition due to its incredibly aggressive pricing. While the Core i5-11600K might not claim absolute supremacy, its mixture of price and performance makes it a solid buy if you’re willing to overlook the higher power consumption.
Most gamers would be hard-pressed to notice the difference when you pair these chips with lesser GPUs or play at higher resolutions, though the Ryzen 5 5600X will potentially leave you with more gas in the tank for future GPU upgrades. The Ryzen 5 5600X is the absolute winner, though, provided you can find it anywhere close to the suggested retail price.
Nowadays there are loads of small form-factor (SFF) systems with fairly high performance, and there are also fanless PCs that can match regular desktops. Unfortunately, the SFF and fanless worlds rarely intersect, and passively cooled compact desktops are extremely rare. Yet they exist. Recently, Atlast! Solutions introduced its Sigao Model B, which packs Intel's 10-core Comet Lake CPU into a fairly small fanless chassis.
The Atlast! Sigao Model B is based around Intel's 10-core Core i9-10900T processor as well as an Asus H470-I Mini-ITX motherboard. The CPU features a 35W TDP and has a base clock of 1.9 GHz as well as a maximum turbo frequency of up to 4.6 GHz, though we would not expect the processor to hit very high clocks in a fanless system powered by a 200W PSU. The motherboard comes with all the essentials, including a Wi-Fi 6 and Bluetooth module, two Gigabit Ethernet ports, three display outputs (DisplayPort, HDMI, USB 3.2 Gen 2 Type-C), one USB 3.2 Gen 2 Type-A connector, four USB 3.2 Gen 1 Type-A ports, and 5.1-channel audio.
The Sigao measures 12.6 x 12.6 x 3.4 inches (320 × 322 × 87.5 mm) without feet, so while it is definitely not as compact as Intel’s NUC or Apple’s Mac Mini, it can still be considered a small form-factor PC.
Atlast! builds its fanless systems to order, so it can equip its Sigao Model B with up to 64GB of DDR4-2666 memory, one Samsung 970 Evo Plus M.2 SSD with a PCIe 3.0 x4 interface and up to 2TB capacity, and two 2.5-inch HDDs or SSDs.
The motherboard has a PCIe 3.0 x16 slot and the system can accommodate a single slot wide add-in card using a riser, though finding a decent mini-ITX 75W single slot graphics card with passive (or even active) cooling is close to impossible, so it is unlikely that the system can be equipped with a standalone AIB. Unfortunately, the motherboard also lacks a Thunderbolt 4 port for an external graphics solution, so it looks like the Sigao Model B has to rely on Intel’s built-in UHD Graphics 630 based on the previous-generation architecture. Meanwhile, if the Asus H470-I motherboard gains Rocket Lake-S support, it should be possible to install a more up-to-date CPU with Xe Graphics featuring leading-edge media playback capabilities.
The Atlast! Sigao Model B is not cheap at all. Even the basic model featuring a Core i9-10900T, 16GB of RAM, and a 250GB SSD costs €1,922 ($2,304) with taxes and €1,602 ($1,920) without, which is quite expensive even by SFF standards. But a desktop PC that brings together compact dimensions and passive cooling is hard to come by, so its price seems justified for those who want both features.