Parallels has released a new version of its Parallels Desktop for Mac virtualization software that features full native support for Mac computers equipped with either Apple M1 or Intel processors. The program allows users to run Windows 10 Arm Insider Preview as well as various Linux distributions on systems running the M1 SoC at native speeds.
Running Windows on Apple’s Mac computers may not be a priority for most of their owners, but there are still quite a lot of users who need to run Windows applications from time to time. Since the latest Apple MacBook Air, MacBook Pro 13, and Mac Mini are based on the Arm-powered M1 SoC, it’s impossible to install regular Windows 10 as a second OS on them. Furthermore, unlike most other Mac programs, virtual machines did not run well on M1-based Macs via the Rosetta translation layer, so Parallels had to redesign Parallels Desktop to run natively on Apple’s M1 SoC.
Parallels Desktop for Mac 16.5 supports all the capabilities that users of Parallels Desktop are used to, now on Apple M1 systems, including Coherence mode, shared profiles, and Touch Bar controls, to name a few.
In addition to Windows 10 on Arm, Parallels Desktop for Mac 16.5 also supports guest operating systems on M1 Macs, including the Linux distributions Ubuntu 20.04, Kali Linux 2021.1, Debian 10.7, and Fedora Workstation 33-1.2.
To ensure flawless operation of its virtual machine software, Parallels enlisted the help of more than 100,000 M1 Mac users who ran Microsoft’s Windows 10 on Arm Insider Preview along with various applications, from Power BI to Visual Studio and from SQL Server to MetaTrader. In addition, Parallels engineers did not forget about games, ensuring that titles like Rocket League, Among Us, Roblox, The Elder Scrolls V: Skyrim, and Sam & Max Save the World worked well on Parallels Desktop for Mac 16.5 and Apple M1-powered systems.
According to the company, Parallels Desktop for Mac 16.5 is now mature enough to launch commercially.
There are some interesting findings about the performance of Apple M1 systems running Parallels Desktop 16.5 for Mac:
An M1-based Mac running Parallels Desktop 16.5 and Windows 10 Arm consumes 2.5 times less energy than a 2020 Intel-based MacBook Air.
An Apple M1 machine running Parallels Desktop 16.5 and Windows 10 on Arm performs 30% better in Geekbench 5 than a MacBook Pro with an Intel Core i9-8950HK under the same conditions.
Apple M1’s integrated GPU appears to be 60% faster than AMD’s Radeon Pro 555X discrete graphics processor in DirectX 11 applications when running Windows via Parallels Desktop 16.5.
“Apple’s M1 chip is a significant breakthrough for Mac users,” said Nick Dobrovolskiy, Parallels Senior Vice President of Engineering and Support. “The transition has been smooth for most Mac applications, thanks to Rosetta technology. However, virtual machines are an exception and thus Parallels engineers implemented native virtualization support for the Mac with M1 chip. This enables our users to enjoy the best Windows-on-Mac experience available.”
Update: The Shuffle has ended. Did you get selected? If so, let us know in the comments, you lucky dog!

Original Story: The Newegg Shuffle continues, with another chance to potentially buy one of the best graphics cards — or one of the best CPUs. Today’s Shuffle has several options for GeForce RTX 3070 and GeForce RTX 3060 graphics cards, a Radeon RX 6700 XT and motherboard bundle, and AMD’s Ryzen 7 5800X and Intel’s Core i7-10700 CPUs. The graphics cards rank in the upper segment of our GPU benchmarks hierarchy, and prices are at least a bit lower than what we’ve seen in our eBay GPU pricing index.
For those unfamiliar with the process, Newegg Shuffle uses a lottery format. You select the component(s) you’d like to potentially buy. Then there’s a drawing later today, and the ‘winners’ get notified by email with the chance to purchase the part (only one) within a several-hour window. Based on our experience, you won’t get selected most of the time. But hey, it’s free to try.
Today’s options and prices consist of the following:
Asus ROG Strix GeForce RTX 3070 White for $840
Asus TUF Gaming GeForce RTX 3070 for $770
Asus GeForce RTX 3060 Ultimate KO for $520
Asus TUF Gaming GeForce RTX 3060 for $510
Gigabyte Aorus RX 6700 XT Elite with X570 Aorus Elite WiFi for $1,010
Gigabyte RTX 3060 Eagle with B550 Aorus Elite for $615
AMD Ryzen 7 5800X for $429
Intel Core i7-10700 for $255
All of the graphics card prices are at least 50% higher than the official launch MSRPs from AMD and Nvidia, though these are third-party custom cards that may come with extra features. The RTX 3070 cards are perhaps the best of the bunch, with performance rivaling the previous-generation RTX 2080 Ti for a lower price. And if you like RGB and bling, the ROG Strix card certainly has you covered.
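For the curious, here’s a quick back-of-the-envelope check of those markups (a throwaway sketch; Nvidia’s $499 and $329 launch MSRPs are the only figures not listed above):

```python
# Markup check for the four Nvidia cards in today's Shuffle, using
# Nvidia's launch MSRPs: $499 (RTX 3070) and $329 (RTX 3060).
msrp = {"RTX 3070": 499, "RTX 3060": 329}
shuffle_prices = [
    ("Asus ROG Strix RTX 3070 White", "RTX 3070", 840),
    ("Asus TUF Gaming RTX 3070", "RTX 3070", 770),
    ("Asus RTX 3060 Ultimate KO", "RTX 3060", 520),
    ("Asus TUF Gaming RTX 3060", "RTX 3060", 510),
]

for name, gpu, price in shuffle_prices:
    markup = (price / msrp[gpu] - 1) * 100
    print(f"{name}: {markup:.0f}% over launch MSRP")
# Prints 68%, 54%, 58%, 55% — the TUF and KO cards sit in the mid-50s,
# with the white ROG Strix carrying the steepest premium.
```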
The two CPUs are perhaps a bit less exciting, except they’re both selling for less than MSRP. AMD’s Ryzen 7 5800X isn’t quite as difficult to find in stock as the more sought-after Ryzen 9 5900X and 5950X, but it’s still a good choice, particularly with a price that’s $20 below AMD’s official MSRP. Intel’s previous-generation Core i7-10700, on the other hand, is merely a decent CPU without any overclocking support — basically the Comet Lake equivalent of the Core i9-9900 — but it’s also a viable pick at just $255.
With component shortages plaguing the PC industry, not to mention the smartphone and automotive industries, the latest word is that prices aren’t likely to return to ‘normal’ throughout 2021. If you can keep chugging along with whatever your PC currently has, that’s the best option, as otherwise prices are painful for all of the Nvidia Ampere and AMD RDNA2 GPUs.
The current Newegg shuffle ends in just over an hour. Good luck!
Courtesy of Phoronix: it appears that the latest build of Windows 10 is the optimal operating system for Intel’s new Core i9-11900K Rocket Lake CPU. Tests show the i9 winning more benchmarks in a Windows 10 environment than under Ubuntu Linux.
For the test bench, Phoronix ran a Core i9-11900K with 32GB of 3200MHz RAM and 1TB of SSD storage on an Asus Maximus XIII Hero motherboard.
As for the operating systems, Phoronix used the latest build of Windows 10 Pro (build 19042) and the latest version of Ubuntu, 20.10, running Linux kernel 5.12.
Performance Chart: Windows 10 vs. Ubuntu

| Test | Ubuntu Score | Windows 10 Score |
| --- | --- | --- |
| WebP Image Encode 1.1: Encode Time | 15.21 | 13.37 |
| Zstd Compression 1.4.9: Decompression Speed | 4784.2 | 4422.9 |
| Crafty 25.2: Nodes Per Second | 9976038 | 11303083 |
| Blender 2.92: BMW: Render Time | 132.49 | 155.59 |
| NeatBench 5: FPS | 17.4 | 18.2 |
| IndigoBench 4.4 | 4.737 | 4.911 |
| Selenium: StyleBench Chrome: Runs Per Minute | 46.02 | 50.25 |
| Selenium: Speedometer Chrome: Runs Per Minute | 186.8 | 174.7 |
The benchmarks posted above are just a few of the tests Phoronix conducted on both Windows 10 and Ubuntu. Comparing all of Phoronix’s tests, however, Windows 10 Pro won 61.5% of them overall, versus 38.5% for Ubuntu.
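As a quick sanity check on that split, here’s a small sketch (ours, not Phoronix’s tooling) that tallies just the eight results in the table above, scoring each test by whether higher or lower is better:

```python
# Tally wins from the table above. Times are lower-is-better; throughput,
# FPS, and runs-per-minute are higher-is-better.
tests = [
    # (name, ubuntu, windows, higher_is_better)
    ("WebP Encode Time",        15.21,    13.37,    False),
    ("Zstd Decompression",      4784.2,   4422.9,   True),
    ("Crafty Nodes/s",          9976038,  11303083, True),
    ("Blender BMW Render Time", 132.49,   155.59,   False),
    ("NeatBench FPS",           17.4,     18.2,     True),
    ("IndigoBench",             4.737,    4.911,    True),
    ("StyleBench Runs/min",     46.02,    50.25,    True),
    ("Speedometer Runs/min",    186.8,    174.7,    True),
]

windows_wins = sum((win > ubu) == higher for _, ubu, win, higher in tests)
print(f"Windows wins {windows_wins}/{len(tests)}")
# 5/8 here (62.5%) — close to the 61.5% seen across the full suite.
```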
Phoronix also tested the 11900K’s integrated Xe graphics on both operating systems, and Windows 10 came out with an even higher win rate. In the eight graphics tests conducted, Ubuntu Linux managed only a single win, though in either case the integrated GPU is nowhere close to matching the best graphics cards.
This is unusual behavior for Intel’s processors; thanks to Linux’s typically superior resource management, we normally see Linux distributions take the win over Windows 10. But with Rocket Lake, it appears the opposite is now true.
We don’t know why the tests came out this way; presumably, Microsoft has added some optimizations to Windows 10 that we don’t know about. We will have to do our own research to see what is really going on.
In our tests, the Core i9-11900K is faster for gaming than most of the best CPUs, but is outpaced by the AMD Ryzen 9 5900X. When we compared the AMD Ryzen 9 5900X vs the Core i9-11900K in a seven-round face-off, the Ryzen took five rounds.
If I were the Microsoft marketing executive pitching a once-in-a-lifetime giveaway of an incredible custom-built Flight Simulator PC, I would do two things:
1) I would probably make it look like a full jet engine, not half an exposed jet engine, to avoid reminding people how airplanes can rarely (but terrifyingly) fail:
2) I would make sure it has the very best parts on the market, both for wow factor and so my one-of-a-kind Flight Simulator PC can hopefully play the notoriously demanding game at max settings someday.
Weirdly, this PC will come with an Nvidia GeForce RTX 3070 (not a 3080 or 3090!), as well as a Core i7-11700K rather than Intel’s new flagship Core i9-11900K. We recently tested that Core i9 with an RTX 3090, and it still wasn’t enough to hit 60 fps in Flight Simulator at max settings, though I imagine the Core i7 won’t be far off our results, given its very similar specs.
Does this PC still have great specs? Absolutely. Should you question them if you’re the lucky winner of this Microsoft France / Gigabyte Aorus collab? Definitely not. That RTX 3070 is worth upward of $1,200 all by its lonesome. I’m just telling you what I would do.
Oh, and 3) I would absolutely make that giant fan go all the way around and spin, so it can serve as an epic, brag-worthy case fan for the entire PC.
Speaking of epic Microsoft giveaway items, do you remember the Xbox Series X fridge? Not only is Microsoft actually now putting a real Xbox Series X mini-fridge into production, the company’s apparently going to be designing them from scratch. That’s according to Xbox marketing head Aaron Greenberg, who dropped the tidbit in a Clubhouse room yesterday evening where my colleague Taylor Lyles was listening.
Corsair has just announced two all-new models of its Corsair One pre-built PC, the a200 and i200. Both models are upgraded with the latest hardware from Intel, AMD, and Nvidia.
Despite measuring just 12 liters, Corsair promises an uncompromised desktop experience with the Corsair One. Thanks to dual liquid-cooling solutions for both the CPU and GPU, you can expect high performance out of the system’s components.
You also get the same amount of I/O as you would on a standard computer tower, with the front panel including a 3.5mm audio jack, two USB 3.0 ports and a single USB 3.2 Gen 2 Type-C port.
Meanwhile, the rear I/O changes depending on which model you choose, but either way you get the same amount of connectivity as you would on a standard mini-ITX desktop, so expect plenty of display outputs and plenty of USB ports, as well as WiFi 6.
Corsair One a200 & i200 Specifications

| | a200 | i200 |
| --- | --- | --- |
| CPU | Up to a Ryzen 9 5900X | Up to a Core i9-11900K |
| Motherboard | AMD B550 Mini-ITX board | Intel Z490 |
| Memory | Up to 32GB | Up to 32GB |
| Graphics Card | GeForce RTX 3080 | GeForce RTX 3080 |
| SSD | Up to a 1TB NVMe Gen 4.0 drive | Up to a 1TB NVMe Gen 4.0 drive |
| Hard Drive | Up to 2TB | Up to 2TB |
| Power Supply | 750W 80 Plus Platinum | 750W 80 Plus Platinum |
The a200 will be based on AMD’s latest hardware and will come with a B550 chipset motherboard and your choice of a Ryzen 5 5600X, Ryzen 7 5800X, or Ryzen 9 5900X. You will also get up to 32GB of RAM, up to 3TB of SSD and hard disk storage, and a 750W SFX PSU.
The i200, on the other hand, will feature Intel’s latest Rocket Lake platform, powered by a Z490 motherboard and up to a Core i9-11900K. The memory, storage, and PSU configurations remain the same as on the a200.
Both models will also be getting an RTX 3080 for graphics horsepower, featuring a massive 8,704 CUDA cores and 10GB of GDDR6X, all in a form factor measuring just 12 liters.
Corsair is currently listing a model of the a200 at $3,799.99 and the i200 at $3,599.99, though it’s possible there may be more options later.
The Corsair One is one of the most compact high-performance PCs on the market today, so it’s great to see Corsair updating the chassis with the latest CPUs and GPUs. We expect to see it in our labs soon.
Gigabyte’s Aorus Z590 Master is a well-rounded upper mid-range motherboard with a VRM rivaled by boards that cost twice as much. Between the Wi-Fi 6E and 10 GbE, three M.2 sockets and six SATA ports for storage, plus its premium appearance, the Z590 Master is an excellent option to get into the Z590 platform if you’re willing to spend around $400.
For
+ Fast Networking, Wi-Fi 6E/10 GbE
+ Superior 18-phase 90A VRM
+ 10 USB ports
Against
– No PCIe x1 slot(s)
– Audible VRM fan
– Price
Features and Specifications
Editor’s Note: A version of this article appeared as a preview before we had a Rocket Lake CPU to test with Z590 motherboards. Now that we do (and Intel’s performance embargo has passed), we have completed testing (presented on page 3) with a Core i9-11900K and have added a score and other elements (as well as removing some now-redundant sentences and paragraphs) to make this a full review.
Gigabyte’s Z590 Aorus Master includes an incredibly robust VRM, ultra-fast Wi-Fi and wired networking, premium audio, and more. While its price of roughly $410 is substantial, it’s reasonable for the features you get, and far from the price of the most premium models in recent generations. If you don’t mind a bit of audible VRM fan noise and like lots of USB and fast wired and wireless networking, it’s well worth considering.
Gigabyte’s current Z590 product stack consists of 13 models. There are familiar SKUs and a couple of new ones. Starting with the Aorus line, we have the Aorus Xtreme (and potentially a Waterforce version), Aorus Master, Aorus Ultra, and the Aorus Elite. Gigabyte brings back the Vision boards (for creators) and their familiar white shrouds. The Z590 Gaming X and a couple of boards from the budget Ultra Durable (UD) series are also listed. New for Z590 is the Pro AX board, which looks to slot somewhere in the mid-range. Gigabyte will also release the Z590 Aorus Tachyon, an overbuilt motherboard designed for extreme overclocking.
On the performance front, the Gigabyte Z590 Aorus Master did well overall, performing among the other boards with raised power limits. There wasn’t a test where it did particularly poorly, and its MS Office and PCMark results were, on average, slightly higher than most. Overall, there is nothing to worry about when it comes to stock performance on this board. Overclocking proceeded without issue as well, reaching our 5.1 GHz overclock along with the memory sitting at DDR4 4000.
The Z590 Aorus Master looks the part of a premium motherboard, with brushed-aluminum shrouds covering the PCIe/M.2/chipset area. The VRM heatsink and its NanoCarbon Fin-Array II provide a nice contrast against the smooth finish on the board’s bottom. Along with Wi-Fi 6E integration, it also includes Aquantia-based 10 GbE networking, while most other boards use 2.5 GbE. The Aorus Master includes a premium Realtek ALC1220 audio solution with an integrated DAC, three M.2 sockets, reinforced PCIe and memory slots, and 10 total USB ports, including a rear USB 3.2 Gen 2x2 Type-C port. We’ll cover those features and much more in detail below. But first, here are the full specs from Gigabyte.
Specifications – Gigabyte Z590 Aorus Master

| Spec | Detail |
| --- | --- |
| Socket | LGA 1200 |
| Chipset | Z590 |
| Form Factor | ATX |
| Voltage Regulator | 19 Phase (18+1, 90A MOSFETs) |
| Video Ports | (1) DisplayPort v1.2 |
| USB Ports | (1) USB 3.2 Gen 2x2, Type-C (20 Gbps); (5) USB 3.2 Gen 2, Type-A (10 Gbps); (4) USB 3.2 Gen 1, Type-A (5 Gbps) |
| Network Jacks | (1) 10 GbE |
| Audio Jacks | (5) Analog + SPDIF |
| Legacy Ports/Jacks | ✗ |
| Other Ports/Jacks | ✗ |
| PCIe x16 | (2) v4.0 x16 (x16/x0 or x8/x8); (1) v3.0 x4 |
| PCIe x8 | ✗ |
| PCIe x4 | ✗ |
| PCIe x1 | ✗ |
| CrossFire/SLI | AMD Quad-GPU CrossFire and 2-Way CrossFire |
| DIMM Slots | (4) DDR4 5000+, 128GB capacity |
| M.2 Slots | (1) PCIe 4.0 x4 (up to 110mm); (2) PCIe 3.0 x4 / SATA (up to 110mm) |
| U.2 Ports | ✗ |
| SATA Ports | (6) SATA3 6 Gbps (RAID 0, 1, 5 and 10) |
| USB Headers | (1) USB v3.2 Gen 2 (Front Panel Type-C); (2) USB v3.2 Gen 1; (2) USB v2.0 |
| Fan/Pump Headers | (10) 4-Pin |
| RGB Headers | (2) aRGB (3-pin); (2) RGB (4-pin) |
| Legacy Interfaces | ✗ |
| Other Interfaces | FP-Audio, TPM |
| Diagnostics Panel | Yes, 2-character debug LED, and 4-LED ‘Status LED’ display |
As we open up the retail packaging, along with the board, we’re greeted by a slew of included accessories. The Aorus Master contains the basics (guides, driver CD, SATA cables, etc.) and a few other things that make this board complete. Below is a full list of all included accessories.
Installation Guide
User’s Manual
G-connector
Sticker sheet / Aorus badge
Wi-Fi Antenna
(4) SATA cables
(3) Screws for M.2 sockets
(2) Temperature probes
Microphone
RGB extension cable
After taking the Z590 Aorus Master out of the box, its weight was immediately apparent, with the shrouds, heatsinks and backplate making up the majority of that weight. The board sports a matte-black PCB, with black and grey shrouds covering the PCIe/M.2 area and two VRM heatsinks with fins connected by a heatpipe. The chipset heatsink has the Aorus Eagle branding lit up, while the rear IO shroud arches over the left VRM bank with more RGB LED lighting. The Gigabyte RGB Fusion 2.0 application handles RGB control. Overall, the Aorus Master has a premium appearance and shouldn’t have much issue fitting in with most build themes.
Looking at the board’s top half, we’ll first focus on the VRM heatsinks. They are physically small compared to most boards, but don’t let that fool you. The fin array uses a louvered stacked-fin design Gigabyte says increases surface area by 300% and improves thermal efficiency with better airflow and heat exchange. An 8mm heat pipe also connects them to share the load. Additionally, a small fan located under the rear IO shroud actively keeps the VRMs cool. The fan here wasn’t loud, but was undoubtedly audible at default settings.
We saw a similar configuration in the previous generation, which worked out well with an i9-10900K, so it should do well with the Rocket Lake flagship, too. We’ve already seen reports indicating the i9-11900K has a similar power profile to its predecessor. Feeding power to the VRMs are two reinforced 8-pin EPS connectors (one required).
To the right of the socket, things start to get busy. We see four reinforced DRAM slots supporting up to 128GB of RAM. Oddly enough, the specifications only list support up to DDR4 3200 MHz, the platform’s official limit, but further down the webpage Gigabyte lists DDR4 5000. It’s an odd way to present it, though it does set the expectation that anything above 3200 MHz is overclocking and not guaranteed to work.
Above the DRAM slots are eight voltage read points covering various relevant voltages. This includes read points for the CPU Vcore, VccSA, VccIO, DRAM, and a few others. When you’re pushing the limits and using sub-ambient cooling methods, knowing exactly what voltage the component is getting (software can be inaccurate) is quite helpful.
Above those, along the top edge, are four of the board’s 10 fan headers (a fifth sits next to the EPS connectors). According to the manual, all CPU fan and pump headers support 2A/24W each, so you shouldn’t have any issues powering fans and a water-cooling pump. Gigabyte doesn’t mention whether these headers are auto-sensing (for DC or PWM control), but both a PWM- and a DC-controlled fan worked without intervention when set to ‘auto’ in the BIOS.
The first two (of four) RGB LED headers live to the right of the fan headers. The Z590 Aorus Master includes two 3-pin ARGB headers and two 4-pin RGB headers. Since this board takes a minimal approach to RGB lighting, you’ll need to use these to add more bling to your rig.
We find the power button and 2-character debug LED for troubleshooting POST issues on the right edge. Below is a reinforced 24-pin ATX connector for power to the board, another fan header and a 2-pin temperature probe header. Just below all of that are two USB 3.2 Gen1 headers and a single USB 3.2 Gen2x2 Type-C front-panel header for additional USB ports.
For power delivery, Gigabyte went with a 19-phase setup for Vcore and SOC. Power is managed by an Intersil ISL6929 buck controller that handles up to 12 discrete channels, feeding ISL6617A phase doublers and the 19 90A ISL99390B MOSFETs. This is one of the more robust VRMs we’ve seen on a mid-range board, with the 18 Vcore phases providing a whopping 1,620A (18 x 90A) for the CPU. You won’t have any trouble running any compatible CPU, even with sub-ambient overclocking.
The bottom half of the board is mostly covered in shrouds hiding all the unsightly but necessary bits. On the far left side, under the shrouds, you’ll find the Realtek ALC1220-VB codec along with an ESS Sabre 9118 DAC and audiophile-grade WIMA and Nichicon Fine Gold capacitors. With the premium audio codec and DAC, the overwhelming majority of users will find the audio perfectly acceptable.
We find the PCIe slots and M.2 sockets in the middle of the board. Starting with the PCIe slots, there are a total of three full-length slots (all reinforced). The first and second slots are wired for PCIe 4.0, with the primary (top) slot wired for x16 while the second maxes out at x8. Gigabyte says this configuration supports AMD Quad-GPU CrossFire and 2-Way CrossFire; we didn’t see any mention of SLI support even though the lane count allows it. The bottom full-length slot is fed from the chipset and runs at PCIe 3.0 x4 speeds. Since the board does without x1 slots, this is the only expansion slot available if you’re using a triple-slot video card. Anything smaller than that lets you use the second slot as well.
Hidden under the shrouds around the PCIe slots are three M.2 sockets. Unique to this setup is the Aorus M.2 Thermal Guard II, which uses a double-sided heatsink design to help cool M.2 SSD devices with double-sided flash. With these devices’ capacities rising and more using flash on both sides, this is a good value-add.
The top socket (M2A_CPU) supports PCIe 4.0 x4 devices up to 110mm long. The second and third sockets, M2P_SB and M2M_SB, support both SATA and PCIe 3.0 x4 modules up to 110mm long. When using a SATA-based SSD in M2P_SB, SATA port 1 is disabled. When M2M_SB (the bottom socket) is in use, SATA ports 4 and 5 get disabled.
To the right of the PCIe area is the chipset heatsink with the Aorus falcon lit up with RGB LEDs from below. There are a total of six SATA ports that support RAID 0, 1, 5 and 10. Sitting on the right edge are two Thunderbolt headers (5-pin and 3-pin) to connect to a Gigabyte Thunderbolt add-in card. Finally, in the bottom-right corner is the Status LED display. The four LEDs, labeled CPU, DRAM, BOOT and VGA, light up during the POST process. If something hangs during that time, the LED for the problem area stays lit, identifying where the issue resides. This is good to have, even with the debug LED at the top of the board.
Across the board’s bottom are several headers, including more USB ports, fan headers and more. Below is the full list, from left to right:
Front-panel audio
BIOS switch
Dual/Single BIOS switch
ARGB header
RGB header
TPM header
(2) USB 2.0 headers
Noise sensor header
Reset button
(3) Fan headers
Front panel header
Clear CMOS button
The Z590 Aorus Master comes with a pre-installed rear IO panel full of ports and buttons. To start, there are a total of 10 USB ports out back, which should be plenty for most users. You have a USB 3.2 Gen2x2 Type-C port, five USB 3.2 Gen2 Type-A ports and four USB 3.2 Gen1 Type-A ports. There is a single DisplayPort output for those who would like to use the CPU’s integrated graphics. The audio stack consists of five gold-plated analog jacks and a SPDIF out. On the networking side is the Aquantia 10 GbE port and the Wi-Fi antenna. Last but not least is a Clear CMOS button and a Q-Flash button, the latter designed for flashing the BIOS without a CPU.
ASRock has quietly introduced one of the industry’s first Intel Z590-based Mini-ITX motherboards with a Thunderbolt 4 port. The manufacturer positions its Z590 Phantom Gaming-ITX/TB4 as its top-of-the-range offering for compact gaming builds, aimed at enthusiasts who want all the capabilities of large tower desktops and then some, so it is packed with advanced features.
The ASRock Z590 Phantom Gaming-ITX/TB4 motherboard supports all of Intel’s 10th and 11th Generation Comet Lake and Rocket Lake processors, including the top-of-the-range Core i9-11900K with a 125W TDP.
One of the main selling points of the Z590 Phantom Gaming-ITX/TB4 motherboard is, of course, its Thunderbolt 4 port, which supports 40 Gb/s throughput when attached to appropriate TB3/TB4 devices (or 10 Gb/s when connected to a USB 3.2 Gen 2 device), such as high-end external storage subsystems (in case internal storage is not enough on a Mini-ITX build), and can drive two 4K displays or one 8K monitor (albeit with DSC). Furthermore, the motherboard has five USB 3.2 Gen 2 ports on the back as well as an internal header for a front-panel USB 3.2 Gen 2x2 port that supports transfer rates up to 20 Gb/s.
The platform relies on a 10-layer PCB and is equipped with a 10-phase VRM featuring 90A solid-state coils, 90A DrMOS power stages, and solid-state Nichicon 12K capacitors to ensure maximum performance, reliable operation, and some additional overclocking headroom. Interestingly, the motherboard’s CPU fan header provides up to 2A of power to support water pumps.
The Z590 Phantom Gaming-ITX/TB4 also has a PCIe 4.0 x16 slot for graphics cards, two slots for up to 64 GB of DDR4-4266+ memory, two M.2-2280 slots for SSDs (with a PCIe 4.0 x4 as well as a PCIe 3.0 x4/SATA interface), and three SATA connectors. To guarantee the consistent performance and stable operation of high-end SSDs, ASRock supplies its own heat spreaders for M.2 drives that match its motherboard’s design.
Being a top-of-the-range product, the ASRock Z590 Phantom Gaming-ITX/TB4 naturally supports addressable RGB lighting (using the ASRock Polychrome Sync/Polychrome RGB software) and has a sophisticated input/output department with a number of unique features, such as three display outputs and multi-gig networking.
In addition, the mainboard has a DisplayPort 1.4 as well as an HDMI 2.0b connector. Keeping in mind that Intel’s desktop UHD Graphics has three display pipelines, the motherboard can handle three monitors even without a discrete graphics card. Meanwhile, the integrated Xe-LP architecture used in Rocket Lake’s UHD Graphics 730 has very advanced media playback capabilities (e.g., a hardware-accelerated 12-bit video pipeline for wide-color 8K60 HDR playback), so it can handle Ultra-HD Blu-ray, contemporary video services that use modern codecs, and next-generation 8Kp60 video formats.
Next up is networking. The Z590 Phantom Gaming-ITX/TB4 comes with an M.2-2230 Killer AX1675x WiFi 6E + Bluetooth 5.2 PCIe module that supports up to 2.4 Gbps throughput when connected to an appropriate router. Also, the motherboard is equipped with a Killer E3100G 2.5GbE adapter. The adapters can be used at the same time courtesy of Killer’s DoubleShot Pro technology that aggregates bandwidth and prioritizes high-priority traffic, so the maximum networking performance can be increased up to 4.9 Gbps.
The audio department of the Z590 Phantom Gaming-ITX/TB4 is managed by the Realtek ALC1220 audio codec with Nahimic Audio software enhancements and includes 7.1-channel analog outputs as well as an S/PDIF digital output.
ASRock’s Z590 Phantom Gaming-ITX/TB4 motherboard will be available starting from April 23 in Japan, reports Hermitage Akihabara. In the Land of the Rising Sun, the unit will cost ¥38,000 (around $345) without taxes and ¥41,800 with taxes.
A week ago, German overclocker der8auer published a video showcasing his findings on delidding a Core i9-11900K, Intel’s new flagship Rocket Lake CPU. It’s not one of the best CPUs, based on our testing, but it’s still a formidable opponent in the right workloads. der8auer found that delidding Rocket Lake yielded very impressive temperature results; however, the process is so difficult that it might not be worth the effort.
The problem with the 11900K is the more complex layout of components on its PCB. Next to the IHS sit a bunch of tiny SMD (surface-mounted device) capacitors that are incredibly delicate. The capacitors are so close to the IHS that you can easily hit one while removing it, which would likely render the CPU nonfunctional.
This is unlike the earlier 9th and 10th Gen chips that don’t have any SMDs anywhere near the IHS, allowing for a (relatively) safe delidding process if you have the right tools. But der8auer is a professional overclocker and skilled at delidding, so he took the chance on his 11900K.
Not only do the SMDs pose problems, but what’s even worse is the amount of pressure you need to apply to the 11900K during the delidding. der8auer had to upgrade his CPU delidding tool with a torque wrench to get the IHS to move at all, where past CPUs only needed an Allen wrench. You can see the strain of trying to twist the tool while keeping the delidding box stable in the above video.
Needless to say, this adds significant risk to the delidding process. Even with the torque wrench, the IHS didn’t want to come off, so der8auer had to resort to warming the CPU up in an oven first. In the end, he was able to successfully remove the IHS, though he mentions several times that he would not recommend most people attempt to do so.
The good news is that the end results are quite impressive. der8auer noted a 10C–12C reduction in temperatures, purely from scraping off the solder on the IHS and replacing it with liquid metal.
This is very impressive on a chip that already has solder. Using solder (as opposed to some form of thermal paste) between the CPU die and the IHS is already a good solution for thermal dissipation. Upgrading to liquid metal normally only results in about a 5C reduction in temperatures, not 10-12C.
It’s rather unfortunate that the delidding process is so incredibly risky on Rocket Lake CPUs. We’d love to see more delidded 11900K testing to see if der8auer’s results are typical, but the likelihood of damaging the CPU is so great that it’s not worth the risk for the vast majority of users — even for an impressive 10C drop in temperatures.
Intel last week debuted the 11th Gen Core “Rocket Lake” desktop processor family, and we had launch-day reviews of the Core i9-11900K flagship and the mid-range Core i5-11600K. Today we bring you the Core i5-11400F—probably the most interesting model in the whole stack. The Core i5-xx400 has been the often-ignored SKU among Intel desktop processors for the past many generations, yet it is also the most popular among gamers. Popular chips of this kind included the i5-8400, the i5-9400F, and the i5-10400F.
These chips offer the entire Core i5 feature-set at prices below $200, albeit with lower clock speeds and a locked multiplier. Within these, Intel also introduced a sub-segment of chips that lack integrated graphics, denoted by “F” in the model number, which shaves a further $15-20 off the price. The Core i5-11400F starts at just $160, an impressive value proposition for gamers who use graphics cards and don’t need the iGPU anyway.
The new “Rocket Lake” microarchitecture brings four key changes that make it the company’s first major innovation for client desktop in several years. First, Intel is introducing the new “Cypress Cove” CPU core, which promises an IPC gain of up to 19% over the previous generation. Next up is the new UHD 750 integrated graphics powered by the Intel Xe-LP graphics architecture, promising up to 50% performance uplift over the previous generation’s Gen9.5 iGPU. Third is a much-needed update to the processor’s I/O, including PCI-Express 4.0 for graphics and a CPU-attached NVMe slot. And lastly, an updated memory controller allows much higher memory overclocking potential, thanks to the introduction of a Gear 2 mode.
The Core i5-11400F comes with a permanently disabled iGPU and a locked multiplier. Intel has still enabled support for memory frequencies of up to DDR4-3200, which is now possible even on the mid-tier H570 and B560 motherboard chipsets. The i5-11400F is a 6-core/12-thread processor clocked at 2.60 GHz, with up to 4.40 GHz Turbo Boost frequency. Each of the processor’s six “Cypress Cove” cores includes 512 KB of dedicated L2 cache, and the cores share 12 MB of L3 cache. Intel rates the processor’s TDP at 65 W, just like the other non-K SKUs, although it is possible to tweak the power limits—adjusting PL1 and PL2 is not considered “overclocking” by Intel, so it is not locked.
At $170, the Core i5-11400F has no real competitor from AMD. The Ryzen 5 3600 starts around $200, and the company didn’t bother (yet?) with cheaper Ryzen 5 SKUs based on “Zen 3”. In this review, we take the i5-11400F for a spin to show you if this is really all you need for a mid-priced contemporary gaming rig.
We present several data sets in our Core i5-11400F review: “Gear 1” and “Gear 2” show performance results for the processor operating at stock, with the default power-limit setting active, respecting the 65 W TDP. Next up, we have two runs with the power limit raised to maximum: “Max Power Limit / Gear 1” and “Max Power Limit / Gear 2”. Last but not least, signifying the maximum performance you can possibly achieve on this CPU, we have a “Max Power + Max BCLK” run, which operates at a 102.9 MHz BCLK—the maximum allowed by the processor—with Gear 1 DDR4-3733, the highest the memory controller will run.
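For readers unfamiliar with Intel’s Gear modes, here’s a small illustrative sketch (round-number multipliers assumed; real ratios come in fixed steps) of how BCLK, the memory multiplier, and the Gear setting relate:

```python
# Simplified relationships, assumed for illustration:
#   DRAM real clock (MHz)   = BCLK x memory multiplier
#   DDR4 rating ("DDR4-X")  = 2 x DRAM real clock (double data rate)
#   Memory controller clock = DRAM clock in Gear 1, half of it in Gear 2

def memory_clocks(bclk_mhz: float, multiplier: float, gear: int):
    dram_clock = bclk_mhz * multiplier
    ddr_rating = 2 * dram_clock
    controller_clock = dram_clock / gear
    return dram_clock, ddr_rating, controller_clock

# DDR4-3733 in Gear 1 at a stock 100 MHz BCLK: the controller runs 1:1.
print(memory_clocks(100.0, 18.66, gear=1))   # (1866.0, 3732.0, 1866.0)

# The same kit in Gear 2: the controller drops to half the DRAM clock,
# trading latency for headroom to reach higher transfer rates.
print(memory_clocks(100.0, 18.66, gear=2))   # (1866.0, 3732.0, 933.0)
```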
How much power does your graphics card use? It’s an important question, and while the performance we show in our GPU benchmarks hierarchy is useful, one of the true measures of a GPU is how efficient it is. To determine GPU power efficiency, we need to know both performance and power use. Measuring performance is relatively easy, but measuring power can be complex. We’re here to press the reset button on GPU power measurements and do things the right way.
There are various ways to determine power use, with varying levels of difficulty and accuracy. The easiest approach is via software like GPU-Z, which will tell you what the hardware reports. Alternatively, you can measure power at the outlet using something like a Kill-A-Watt power meter, but that only captures total system power, including PSU inefficiencies. The best and most accurate means of measuring the power use of a graphics card is to measure power draw between the power supply (PSU) and the card, but it requires a lot more work.
We’ve used GPU-Z in the past, but it had some clear inaccuracies. Depending on the GPU, it can be off by anywhere from a few watts to potentially 50W or more. Thankfully, the latest generation AMD Big Navi and Nvidia Ampere GPUs tend to report relatively accurate data, but we’re doing things the right way. And by “right way,” we mean measuring in-line power consumption using hardware devices. Specifically, we’re using Powenetics software in combination with various monitors from TinkerForge. You can read our Powenetics project overview for additional details.
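To make the “right way” concrete, the idea reduces to summing volts times amps over every supply path into the card: the PCIe slot’s 12V and 3.3V rails plus each PEG connector. Here’s a minimal sketch; the channel names and sample layout are invented for illustration and aren’t the actual Powenetics log format:

```python
# Hypothetical illustration only — the real Powenetics/TinkerForge stack
# logs similar per-rail voltage/current pairs at a fixed interval.

Sample = dict[str, tuple[float, float]]  # rail -> (volts, amps)

def board_power(sample: Sample) -> float:
    """Instantaneous card power in watts across all measured rails."""
    return sum(volts * amps for volts, amps in sample.values())

log: list[Sample] = [
    {"slot_12v": (12.1, 4.3), "slot_3v3": (3.3, 0.8), "peg1_12v": (12.0, 12.5)},
    {"slot_12v": (12.1, 4.1), "slot_3v3": (3.3, 0.8), "peg1_12v": (12.0, 13.1)},
]

powers = [board_power(s) for s in log]
print(f"average draw: {sum(powers) / len(powers):.1f} W")  # 207.1 W
```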
Tom’s Hardware GPU Testbed
After assembling the necessary bits and pieces — some soldering required — the testing process is relatively straightforward. Plug in a graphics card and the power leads, boot the PC, and run some tests that put a load on the GPU while logging power use.
We’ve done that with all the legacy GPUs we have from the past six years or so, and we do the same for every new GPU launch. We’ve updated this article with the latest data from the GeForce RTX 3090, RTX 3080, RTX 3070, RTX 3060 Ti, and RTX 3060 12GB from Nvidia; and the Radeon RX 6900 XT, RX 6800 XT, RX 6800, and RX 6700 XT from AMD. We use the reference models whenever possible, which means only the EVGA RTX 3060 is a custom card.
If you want to see power use and other metrics for custom cards, all of our graphics card reviews include power testing. So for example, the RX 6800 XT roundup shows that many custom cards use about 40W more power than the reference designs, thanks to factory overclocks.
Test Setup
We’re using our standard graphics card testbed for these power measurements, and it’s what we’ll use on graphics card reviews. It consists of an MSI MEG Z390 Ace motherboard, Intel Core i9-9900K CPU, NZXT Z73 cooler, 32GB of Corsair DDR4-3200 RAM, a fast M.2 SSD, and the other various bits and pieces you see to the right. This is an open test bed, because the Powenetics equipment essentially requires one.
There’s a PCIe x16 riser card (which is where the soldering came into play) that slots into the motherboard, and then the graphics cards slot into that. This is how we accurately capture actual PCIe slot power draw, from both the 12V and 3.3V rails. There are also 12V kits measuring power draw for each of the PCIe Graphics (PEG) power connectors — we cut the PEG power harnesses in half and run the cables through the power blocks. RIP, PSU cable.
Powenetics equipment in hand, we set about testing and retesting all of the current and previous generation GPUs we could get our hands on. You can see the full list of everything we’ve tested in the list to the right.
From AMD, all of the latest generation Big Navi / RDNA2 GPUs use reference designs, as do the previous gen RX 5700 XT and RX 5700 cards, Radeon VII, Vega 64 and Vega 56. AMD doesn’t do ‘reference’ models on most other GPUs, so we’ve used third-party designs to fill in the blanks.
For Nvidia, all of the Ampere GPUs are Founders Edition models, except for the EVGA RTX 3060 card. With Turing, everything from the RTX 2060 and above is a Founders Edition card — which includes the 90 MHz overclock and slightly higher TDP on the non-Super models — while the other Turing cards are all AIB partner cards. Older GTX 10-series and GTX 900-series cards use reference designs as well, except where indicated.
Note that all of the cards run ‘factory stock,’ meaning no manual overclocking or undervolting is involved. Yes, the various cards might run better with some tuning and tweaking, but this is the way the cards will behave if you just pull them out of their box and install them in your PC. (RX Vega cards in particular benefit from tuning, in our experience.)
Our testing uses the Metro Exodus benchmark looped five times at 1440p ultra (except on cards with 4GB or less VRAM, where we loop 1080p ultra — that uses a bit more power). We also run FurMark for ten minutes. These are both demanding tests, and FurMark can push some GPUs beyond their normal limits, though the latest models from AMD and Nvidia tend to cope with it just fine. We’re only focusing on power draw for this article; for temperature, fan speed, and GPU clock results we continue to use GPU-Z to gather the data.
GPU Power Use While Gaming: Metro Exodus
Due to the number of cards being tested, we have multiple charts. The average power use charts show average power consumption over the approximately 10-minute test. These charts do not include the time between test runs, where power use dips for about 9 seconds, so they give a realistic view of the sort of power use you’ll see when playing a game for hours on end.
Besides the bar chart, we have separate line charts segregated into groups of up to 12 GPUs, and we’ve grouped cards from similar generations into each chart. These show real-time power draw over the course of the benchmark using data from Powenetics. The 12 GPUs per chart limit is to try and keep the charts mostly legible, and the division of what GPU goes on which chart is somewhat arbitrary.
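The averaging described above boils down to trimming the brief between-run dips before computing the mean. A sketch of the idea (the cutoff value and data layout are assumptions for illustration):

```python
# Assumed approach: discard idle dips between benchmark runs so the
# average reflects sustained load rather than the gaps.

def sustained_average(watts: list[float], idle_cutoff: float) -> float:
    """Average of samples at or above an idle cutoff, in watts."""
    loaded = [w for w in watts if w >= idle_cutoff]
    return sum(loaded) / len(loaded)

samples = [220.0, 218.5, 35.0, 34.0, 221.3, 219.8]    # two runs, one dip
print(sustained_average(samples, idle_cutoff=110.0))  # 219.9
```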
Kicking things off with the latest generation GPUs, overall power use is relatively similar. The 3090 and 3080 use the most power (for the reference models), followed by the three Navi 21 cards. The RTX 3070, RTX 3060 Ti, and RX 6700 XT are all pretty close, with the RTX 3060 dropping power use by around 35W. AMD’s RX 6800 XT and RX 6900 XT do draw less power than the RTX 3080 and RTX 3090, but Nvidia’s GPUs are a bit faster, so it mostly equals out.
Step back one generation to the Turing GPUs and Navi 1x, and Nvidia had far more GPU models available than AMD. There were 15 Turing variants — six GTX 16-series and nine RTX 20-series — while AMD only had five RX 5000-series GPUs. Comparing similar performance levels, Nvidia Turing generally comes in ahead of AMD, despite using a 12nm process compared to 7nm. That’s particularly true when looking at the GTX 1660 Super and below versus the RX 5500 XT cards, though the RTX models are closer to their AMD counterparts (while offering extra features).
It’s pretty obvious how far AMD fell behind Nvidia prior to the Navi generation GPUs. The various Vega and Polaris AMD cards use significantly more power than their Nvidia counterparts. RX Vega 64 was particularly egregious, with the reference card using nearly 300W. If you’re still running an older generation AMD card, this is one good reason to upgrade. The same is true of the legacy cards, though we’re missing many models from these generations of GPU. Perhaps the less said, the better, so let’s move on.
GPU Power with FurMark
FurMark, as we’ve frequently pointed out, is basically a worst-case scenario for power use. Some GPUs tend to be more aggressive about throttling with FurMark, while others go hog wild and dramatically exceed official TDPs. Few if any games can tax a GPU quite like FurMark, though things like cryptocurrency mining can come close with some algorithms (but not Ethereum’s Ethash, which tends to be limited by memory bandwidth). The chart setup is the same as above, with average power use charts followed by detailed line charts.
The latest Ampere and RDNA2 GPUs are relatively evenly matched, with all of the cards using a bit more power in FurMark than in Metro Exodus. One thing we’re not showing here is average GPU clocks, which tend to be far lower than in gaming scenarios — you can see that data, along with fan speeds and temperatures, in our graphics card reviews.
The Navi / RDNA1 and Turing GPUs start to separate a bit more, particularly in the budget and midrange segments. AMD didn’t really have anything to compete against Nvidia’s top GPUs, as the RX 5700 XT only matched the RTX 2070 Super at best. Note the gap in power use between the RTX 2060 and RX 5600 XT, though. In gaming, the two GPUs were pretty similar, but in FurMark the AMD chip uses nearly 30W more power. Actually, the 5600 XT used more power than the RX 5700, but that’s probably because the Sapphire Pulse we used for testing has a modest factory overclock. The RX 5500 XT cards also draw more power than any of the GTX 16-series cards.
With the Pascal, Polaris, and Vega GPUs, AMD’s GPUs fall toward the bottom. The Vega 64 and Radeon VII both use nearly 300W, and considering the Vega 64 competes with the GTX 1080 in performance, that’s pretty awful. The RX 570 4GB (an MSI Gaming X model) actually exceeds the official power spec for an 8-pin PEG connector with FurMark, pulling nearly 180W. That’s thankfully the only GPU to go above spec, for the PEG connector(s) or the PCIe slot, but it does illustrate just how bad things can get in a worst-case workload.
The legacy charts are even worse for AMD. The R9 Fury X and R9 390 go well over 300W with FurMark, though perhaps that’s more of an issue with the hardware not throttling to stay within spec. Anyway, it’s great to see that AMD no longer trails Nvidia as badly as it did five or six years ago!
Analyzing GPU Power Use and Efficiency
It’s worth noting that we’re not showing or discussing GPU clocks, fan speeds or GPU temperatures in this article. Power, performance, temperature and fan speed are all interrelated, so a higher fan speed can drop temperatures and allow for higher performance and power consumption. Alternatively, a card can drop GPU clocks in order to reduce power consumption and temperature. We dig into this in our individual GPU and graphics card reviews, but we just wanted to focus on the power charts here. If you see discrepancies between previous and future GPU reviews, this is why.
The good news is that, using these testing procedures, we can properly measure the real graphics card power use and not be left to the whims of the various companies when it comes to power information. It’s not that power is the most important metric when looking at graphics cards, but if other aspects like performance, features and price are the same, getting the card that uses less power is a good idea. Now bring on the new GPUs!
Here’s the final high-level overview of our GPU power testing, showing relative efficiency in terms of performance per watt. The power data listed is a weighted geometric mean of the Metro Exodus and FurMark power consumption, while the FPS comes from our GPU benchmarks hierarchy and uses the geometric mean of nine games tested at six different settings and resolution combinations (so 54 results, summarized into a single fps score).
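In code form, the scoring math looks roughly like this (a sketch with made-up numbers; the game-vs-FurMark weighting is our assumption, since the exact weights aren’t given):

```python
# Geometric-mean FPS across many tests, a weighted geometric mean of the
# two power figures, then performance per watt for one hypothetical GPU.

from math import prod

def geo_mean(values: list[float]) -> float:
    return prod(values) ** (1 / len(values))

def weighted_geo_mean(a: float, b: float, weight_a: float) -> float:
    return (a ** weight_a) * (b ** (1 - weight_a))

fps = geo_mean([144.0, 97.0, 61.5])                     # per-test FPS
power = weighted_geo_mean(215.0, 230.0, weight_a=0.75)  # game W, FurMark W

print(f"{fps:.1f} fps at {power:.1f} W -> {fps / power:.3f} fps per watt")
```

Scaling every card’s fps-per-watt figure against the best result then yields the relative-efficiency table described below.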
This table combines the performance data for all of the tested GPUs with the power use data discussed above, sorts by performance per watt, and then scales all of the scores relative to the most efficient GPU (currently the RX 6800). It’s a telling look at how far behind AMD was, and how far it’s come with the latest Big Navi architecture.
Efficiency isn’t the only important metric for a GPU, and performance definitely matters. Also of note: the performance data does not include newer technologies like ray tracing and DLSS.
The most efficient GPUs are a mix of AMD’s Big Navi GPUs and Nvidia’s Ampere cards, along with some first generation Navi and Nvidia Turing chips. AMD claims the top spot with the Navi 21-based RX 6800, and Nvidia takes second place with the RTX 3070. Seven of the top ten spots are occupied by either RDNA2 or Ampere cards. However, Nvidia’s GDDR6X-equipped GPUs, the RTX 3080 and 3090, rank 17th and 20th, respectively.
Given the current GPU shortages, finding a new graphics card in stock is difficult at best. By the time things settle down, we might even have RDNA3 and Hopper GPUs on the shelves. If you’re still hanging on to an older generation GPU, upgrading might be problematic, but at some point it will be the smart move, considering the added performance and efficiency of more recent offerings.
Popular CPU temperature monitoring utility Core Temp was recently updated to version 1.17, adding support not only for Intel and AMD’s latest and best CPUs, but also for some chips yet to be released.
Core Temp version 1.17 adds full support for Rocket Lake-S, which includes chips like the new Core i9-11900K. It also brings preliminary support for both Intel’s 12th Gen Alder Lake desktop CPUs, which aren’t expected to arrive until late 2021 or early 2022, and Intel’s 3D-stacked Meteor Lake chips, poised for a 2023 release.
It’s very interesting to see Alder Lake support already, because this new CPU architecture is radically different from anything Intel has produced so far, and it will need new types of monitoring to measure CPU core temperatures. Similar to the design of Arm chips like the Apple M1 and other smartphone chips from Qualcomm and Samsung, the line will introduce a hybrid architecture. In the case of Alder Lake, this is a hybrid x86 design with two tiers of CPU cores: one set of high-performance cores and one set of high-efficiency cores.
Alder Lake will also be Intel’s first desktop architecture to finally move off the super mature 14nm process and will instead use Intel’s newly refined 10nm SuperFin process. SuperFin promises to achieve much higher performance-per-watt compared to 14nm.
So to monitor temperatures, programs like Core Temp will have to track both the high-performance cores and the high-efficiency cores, plus all the other sensors like CPU die and package temperature. Hopefully, Core Temp will figure out a way to give users all this data without being overwhelming or confusing.
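As a purely hypothetical illustration of that problem (no real sensor API involved), a monitor might group per-core readings by core type before presenting them:

```python
# Hypothetical data shape — not Core Temp's internals. The point: a
# hybrid CPU's readings need grouping by core type so each cluster can
# be summarized separately instead of as one undifferentiated list.

from statistics import mean

core_temps = {
    "performance cores": [62.0, 64.5, 61.0, 63.5],  # degrees C
    "efficiency cores":  [48.0, 47.5, 49.0, 46.5],
}

for cluster, temps in core_temps.items():
    print(f"{cluster}: max {max(temps):.1f}C, avg {mean(temps):.1f}C")
```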
This same logic should also apply to Meteor Lake, which (for now) is also believed to have a hybrid x86 architecture featuring low-power and high-performance cores. Unlike Alder Lake, Meteor Lake is expected to be shipped on Intel’s new 7nm EUV process. We don’t know the exact details on this node, but it should provide a hefty efficiency upgrade over 10nm SuperFin if Intel wants to maintain its competitive edge against TSMC’s 7nm and 5nm nodes.
Speaking of TSMC, Intel is planning on using some of TSMC’s silicon for select Meteor Lake products in the future to offset recent delays related to its homebrewed 7nm node. To make this a reality, Meteor Lake will come with a 3D-stacked architecture, which will allow Intel to swap out Intel silicon for TSMC silicon and vice versa.
Other Updates and Bug Fixes
In addition to new support for Intel CPUs, Core Temp version 1.17 also adds full support for AMD’s latest Ryzen 5000 processors, as well as for its Zen 2-based APUs, which include all of AMD’s Ryzen 4000 mobile CPUs and some of AMD’s Ryzen 5000 mobile processors.
The update also brings numerous bug fixes, which we’ve listed below.
“Unsupported CPU” message when only some cores have HT enabled
AMD Epyc Rome/Threadripper 3rd gen platform detection
Gemini Lake platform detection
Whiskey Lake codename
Incorrect VID reporting on some Celeron/Pentium processors
Crash on Intel Banias based (Pentium/Celeron M) processors
Turbo multiplier detection on Nehalem/Westmere
Bugs related to response to DPI changes
VID reporting on some AMD Athlon64 processors
Changes:
AMD Bulldozer-based processors now display the number of modules/threads instead of cores/threads
Improve accuracy of information on unsupported Intel CPUs
Sourced from Twitter: the day has finally come when higher-core-count six-core and eight-core CPUs are ready to overtake aging dual-core and quad-core solutions in market share. Thanks to Steam’s hardware survey, we can get precise details on how popular both AMD and Intel’s hexa-core and octa-core processors have become over the past several years.
This huge uptick in six- and eight-core popularity is partly thanks to AMD’s strategy of bringing as many cores as possible to both desktop and mobile platforms with the Zen architecture over the past few years, with Intel following suit.
As of this moment, Steam’s chart reveals that quad-cores are still in the lead, by roughly 10%. However, since the end of 2017, the popularity of six-core processors has been growing consistently, by a whopping 10% of market share per year, easily surpassing dual-core popularity in mid-2019.
It makes a lot of sense. Over the past few years in the desktop market, AMD and Intel’s six-core processors have become some of the best CPUs available, with excellent price-to-performance ratios and excellent gaming performance.
In fact, the gaming performance of modern six-core chips like AMD’s Ryzen 5 5600X and Intel’s Core i5-10600K is so good that each chip lands just a couple of percentage points behind Intel and AMD’s flagship 10-core and 16-core parts.
Plus, the recent rise of mobile hexa-core CPUs from both AMD and Intel has boosted six-core adoption rates significantly, as laptops hold a larger market share overall than desktops.
At this rate, hexa-core and quad-core processors should attain equal market share by the end of this year, with hexa-core CPUs continuing to gain popularity well into 2022.
8-Core Popularity
As for the market performance of octa-core CPUs, their popularity is definitely lower than that of six-core parts. However, they are still on a consistent and aggressive uptrend.
As of right now, 8-core chips are neck and neck with dual-core CPUs in popularity and should surpass dual-core market share very soon. However, octa-core chips are still well away from competing with quad-core CPUs, which maintain the lead in market share.
The popularity of 8-core chips skyrocketed in late 2018, which coincides with Coffee Lake Refresh, where we saw Intel’s first-ever mainstream 8-core CPU arrive on the scene: the Core i9-9900K.
Plus, at this time AMD was also releasing a new eight-core chip, the Ryzen 7 2700X, built on the then-new Zen+ architecture.
In 2018, the mobile market also saw a major change, with Intel pushing out mobile 8-core chips for the first time in history, and AMD following suit less than a year later.
From late 2018 to 2020, octa-core chips gained about 5-6% of market share in Steam’s hardware survey. But during 2020-2021, that share grew from 5-6% to almost 15%.
Be aware that these results cover all types of CPUs, including both desktop and mobile chips. While we DIY PC builders like to think we own the show, in terms of market share we really don’t — ours is a small slice at best.
About 46% of the total market share belongs to desktops, and this includes both OEM and DIY markets. Around 50% of traffic comes from laptops.
Are We Headed Towards A Six-Core vs. Eight-Core Popularity Contest?
Overall, it’s good to see quad-cores and dual-cores dying out, as their capabilities have become less and less useful in a world pushing toward more and more multi-threaded workloads.
Now, it remains to be seen whether six-core market share and eight-core market share will start competing against each other. Not to mention additional competitors like 10, 12, and 16 core processors, which will undoubtedly gain mainstream popularity at some point in the future.
For some awkward reason, Intel has not posted a version of its Xe-LP graphics driver for its Rocket Lake processors on its website. The drivers are also not available from Intel’s motherboard partners, which causes quite some confusion, as this essentially renders Rocket Lake’s new Intel UHD Graphics, based on the Xe-LP architecture, useless. However, there is a workaround for those who need it.
Intel’s main angle with its Rocket Lake processors for desktops is gaming, which is why it promotes its 125W Core i9-11900K and Core i7-11700K CPUs. Such systems rarely use integrated graphics, so enthusiasts do not really care about the availability of Xe-LP drivers for their chips. But the Rocket Lake family also includes multiple 65W and 35W processors designed for mainstream or low-power machines that do use integrated GPUs.
For some reason, there are no drivers for Rocket Lake’s Xe-LP-based Intel UHD Graphics on Intel’s website, as noticed by AdoredTV. Intel’s motherboard partners do offer different versions of Intel’s Graphics Driver released in 2021 (which adds to the confusion), but none of them officially supports Rocket Lake’s integrated graphics, according to VideoCardz.
The absence of Xe-LP drivers for Rocket Lake processors from official websites is hardly a big problem, as there is an easy workaround. Instead of using the automatic driver installation wizard that comes in .exe format, you can download a .zip file with the same driver (version 27.20.100.9316 at press time), install it using Windows 10’s Update Driver feature with the Have Disk option, and then hand-pick the Intel Iris Xe Graphics device.
Since Rocket Lake’s integrated GPU is based on the same architecture as Tiger Lake’s, the graphics core will work just as it should. This option will work for experienced DIYers, but it might be tricky for an average user.
Unlike do-it-yourselfers, OEMs and PC makers will not use a workaround as the latest official driver has never been validated for the Rocket Lake family. Fortunately, at least some OEMs have access to ‘official’ Rocket Lake graphics drivers.
“We have drivers flowing to OEMs for a while now,” said Lisa Pierce, Intel vice president and director of graphics software engineering, in a Twitter post on April 2. “The delay was in a public posting with our unified graphics driver flow and we will work it post ASAP.”
She did not elaborate on when exactly the driver will be posted to Intel.com and whether it needs to pass WHQL validation before that. Meanwhile, on April 1 Pierce said that Rocket Lake drivers were several weeks away.
Razer has been a loyal supporter of Team Blue. However, the tech giant may have finally bitten the bullet and joined up with Team Red. If the recently discovered 3DMark submissions (via _rogame) are accurate, Razer will release the company’s first-ever AMD-powered gaming laptop soon.
The mysterious laptop emerged as the Razer PI411. There is speculation that the codename may allude to the Razer Blade 14, which debuted back in 2013. The last time Razer updated the Razer Blade 14 was in 2016, so a well-deserved update is due. Nevertheless, we can’t discard the possibility that PI411 could just be a codename for any other Razer device.
The Razer PI411 features AMD’s top-tier Ryzen 9 5900HX (Cezanne) processor. The Ryzen 9 5900HX is AMD’s first overclockable mobile processor, and the chipmaker designed it to take the fight to Intel’s HK-series mobile chips, such as the Core i9-10980HK or the looming Core i9-11980HK.
Armed with eight Zen 3 cores and 16MB of L3 cache, the Ryzen 9 5900HX comes with a 3.3 GHz base clock and a 4.6 GHz boost clock. It has a generous cTDP (configurable thermal design power) between 35W and 54W. The last Razer Blade 14 (2016) employed the Core i7-6700HQ, a 45W processor from the Skylake days. The gaming laptop is no stranger to housing hot chips. If Razer wants to work the Ryzen 9 5900HX into the Razer Blade 14, the new iteration will likely have to rely on a more robust cooling solution than its predecessors to leave enough thermal headroom for manual overclocking.
The Razer PI411 is also equipped with 16GB of DDR4-3200 memory and a 512GB NVMe SSD. However, it’s probably just an engineering sample, so the final product could arrive with more memory and a bigger SSD. So far, we’ve seen the Razer PI411 with two discrete graphics card options from Nvidia. As a quick reminder, the chipmaker’s latest mobile GeForce RTX 3000 (Ampere) offerings are available at different TDP limits, which adds a lot of confusion if the vendor doesn’t specifically list the value.
The first Razer PI411 unit employs a GeForce RTX 3060. The 14 Gbps memory confirms that the Razer PI411 uses the GeForce RTX 3060 Mobile or Max-P variant as opposed to the Max-Q variant. The 900 MHz base clock points to the 80W version.
The second and most recent Razer PI411 unit, on the other hand, leverages the more powerful GeForce RTX 3070. The memory is clocked at 12 Gbps, meaning it’s the Max-Q variant. This particular GeForce RTX 3070 Max-Q sports a 780 MHz base clock, so it coincides with the 80W version as well.
The 3DMark submissions aren’t conclusive evidence that Razer is sold on the idea. We hope Razer does go through with it, though, since the laptop market could use another high-end AMD-based laptop.
After almost a decade of total market dominance, Intel has spent the past few years on the defensive. AMD’s Ryzen processors continue to show improvement year over year, with the most recent Ryzen 5000 series taking the crown of best gaming processor: Intel’s last bastion of superiority.
Now, with a booming hardware market, Intel is preparing to make up some of that lost ground with the new 11th Gen Intel Core processors. Intel claims these new 11th Gen CPUs offer double-digit IPC improvements despite remaining on a 14 nm process. The top-end 8-core Intel Core i9-11900K may not be able to compete against its AMD rival, the Ryzen 9 5900X, in heavily multi-threaded scenarios, but the higher clock speeds and alleged IPC improvements could be enough to take back the gaming crown. Along with the new CPUs, there is a new chipset to match, the Intel Z590. Last year’s Z490 chipset motherboards are also compatible with the new 11th Gen Intel Core processors, but Z590 introduces some key advantages.
First, Z590 offers native PCIe 4.0 support from the CPU, which means the PCIe and M.2 slots powered off the CPU offer PCIe 4.0 connectivity when an 11th Gen CPU is installed. The PCIe and M.2 slots controlled by the Z590 chipset are still PCIe 3.0. While many high-end Z490 motherboards advertised this capability, it was not a standard feature of the platform. In addition to PCIe 4.0 support, Z590 offers USB 3.2 Gen 2x2 from the chipset, a standard that delivers speeds of up to 20 Gb/s. Finally, Z590 boasts native support for 3200 MHz DDR4 memory. With these upgrades, Intel’s Z-series platform reaches feature parity with AMD’s B550. On paper, Intel is catching up to AMD, but only testing will tell if these new Z590 motherboards are up to the challenge.
The ASRock Z590 Phantom Gaming Velocita is a recent addition to ASRock’s arsenal. It targets the gamer market with Killer networking for both wired and wireless connectivity, and even offers an option to route network traffic straight from the Killer LAN controller to the CPU. The board features a dependable 14-phase VRM that takes advantage of 50 A power stages from Vishay. The Z590 Phantom Gaming Velocita has all the core features for a great gaming motherboard; all that is left is to see how it stacks up against the competition!
Networking:
1x Killer E3100G 2.5 Gb/s LAN, 1x Intel I219V Gigabit LAN, 1x Killer AX1675x WiFi 6E

Rear Ports:
2x Antenna Ports, 1x HDMI Port, 1x DisplayPort 1.4, 1x Optical SPDIF Out Port, 1x USB 3.2 Gen2 Type-A Port, 1x USB 3.2 Gen2 Type-C Port, 6x USB 3.2 Gen1 Type-A Ports, 2x USB 2.0 Ports, 2x RJ-45 LAN Ports, 5x 3.5 mm HD Audio Jacks