Silicon Power has launched a new series of DDR4 memory kits under the company’s Xpower gaming brand. Consumers will be happy to know that the Zenith memory kits are available in both RGB and non-RGB flavors.
Designed to compete with the best RAM on the market, the Zenith and Zenith RGB come bearing a 10-layer PCB that’s passively cooled with an iron-grey aluminium heat spreader. Either way, the memory modules stand 38.5 mm tall, so compatibility with air coolers shouldn’t be an issue. The Zenith RGB features a user-controllable RGB light bar that plays nice with the four major motherboard lighting ecosystems: Asus Aura Sync, Gigabyte RGB Fusion, MSI Mystic Light Sync, and ASRock Polychrome Sync.
Silicon Power sells the Zenith and Zenith RGB in single-module and dual-channel packages. The former is available from 8GB to 32GB, while the latter spans from 16GB (2x8GB) to 64GB (2x32GB).
Memory frequency options on the Zenith and Zenith RGB are very limited. Consumers can only pick from three data rates: DDR4-3200, DDR4-3600 or DDR4-4133. Silicon Power didn’t reveal the entire specification sheet for the memory kits so only their CAS Latency (CL) value is known.
The DDR4-3200 and DDR4-3600 memory kits arrive with CL16 and CL18, respectively, while the DDR4-4133 memory kit features CL19. The DDR4-3200 and DDR4-3600 kits require 1.35V, and the DDR4-4133 kit needs 1.4V. All three support XMP 2.0, so setup is a breeze.
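For context (our arithmetic, not Silicon Power's), the first-word latency of each kit can be estimated from its data rate and CAS latency:

```python
# Estimate first-word latency in nanoseconds: CL cycles at the memory
# clock, which runs at half the DDR data rate (hence the factor of 2000).
kits = {
    "DDR4-3200 CL16": (3200, 16),
    "DDR4-3600 CL18": (3600, 18),
    "DDR4-4133 CL19": (4133, 19),
}
for name, (data_rate, cl) in kits.items():
    latency_ns = cl * 2000 / data_rate
    print(f"{name}: {latency_ns:.2f} ns")
```

All three kits land near 10 ns (the DDR4-4133 kit slightly lower at roughly 9.2 ns), so the higher data rates mainly buy bandwidth rather than lower absolute latency.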
Silicon Power backs its Zenith and Zenith RGB memory kits with a limited lifetime warranty. The company didn’t reveal the pricing or availability for the new memory though.
Hackers have found a new exploit in Counter-Strike: Global Offensive that could allow a hacker to take control of your computer if you click on a Steam invite to play the popular first-person shooter.
The bug was discovered by The Secret Club, a white-hat hacking group, which found that hackers can exploit it through Steam’s invite system: should a victim accept a malicious invite, the attacker could acquire private information from their machine.
The exploit was discovered in the Source game engine, which Valve developed and uses in several of its titles, including Counter-Strike: Global Offensive. While some games that use the engine no longer have the bug, the exploit is still present in Counter-Strike: Global Offensive, as seen in the video below.
According to The Secret Club, one of its members and security researcher named Florian flagged the bug to Valve in 2019. Florian told Motherboard that he reached out to Valve about the bug via HackerOne, a bug bounty platform that the studio uses. Despite Valve classifying the bug as “critical,” Florian told Motherboard that the studio admitted it was “slow to respond” in threads regarding the bug.
The revelation about this bug is concerning for Counter-Strike: Global Offensive players. Although the game is almost 10 years old, it is still very popular on Steam. The game switched to a free-to-play model in 2018 and supports one of the world’s biggest esports scenes.
Corsair has just announced two all-new models of its Corsair One pre-built, named the a200 and i200. Both models will be upgraded with the latest hardware from Intel, AMD, and Nvidia.
Despite its compact 12-liter chassis, the Corsair One promises an uncompromised desktop experience. Thanks to separate liquid-cooling solutions for both the CPU and GPU, you can expect high performance out of the system’s components.
You also get the same amount of I/O as you would on a standard computer tower, with the front panel including a 3.5mm audio jack, two USB 3.0 ports and a single USB 3.2 Gen 2 Type-C port.
Meanwhile, the rear I/O will change depending on which model you choose, but either way, you will get the same amount of connectivity as you would on a standard mini ITX desktop, so expect plenty of display outputs, and plenty of USB ports as well as WiFi 6.
Corsair One a200 & i200 Specifications
a200
i200
CPU:
Up to a Ryzen 9 5900X
Up to a Core i9-11900K
Motherboard:
AMD B550 Mini-ITX Board
Intel Z490
Memory:
Up to 32GB
Up to 32GB
Graphics Card:
GeForce RTX 3080
GeForce RTX 3080
SSD:
Up to a 1TB NVMe Gen4 Drive
Up to a 1TB NVMe Gen4 Drive
Hard Drive:
Up to 2TB
Up to 2TB
Power Supply:
750W 80 Plus Platinum
750W 80 Plus Platinum
The a200 will be based on AMD’s latest hardware and will come with a B550 chipset motherboard and your choice of a Ryzen 5 5600X, Ryzen 7 5800X, or Ryzen 9 5900X. You will also get up to 32GB of RAM, up to 3TB of SSD and hard disk storage, and a 750W SFX PSU.
The i200, on the other hand, will feature Intel’s latest Rocket Lake platform, powered by a Z490 motherboard and up to a Core i9-11900K. The memory, storage, and PSU configuration remains the same as on the a200.
Both models will also be getting an RTX 3080 for graphics horsepower, featuring 8,704 CUDA cores and 10GB of GDDR6X, all in a form factor measuring just 12 liters.
Corsair is currently listing a model of the a200 at $3,799.99 and the i200 at $3,599.99, though it’s possible there may be more options later.
The Corsair One has been one of the most compact high-performance PCs you can buy, so it’s great to see Corsair updating the chassis with the latest CPUs and GPUs, and we expect to see it in our labs soon.
I would like to thank Antec for supplying the review sample.
These days, many brands have gone the route of buying OEM frames and adjusting the tooling slightly to accommodate their needs. This is not necessarily bad, since OEMs have really stepped up their game in recent years, and many of the enclosures offered by well-known brands are quite solid as a result. Thus, seeing a unique chassis is getting rarer. The Antec Dark Cube is such a chassis, one that tries to walk a line most other brands avoid for fear of getting it wrong: an upside-down, custom-tooled ITX chassis. That is a challenging layout regardless of size, because air-cooled GPUs usually aren’t purpose-built for it, as their fans push air into the heatsink. On top of that, Antec made the Dark Cube spacious enough to handle a micro-ATX motherboard as well.
Specifications
Antec Dark Cube
Case Type:
Mid-Cube
Material:
Steel, plastic, aluminium alloy, and glass
Weight:
10.2 kg
Slots:
4
Drive Bays:
1x Internal 3.5″, 1x Internal 2.5″
Motherboard Form Factors:
Mini-ITX or Micro-ATX
Dimensions:
512 x 240 x 406 mm
Front Door/Cover:
Air cover or glass cover
Front Fans:
2x 120 or 140 mm (optional)
Rear Fans:
1x 120 mm (optional)
Top Fans:
N/A
Bottom Fans:
N/A
Side Fans:
N/A
Front Radiator:
240 mm
Rear Radiator:
120 mm
Top Radiator:
N/A
Bottom Radiator:
N/A
Side Radiator:
N/A
I/O:
1x USB 3.2 Gen2 Type-C 2x USB 3.0 1x Headphone 1x Microphone
Now that Intel has finally launched its 3rd Generation Xeon Scalable ‘Ice Lake’ processors for servers, it is only a matter of time before the company releases its Xeon W-series CPUs featuring the same architecture for workstations. Apparently, some of these upcoming processors are already in the wild, being evaluated by workstation vendors.
Puget Systems recently built a system based on the yet-to-be-announced Intel Xeon W-3335 processor clocked at 3.40 GHz using Gigabyte’s single-socket MU72-SU0 motherboard, 128 GB of DDR4 memory (using eight 16GB modules), and Nvidia’s Quadro RTX 4000 graphics card. Exact specifications of the CPU are unknown, but given its ‘3335’ model number, we’d speculate that this is an entry-level model. The workstation vendor is evidently evaluating the new Ice Lake platform for workstations from every angle, and it has published a benchmark result for the machine in PugetBench for Premiere Pro 0.95.1.
The Intel Xeon W-3335-based system scored 926 overall points (standard export: 88.2; standard live playback: 126.1; effects: 63.6; GPU score: 63.6). For comparison, a system powered by AMD’s 12-core Ryzen 9 5900X equipped with 16GB of RAM and a GeForce RTX 3080 scored 915 overall points (standard export: 100.9; standard live playback: 79.6; effects: 93.9; GPU score: 100.7).
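A quick bit of arithmetic puts the gap between the two overall scores in perspective:

```python
# Relative difference between the two overall PugetBench scores above.
xeon_w3335, ryzen_5900x = 926, 915
delta_pct = (xeon_w3335 - ryzen_5900x) / ryzen_5900x * 100
print(f"Xeon W-3335 leads by {delta_pct:.1f}%")  # about 1.2%
```

A lead of roughly 1.2% overall is well within the run-to-run variance of a real-world benchmark like this, so the two systems are effectively tied.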
Given that we do not know the exact specifications of the Intel Xeon W-3335 CPU, it is hard to draw any conclusions about its performance, especially keeping in mind that platform drivers may not yet be ready for Ice Lake-W. Still, we can at least make some ballpark assumptions about the CPU’s performance.
Intel has not disclosed what to expect from its Xeon W-series ‘Ice Lake’ processors, but in general the company tends to offer key features of its server products to its workstation customers as well. In the case of the Xeon W-3335, it is evident that the CPU retains an eight-channel memory subsystem, though we do not know anything about the number of PCIe lanes it supports.
In any case, since workstation vendors are already testing the new Xeon-W CPUs, expect them to hit the market shortly.
Adata’s XPG Gammix S70 is fast and features almost everything you could want from a high-end PCIe Gen4 NVMe SSD, but the heatsink is a bit restrictive and not quite as refined as our current best picks.
For
+ Very fast sequential performance
+ High endurance
+ AES 256-bit hardware encryption
+ Black PCB + Heatsink
+ 5-year warranty
Against
– It may be physically incompatible with some motherboards
– High idle power consumption on the desktop
– Slow write speeds after the SLC cache fills
– Pricey
Features and Specifications
Dishing out blisteringly fast sequential speeds of up to 7.4 / 6.4 GBps, the Gammix S70 touts some of the fastest performance ratings we have seen from an NVMe SSD. Yet, it isn’t produced by Samsung or WD, and surprisingly, it isn’t even powered by a Phison controller. Instead, Adata’s XPG Gammix S70 uses a high-end NVMe SSD controller from InnoGrit, a much smaller fabless IC design company.
InnoGrit isn’t a big name when most think of flash controllers, at least not compared to Phison, Silicon Motion, and Marvell. However, the company is far from inexperienced in controller architecture design and engineering. In fact, its co-founders have years of experience in the industry and have created a compelling product line of SSD controllers since opening in 2016.
Thanks to InnoGrit’s IG5236, a robust PCIe 4.0 eight-channel NVMe SSD controller, the company secured a contract with Adata to create the XPG Gammix S70. With this beast of a controller at its core, the S70 could potentially be the fastest SSD on the market. But it faces tough competition from Samsung, WD, and other competitors that pack Phison’s competing E18 SSD controller, like the Corsair MP600 Pro and Sabrent Rocket 4 Plus, to name a few.
Specifications
Product
Gammix S70 1TB
Gammix S70 2TB
Pricing
$199.99
$399.99
Capacity (User / Raw)
1024GB / 1024GB
2048GB / 2048GB
Form Factor
M.2 2280
M.2 2280
Interface / Protocol
PCIe 4.0 x4 / NVMe 1.4
PCIe 4.0 x4 / NVMe 1.4
Controller
InnoGrit IG5236
InnoGrit IG5236
DRAM
DDR4
DDR4
Memory
Micron 96L TLC
Micron 96L TLC
Sequential Read
7,400 MBps
7,400 MBps
Sequential Write
5,500 MBps
6,400 MBps
Random Read
350,000 IOPS
650,000 IOPS
Random Write
720,000 IOPS
740,000 IOPS
Security
AES 256-bit encryption
AES 256-bit encryption
Endurance (TBW)
740 TB
1,480 TB
Part Number
AGAMMIXS70-1T-C
AGAMMIXS70-2T-C
Warranty
5-Years
5-Years
Adata’s XPG Gammix S70 is available in capacities of 1TB and 2TB, priced at $200 and $400, respectively. The S70 is rated to deliver sequential performance of up to 7.4 / 6.4 GBps and to sustain up to 650,000 / 740,000 random read/write IOPS with the 2TB model. Like most modern SSDs, the S70 uses SLC caching to absorb the majority of inbound write requests, and in this case, the cache measures one-third of the available capacity.
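With the cache spanning a third of available capacity, its approximate size works out as follows (a simple sketch based on the stated ratio; exact figures depend on Adata's implementation):

```python
# Approximate dynamic SLC cache size at one-third of user capacity.
for capacity_gb in (1024, 2048):
    cache_gb = capacity_gb / 3
    print(f"{capacity_gb} GB model: ~{cache_gb:.0f} GB of SLC cache")
```

That works out to roughly 341 GB on the 1TB model and 683 GB on the 2TB model, which explains why write speeds fall off sharply once a sustained transfer exhausts the cache.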
The controller implements InnoGrit’s proprietary 4K LDPC ECC, end-to-end data protection, and even a RAID engine to ensure reliability and data integrity. As a result, the S70 can endure up to 1,480 TB of data writes within its five-year warranty. Additionally, the S70 supports AES 256-bit hardware-accelerated encryption for those who need both speed and data security.
Adata has also said this drive will feature a fixed bill of materials, so the components, like the NAND and SSD controller, will remain the same throughout the life of the product.
A Closer Look
Adata’s XPG Gammix S70 comes in an M.2 2280 double-sided form factor and is equipped with a very large aluminum heatsink to keep it “cool in the heat of battle,” as the company’s marketing department puts it. Adata claims that the heatsink reduces the SSD’s temperatures by up to 30%. While potentially effective, with the heatsink measuring 24.3 x 70 x 15 mm, its tall and wide footprint may lead to compatibility issues, as was the case with our Asus ROG X570 Crosshair VIII Hero (WiFi).
The S70’s heatsink prevents the SSD from fitting into the motherboard’s secondary M.2 slot and also prevents the PCIe slot latch below it from locking to secure your add-in card (like a GPU) when placed in the first M.2 slot. Furthermore, if placed in an M.2 slot under a PCIe slot, the S70’s thick heatsink may also prevent AICs from slotting completely into the PCIe slot.
Making matters worse, the base of the heatsink is held onto the PCB with a very strong adhesive. If you were planning to remove the heatsink for better compatibility, the adhesive might cause you to damage the PCB by cracking it in half. We don’t recommend doing so.
Unlike Adata’s XPG Gammix S50 Lite, the S70 comes with a much faster NVMe SSD controller. The InnoGrit IG5236, dubbed Rainier, is a capable multi-core PCIe 4.0 x4 NVMe 1.4-compliant SSD controller that’s fabbed on TSMC’s 16/12nm FinFET process, which is important to help control power consumption when achieving multi-GB performance figures. It also features client-oriented power management schemes, and Adata claims it consumes as little as 2mW in the L1.2 sleep state.
To achieve its fast performance, the S70 leverages a DRAM-based architecture. The controller interfaces with two SK hynix DDR4-3200 DRAM ICs for FTL table mapping and Micron’s 96-layer TLC flash at NV-DDR3 speeds of up to 1,200 MTps spread over eight flash channels. Our 2TB sample contains 32 dies in total — each die has a four-plane architecture that responds very fast to random requests.
Micron’s flash architecture places the peripheral circuitry under the flash cell arrays (CMOS under Array, or CuA), differing from Samsung’s V6 V-NAND and WD’s BiCS4, to enable high array efficiency and bit density. The CuA architecture also enables redundancies while splitting the page into multiple tiles and groups, enabling fast and efficient random read performance.
Gigabyte’s Z590 Aorus Master is a well-rounded upper mid-range motherboard with a VRM that rivals boards costing twice as much. Between the Wi-Fi 6E and 10 GbE networking, three M.2 sockets and six SATA ports for storage, plus its premium appearance, the Z590 Aorus Master is an excellent option to get into the Z590 platform if you’re willing to spend around $400.
For
+ Fast Networking, Wi-Fi 6E/10 GbE
+ Superior 18-phase 90A VRM
+ 10 USB ports
Against
– No PCIe x1 slot(s)
– Audible VRM fan
– Price
Features and Specifications
Editor’s Note: A version of this article appeared as a preview before we had a Rocket Lake CPU to test with Z590 motherboards. Now that we do (and Intel’s performance embargo has passed), we have completed testing (presented on page 3) with a Core i9-11900K and have added a score and other elements (as well as removing some now-redundant sentences and paragraphs) to make this a full review.
Gigabyte’s Z590 Aorus Master includes an incredibly robust VRM, ultra-fast Wi-Fi and wired networking, premium audio, and more. While its price of roughly $410 is substantial, it’s reasonable for the features you get, and far from the price of the most premium models in recent generations. If you don’t mind a bit of audible VRM fan noise and like lots of USB and fast wired and wireless networking, it’s well worth considering.
Gigabyte’s current Z590 product stack consists of 13 models. There are familiar SKUs and a couple of new ones. Starting with the Aorus line, we have the Aorus Xtreme (and potentially a Waterforce version), Aorus Master, Aorus Ultra, and the Aorus Elite. Gigabyte brings back the Vision boards (for creators) and their familiar white shrouds. The Z590 Gaming X and a couple of boards from the budget Ultra Durable (UD) series are also listed. New for Z590 is the Pro AX board, which looks to slot somewhere in the mid-range. Gigabyte will also release the Z590 Aorus Tachyon, an overbuilt motherboard designed for extreme overclocking.
On the performance front, the Gigabyte Z590 Aorus Master did well overall, performing among the other boards with raised power limits. There wasn’t a test where it did particularly poorly, but the MS Office and PCMark tests on average were slightly higher than most. Overall, there is nothing to worry about when it comes to stock performance on this board. Overclocking proceeded without issue as well, reaching our 5.1 GHz overclock along with the memory sitting at DDR4 4000.
The Z590 Aorus Master looks the part of a premium motherboard, with brushed aluminum shrouds covering the PCIe/M.2/chipset area. The VRM heatsink and its NanoCarbon Fin-Array II provide a nice contrast against the smooth finish on the board’s bottom. Along with Wi-Fi 6E integration, it also includes an Aquantia-based 10 GbE port, while most other boards use 2.5 GbE. The Aorus Master includes a premium Realtek ALC1220 audio solution with an integrated DAC, three M.2 sockets, reinforced PCIe and memory slots and 10 total USB ports, including a rear USB 3.2 Gen2x2 Type-C port. We’ll cover those features and much more in detail below. But first, here are the full specs from Gigabyte.
Specifications – Gigabyte Z590 Aorus Master
Socket
LGA 1200
Chipset
Z590
Form Factor
ATX
Voltage Regulator
19 Phase (18+1, 90A MOSFETs)
Video Ports
(1) DisplayPort v1.2
USB Ports
(1) USB 3.2 Gen 2×2, Type-C (20 Gbps)
(5) USB 3.2 Gen 2, Type-A (10 Gbps)
(4) USB 3.2 Gen 1, Type-A (5 Gbps)
Network Jacks
(1) 10 GbE
Audio Jacks
(5) Analog + SPDIF
Legacy Ports/Jacks
✗
Other Ports/Jack
✗
PCIe x16
(2) v4.0 x16 (x16/x0 or x8/x8)
(1) v3.0 x4
PCIe x8
✗
PCIe x4
✗
PCIe x1
✗
CrossFire/SLI
AMD Quad GPU Crossfire and 2-Way Crossfire
DIMM slots
(4) DDR4 5000+, 128GB Capacity
M.2 slots
(1) PCIe 4.0 x4 (up to 110mm)
(1) PCIe 3.0 x4 / SATA (up to 110mm)
(1) PCIe 3.0 x4 / SATA (up to 110mm)
U.2 Ports
✗
SATA Ports
(6) SATA3 6 Gbps (RAID 0, 1, 5 and 10)
USB Headers
(1) USB v3.2 Gen 2 (Front Panel Type-C)
(2) USB v3.2 Gen 1
(2) USB v2.0
Fan/Pump Headers
(10) 4-Pin
RGB Headers
(2) aRGB (3-pin)
(2) RGB (4-pin)
Legacy Interfaces
✗
Other Interfaces
FP-Audio, TPM
Diagnostics Panel
Yes, 2-character debug LED, and 4-LED ‘Status LED’ display
As we open up the retail packaging, along with the board, we’re greeted by a slew of included accessories. The Aorus Master contains the basics (guides, driver CD, SATA cables, etc.) and a few other things that make this board complete. Below is a full list of all included accessories.
Installation Guide
User’s Manual
G-connector
Sticker sheet / Aorus badge
Wi-Fi Antenna
(4) SATA cables
(3) Screws for M.2 sockets
(2) Temperature probes
Microphone
RGB extension cable
After taking the Z590 Aorus Master out of the box, its weight was immediately apparent, with the shrouds, heatsinks and backplate making up the majority of that weight. The board sports a matte-black PCB, with black and grey shrouds covering the PCIe/M.2 area and two VRM heatsinks with fins connected by a heatpipe. The chipset heatsink has the Aorus Eagle branding lit up, while the rear IO shroud arches over the left VRM bank with more RGB LED lighting. The Gigabyte RGB Fusion 2.0 application handles RGB control. Overall, the Aorus Master has a premium appearance and shouldn’t have much issue fitting in with most build themes.
Looking at the board’s top half, we’ll first focus on the VRM heatsinks. They are physically small compared to most boards, but don’t let that fool you. The fin array uses a louvered stacked-fin design Gigabyte says increases surface area by 300% and improves thermal efficiency with better airflow and heat exchange. An 8mm heat pipe also connects them to share the load. Additionally, a small fan located under the rear IO shroud actively keeps the VRMs cool. The fan here wasn’t loud, but was undoubtedly audible at default settings.
We saw a similar configuration in the previous generation, which worked out well with an i9-10900K, so it should do well with the Rocket Lake flagship, too. We’ve already seen reports indicating the i9-11900K has a similar power profile to its predecessor. Feeding power to the VRMs are two reinforced 8-pin EPS connectors (one required).
To the right of the socket, things start to get busy. We see four reinforced DRAM slots supporting up to 128GB of RAM. Oddly enough, the specifications only list support up to DDR4-3200, the platform’s official maximum. But further down the webpage, it lists DDR4-5000. It’s an odd way to present it, though it does set the expectation that anything above 3200 MHz is overclocking and not guaranteed to work.
Above the DRAM slots are eight voltage read points covering various relevant voltages. This includes read points for the CPU Vcore, VccSA, VccIO, DRAM, and a few others. When you’re pushing the limits and using sub-ambient cooling methods, knowing exactly what voltage the component is getting (software can be inaccurate) is quite helpful.
Above those, on the top edge, are four of the board’s 10 fan headers (a fifth sits next to the EPS connectors). According to the manual, all CPU fan and pump headers support 2A/24W each, so you shouldn’t have any issues powering fans and a water-cooling pump. Gigabyte doesn’t mention whether these headers use auto-sensing (for DC or PWM control), but they handled both when set to ‘auto’ in the BIOS. Both a PWM-controlled and a DC-controlled fan worked without intervention.
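The 2A and 24W figures are consistent at the 12V a fan header supplies. A hypothetical helper (ours, not Gigabyte's) to sanity-check a device against that rating:

```python
# Each CPU fan/pump header is rated for 2 A at 12 V, i.e. 24 W,
# per the manual. Check whether a given device's draw fits.
HEADER_LIMIT_A, HEADER_VOLTAGE = 2.0, 12.0
HEADER_LIMIT_W = HEADER_LIMIT_A * HEADER_VOLTAGE  # 24 W

def fits_header(device_watts: float) -> bool:
    """True if the device's draw fits within one header's rating."""
    return device_watts <= HEADER_LIMIT_W

print(fits_header(18.0))  # typical pump draw: True
print(fits_header(30.0))  # oversized pump: False
```

Most case fans draw well under 5W, so the 24W budget matters mainly when running a pump or a multi-fan splitter from a single header.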
The first two (of four) RGB LED headers live to the right of the fan headers. The Z590 Aorus Master includes two 3-pin ARGB headers and two 4-pin RGB headers. Since this board takes a minimal approach to RGB lighting, you’ll need to use these to add more bling to your rig.
We find the power button and 2-character debug LED for troubleshooting POST issues on the right edge. Below them is a reinforced 24-pin ATX connector for power to the board, another fan header and a 2-pin temperature-probe header. Just below all of that are two USB 3.2 Gen1 headers and a single USB 3.2 Gen2x2 Type-C front-panel header for additional USB ports.
Gigabyte chose to go with a 19-phase setup for the Vcore and SOC on the power delivery front. Controlling power is an Intersil ISL6929 buck controller that manages up to 12 discrete channels. The controller sends power through ISL6617A phase doublers to the 19 90A ISL99390B MOSFETs. This is one of the more robust VRMs we’ve seen on a mid-range board, with a whopping 1,620A available for the CPU. You won’t have any trouble running any compatible CPU, even with sub-ambient overclocking.
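The 1,620A figure follows directly from the phase count and per-stage rating:

```python
# 18 Vcore phases, each backed by a 90 A ISL99390B power stage.
vcore_phases, stage_rating_a = 18, 90
total_a = vcore_phases * stage_rating_a
print(f"{total_a} A of combined Vcore current capacity")  # 1620 A
```

Even a heavily overclocked Rocket Lake CPU draws a small fraction of that, which is why the VRM barely breaks a sweat at stock settings.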
The bottom half of the board is mostly covered in shrouds hiding all the unsightly but necessary bits. On the far left side, under the shrouds, you’ll find the Realtek ALC1220-VB codec along with an ESS Sabre ES9118 DAC and audiophile-grade WIMA and Nichicon Fine Gold capacitors. With the premium audio codec and DAC, an overwhelming majority of users will find the audio perfectly acceptable.
We’ll find the PCIe slots and M.2 sockets in the middle of the board. Starting with the PCIe sockets, there are a total of three full-length slots (all reinforced). The first and second slots are wired for PCIe 4.0, with the primary (top) slot wired for x16, while the second maxes out at x8. Gigabyte says this configuration supports AMD Quad-GPU CrossFire and 2-Way CrossFire. We didn’t see a mention of SLI support even though the lane count supports it. The bottom full-length slot is fed from the chipset and runs at PCIe 3.0 x4 speeds. Since the board does without x1 slots, this is the only expansion slot available if you’re using a triple-slot video card. Anything less than that allows you to use the second slot.
Hidden under the shrouds around the PCIe slots are three M.2 sockets. Unique to this setup is the Aorus M.2 Thermal Guard II, which uses a double-sided heatsink design to help cool M.2 SSD devices with double-sided flash. With these devices’ capacities rising and more using flash on both sides, this is a good value-add.
The top socket (M2A_CPU) supports up to PCIe 4.0 x4 devices up to 110mm long. The second and third sockets, M2P_SB and M2M_SB, support both SATA and PCIe 3.0 x4 modules up to 110mm long. When using a SATA-based SSD in M2P_SB, SATA port 1 will be disabled. When M2M_SB (the bottom socket) is in use, SATA ports 4/5 get disabled.
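When planning a build with several drives, these sharing rules are easy to capture in a small lookup. A sketch (the socket names match Gigabyte's manual; the helper itself is purely illustrative):

```python
# Encode the M.2/SATA port-sharing rules described above.
def disabled_sata_ports(installed):
    """installed: iterable of (socket, drive_type) pairs,
    e.g. [("M2P_SB", "sata"), ("M2M_SB", "pcie")]."""
    disabled = set()
    for socket, drive_type in installed:
        if socket == "M2P_SB" and drive_type == "sata":
            disabled.add(1)          # SATA SSD here disables SATA port 1
        elif socket == "M2M_SB":
            disabled.update({4, 5})  # any drive here disables ports 4/5
    return disabled

print(disabled_sata_ports([("M2P_SB", "sata"), ("M2M_SB", "pcie")]))
```

So fully populating all three M.2 sockets with a SATA drive in M2P_SB still leaves three of the six SATA ports usable.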
To the right of the PCIe area is the chipset heatsink with the Aorus falcon lit up with RGB LEDs from below. There’s a total of six SATA ports that support RAID0, 1, 5 and 10. Sitting on the right edge are two Thunderbolt headers (5-pin and 3-pin) to connect to a Gigabyte Thunderbolt add-in card. Finally, in the bottom-right corner is the Status LED display. The four LEDs labeled CPU, DRAM, BOOT and VGA light up during the POST process. If something hangs during that time, the LED where the problem resides stays lit, identifying the problem area. This is good to have, even with the debug LED at the top of the board.
Across the board’s bottom are several headers, including more USB ports, fan headers and more. Below is the full list, from left to right:
Front-panel audio
BIOS switch
Dual/Single BIOS switch
ARGB header
RGB header
TPM header
(2) USB 2.0 headers
Noise sensor header
Reset button
(3) Fan headers
Front panel header
Clear CMOS button
The Z590 Aorus Master comes with a pre-installed rear IO panel full of ports and buttons. To start, there are a total of 10 USB ports out back, which should be plenty for most users. You have a USB 3.2 Gen2x2 Type-C port, five USB 3.2 Gen2 Type-A ports and four USB 3.2 Gen1 Type-A ports. There is a single DisplayPort output for those who would like to use the CPU’s integrated graphics. The audio stack consists of five gold-plated analog jacks and a SPDIF out. On the networking side is the Aquantia 10 GbE port and the Wi-Fi antenna. Last but not least is a Clear CMOS button and a Q-Flash button, the latter designed for flashing the BIOS without a CPU.
I have been in the modding scene since 2005, creating mostly scratch build projects out of wood, acrylic and aluminum. The most notable of these have been Sangaku, Yuugou and Chiaroscuro with Chiaroscuro having been completed back in 2008. After a long hiatus, I completed Morphosis for the Cooler Master World Series 2019 and, for the Cooler Master World Series 2020 contest, which just announced winners in March 2021, I built something really special.
Meet Ikigai (生き甲斐), my latest case mod project, named for a Japanese concept meaning “a reason for being.” The word refers to having a meaningful direction or purpose in life, constituting the sense of one’s life being made worthwhile, with actions (spontaneous and willing) taken towards achieving one’s ikigai resulting in satisfaction and a sense of meaning in life. In other words, it means I really enjoy building computer cases, and I devoted four months of my life to bringing this case to fruition, working most nights and weekends. It’s a passion project in every sense.
The case started as a simple concept, like most of my cases. I wanted a vertical tower-style case under 20 liters of volume that would take up little space on my desk; one that is water-cooled and combines my love of handmade wood joinery and Japanese design aesthetics. It also uses CNC machining techniques and integrates the water-cooling and electrical systems. Like I said, simple. I also wanted to keep the case open to show off every component, making sure that every angle of the case was aesthetically pleasing.
Components
Motherboard
MSI B550I Gaming Edge Wifi
CPU
AMD Ryzen 5 5600X
GPU
MSI Radeon RX 5700 Gaming X
PSU
Cooler Master 650 SFX
Memory
G.Skill Ripjaws V 3600 MHz 32GB
Storage
Western Digital SN750 1 TB, SN550 1 TB
Watercooling
Alphacool GPU block and radiator
Optimus CPU block
EKWB fittings and tubing
Fans
Cooler Master SF360R
Proof of Concept Models
Before I began my build, I prototyped with some basic, non-functional wooden models. While the models might not be functional, they are to scale. I wanted to stay under 20L, so I needed to make use of every mm of space. I decided on a central acrylic panel, which would contain the watercooling distribution panel, hide the cabling, and allow the components to be attached. The top section would hold an SFX power supply, and the back would have room for a 360mm radiator with full-size fans to provide ample cooling power. I went through several iterations of these wood models because, even though I was modeling in CAD, things change once you have the real hardware in the real world; it’s all part of my design process.
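Checking a design against a volume budget like 20L is simple arithmetic. A quick sketch with hypothetical dimensions (not the final Ikigai measurements, which I kept tweaking):

```python
# Case volume in liters from external dimensions in millimeters.
def liters(width_mm: float, depth_mm: float, height_mm: float) -> float:
    return width_mm * depth_mm * height_mm / 1_000_000

# Hypothetical tower footprint checked against the 20 L budget:
print(f"{liters(180, 220, 440):.1f} L")  # 17.4 L -> fits
```

Working backwards from a target volume like this is handy for deciding how much room is left for the radiator and PSU sections before committing to wood.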
With some of the final components arriving, I could mock up the case more accurately. Here I have the radiator and fan assembly in, along with the motherboard and graphics card, to check for clearances in the watercooling. This would all be hard-piped PETG tubing, and I was trying to avoid any surprises later on by planning ahead.
Wenge Dovetail Joints
After at least three practice mockup cases, I finally had my final dimensions nailed down, and it was time to start the final case design. I wanted a wood that was beautiful in its own right, with a modern-looking grain that didn’t distract from the clean lines of the case design. I decided on Wenge as my wood of choice: a very hard, dense, and brittle wood that feels like a cross between charcoal and concrete. It was difficult to work with by hand, but sharp tools plus perseverance made it happen.
Case Joinery
I started with the main mitered dovetails of the case by first making a practice joint out of cherry. By doing this, I not only made a visual reference that I could use later to avoid confusion, but I also dusted off the mental cobwebs; it’s been a long time since I’ve done a joint like this.
I wanted the grain to flow around the case so I cut the entire frame out of one piece of wood, matching the grain around the case as it went along. This also meant my joints would need to be good the first time around or the grain wouldn’t match up.
I used a dovetail guide by Lee Valley to make the dovetail cutting easier. Here I am cutting the tails first.
With the first side cut, I transferred the lines to the next piece with a marking knife. By using the kerf of the joint as a guide, I can be sure the knife marks will be exact.
Next, I repeated this process to make the pins of the dovetail joint, making sure I am cutting on the correct side of the line. A little pencil marking helps with this also.
Once all of the cuts were made, I used a coping saw to remove the bulk of the waste. Then I used a guide block and chisels to creep up to my marking knife lines.
Once the main part of the joinery was done, I cut the miters on all four corners with a crosscut handsaw.
To ensure the accuracy of the miters, I made a guide block and used a chisel to sneak up on my lines, ensuring a perfect 45-degree angle. Given how hard this wood was, I had to resharpen my chisels multiple times for this to work well.
After quite a bit of time spent cleaning up the joints, testing and refitting as I went, while trying not to break them, I ended up with tight fitting joints. This process took about two days and a lot of patience.
ASRock has quietly introduced one of the industry’s first Intel Z590-based Mini-ITX motherboards with a Thunderbolt 4 port. The manufacturer positions the Z590 Phantom Gaming-ITX/TB4 as its top-of-the-range platform for compact gaming builds, aimed at enthusiasts who want all the capabilities of a large tower desktop and then some, so it is packed with advanced features.
The ASRock Z590 Phantom Gaming-ITX/TB4 motherboard supports all of Intel’s 10th and 11th Generation Comet Lake and Rocket Lake processors, including the top-of-the-range Core i9-11900K with a 125W TDP.
One of the main selling points of the Z590 Phantom Gaming-ITX/TB4 motherboard is, of course, its Thunderbolt 4 port, which delivers 40 Gb/s of throughput to appropriate TB3/TB4 devices (or 10 Gb/s to USB 3.2 Gen 2 devices) such as high-end external storage subsystems, in case internal storage is not enough on a Mini-ITX build. The port can also drive two 4K displays or one 8K monitor (albeit with DSC). Furthermore, the motherboard has five USB 3.2 Gen 2 ports on the back as well as an internal header for a front-panel USB 3.2 Gen 2x2 port that supports transfer rates up to 20 Gb/s.
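As a rough sanity check on those display claims, uncompressed pixel bandwidth shows why dual 4K60 fits in Thunderbolt 4’s 40 Gb/s link while 8K60 needs DSC. This is a back-of-the-envelope sketch (10-bit-per-channel color assumed, blanking overhead ignored), not ASRock’s specification:

```python
# Back-of-the-envelope check (not vendor data): raw video bandwidth for the
# display modes a Thunderbolt 4 link is asked to carry.

def video_gbps(width, height, refresh_hz, bits_per_pixel=30):
    """Uncompressed pixel bandwidth in Gb/s, ignoring blanking overhead."""
    return width * height * refresh_hz * bits_per_pixel / 1e9

dual_4k60 = 2 * video_gbps(3840, 2160, 60)   # two 4K60 streams
one_8k60 = video_gbps(7680, 4320, 60)        # a single 8K60 stream

tb4_link = 40  # Gb/s nominal Thunderbolt 4 link rate

print(f"dual 4K60: {dual_4k60:.1f} Gb/s -> fits uncompressed: {dual_4k60 < tb4_link}")
print(f"one 8K60:  {one_8k60:.1f} Gb/s -> fits uncompressed: {one_8k60 < tb4_link}")
```

The 8K60 stream comes out at roughly 60 Gb/s raw, well past the link rate, which is why Display Stream Compression is required for that mode.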
The platform relies on a 10-layer PCB and is equipped with a 10-phase VRM featuring 90A power chokes, 90A DrMOS power stages, and solid-state Nichicon 12K capacitors to ensure maximum performance, reliable operation, and some additional overclocking headroom. Interestingly, the motherboard’s CPU fan header can supply up to 2A to support water pumps.
The Z590 Phantom Gaming-ITX/TB4 also has a PCIe 4.0 x16 slot for graphics cards, two slots for up to 64 GB of DDR4-4266+ memory, two M.2-2280 slots for SSDs (with a PCIe 4.0 x4 as well as a PCIe 3.0 x4/SATA interface), and three SATA connectors. To guarantee the consistent performance and stable operation of high-end SSDs, ASRock supplies its own heat spreaders for M.2 drives that match its motherboard’s design.
Being a top-of-the-range product, the ASRock Z590 Phantom Gaming-ITX/TB4 naturally supports addressable RGB lighting (via ASRock’s Polychrome Sync/Polychrome RGB software) and has a sophisticated input/output department with a number of distinctive features, such as three display outputs and multi-gig networking.
In addition, the motherboard has a DisplayPort 1.4 and an HDMI 2.0b connector. Keeping in mind that Intel’s desktop UHD Graphics has three display pipelines, the board can handle three monitors even without a discrete graphics card. Meanwhile, the Xe-LP architecture used in Rocket Lake’s integrated UHD Graphics 750 has very advanced media playback capabilities (e.g., a hardware-accelerated 12-bit video pipeline for wide-color 8K60 HDR playback), so it can handle Ultra-HD Blu-ray, contemporary video services that use modern codecs, and next-generation 8Kp60 video formats.
Next up is networking. The Z590 Phantom Gaming-ITX/TB4 comes with an M.2-2230 Killer AX1675x Wi-Fi 6E + Bluetooth 5.2 PCIe module that supports up to 2.4 Gbps of throughput when connected to an appropriate router. The motherboard is also equipped with a Killer E3100G 2.5GbE adapter. The two can be used at the same time courtesy of Killer’s DoubleShot Pro technology, which aggregates bandwidth and prioritizes latency-sensitive traffic, so maximum networking performance can reach up to 4.9 Gbps.
The audio department of the Z590 Phantom Gaming-ITX/TB4 is managed by the Realtek ALC1220 audio codec with Nahimic Audio software enhancements and includes 7.1-channel analog outputs as well as an S/PDIF digital output.
ASRock’s Z590 Phantom Gaming-ITX/TB4 motherboard will be available starting from April 23 in Japan, reports Hermitage Akihabara. In the Land of the Rising Sun, the unit will cost ¥38,000 (around $345) without taxes and ¥41,800 with taxes.
Aaeon, a leading maker of embedded and commercial systems, has quietly unveiled a rather unique 3.5-inch single-board computer (SBC) that supports Intel’s socketed Comet Lake processors. The SBC is designed mainly for embedded applications, but with some luck and DIY skills, you could use it to build an ultra-compact form-factor (UCFF) desktop with up to eight high-performance cores as well as advanced media playback capabilities.
Aaeon’s Gene-CML5 subcompact motherboard is based on Intel’s Q470E/H420E/Q470 chipset (depending on the SKU) and comes with an LGA 1200 socket that can support various Comet Lake processors with two, four, or eight cores as well as a 35W TDP (i.e., up to Core i7-10700TE with eight cores clocked at 2.0 GHz ~ 4.40 GHz).
For some reason, the manufacturer decided not to officially support 10-core CPUs with a 35W TDP, perhaps because the bundled cooling system cannot handle it. The motherboard has two slots for up to 64 GB of DDR4-2933 memory, an M.2-2280 slot for an SSD featuring a PCIe 3.0 x4 or SATA interface, and two SATA ports.
For DIY enthusiasts, it is not going to be easy to find a proper chassis for a 3.5-inch motherboard, but there are companies like Supermicro that offer them, so it is doable.
Intel designed its Comet Lake processors primarily with high-performance systems in mind, so these CPUs are widely used on Intel’s gaming platforms for desktops and notebooks. Meanwhile, the family also includes low-power T and low-power TE SKUs for UCFF and low-power embedded applications, respectively. So far, we have not heard of many UCFF LGA 1200 systems in general, so Aaeon might be the first company to offer a 3.5-inch SBC that can handle an eight-core socketed Comet Lake processor. It is noteworthy that the company has not made any formal announcements about the product — LinuxGizmos found this board in an ad.
Not many embedded systems can benefit from an eight-core CPU today, but a lot of new applications are emerging, so some of them might take advantage of the combination of performance offered by Intel’s Comet Lake and the diminutive system size enabled by the Aaeon Gene-CML5. PC makers who have access to custom PC cases can also use the SBC to build tiny systems that boast up to eight cores and potential upgradeability.
The miniature 3.5-inch Gene-CML5 SBC, which measures 146×101.7mm, offers a solid choice of essential connectivity: two GbE ports (managed by Intel controllers with or without vPro), three display outputs (one DisplayPort++ with MST support, one D-Sub, one LVDS header), two USB 3.2 Gen 2 Type-A connectors, four USB 2.0 ports via an onboard header, two internal RS-232/422/485 headers, a header for audio in/out jacks, and a PCIe 3.0 x4 link through a Flexible Printed Circuit interface (on Q470/Q470E SKUs only).
Of course, since the Gene-CML5 SBC is aimed at embedded and commercial applications, the board is equipped with a TPM module, a watchdog timer, and other perks. As for operating temperatures, the SBC can function in a 0°C to 60°C (32°F to 122°F) range, so it is not suited to industrial or outdoor applications.
Intel last week debuted the 11th Gen Core “Rocket Lake” desktop processor family, and we had launch-day reviews of the Core i9-11900K flagship and the mid-range Core i5-11600K. Today we bring you the Core i5-11400F, probably the most interesting model in the whole stack. Often overlooked over the past several generations, the Core i5-xx400 tier is nonetheless Intel’s most popular among gamers. Popular chips of this kind included the i5-8400, the i5-9400F, and the i5-10400F.
These chips offer the entire Core i5 feature set at prices below $200, albeit with lower clock speeds and a locked multiplier. Within this tier, Intel also introduced a sub-segment of chips that lack integrated graphics, denoted by an “F” in the model number, which shaves a further $15-20 off the price. The Core i5-11400F starts at just $160, an impressive value proposition for gamers who use discrete graphics cards and don’t need the iGPU anyway.
The new “Rocket Lake” microarchitecture brings four key changes that make it the company’s first major innovation for client desktops in several years. First, Intel is introducing the new “Cypress Cove” CPU core, which promises an IPC gain of up to 19% over the previous generation. Next up is the new UHD 750 integrated graphics, powered by the Intel Xe LP graphics architecture and promising up to a 50% performance uplift over the previous generation’s Gen9.5 UHD 630 iGPU. Third is a much-needed update to the processor’s I/O, including PCI-Express 4.0 for graphics and a CPU-attached NVMe slot; and last, an updated memory controller that allows much higher memory overclocking potential, thanks to the introduction of a Gear 2 mode.
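To put the IPC claim in context, per-core throughput scales roughly as IPC times clock speed. A quick illustrative calculation; the clock figures here are hypothetical examples, only the up-to-19% number comes from Intel:

```python
# Illustrative only: how an IPC uplift and a clock change combine.
# The 19% figure is Intel's claimed "up to" number; clocks are hypothetical.

def relative_perf(ipc_gain, old_clock_ghz, new_clock_ghz):
    """Per-core throughput ratio: (1 + IPC gain) x clock ratio."""
    return (1 + ipc_gain) * (new_clock_ghz / old_clock_ghz)

# e.g. a Rocket Lake part boosting to 4.4 GHz vs a prior part at 4.3 GHz
uplift = relative_perf(0.19, 4.3, 4.4)
print(f"estimated per-core uplift: {(uplift - 1) * 100:.1f}%")
```

In other words, a modest clock bump on top of the IPC gain compounds into a slightly larger overall per-core improvement.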
The Core i5-11400F comes with a permanently disabled iGPU and a locked multiplier. Intel has still enabled support for memory frequencies of up to DDR4-3200, which is now possible even on the mid-tier H570 and B560 motherboard chipsets. The i5-11400F is a 6-core/12-thread processor clocked at 2.60 GHz, with a Turbo Boost frequency of up to 4.40 GHz. Each of the processor’s six “Cypress Cove” CPU cores includes 512 KB of dedicated L2 cache, and the cores share 12 MB of L3 cache. Intel rates the processor’s TDP at 65 W, just like the other non-K SKUs, although it is possible to tweak these power limits; adjusting PL1 and PL2 is not considered “overclocking” by Intel, so it is not locked.
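The PL1/PL2 mechanism mentioned above can be sketched roughly: the CPU may draw up to PL2 during bursts, then falls back to the PL1 sustained limit once a moving average of power crosses it. A simplified model; the PL2 and tau values are illustrative assumptions, not confirmed figures for the i5-11400F:

```python
# Simplified sketch of Intel's PL1/PL2 turbo power budgeting. The real
# algorithm uses an exponentially weighted moving average over a time
# constant tau; the PL2 and tau values below are illustrative assumptions.
import math

def allowed_power(elapsed_s, pl1=65.0, pl2=154.0, tau=28.0):
    """Power the CPU may draw after running flat-out for elapsed_s seconds:
    it may sustain PL2 while the moving average stays under PL1."""
    # EWMA of a constant PL2-level load starting from idle:
    avg = pl2 * (1 - math.exp(-elapsed_s / tau))
    return pl2 if avg < pl1 else pl1

print(allowed_power(1))    # fresh burst: full PL2 allowed
print(allowed_power(60))   # sustained load: throttled back to PL1
```

Raising PL1 to match PL2, as many motherboards allow, simply removes the fallback, which is what the “Max Power Limit” runs in this review do.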
At $170, the Core i5-11400F has no real competitor from AMD. The Ryzen 5 3600 starts around $200, and the company hasn’t bothered (yet?) with cheaper Ryzen 5 SKUs based on “Zen 3”. In this review, we take the i5-11400F for a spin to find out whether it’s really all you need for a mid-priced contemporary gaming rig.
We present several data sets in our Core i5-11400F review. “Gear 1” and “Gear 2” show performance results for the processor operating at stock, with the default power limit active and respecting the 65 W TDP. Next, we have two runs with the power limit raised to maximum: “Max Power Limit / Gear 1” and “Max Power Limit / Gear 2”. Last but not least, signifying the maximum performance you can possibly achieve on this CPU, we have a “Max Power + Max BCLK” run, which operates at a 102.9 MHz BCLK (the maximum the processor allows) with Gear 1 DDR4-3733 memory; the memory controller won’t run any higher.
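Since every BCLK-derived clock scales by the same factor, the effect of the 102.9 MHz run is easy to estimate. A quick sketch, assuming the stock 4.40 GHz boost and a DDR4-3733 memory ratio set at the nominal 100 MHz base clock:

```python
# What the 102.9 MHz BCLK run means in practice: every BCLK-derived clock
# scales by the same factor. Stock values taken from the review text;
# the assumption is that ratios stay fixed while BCLK rises.

BCLK_STOCK, BCLK_MAX = 100.0, 102.9

def scaled(mhz, bclk=BCLK_MAX):
    return mhz * bclk / BCLK_STOCK

boost = scaled(4400)    # 4.40 GHz turbo ratio -> ~4.53 GHz
memory = scaled(3733)   # DDR4-3733 ratio -> ~3841 MT/s effective
print(f"boost: {boost:.0f} MHz, memory: {memory:.0f} MT/s")
```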
ServeTheHome has just confirmed that Lenovo is fully utilizing AMD’s Platform Secure Boot (or PSB) in its server and workstation pre-built machines. This feature locks AMD’s Ryzen Pro, Threadripper Pro, and EPYC processors out from being used in other systems in an effort to reduce CPU theft.
More specifically, this feature effectively cancels out a CPU’s ability to be used in another motherboard, or at least a motherboard not from the original OEM. If a thief wanted to steal these chips, they would have to hack the PSB hardware and firmware to get the chip functioning in other hardware.
But that would be extremely difficult to do. AMD’s Platform Secure Boot runs on a dedicated 32-bit ARM security co-processor with its own operating system. That hardware isolation adds another layer of security: it’s nearly impossible to access PSB when the main operating system can’t even detect the ARM processor.
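Conceptually, vendor locking of this kind works by fusing a hash of the OEM’s signing key into the processor, so the CPU accepts only firmware from that OEM. The sketch below illustrates the idea only; it is not AMD’s actual PSB implementation, and the keys are made up:

```python
# Conceptual sketch of vendor-locked secure boot (NOT AMD's actual PSB
# implementation): a hash of the OEM's public key is burned into one-time-
# programmable fuses, so the CPU only trusts firmware shipped with that key.
import hashlib

def fuse_vendor_key(public_key: bytes) -> bytes:
    """One-time: burn the hash of the OEM key into the CPU's fuses."""
    return hashlib.sha256(public_key).digest()

def cpu_accepts_firmware(fused_hash: bytes, firmware_key: bytes) -> bool:
    """At boot, accept firmware only if its key matches the fused hash.
    (Real hardware verifies a signature chain; a key-hash check stands in.)"""
    return hashlib.sha256(firmware_key).digest() == fused_hash

lenovo_key, other_key = b"lenovo-oem-key", b"some-other-board-key"
fused = fuse_vendor_key(lenovo_key)

print(cpu_accepts_firmware(fused, lenovo_key))  # original vendor's board
print(cpu_accepts_firmware(fused, other_key))   # any other board: refused
```

Because the fuses are one-time-programmable, the binding cannot be undone in software, which is exactly why a locked chip is useless outside the original vendor’s platforms.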
In theory, this feature is an excellent idea. It effectively makes these chips OEM exclusive, which can help reduce CPU theft. On the other hand, this feature will prevent current owners of these pre-builts from using the chips in other systems down the road.
It’s not much of a problem today, but suppose the system gets a CPU upgrade in the future. The old CPU effectively becomes e-waste, unless it ends up in the hands of someone who already has a compatible Lenovo system. Alternatively, if a motherboard fails, it locks the user into using a replacement motherboard from the original vendor.
Thankfully, this feature has to be enabled by an OEM in the first place, so you can still buy an EPYC, Ryzen Pro, or Threadripper Pro CPU or system that doesn’t use it. Still, the feature can be a double-edged sword. Most people buying servers aren’t going to swap chips out and use them in other systems, so this potential issue should be quite rare.
Perhaps more worrisome is that Ryzen Pro processors from the Renoir and Cezanne families also support PSB. Enabling it on that sort of hardware and the resulting vendor lock-in would limit the ability to part out such PCs in the future.
Today, Gigabyte (via Komachi_Ensaka) registered a ton of X570S motherboards with the Eurasian Economic Commission (EEC). The new submissions hint at a potential chipset refresh on AMD’s part.
EEC listings are a bit hit or miss. Vendors often register their products with the organization, but not all of them make it to the market; sometimes companies just want to get dibs on the model names. This is the first time the X570S chipset has come up, so take the information with a grain of salt. That said, there are a few theories about the existence of the X570S chipset.
Manufacturers, including chipmakers, typically release a new chipset when a new wave of processors are on the horizon. Since AMD’s Zen 4 chips are still a long way off, the closest candidate that could warrant a new chipset is the rumored Zen 3+ (Warhol) refresh. We haven’t seen anything concrete that backs the rumor, but there is talk that Zen 3+ could drop before the year is over. If Zen 3+ is real, it will in all likelihood retain the same feature set as Zen 3. The improvements will probably come in the shape of a small frequency bump.
Gigabyte X570S Motherboards
Motherboard                      Form Factor   Chipset
Gigabyte X570S Aorus Master      ATX           X570S
Gigabyte X570S Aorus Pro AX      ATX           X570S
Gigabyte X570S Aorus Elite       ATX           X570S
Gigabyte X570S Aorus Elite AX    ATX           X570S
Gigabyte X570S Gaming X          ATX           X570S
Gigabyte X570S UD                ATX           X570S
Gigabyte X570S Aero G            ATX           X570S
Gigabyte X570SI Aorus Pro AX     Mini-ITX      X570S
Another theory is that X570S may just be an optimized version of the X570 chipset. One of X570’s novelties is PCIe 4.0 support. However, it also means that the X570 chipset draws more power in comparison to the previous X470 chipset. Consequently, X570 motherboards come with active cooling for the chipset in the form of a small fan. Of course, many enthusiasts aren’t fond of that solution, and AMD later released a new AGESA code that ushered in support for passive chipset solutions.
Obviously, the biggest unknown is what the “S” in X570S stands for. The natural guess is that it means Silence or Silent, perhaps alluding to an X570 chipset that doesn’t require a cooling fan to keep temperatures under control. The idea isn’t far-fetched, considering there are already X570 motherboards on the market that don’t rely on a chipset fan, such as Asus’ ROG Crosshair VIII Dark Hero or Gigabyte’s own X570 Aorus Xtreme. But those tend to rely on larger heatsinks, rather than a power-optimized chip that produces less heat to begin with.
Apple’s computers are notorious for their lack of upgradeability, particularly since the introduction of the M1 chip, which integrates memory directly into the package. But as spotted on Twitter, boosting your Mac’s capacity may be possible given money, skill, time, and some real desire: removing the DRAM and NAND chips and replacing them with more capacious versions, much like the enthusiast mods we’ve seen soldering extra VRAM onto graphics cards.
With the ongoing transition to custom Apple system-on-chips (SoCs), upgrading Apple PCs will only get harder. But one Twitter user points to “maintenance engineers” who did just that.
By any definition, such modifications void the warranty, so we strongly recommend against attempting them on your own: it obviously takes a certain level of skill, and patience, to pull off this type of modification.
With a soldering station (a consumer-grade unit is not that expensive at around $60) plus replacement DRAM and NAND flash chips (which are close to impossible to buy at the consumer level), the engineers reportedly upgraded an Apple M1-based Mac Mini from 8GB of RAM and 256GB of storage to 16GB and 1TB, respectively, by desoldering the existing components and fitting more capacious chips. According to the post, no firmware modifications were necessary.
Chinese maintenance engineers can already expand the capacity of the Apple M1. The 8GB memory has been expanded to 16GB, and the 256GB hard drive has been expanded to 1TB. pic.twitter.com/2Fyf8AZfJR (April 4, 2021)
Using their soldering station, the engineers removed the 8GB of LPDDR4X memory and installed chips with a 16GB total capacity. Removing the NAND chips from the motherboard by the same method was not a problem either; they were then replaced with higher-capacity devices.
The details behind the effort are slight, though the (very) roughly translated Chinese text in one of the images reads, “The new Mac M1 whole series the first time 256 and upgrade to 1TB, memory is 8L 16G, perfect! This is a revolutionary period the companies are being reshuffled. In the past, if you persevered, there was hope, but today, if you keep on the original way, a lot of them will disappear unless we change our way of thinking. We have to evolve, update it, and start again. Victory belongs to those who adapt; we have to learn to make ourselves more valuable.”
Of course, Apple is not the only PC maker to opt for SoCs and soldered components. Both Intel and AMD offer PC makers SoCs, and Intel even offers reference designs for building soldered-down PC platforms.
How much power does your graphics card use? It’s an important question, and while the performance we show in our GPU benchmarks hierarchy is useful, one of the true measures of a GPU is how efficient it is. To determine GPU power efficiency, we need to know both performance and power use. Measuring performance is relatively easy, but measuring power can be complex. We’re here to press the reset button on GPU power measurements and do things the right way.
There are various ways to determine power use, with varying levels of difficulty and accuracy. The easiest approach is via software like GPU-Z, which will tell you what the hardware reports. Alternatively, you can measure power at the outlet using something like a Kill-A-Watt power meter, but that only captures total system power, including PSU inefficiencies. The best and most accurate means of measuring a graphics card’s power use is to measure the power draw between the power supply (PSU) and the card, but that requires a lot more work.
We’ve used GPU-Z in the past, but it had some clear inaccuracies. Depending on the GPU, it can be off by anywhere from a few watts to potentially 50W or more. Thankfully, the latest generation AMD Big Navi and Nvidia Ampere GPUs tend to report relatively accurate data, but we’re doing things the right way. And by “right way,” we mean measuring in-line power consumption using hardware devices. Specifically, we’re using Powenetics software in combination with various monitors from TinkerForge. You can read our Powenetics project overview for additional details.
Tom’s Hardware GPU Testbed
After assembling the necessary bits and pieces — some soldering required — the testing process is relatively straightforward. Plug in a graphics card and the power leads, boot the PC, and run some tests that put a load on the GPU while logging power use.
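Measuring between the PSU and the card means summing every rail that feeds the GPU: the PCIe slot’s 12V and 3.3V rails plus each PEG connector. A sketch of that bookkeeping; the sample format and numbers are hypothetical, not Powenetics’ actual output:

```python
# Sketch of how per-rail samples from an in-line power logger combine into
# total board power (hypothetical sample format; the real logger differs).

def board_power(sample):
    """Sum every rail feeding the card: PCIe slot (12V + 3.3V) and PEG plugs."""
    slot = (sample["slot_12v_v"] * sample["slot_12v_a"]
            + sample["slot_3v3_v"] * sample["slot_3v3_a"])
    peg = sum(v * a for v, a in sample["peg"])  # (volts, amps) per connector
    return slot + peg

sample = {
    "slot_12v_v": 12.1, "slot_12v_a": 4.2,   # slot 12V rail
    "slot_3v3_v": 3.3,  "slot_3v3_a": 0.5,   # slot 3.3V rail
    "peg": [(12.0, 12.5), (12.0, 6.0)],      # two PEG power leads
}
print(f"{board_power(sample):.1f} W")
```

Logging voltage and current per rail, rather than one aggregate number, is also what makes the per-connector spec checks later in this article possible.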
We’ve done that with all the legacy GPUs we have from the past six years or so, and we do the same for every new GPU launch. We’ve updated this article with the latest data from the GeForce RTX 3090, RTX 3080, RTX 3070, RTX 3060 Ti, and RTX 3060 12GB from Nvidia; and the Radeon RX 6900 XT, RX 6800 XT, RX 6800, and RX 6700 XT from AMD. We use the reference models whenever possible, which means only the EVGA RTX 3060 is a custom card.
If you want to see power use and other metrics for custom cards, all of our graphics card reviews include power testing. So for example, the RX 6800 XT roundup shows that many custom cards use about 40W more power than the reference designs, thanks to factory overclocks.
Test Setup
We’re using our standard graphics card testbed for these power measurements, and it’s what we’ll use for graphics card reviews. It consists of an MSI MEG Z390 Ace motherboard, an Intel Core i9-9900K CPU, an NZXT Z73 cooler, 32GB of Corsair DDR4-3200 RAM, a fast M.2 SSD, and the other various bits and pieces you see to the right. This is an open test bed, because the Powenetics equipment essentially requires one.
There’s a PCIe x16 riser card (which is where the soldering came into play) that slots into the motherboard, and then the graphics cards slot into that. This is how we accurately capture actual PCIe slot power draw, from both the 12V and 3.3V rails. There are also 12V kits measuring power draw for each of the PCIe Graphics (PEG) power connectors — we cut the PEG power harnesses in half and run the cables through the power blocks. RIP, PSU cable.
Powenetics equipment in hand, we set about testing and retesting all of the current and previous generation GPUs we could get our hands on. You can see the full list of everything we’ve tested in the list to the right.
From AMD, all of the latest generation Big Navi / RDNA2 GPUs use reference designs, as do the previous-gen RX 5700 XT and RX 5700 cards, the Radeon VII, Vega 64, and Vega 56. AMD doesn’t do ‘reference’ models for most other GPUs, so we’ve used third-party designs to fill in the blanks.
For Nvidia, all of the Ampere GPUs are Founders Edition models, except for the EVGA RTX 3060 card. With Turing, everything from the RTX 2060 and above is a Founders Edition card (which includes the 90 MHz overclock and slightly higher TDP on the non-Super models), while the other Turing cards are all AIB partner cards. Older GTX 10-series and GTX 900-series cards use reference designs as well, except where indicated.
Note that all of the cards run ‘factory stock,’ meaning no manual overclocking or undervolting is involved. Yes, the various cards might run better with some tuning and tweaking, but this is how they behave if you just pull them out of the box and install them in your PC. (RX Vega cards in particular benefit from tuning, in our experience.)
Our testing uses the Metro Exodus benchmark looped five times at 1440p ultra (except on cards with 4GB or less VRAM, where we loop 1080p ultra — that uses a bit more power). We also run Furmark for ten minutes. These are both demanding tests, and Furmark can push some GPUs beyond their normal limits, though the latest models from AMD and Nvidia both tend to cope with it just fine. We’re only focusing on power draw for this article, as the temperature, fan speed, and GPU clock results continue to use GPU-Z to gather that data.
GPU Power Use While Gaming: Metro Exodus
Due to the number of cards being tested, we have multiple charts. The average power use charts show average power consumption during the approximately 10-minute test. These charts do not include the time between test runs, where power use dips for about 9 seconds, so they give a realistic view of the sort of power use you’ll see when playing a game for hours on end.
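The “exclude the dips between runs” step amounts to filtering out samples below a load threshold before averaging. A minimal sketch with made-up numbers:

```python
# Sketch of the averaging described above: drop the idle dips between
# benchmark runs before averaging (threshold and samples are made up).

def average_load_power(samples, idle_cutoff_w=50.0):
    """Average only the samples where the GPU was actually under load."""
    loaded = [w for w in samples if w >= idle_cutoff_w]
    return sum(loaded) / len(loaded)

# ten seconds of load, a 9-second dip between runs, ten more seconds of load
trace = [220.0] * 10 + [30.0] * 9 + [230.0] * 10

print(f"{average_load_power(trace):.1f} W")   # dips excluded
print(f"{sum(trace) / len(trace):.1f} W")     # naive average, dragged down
```

As the naive average shows, leaving the inter-run dips in would understate sustained gaming power by a significant margin.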
Besides the bar chart, we have separate line charts segregated into groups of up to 12 GPUs, and we’ve grouped cards from similar generations into each chart. These show real-time power draw over the course of the benchmark using data from Powenetics. The 12 GPUs per chart limit is to try and keep the charts mostly legible, and the division of what GPU goes on which chart is somewhat arbitrary.
Kicking things off with the latest generation GPUs, overall power use is relatively similar. The 3090 and 3080 use the most power (for the reference models), followed by the three Navi 21 cards. The RTX 3070, RTX 3060 Ti, and RX 6700 XT are all pretty close, with the RTX 3060 dropping power use by around 35W. AMD does lead Nvidia in pure power use when comparing the RX 6800 XT and RX 6900 XT to the RTX 3080 and RTX 3090, but Nvidia’s GPUs are a bit faster, so it mostly equals out.
Step back one generation to the Turing GPUs and Navi 1x, and Nvidia had far more GPU models available than AMD. There were 15 Turing variants — six GTX 16-series and nine RTX 20-series — while AMD only had five RX 5000-series GPUs. Comparing similar performance levels, Nvidia Turing generally comes in ahead of AMD, despite using a 12nm process compared to 7nm. That’s particularly true when looking at the GTX 1660 Super and below versus the RX 5500 XT cards, though the RTX models are closer to their AMD counterparts (while offering extra features).
It’s pretty obvious how far AMD fell behind Nvidia prior to the Navi generation GPUs. The various Vega and Polaris AMD cards use significantly more power than their Nvidia counterparts. RX Vega 64 was particularly egregious, with the reference card using nearly 300W. If you’re still running an older generation AMD card, this is one good reason to upgrade. The same is true of the legacy cards, though we’re missing many models from these generations of GPU. Perhaps the less said, the better, so let’s move on.
GPU Power with FurMark
FurMark, as we’ve frequently pointed out, is basically a worst-case scenario for power use. Some GPUs tend to be more aggressive about throttling with FurMark, while others go hog wild and dramatically exceed their official TDPs. Few if any games can tax a GPU quite like FurMark, though things like cryptocurrency mining can come close with some algorithms (but not Ethereum’s Ethash, which tends to be limited by memory bandwidth). The chart setup is the same as above, with average power use charts followed by detailed line charts.
The latest Ampere and RDNA2 GPUs are relatively evenly matched, with all of the cards using a bit more power in FurMark than in Metro Exodus. One thing we’re not showing here is average GPU clocks, which tend to be far lower than in gaming scenarios — you can see that data, along with fan speeds and temperatures, in our graphics card reviews.
The Navi / RDNA1 and Turing GPUs start to separate a bit more, particularly in the budget and midrange segments. AMD didn’t really have anything to compete against Nvidia’s top GPUs, as the RX 5700 XT only matched the RTX 2070 Super at best. Note the gap in power use between the RTX 2060 and RX 5600 XT, though. In gaming, the two GPUs were pretty similar, but in FurMark the AMD chip uses nearly 30W more power. Actually, the 5600 XT used more power than the RX 5700, but that’s probably because the Sapphire Pulse we used for testing has a modest factory overclock. The RX 5500 XT cards also draw more power than any of the GTX 16-series cards.
With the Pascal, Polaris, and Vega GPUs, AMD’s GPUs fall toward the bottom. The Vega 64 and Radeon VII both use nearly 300W, and considering the Vega 64 competes with the GTX 1080 in performance, that’s pretty awful. The RX 570 4GB (an MSI Gaming X model) actually exceeds the official power spec for an 8-pin PEG connector with FurMark, pulling nearly 180W. That’s thankfully the only GPU to go above spec, for the PEG connector(s) or the PCIe slot, but it does illustrate just how bad things can get in a worst-case workload.
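For reference, the ratings involved are 75 W for the PCIe slot, 75 W for a 6-pin PEG connector, and 150 W for an 8-pin. A quick compliance check in the spirit of the RX 570 example; the measured numbers below are illustrative stand-ins:

```python
# The per-source PCIe power ratings: 75 W from the slot, 75 W from a 6-pin
# PEG connector, 150 W from an 8-pin. Measured values below are illustrative.

PEG_LIMITS_W = {"slot": 75.0, "6-pin": 75.0, "8-pin": 150.0}

def over_spec(measured):
    """Return the power sources whose measured draw exceeds their rating."""
    return {src: w for src, w in measured.items() if w > PEG_LIMITS_W[src]}

# roughly the RX 570 case from the text: ~180 W through a single 8-pin
print(over_spec({"slot": 55.0, "8-pin": 180.0}))
print(over_spec({"slot": 70.0, "8-pin": 140.0}))  # compliant card: empty dict
```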
The legacy charts are even worse for AMD. The R9 Fury X and R9 390 go well over 300W with FurMark, though perhaps that’s more of an issue with the hardware not throttling to stay within spec. Anyway, it’s great to see that AMD no longer trails Nvidia as badly as it did five or six years ago!
Analyzing GPU Power Use and Efficiency
It’s worth noting that we’re not showing or discussing GPU clocks, fan speeds or GPU temperatures in this article. Power, performance, temperature and fan speed are all interrelated, so a higher fan speed can drop temperatures and allow for higher performance and power consumption. Alternatively, a card can drop GPU clocks in order to reduce power consumption and temperature. We dig into this in our individual GPU and graphics card reviews, but we just wanted to focus on the power charts here. If you see discrepancies between previous and future GPU reviews, this is why.
The good news is that, using these testing procedures, we can properly measure the real graphics card power use and not be left to the whims of the various companies when it comes to power information. It’s not that power is the most important metric when looking at graphics cards, but if other aspects like performance, features and price are the same, getting the card that uses less power is a good idea. Now bring on the new GPUs!
Here’s the final high-level overview of our GPU power testing, showing relative efficiency in terms of performance per watt. The power data listed is a weighted geometric mean of the Metro Exodus and FurMark power consumption, while the FPS comes from our GPU benchmarks hierarchy and uses the geometric mean of nine games tested at six different settings and resolution combinations (so 54 results, summarized into a single fps score).
This table combines the performance data for all of the tested GPUs with the power use data discussed above, sorts by performance per watt, and then scales all of the scores relative to the most efficient GPU (currently the RX 6800). It’s a telling look at how far behind AMD was, and how far it’s come with the latest Big Navi architecture.
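A perf-per-watt table of this kind can be reproduced from raw data roughly as follows; the weighting and all numbers below are hypothetical stand-ins, not Tom’s Hardware’s actual figures:

```python
# Sketch of building a perf-per-watt ranking: geometric-mean fps across a
# test suite, divided by a weighted geometric mean of gaming and FurMark
# power, then scaled to the most efficient card. All numbers hypothetical.
from math import prod

def geomean(xs):
    return prod(xs) ** (1 / len(xs))

def weighted_power(game_w, furmark_w, game_weight=0.75):
    """Weighted geometric mean of the two power figures (weight assumed)."""
    return game_w ** game_weight * furmark_w ** (1 - game_weight)

# per GPU: fps results across the test suite, gaming watts, FurMark watts
gpus = {
    "GPU A": ([118.0, 130.0, 112.0], 230.0, 250.0),
    "GPU B": ([96.0, 105.0, 99.0], 220.0, 260.0),
}
eff = {name: geomean(fps) / weighted_power(gw, fw)
       for name, (fps, gw, fw) in gpus.items()}
best = max(eff.values())
for name, e in sorted(eff.items(), key=lambda kv: -kv[1]):
    print(f"{name}: perf/W = {e:.3f} ({100 * e / best:.0f}%)")
```

A geometric mean is used rather than an arithmetic one so that no single outlier result (one very high fps number, or one power spike) dominates the aggregate.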
Efficiency isn’t the only important metric for a GPU, and performance definitely matters. Also of note: none of the performance data includes newer technologies like ray tracing and DLSS.
The most efficient GPUs are a mix of AMD’s Big Navi GPUs and Nvidia’s Ampere cards, along with some first generation Navi and Nvidia Turing chips. AMD claims the top spot with the Navi 21-based RX 6800, and Nvidia takes second place with the RTX 3070. Seven of the top ten spots are occupied by either RDNA2 or Ampere cards. However, Nvidia’s GDDR6X-equipped GPUs, the RTX 3080 and 3090, rank 17th and 20th, respectively.
Given the current GPU shortages, finding a new graphics card in stock is difficult at best. By the time things settle down, we might even have RDNA3 and Hopper GPUs on the shelves. If you’re still hanging on to an older generation GPU, upgrading might be problematic, but at some point it will be the smart move, considering the added performance and efficiency offered by more recent products.