Gigabyte’s AORUS Z590 Extreme Waterforce is one of the craziest motherboards that you’ll be able to buy soon for Intel’s Comet Lake and Rocket Lake SKUs, and could very well at some point find its way onto our best motherboards list. The board is designed for custom liquid cooling from the start, featuring a large monoblock cooling the CPU and power delivery components. There’s also a fully liquid-cooled chipset heatsink, as well as liquid-cooled M.2 heatsinks to keep your high-speed storage devices extra cool.
Aesthetically, the board looks like something designed to draw attention on a CES showroom floor. The entire PCB is covered in matte black and metal, with the chipset and monoblocks featuring RGB illumination. Naturally, there’s an RGB-illuminated AORUS logo on top of the rear I/O.
To top it all off, the monoblock features digital water and CPU temperature gauges right on top of the block, as well as built-in leak protection that will automatically shut down your PC if a leak is detected. The monoblock connects to an internal USB Type-C port to interface with the motherboard’s firmware.
As one of Gigabyte’s flagship motherboards, the number of features it packs is almost uncanny. For power delivery, the board comes with a 20+1 VRM solution with 100A power stages. This is a very high-end VRM system, with the bonus of being liquid-cooled by the board’s monoblock. So you should have no problems with the motherboard when overclocking and overvolting Intel’s highest core-count CPUs.
For connectivity, you have basically everything you could ask for: dual Thunderbolt 4 USB Type-C ports, Intel Wi-Fi 6E and Bluetooth 5.0 for wireless support, one Aquantia 10Gb Ethernet port plus an Intel 2.5Gb LAN port, and over eight USB 3.2 Gen 2 ports, counting both internal USB headers and rear I/O.
We don’t know how much this board will cost, but given the number of features included, the price will be high. However, this board is targeted towards consumers who want the best of the best you can get from a motherboard. For those looking for similar features at a more mid-range price, take a look at the MSI MPG Z590 Carbon EK X, which we just took an in-depth look at.
Razer’s Tomahawk ITX values form over function. And although it looks great for a Mini-ITX chassis, it has design flaws that keep it from being worth its steep price.
For
+ Easy to work in
+ Thermally capable
+ Minimalistic looks
+ Built like a (small) tank
Against
– Very expensive
– Doesn’t get dust filtration right
– Ineffective front intake
– Doesn’t include fans
Specifications and Features
When Razer reached out asking if I wanted to review the Tomahawk ITX, I of course said yes. After all, it’s the first time Razer is delving into the ITX chassis market. And I have to admit, it’s a good looking case with a simple but purposeful design.
Razer wouldn’t disclose who its production partner was, but the chassis closely resembles the Lian Li TU150, albeit with a few changes. Given the two companies’ history of working together, a Lian Li collaboration wouldn’t be surprising.
Whether this compact Razer case deserves a spot on our Best PC Cases list remains to be seen. Let’s dig into the Razer Tomahawk ITX’s design and performance to find out.
Razer Tomahawk Specifications
Type
ITX Tower
Motherboard Support
Mini-ITX
Dimensions (HxWxD)
8.46 x 9.72 x 14.49 inches (215 x 247 x 368 mm)
Max GPU Length
12.6 inches (320 mm)
CPU Cooler Height
6.5 inches (165 mm)
PSU Support
SFX, SFX-L
External Bays
None
Internal Bays
3x 2.5-inch
Expansion Slots
3x
Front I/O
2x USB 3.0
1x USB Type-C
Mic, Headphone
Other
Chroma RGB Controller
Front Fans
None (Up to 1x 120mm)
Rear Fans
None (Up to 1x 120mm)
Top Fans
None (Up to 2x 120mm)
Bottom Fans
None (Up to 2x 120mm)
Side Fans
None
RGB
Yes, Razer Chroma Underglow
Damping
No
Warranty
1 Year
Features
Touring around the chassis, there’s not much to mention of any significance – the Tomahawk ITX is shaped like a shoebox on its side, with dark tinted tempered glass panels on each side and a closed front. There is some semblance of intake mesh on the side of the front panel, but the perforation is tiny and likely won’t do much for cooling.
At the bottom of the case you’ll spot two Chroma RGB strips between the front and back feet. These provide Chroma underglow lighting, which we’ll demonstrate later in the review. With diffusers, they should handsomely light up the area underneath the chassis.
The case’s side panels swing open on hinges, making it really easy to open and show off your system without the hassle of unscrewing and removing a panel. That said, there’s not a lot of space for cable management behind the motherboard tray, and without anything to hold the cables in place, it might become a challenge to keep the panel closed later on, as it’s only held shut by a magnet.
Top I/O comprises a USB Type-C port, discrete microphone and headphone jacks, and two USB 3.0 ports. Power and reset switches are naturally also present.
Internal Layout
After removing the glass panels (I don’t want them swinging around during the build process), we reveal the interior of the case. There is space for a Mini-ITX motherboard, an SFX power supply near the front, and large 3-slot graphics cards.
Cooling
Despite the Tomahawk’s $189 price, Razer does not include any fans with this case. You can install up to a 240mm AIO at the top of the case and two 120mm fans at the bottom, along with single 120mm spinners at the front intake and rear exhaust.
CPU coolers can be up to 6.5 inches (165 mm) tall, and GPUs up to three slots thick and 12.6 inches (320 mm) long.
However, air filtration is bound to be problematic in this case. There is a front intake filter, but the mesh design is so restrictive here that I doubt the case will pull much air through this filter. As a result, this can only turn into a negative-pressure case that draws unfiltered air in from the bottom and rear of the chassis.
Storage
A 2.5-inch SSD mount is present on the side tray, and the bottom supports another two. There are no 3.5-inch HDD mounts.
Does it fit an RTX 3080?
Yes, the case fits triple-slot GPUs up to 320mm (12.6 inches) long.
Although I assembled it myself, and its software all comes from an open-source DIY project, in many ways my MiSTer is the most versatile computer I own. It’s a shapeshifting wonderbox that can change its own logic to make itself run like countless other machines as accurately as possible. From old arcade boards to early PCs to vintage consoles, MiSTer developers are devoted to helping it turn into an ever-expanding range of hardware.
If you’ve ever wanted to use computer software or hardware that is no longer available for sale, you’ve probably run into emulation before. It’s a huge field that often involves a ton of people working on a technically challenging feat: how to write software that lets one computer run code that was written for another. But there’s only so much traditional emulators can do. There are always inherent compromises and complexities involved in getting your current hardware to run software it was never designed to handle. Emulated operating systems or video games often encounter slowdown, latency, and bugs you’d never have encountered with the original devices. So what if there was a way to alter the hardware itself?
Well, that’s MiSTer. It’s an open-source project built upon field-programmable gate array (FPGA) technology, which means it makes use of hardware that can be reconfigured after the fact. While traditional CPUs are fixed from the point of manufacture, FPGAs can be reprogrammed to work as if they came right off the conveyor belt with the actual silicon you want to use.
What this means is, you’re not tricking a processor into believing it’s something else, you’re setting it up to run that way from the start. A MiSTer system can theoretically run software from the NES to the Neo Geo, to the Apple II or Acorn Archimedes, and deliver responsive, near-as-dammit accurate performance next to what you’d get from the actual devices.
Of course, it’s not as easy as that makes it sound. In order to program an FPGA to act like a computer from three decades ago, you have to intimately understand the original hardware. And that’s what makes MiSTer one of the technically coolest DIY projects going today, building on the knowledge of developers around the globe.
FPGAs aren’t new technology. Two early companies in the field (sorry) were Altera, now owned by Intel, and Xilinx, now part of AMD. The two have competed since the 1980s for market share in programmable logic devices, largely serving enterprise customers. One of the biggest advantages of FPGAs on an industrial scale is that companies can iterate their software design on hardware before they need to manufacture the final silicon. FPGAs are widely used to develop embedded systems, for example, because the software and the hardware can be designed near-concurrently.
You might be familiar with FPGAs if you’ve come across Analogue’s boutique console clones, like the Mega Sg and the Super Nt. Those use FPGAs programmed in a certain way to replicate a single, specific piece of hardware, so you can use your original physical cartridges with them and get an experience that’s very close to the actual consoles.
The MiSTer project is built around more accessible FPGA hardware than you’d find in commercial or enterprise applications. The core of the system is an FPGA board called the DE10-Nano, produced by another Intel-owned company called Terasic that’s based out of Taiwan. It was originally intended for students as a way to teach themselves how to work with FPGAs.
The DE10-Nano looks somewhat similar to a Raspberry Pi — it’s a tiny motherboard that ships without a case and is designed to be expanded. The hardware includes an Altera Cyclone V with two ARM Cortex-A9 CPU cores, 1GB of DDR3 SDRAM, an HDMI out, a microSD card slot, a USB-A port, and Ethernet connectivity. It runs a Linux-based OS out of the box and sells for about $135, or $99 to students.
MiSTer is inspired by MiST, an earlier project that made use of an Altera FPGA board to recreate the Atari ST. But the DE10-Nano is cheaper, more powerful, and expandable, which is why project leader Alexey Melnikov used it as the basis for MiSTer when development started a few years back. Melnikov also designed MiSTer-specific daughterboards that enhance the DE10-Nano’s capability and make a finished machine a lot more versatile; the designs are open-source, so anyone is free to manufacture and sell them.
You can run MiSTer on a single DE10-Nano, but it’s not recommended, because the board alone will only support a few of the cores available. (A “core” is a re-creation of a specific console or computer designed to run on the MiSTer platform.) The one upgrade that should be considered essential is a 128MB stick of SDRAM, which gives MiSTer enough memory at the right speed to run anything released for the platform to date.
Beyond that, you’ll probably want a case, assuming you’d rather not run open circuitry exposed to the elements. There are various case designs available, many of which are intended for use with other MiSTer-specific add-ons that vertically attach to the DE10-Nano. An I/O board isn’t necessary for most cores, for example, but it adds a VGA port along with digital and analog audio out, which is useful for various setups. (A lot of MiSTer users prefer to hook up their systems to CRT TVs to make the most of the authentic output and low latency.) You can add a heatsink or a fan, which can be a good idea if you want to run the system for extended periods of time. And there’s a USB hub board that adds seven USB-A ports.
For my setup, I ordered the DE10-Nano, a 128MB SDRAM stick, a VGA I/O board with a fan, a USB hub board, and a case designed for that precise selection of hardware. These largely came from different sources and took varying amounts of time to show up; you can order the DE10-Nano from countless computer retailers, but other MiSTer accessories involve diving into a cottage industry of redesigns and resellers. Half of my parts arrived in a battered box from Portugal filled with shredded paper and loosely attached bubble wrap.
MiSTer accessories are based on Melnikov’s original designs, but since the project is open-source, many sellers customize their own versions. My case, for example, includes a patch cable that hooks directly into the IO board to control its lighting, while some others require you to route the LEDs yourself. The USB board, meanwhile, came with a bridge to the DE10-Nano that seemed to be a different height from most others, which meant I had to improvise a little with screw placements. Nothing I ordered came with instructions, so it did take some time to figure out what should go where, but everything worked fine in the end. The only other thing I had to do was go buy a small hex screwdriver for the final screws in the case.
That’s part of the fun with MiSTer. There’s a base specification that everything works around, but you’re still ultimately assembling your own FPGA computer, and you can adjust the build as much or as little as you want.
Once your hardware is set, you need to install the MiSTer software. There are a few ways to do this, and you’ll want to dig around forums and GitHub for a while so you know what you’re doing, but the method I went with was simple in the end — essentially, you format your microSD card with an installer package, put it into the DE10-Nano, plug in an Ethernet cable and a USB keyboard, power on the system, and it’ll download all of the available cores. Your SD card will then be set up to boot the MiSTer OS directly, and you can run another script to make sure everything’s updated with the most recent versions.
The MiSTer OS is very simple, with a default background that looks like pixelated TV static and a basic menu in a monospaced font that lets you select from lists of console and computer cores. The first thing I did was load some old Game Boy Advance ROMs I dumped well over a decade ago, because for some reason Nintendo doesn’t want to sell them for the Switch. (Please sell them for the Switch, Nintendo.) The performance felt about as authentic as I could’ve expected, except for the fact that I was looking at a 4K TV instead of a tiny screen.
My main reason for getting into MiSTer is to have a hardware-based way to access the parts of computer history that I missed, or to revisit forgotten platforms that I was around for. I knew that computer systems like the Apple II and the Amiga were big gaps in my knowledge, so it’s great to have a little box that can run like either of them on command. I’ve also been getting into the MSX platform, which was popular in Japan in the ’80s. My next rainy-day project is to work on an install of RISC OS, the Acorn operating system that was on the first computers I ever used at school in the UK. (You can actually still buy licensed ROM copies of various versions of the OS, which was a neat surprise.)
MiSTer development is a vibrant scene. Melnikov has a Patreon that’s updated several times a week with improvements he’s made to various cores, but there are lots of other people contributing to the project on a daily or weekly basis. A colleague introduced me to the work of Jose Tejada, for example, who’s based in Spain and has made a ton of progress on replicating old Capcom arcade machine boards. There’s another project aiming to get the original PlayStation running, marking the biggest step yet into 3D hardware on MiSTer.
FPGAs are often talked about as if they’re a silver bullet for perfect emulation, but that’s really not the case — at least, not without a lot of effort. Anything that runs perfectly on MiSTer, or as close to perfectly as is otherwise imperceptible, is the result of a ton of work by talented programmers who have spent time figuring out the original hardware and applying the knowledge to their cores. Just read this post from the FPGA PSX Project about what it took to get Ridge Racer running on MiSTer, as well as the assessment of how far they have to go. The cores can vary in quality, accuracy, and state of completion, but a lot of them are still under active development and huge strides have been made in the past couple of years.
Analogue lead hardware engineer Kevin Horton spoke to The Verge in 2019 about the work that went into re-creating the Sega Genesis for the Mega Sg console. The process took him nine months, including two-and-a-half months figuring out the CPU at the heart of the console. “I didn’t know Genesis very well, and knew literally nothing about the 68000 CPU at all!” he said. “This was my first foray into both things and probably slowed the process down since I had to learn it all as I went.”
Ultimately, Horton confirmed the accuracy of his work by directly connecting a 68000 to an FPGA and comparing their performance on a test that ran for a week straight. It demonstrates the lengths that FPGA enthusiasts go to in pursuit of the most accurate results possible, but what makes MiSTer special is that this is largely the work of hobbyists. No one’s paying anyone a salary to make incremental tweaks to the performance of the arcade version of Bionic Commando, but that’s where Tejada has directed his passion.
MiSTer is an important project because it speaks to the concept of preservation in a way that all too often goes underserved by the technology industry. The project makes the argument that the way we run software is as big a part of our experience as its content. Yes, you can port or emulate or re-release software to run on modern hardware, but there’s always going to be a compromise in the underlying code that moves the pixels in front of your eyes.
Of course, that might sound like a pretty niche concern for anyone who’s satisfied with, say, the emulated software you can run in a browser at Archive.org. I’m often one of those people myself — emulation can be great, and it’s hard to beat the convenience. But the MiSTer project is an incredible effort all the same. I’ll never have a shred of the technical knowledge possessed by MiSTer developers, but I’m grateful for their effort. Once you build your own system, it’s hard not to feel invested in the work that goes into it; MiSTer is a never-ending pursuit of perfection, and there’s something beautiful about that.
TechPowerUp is one of the most highly cited graphics card review sources on the web, and we strive to keep our testing methods, game selection, and, most importantly, test bench up to date. Today, I am pleased to announce our newest March 2021 VGA test system, which has one of many firsts for TechPowerUp. This is our first graphics card test bed powered by an AMD CPU. We are using the Ryzen 7 5800X 8-core processor based on the “Zen 3” architecture. The new test setup fully supports the PCI-Express 4.0 x16 bus interface to maximize performance of the latest generation of graphics cards by both NVIDIA and AMD. The platform also enables the Resizable BAR feature by PCI-SIG, allowing the processor to see the whole video memory as a single addressable block, which could potentially improve performance.
A new test system heralds completely re-testing every single graphics card used in our performance graphs. It allows us to kick out some of the older graphics cards and game tests to make room for newer cards and games. It also allows us to refresh our OS and testing tools, update games to their latest versions, and explore new game settings, such as real-time raytracing, and newer APIs.
A VGA rebench is a monumental task for TechPowerUp. This time, I’m testing 26 graphics cards in 22 games at 3 resolutions, or 66 game tests per card, which works out to 1,716 benchmark runs in total. In addition, we have doubled our raytracing testing from two to four titles. We also made some changes to our power consumption testing, which is now more detailed and more in-depth than ever.
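The arithmetic behind those totals can be sketched in a few lines (the figures are the ones quoted above):

```python
# Scope of the March 2021 rebench, using the figures quoted above.
cards = 26
games = 22
resolutions = 3

tests_per_card = games * resolutions
total_runs = cards * tests_per_card

print(tests_per_card)  # → 66 game tests per card
print(total_runs)      # → 1716 benchmark runs in total
```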
In this article, I’ll share some thoughts on what was changed and why, while giving you a first look at the performance numbers obtained on the new test system.
Hardware
Below are the hardware specifications of the new March 2021 VGA test system.
Windows 10 Professional 64-bit Version 20H2 (October 2020 Update)
Drivers:
AMD: 21.2.3 Beta
NVIDIA: 461.72 WHQL
The AMD Ryzen 7 5800X has emerged as the fastest processor we can recommend to gamers for play at any resolution. We could have gone with the 12-core Ryzen 9 5900X or even maxed out this platform with the 16-core 5950X, but neither would be faster at gaming, and both would be significantly more expensive. AMD certainly wants to sell you the more expensive (overpriced?) CPU, but the Ryzen 7 5800X is actually the fastest option because of its single-CCD architecture. Our goal with GPU test systems over the past decade has consistently been to use the fastest mainstream-desktop processor. Over the years, this meant a $300-something Core i7 K-series LGA115x chip making room for the $500 i9-9900K. The 5900X doesn’t sell for anywhere close to this mark, and we’d rather not use an overpriced processor just because we can. You’ll also notice that we skipped upgrading from the older i9-9900K to the 10-core “Comet Lake” Core i9-10900K because we saw only negligible gaming performance gains, especially considering the large overclock on the i9-9900K. The additional two cores do squat for nearly all gaming situations, which is the second reason, besides pricing, that had us decide against the Ryzen 9 5900X.
We continue using our trusted Thermaltake TOUGHRAM 16 GB dual-channel memory kit, which has served us well for many years. 32 GB isn’t anywhere close to needed for gaming, so I didn’t want to hint at that, especially for less experienced readers checking out the test system. We’re running the most desirable memory configuration for Zen 3 to reduce latencies inside the processor: Infinity Fabric at 2000 MHz, memory clocked at DDR4-4000, in 1:1 sync with the Infinity Fabric clock. Timings are at a standard CL19 configuration that’s easily found on affordable memory modules; spending extra for super-tight timings is usually overkill and not worth it for the added performance.
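The 1:1 relationship is easy to verify, since DDR4 is double data rate and transfers data twice per clock; a quick sketch:

```python
# DDR4 is double data rate: the advertised transfer rate is twice the
# actual memory clock. For a 1:1 setup on Zen 3, the Infinity Fabric
# clock (FCLK) must match the real memory clock (MCLK).
ddr4_rating = 4000          # DDR4-4000, in MT/s
mclk = ddr4_rating // 2     # actual memory clock in MHz
fclk = 2000                 # Infinity Fabric clock used on this bench

assert mclk == fclk  # confirms the synchronous 1:1 configuration
print(mclk)  # → 2000
```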
The MSI B550-A PRO was an easy choice for a motherboard. We wanted a cost-effective board for the Ryzen 7 5800X and don’t care at all about RGB or other bling. The board can handle the CPU and memory settings we wanted for this test bed, and the VRM barely gets warm. It also doesn’t come with any PCIe gymnastics: a simple PCI-Express 4.0 x16 slot wired to the CPU without any lane switches along the way. The slot is metal-reinforced and looks like it can take quite some abuse over time. Even though I admittedly swap cards hundreds of times each year, probably even 1,000+ times, that has never been an issue; insertion force just gets a bit softer, which I actually find nice.
Software and Games
Windows 10 was updated to 20H2
The AMD graphics driver used for all testing is now 21.2.3 Beta
All NVIDIA cards use 461.72 WHQL
All existing games have been updated to their latest available version
The following titles were removed:
Anno 1800: old, not that popular, CPU limited
Assassin’s Creed Odyssey: old, DX11, replaced by Assassin’s Creed Valhalla
Hitman 2: old, replaced by Hitman 3
Project Cars 3: not very popular, DX11
Star Wars: Jedi Fallen Order: horrible EA Denuvo makes hardware changes a major pain, DX11 only, Unreal Engine 4, of which we have several other titles
Strange Brigade: old, not popular at all
The following titles were added:
Assassin’s Creed Valhalla
Cyberpunk 2077
Hitman 3
Star Wars Squadrons
Watch Dogs: Legion
I considered Horizon Zero Dawn, but rejected it because it uses the same game engine as Death Stranding. World of Warcraft or Call of Duty won’t be tested because of their always-online nature, which enforces game patches that mess with performance—at any time. Godfall is a bad game, Epic exclusive, and commercial flop.
The full list of games now consists of Assassin’s Creed Valhalla, Battlefield V, Borderlands 3, Civilization VI, Control, Cyberpunk 2077, Death Stranding, Detroit Become Human, Devil May Cry 5, Divinity Original Sin 2, DOOM Eternal, F1 2020, Far Cry 5, Gears 5, Hitman 3, Metro Exodus, Red Dead Redemption 2, Sekiro, Shadow of the Tomb Raider, Star Wars Squadrons, The Witcher 3, and Watch Dogs: Legion.
Raytracing
We previously tested raytracing using Metro Exodus and Control. For this round of retesting, I added Cyberpunk 2077 and Watch Dogs: Legion. While Cyberpunk 2077 does not support raytracing on AMD cards, I still felt it’s one of the most important titles to test raytracing with.
While Godfall and DIRT 5 support raytracing, too, neither has had sufficient commercial success to warrant inclusion in the test suite.
Power Consumption Testing
The power consumption testing changes have been live for a couple of reviews already, but I still wanted to detail them a bit more in this article.
After our first Big Navi reviews I realized that something was odd about the power consumption testing method I’ve been using for years without issue. It seemed the Radeon RX 6800 XT was just SO much more energy efficient than NVIDIA’s RTX 3080. It definitely is more efficient because of the 7 nm process and AMD’s monumental improvements in the architecture, but the lead just didn’t look right. After further investigation, I realized that the RX 6800 XT was getting CPU bottlenecked in Metro: Last Light at even the higher resolutions, whereas the NVIDIA card ran without a bottleneck. This of course meant NVIDIA’s card consumed more power in this test because it could run faster.
The problem here is that I used the power consumption numbers from Metro for the “Performance per Watt” results under the assumption that the test loaded the card to the max. The underlying reason for the discrepancy is AMD’s higher DirectX 11 overhead, which only manifested itself enough to make a difference once AMD actually had cards able to compete in the high-end segment.
While our previous physical measurement setup was better than what most other reviewers use, I always wanted something with a higher sampling rate, better data recording, and a more flexible analysis pipeline. Previously, we recorded at 12 samples per second, but could only store minimum, maximum, and average. Starting and stopping the measurement process was a manual operation, too.
The new data acquisition system also uses professional lab equipment and collects data at 40 samples per second, which is four times faster than even NVIDIA’s PCAT. Every single data point is recorded digitally and stashed away for analysis. Just like before, all our graphics card power measurement is “card only”, not the “whole system” or “GPU chip only” (the number displayed in the AMD Radeon Settings control panel).
Having all data recorded means we can finally chart power consumption over time, which makes for a nice overview. Below is an example data set for the RTX 3080.
The “Performance per Watt” chart has been simplified to “Energy Efficiency” and is now based on the actual power and FPS achieved during our “Gaming” power consumption testing run (Cyberpunk 2077 at 1440p, see below).
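As a rough illustration of how such an efficiency figure falls out of recorded data, here is a minimal sketch; the sample values are made up for demonstration, not our actual measurements:

```python
# Hypothetical power samples (watts) and FPS log from one gaming run.
power_samples_w = [318.2, 321.5, 319.8, 320.4, 322.1]
fps_samples = [98.0, 101.5, 99.2, 100.8, 100.5]

avg_power = sum(power_samples_w) / len(power_samples_w)
avg_fps = sum(fps_samples) / len(fps_samples)

# Energy efficiency: frames rendered per second, per watt drawn.
efficiency = avg_fps / avg_power
print(f"{efficiency:.3f} FPS per watt")
```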
The individual power tests have also been refined:
“Idle” testing now measures at 1440p, whereas it previously used 1080p. This follows the increasing adoption of high-res monitors.
“Multi-monitor” is now 2560×1440 over DP + 1920×1080 over HDMI—to test how well power management works with mixed resolutions over mixed outputs.
“Video Playback” records power usage of a 4K30 FPS video that’s encoded with H.264 AVC at 64 Mbps bitrate—similar enough to most streaming services. I considered using something like madVR to further improve video quality, but rejected it because I felt it to be too niche.
“Gaming” power consumption is now using Cyberpunk 2077 at 1440p with Ultra settings—this definitely won’t be CPU bottlenecked. Raytracing is off, and we made sure to heat up the card properly before taking data. This is very important for all GPU benchmarking—in the first seconds, you will get unrealistic boost rates, and the lower temperature has the silicon operating at higher efficiency, which screws with the power consumption numbers.
“Maximum” uses Furmark at 1080p, which pushes all cards into its power limiter—another important data point.
As a bonus, though I really wasn’t sure how useful it would be, I added another run of Cyberpunk at 1080p, capped to 60 FPS, to simulate a “V-Sync” usage scenario. Running at V-Sync not only removes tearing, but also reduces the power consumption of the graphics card, which is perfect for slower single-player titles where you don’t need the highest FPS and would rather conserve some energy and have less heat dumped into your room. Just to clarify, we’re technically running a 60 FPS soft cap, so weaker cards that can’t hit 60 FPS (GTX 1650S and GTX 1660) won’t drop to the 30 or 20 FPS steps that hard V-Sync would force, but instead run as fast as they can.
Last but not least, a “Spikes” measurement was added, which reports the highest 20 ms spike recorded in this whole test sequence. This spike usually appears at the start of Furmark, before the card’s power limiting circuitry can react to the new conditions. On RX 6900 XT, I measured well above 600 W, which can trigger the protections of certain power supplies, resulting in the machine suddenly turning off. This happened to me several times with a different PSU than the Seasonic, so it’s not a theoretical test.
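Pulling such a spike out of a recorded trace is conceptually simple; here is a minimal sketch with illustrative sample data, not our actual lab pipeline:

```python
# Each sample is (timestamp_s, watts). A brief spike shows up as the
# highest reading in the recorded trace.
samples = [
    (0.000, 310.0),
    (0.025, 335.5),
    (0.050, 612.8),  # spike as Furmark starts, before power limiting reacts
    (0.075, 402.3),
    (0.100, 330.1),
]

spike_time, spike_power = max(samples, key=lambda s: s[1])
print(f"Peak: {spike_power} W at t={spike_time} s")
```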
Radeon VII Fail
Since we’re running with Resizable BAR enabled, we also have to boot with UEFI instead of CSM. When it was time to retest the Radeon VII, I got no POST, and it seemed the card was dead. Since there’s plenty of drama around Radeon VII cards suddenly dying, I already started looking for a replacement, but wanted to give the card another chance in a second machine, where it worked perfectly fine. WTF?
After some googling, I found our article detailing the lack of UEFI support on the Radeon VII. So that was the problem: the card simply didn’t have the BIOS update AMD released after our article. Well, FML, the page with the BIOS update no longer exists on AMD’s website.
Really? Someone on their web team made the decision to just delete the pages that contain an important fix to get the product working, a product that’s not even two years old? (launched Feb 7 2019, page was removed no later than Nov 8 2020).
Luckily, I found the updated BIOS in our VGA BIOS collection, and the card is working perfectly now.
Performance results are on the next page. If you have more questions, please do let us know in the comments section of this article.
(Pocket-lint) – NVMe drives are becoming more and more common. With the rise in popularity of PCIe gen 4, they’re also getting faster and faster.
We’ve written a detailed guide on how to install these tiny drives to give your system a performance boost, but if you’re contemplating which drive to buy you might be stuck knowing which to choose.
We’re here with a helping hand, covering the best drives available in terms of storage, speed, and reliability, as well as other features like RGB. Yes, some drives also have RGB now, because every good PC gamer knows RGB lighting means better performance.
Our guide to the best NVMe SSDs to buy today
Samsung 980
Samsung’s range of Evo NVMe SSDs has been popular and highly recommended for a long time, and with good reason. These drives are solidly built and designed to last.
The 980 continues that trend with a new design that includes intelligently upgraded internals capable of delivering outstanding performance for longer than ever before.
It’s a PCIe gen 3 drive capable of 3,500 MB/s sequential reads and 3,000 MB/s sequential writes. The highlight of this drive is that it can maintain those speeds for 75 per cent longer than the previous model.
WD_Black AN1500
If, for some reason, you don’t have an M.2 NVMe slot on your motherboard, then this may well be the answer. This drive is also a really interesting option as it boasts speeds similar to PCIe Gen 4 NVMe drives but on Gen 3 motherboards.
That’s right, the WD_Black AN1500 can run at up to 6,500MB/s read speeds, which is bonkers. It also installs in a PCIe x16 slot (the same kind as your graphics card), meaning it’s potentially even easier to install. You do need to make sure it’s compatible, but if it is, this thing is incredibly fast, and it has RGB as well.
Seagate FireCuda 520
If you own a new motherboard from AMD, the chances are you have support for PCIe gen 4.
This means you can make the most of drives like the Seagate FireCuda 520, which offers speeds almost twice those of older drives. This NVMe drive can run at up to 5,000/4,400 MB/s sequential read/write performance. That’s nine times faster than standard SATA SSDs and faster than older NVMe drives too.
Neat.
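The “nine times” figure checks out against SATA’s practical ceiling (assuming roughly 550 MB/s for a good SATA SSD, which is our assumption, not Seagate’s):

```python
# Compare sequential read ceilings. 550 MB/s is a typical real-world limit
# for SATA III SSDs, since the 6 Gb/s link tops out around 600 MB/s raw.
sata_read = 550            # MB/s, typical SATA III SSD
firecuda_520_read = 5000   # MB/s, FireCuda 520 rated sequential read

print(f"~{firecuda_520_read / sata_read:.1f}x faster than SATA")  # ~9.1x
```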
WD_Black SN750
One of the performance problems potentially associated with NVMe drives relates to their running temps. If they get too hot they won’t run as well.
If this worries you, then the WD_Black SN750 is a great option. It sports a cool-looking heatsink, designed in collaboration with EK Water Blocks, that promises to help it run at good temps and deliver the performance you want.
Just keep in mind that the heatsink makes it fatter than other drives, and it won’t fit under motherboard heat shields.
Samsung has announced its newest SSD, a follow-up to the 970 Evo called the 980. The drive is an NVMe M.2 PCIe 3.0 drive, and it’s an affordable one, too: $129.99 for the 1TB version and as little as $49.99 for the 250GB model.
There’s a reason for the low price — it’s Samsung’s first-ever DRAM-less NVMe SSD, a cost-cutting measure that many other storage manufacturers have already dabbled with to varying degrees of success. The 980 lacks fast dynamic random access memory typically used for mapping the contents of an SSD, which would help it quickly and efficiently serve up your data.
Yet despite removing the feature, Samsung is touting some impressive performance compared to other DRAM-less options because this drive takes advantage of the Host Memory Buffer feature in the NVMe specification. In Samsung’s case, it’s tapping up to 64MB of your CPU’s DRAM via PCIe to pick up the slack on behalf of the SSD. The result isn’t as fast as an SSD that has its own DRAM, but the Host Memory Buffer feature helps it perform much better than a model that lacks it entirely — while you reap some cost savings. Samsung says that this SSD can achieve speeds up to six times that of an SATA-based SSD.
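Some back-of-the-envelope math shows why 64MB of host memory helps but can’t fully replace onboard DRAM. A common FTL layout keeps roughly a 4-byte pointer per 4KiB logical page; the exact layout is vendor-specific, so these figures are illustrative assumptions, not Samsung’s published design:

```python
# Estimate the size of a full logical-to-physical mapping table for an SSD,
# assuming ~4 bytes of map entry per 4 KiB logical page (a common FTL rule
# of thumb, not a Samsung specification).

def mapping_table_bytes(capacity_bytes, page_size=4096, entry_size=4):
    """Approximate size of a full page-level mapping table."""
    return capacity_bytes // page_size * entry_size

TB = 1000 ** 4   # drive makers use decimal terabytes
MB = 1024 ** 2

full_map = mapping_table_bytes(1 * TB)   # map for a 1 TB drive: ~931 MiB
hmb = 64 * MB                            # host memory the 980 may borrow

print(f"Full map: {full_map / MB:.0f} MiB")
print(f"HMB covers about {hmb / full_map:.0%} of it")
```

With only a small fraction of the map cached in host memory, the controller must page map segments in and out over PCIe, which is why HMB drives land between DRAM-less and DRAM-equipped designs in random workloads.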
Also helping deliver those fast speeds is Samsung’s Intelligent TurboWrite 2.0 feature, which multiplies the maximum allocated buffer region within the 980 to as much as 160GB, up from just 42GB in the 970 Evo. This feature simulates fast single-layer cell (SLC) performance in the 980, despite the fact it uses 3-bit multilayer cell (MLC) memory, and it’s aimed at delivering sustained performance while transferring large files.
Samsung claims the 1TB version of the 980 can provide up to 3,500MB/s sequential read and 3,000MB/s write speeds, which is roughly on par with its fast (and more expensive) 970 Evo Plus SSD, besting the 970 Evo’s top sequential write speed. It’s a far cry from Samsung’s 980 Pro, though, which boasts sequential read and write speeds of up to 7,000MB/s and 5,000MB/s, respectively, when connected to a PCIe 4.0-ready motherboard.
As usual, there’s a steep fall-off in performance for lesser capacities: the low-end 250GB model claims up to 2,900MB/s sequential read and 1,300MB/s sequential write speeds, for instance. One of the other big highlights here across the lineup is that, even without DRAM, Samsung claims the random read and write input and output speeds during intensive tasks are similar to the 970 Evo and not far off from the 970 Evo Plus.
So, even while omitting a component that helps an SSD go quickly, Samsung’s 980 still seems very fast. In case you’re curious, Samsung’s test systems that provided these benchmarks run an Intel Core i7-6700K, the Ryzen 7 3700X, and 8GB of 2,133MHz DDR4 RAM.
When times get tough, modders get modding, and 2020 was no different. Today, the winners of Cooler Master’s Case Mod World Series 2020 modding contest receive their crowns, rewarding some of the most remarkable mods created in a challenging year.
The 2020 contest saw 90 entries from enthusiasts across 23 countries. Mods were judged equally on craftsmanship, aesthetics, functionality and innovation, with judges from Cooler Master, professional modders, sponsors (including MSI and the game Control) and media outlets, including Tom’s Hardware.
Overall, 12 mods won awards, with the most coveted “Best Of” awards going to six builds (Best Tower Mod, Best Scratch Build, Best Innovation and Design, Best Craftsmanship and Best Art Direction).
You can see the full list of Case Mod World Series 2020 winners here. Below is an inside look at some of the fiercest award winners.
Best Tower of the Year: A.R.E.S. by Explore Modding
Case: Cooler Master Cosmos C700M
CPU: AMD Ryzen 7 3700X
Graphics Card: Inno3D iChill Frostbite RTX 2070 Super
We may still be waiting for the hover cars that so many movies and novels have promised, but with Explore Modding’s A.R.E.S. build, the appearance of a floating tower is already here. The modder describes his build as a “story, told in art form.” He drew inspiration for the colors, curves and starry window (made of optic fibers and epoxy resin) from the character Robot from Netflix’s Lost in Space reboot.
Ultimately, A.R.E.S. tells its own story though. And with its base designed to make the tower look like it’s awesomely afloat, that story is told from a world seemingly far off in the future.
Not surprisingly, designing and assembling the base was the hardest part of the mod, Explore Modding told us. It required many parts that were hard to fit together, “due to tight tolerances.”
“Even designing it was difficult because I really wanted something that made it look like the case was separated from it and floating above the surface, but that required a lot of trial and error in order to make it stable enough,” Explore Modding told Tom’s Hardware. “In the end, the three acrylic blocks are very sturdy and they’re very transparent, so they even tend to disappear under some light scenarios, creating that awesome effect of floating.”
A.R.E.S.’ hardware panel rotates 180 degrees on the fly, so you can easily swap the build’s look — components on the left or on the right. Cable management located in the back and front allowed for a clean look inside, where the suspended centerpiece boasting all the components steals the show.
“I often change the layout on my setups, and I always had the struggle of sacrificing the amazing view of the internal hardware when I had to move the PC to the other side of the desk,” Explore Modding said. “I actually ended up tearing apart my build to make an inverted mod a couple times for this reason. … So this PC can be put wherever you want and still show the same side every time.”
Maintenance is also a bit easier. Just undo a couple screws and turn the panel to access your components. The rotating panel also means you don’t have to tilt the entire case to bleed air from the loop.
Best Scratch of the Year: Ikigai by Nick Falzone Design
CPU: AMD Ryzen 5 5600X
Graphics Card: MSI Radeon 5700 Gaming X
Motherboard: MSI B550I Gaming Edge WiFi
RAM: G. Skill Ripjaws V DDR4-3600 (32GB)
SSD: WD Black SN750
Cooling: Alphacool Laing DDC, Alphacool GPU waterblock and radiator, Optimus CPU block, EKWB fittings, Cooler Master SF360R fans
Power Supply: Cooler Master V650 SFX
Nick Falzone Design’s mod Ikigai is named after the Japanese word for, as he put it, “one’s personal passions, beliefs, values and vocation.” The Japanese concept of finding your life’s purpose has also recently picked up Western attention, and it led the modder to create a sensible design using both modern and traditional Japanese woodworking techniques.
Nick Falzone Design, an American modder, has been working with wood since childhood and grew to enjoy the Japanese aesthetic, including the “overall timeless and modern design.” In fact, the modder’s first PC case had mini shoji doors.
“At the time, YouTube was not around, so I read books about Japanese architecture and Japanese joinery. … I’d always wanted to make the hemp leaf pattern that I did in Ikigai,” the modder told Tom’s Hardware.
Ikigai incorporates “traditionally made Japanese Kumiko designs” from unfinished Sitka Spruce contrasting with a Wenge wood outer shell complete with hand-sawn dovetails. The inside is mostly acrylic and aluminum with Wenge added for accent pieces.
To keep Ikigai cool, Nick Falzone Design crafted a distribution plate that also serves as the build’s pump top and reservoir, while keeping most of the cables out of view.
The biggest challenge, however, came in maintaining Nick Falzone Design’s vision of a Mini-ITX build. Keeping up with the small form factor trend is great, but carefully constructing the watercooling and wiring in a build that’s under 20 liters is no small task.
“I made three full-scale models of the main case and many more models of the interior to maximize each component and make everything work efficiently,” Nick Falzone Design said.
Best Craftsmanship: Cyberpunk 2077 – Deconstruction by AK Mod
CPU: Intel Core i9-10900K
Graphics Card: Aorus GeForce RTX 3080 Master
Motherboard: Aorus Z490 Xtreme
RAM: Aorus RGB Memory DDR4-3200 (4x 8GB)
SSD: Gigabyte NVMe SSD M.2 2280 (1TB)
Cooling: Bitspower fittings, Premium Summit M Mystic Black Metal Edition CPU block, D5 Vario motor, Leviathan XF 120 4xG1/4″ radiator, Water Tank Z-Multi 50 V2 and Bitspower Touchaqua in-line filter, digital thermal sensor, digital RGB multi-function controller, PWM fan multi-function hub, Cooler Master MasterFan SF120M, AlphaCool Eiszapfen laser fitting with 4-pin molex
Power Supply: Aorus P850W 80+ Gold Modular
With Cyberpunk 2077 making splashes of all types in 2020, it wasn’t surprising to see a Cyberpunk-inspired mod. More surprising are the undeniable intricacies, craftsmanship and expertise boasted in this showstopping mod that looked unlike any other entry (and yes, we looked at all 90).
The mod embodies Mantis Blades being repaired. AK Mod did a whole lot of 3D printing, as well as CNC milling and research into unique parts (like military aviation connectors, a vacuum fluorescent display (VFD) and a light bar) to bring the concept to life.
Of course, 3D printing Mantis Blades calls for some patience. AK Mod separated the blades’ parts into over 90 FDM and DLP files but had to redesign due to a construction failure.
“In the original design, inner metal structure frame and outer arm were separated. The outcome of the first design was too thin. The finger parts are difficult to assemble, and the weight bearing for the wrist part was not as expected, so we had to improve the design and print the outcome all over again,” AK Mod told Tom’s Hardware.
Other techniques used to make Cyberpunk 2077 – Deconstruction include welding, digital processing lathing, UV printing and laser engraving and cutting. Hand-made parts were also sanded, soil filled, spray painted and given an aged treatment.
AK Mod also included an actionable ring scanning instrument to “simulate the Mantis Blades being scanned as a weapon,” AK Mod said. Red LEDs add authenticity as the blades move horizontally.
Best Innovation and Design: Spirit of Motion by Maximum Bubble Mods
CPU: AMD Ryzen 5 3600
Graphics Card: Nvidia RTX 2080 Founders Edition
Motherboard: MSI B450M Pro-M2
RAM: G.Skill TridentZ – 3,600 MHz (16GB)
SSD: Samsung 970 EVO (500GB)
Cooling: Corsair Hydro H115i Pro, Cooler Master MasterFan Pro Air Pressure RGB
Power Supply: EVGA SuperNOVA 750 G5
While some of this year’s winning mods look straight from the future, Spirit of Motion opts for a retro vibe. Built for the modder’s father, Maximum Bubble Mods’ creation goes for a classic car theme, incorporating an “Art Deco era front car grille,” as the modder describes it, topped off with delicious Candy Apple Red paint.
That custom grille not only looks good but opens up to reveal the PC’s components. Hand-building the aluminum grille took “tens of hours, hard work and many processes,” Maximum Bubble Mods told us.
Further earning the Innovation & Design title, Maximum Bubble Mods inverted and mirrored the motherboard and vertically mounted the graphics card to keep all the I/O as low as possible.
“It was all done to keep the PC from getting excessively large and to keep the I/O below the frame that my hinge would mount to,” Maximum Bubble Mods explained.
Perhaps the best part is that Spirit of Motion is now the modder’s father’s best gaming PC (you can even watch him receive the mod in this YouTube video).
“The last time we talked, he was on a Civilization kick and sounding like he was loving the PC, so I’m happy,” Maximum Bubble Mods said.
Supermicro’s 1023US-TR4 is a slim 1U dual-socket server designed for high-density compute environments in high-end cloud computing, virtualization, and enterprise applications. With support for AMD’s EPYC 7001 and 7002 processors, this high-end server packs up to two 64-core EPYC Rome processors, allowing it to cram 128 cores and 256 threads into one slim chassis.
We’re on the cusp of Intel’s Ice Lake and AMD’s EPYC Milan launches, which promise to reignite the fierce competition between the long-time x86 rivals. In preparation for the new launches, we’ve been working on a new set of benchmarks for our server testing, and that’s given us a pretty good look at the state of the server market as it stands today.
We used the Supermicro 1023US-TR4 server for EPYC Rome testing, and we’ll focus on examining the platform in this article. Naturally, we’ll add in Ice Lake and EPYC Milan testing as soon as those chips are available. In the meantime, here’s a look at some of our new benchmarks and the current state of the data center CPU performance hierarchy in several hotly-contested price ranges.
Inside the Supermicro 1023US-TR4 Server
The Supermicro 1023US-TR4 server comes in the slim 1U form factor. And despite its slim stature, it can host an incredible amount of compute horsepower under the hood. The server supports AMD’s EPYC 7001 and 7002 series chips, with the latter series topping out at 64 cores apiece, which translates to 128 cores and 256 threads spread across the dual sockets.
Support for the 7002 series chips requires a 2.x board revision, and the server can accommodate CPU cTDPs up to 280W. That means it can handle the beefiest of EPYC chips, currently the 64-core EPYC 7H12 with its 280W TDP.
The server has a tool-less rail mounting system that eases installation into server racks, and the CSE-819UTS-R1K02P-T chassis measures 1.7 x 17.2 x 29 inches, ensuring broad compatibility with standard 19-inch racks.
The front panel comes with standard indicator lights, like a unit identification (UID) light that helps with locating the server in a rack, along with drive activity, power, status light (to indicate fan failures or system overheating), and two LAN activity LEDs. Power and reset buttons are also present at the upper right of the front panel.
By default, the system comes with four tool-less 3.5-inch hot-swap SATA 3 drive bays, but you can configure the server to accept four NVMe drives on the front panel, and an additional two M.2 drives internally. You can also add an optional SAS card to enable support for SAS storage devices. The front of the system also houses a slide-out service/asset tag identifier card to the upper left.
Popping the top off the chassis reveals two shrouds that direct air from the two rows of hot-swappable fans. A total of eight fan housings feed air to the system, and each housing includes two counter-rotating 4cm fans for maximum static pressure and reduced vibration. As expected with servers intended for 24/7 operation, the system can continue to function in the event of a fan failure. However, the remainder of the fans will automatically run at full speed if the system detects a failure. Naturally, these fans are loud, but that’s not a concern for a server environment.
Two fan housings are assigned to cool each CPU, and a simple black plastic shroud directs air to the heatsinks underneath. Dual SP3 sockets house both processors, and they’re covered by standard heatsinks that are optimized for linear airflow.
A total of 16 memory slots flank each processor, for a total of 32 memory slots that support up to 4TB of registered ECC DDR4-2666 with EPYC 7001 processors, or an incredible 8TB of ECC DDR4-3200 memory (via 256GB DIMMs) with the 7002 models, easily outstripping the memory capacity available with competing Intel platforms.
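The capacity figure follows directly from the slot count and DIMM size, and the same layout sets the platform’s theoretical bandwidth (DDR4 moves 8 bytes per transfer per channel; these are peak numbers, not measured throughput):

```python
# Sanity-check the platform's memory limits from its slot/channel layout.
# All figures are theoretical peaks derived from the DDR4 spec sheet values.

slots = 32             # 16 DIMM slots per socket, 2 sockets
max_dimm_gb = 256      # largest DIMM supported with EPYC 7002 chips
capacity_tb = slots * max_dimm_gb / 1024
print(f"Max capacity: {capacity_tb:.0f} TB")              # 8 TB

channels_per_socket = 8   # EPYC 7002 is an eight-channel design
mt_per_s = 3200           # DDR4-3200, mega-transfers per second
bw_gbs = channels_per_socket * mt_per_s * 8 / 1000        # 8 bytes/transfer
print(f"Peak per-socket bandwidth: {bw_gbs:.1f} GB/s")    # 204.8 GB/s
```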
We tested the EPYC processors with 16x 32GB DDR4-3200 Samsung modules for a total memory capacity of 512GB. In contrast, we loaded down the Xeon comparison platform with 12x 32GB Sk hynix DDR4-2933 modules, for a total capacity of 384GB of memory.
The H11DSU-iN motherboard’s expansion slots consist of two full-height 9.5-inch PCIe 3.0 slots and one low-profile PCIe 3.0 x8 slot, all mounted on riser cards. An additional internal PCIe 3.0 x8 slot is also available, but this slot only accepts proprietary Supermicro RAID cards. All told, the system exposes a total of 64 lanes (16 via NVMe storage devices) to the user.
As one would imagine, Supermicro has other server offerings that expose more of EPYC’s available 128 lanes to the user and also come with the faster PCIe 4.0 interface.
The rear I/O panel includes four gigabit RJ45 LAN ports powered by an Intel i350-AM4 controller, along with a dedicated IPMI port for management. Here we find the only USB ports on the machine, which come in the form of two USB 3.0 headers, along with a COM and VGA port.
Two 1000W Titanium-Level (96%+) redundant power supplies provide power to the server, with automatic failover in the event of a failure, as well as hot-swappability for easy servicing.
The BIOS is easy to access and use, while the IPMI web interface provides a wealth of monitoring capabilities and easy remote management that matches the type of functionality available with Xeon platforms. Among many options, you can update the BIOS, use the KVM-over-LAN remote console, monitor power consumption, access health event logs, monitor and adjust fan speeds, and monitor the CPU, DIMM, and chipset temperatures and voltages. Supermicro’s remote management suite is polished and easy to use, which stands in contrast to other platforms we’ve tested.
Test Setup
| Processor | Cores / Threads | 1K Unit Price | Base / Boost (GHz) | L3 Cache (MB) | TDP (W) |
|---|---|---|---|---|---|
| AMD EPYC 7742 | 64 / 128 | $6,950 | 2.25 / 3.4 | 256 | 225 |
| Intel Xeon Platinum 8280 | 28 / 56 | $10,009 | 2.7 / 4.0 | 38.5 | 205 |
| Intel Xeon Gold 6258R | 28 / 56 | $3,651 | 2.7 / 4.0 | 38.5 | 205 |
| AMD EPYC 7F72 | 24 / 48 | $2,450 | 3.2 / ~3.7 | 192 | 240 |
| Intel Xeon Gold 5220R | 24 / 48 | $1,555 | 2.2 / 4.0 | 35.75 | 150 |
| AMD EPYC 7F52 | 16 / 32 | $3,100 | 3.5 / ~3.9 | 256 | 240 |
| Intel Xeon Gold 6226R | 16 / 32 | $1,300 | 2.9 / 3.9 | 22 | 150 |
| Intel Xeon Gold 5218 | 16 / 32 | $1,280 | 2.3 / 3.9 | 22 | 125 |
| AMD EPYC 7F32 | 8 / 16 | $2,100 | 3.7 / ~3.9 | 128 | 180 |
| Intel Xeon Gold 6250 | 8 / 16 | $3,400 | 3.9 / 4.5 | 35.75 | 185 |
Here we can see the selection of processors we’ve tested for this review, though we use the Xeon Platinum 8280 as a stand-in for the less expensive Xeon Gold 6258R. These two chips are identical and provide the same level of performance, with the difference boiling down to the more expensive 8280 supporting quad-socket servers, while the Xeon Gold 6258R tops out at dual-socket support.
| Server | Memory | Tested Processors |
|---|---|---|
| Supermicro AS-1023US-TR4 | 16x 32GB Samsung ECC DDR4-3200 | EPYC 7742, 7F72, 7F52, 7F32 |
| Dell/EMC PowerEdge R460 | 12x 32GB SK hynix DDR4-2933 | Intel Xeon 8280, 6258R, 5220R, 6226R, 6250 |
To assess performance with a range of different potential configurations, we used the Supermicro 1023US-TR4 server with four different EPYC Rome configurations. We outfitted this server with 16x 32GB Samsung ECC DDR4-3200 memory modules, ensuring that both chips had all eight memory channels populated.
We used a Dell/EMC PowerEdge R460 server to test the Xeon processors in our test group, giving us a good sense of performance with competing Intel systems. We equipped this server with 12x 32GB SK hynix DDR4-2933 modules, again ensuring that each Xeon chip’s six memory channels were populated. These configurations give the AMD-powered platform a memory capacity advantage, but that’s an unavoidable side effect of the capabilities of each platform. As such, bear in mind that memory capacity disparities may impact the results below.
We used the Phoronix Test Suite for testing. This automated test suite simplifies running complex benchmarks in the Linux environment. The test suite is maintained by Phoronix, and it installs all needed dependencies and the test library includes 450 benchmarks and 100 test suites (and counting). Phoronix also maintains openbenchmarking.org, which is an online repository for uploading test results into a centralized database. We used Ubuntu 20.04 LTS and the default Phoronix test configurations with the GCC compiler for all tests below. We also tested both platforms with all available security mitigations.
Linux Kernel and LLVM Compilation Benchmarks
We used the 1023US-TR4 for testing with all of the EPYC processors in the chart, and here we see the expected scaling in the timed Linux kernel compile test with the AMD EPYC processors taking the lead over the Xeon chips at any given core count. The dual EPYC 7742 processors complete the benchmark, which builds the Linux kernel at default settings, in 21 seconds. The dual 24-core EPYC 7F72 configuration is impressive in its own right — it chewed through the test in 25 seconds, edging past the dual-processor Xeon 8280 platform.
AMD’s EPYC delivers even stronger performance in the timed LLVM compilation benchmark: the dual 24-core 7F72s even beat the dual 28-core 8280s. Performance scaling is somewhat muted between the flagship 64-core 7742 and the 24-core 7F72, largely due to the strength of the latter’s much higher base and boost frequencies. That impressive performance comes at the cost of a 240W TDP rating, but the Supermicro server handles the increased thermal output easily.
Molecular Dynamics and Parallel Compute Benchmarks
NAMD is a parallel molecular dynamics code designed to scale well with additional compute resources; it scales up to 500,000 cores and is one of the premier benchmarks used to quantify performance with simulation code. The EPYC processors are obviously well-suited for these types of highly-parallelized workloads due to their prodigious core counts, with the dual 7742 configuration completing the workload 28% faster than the dual Xeon 8280 setup.
Stockfish is a chess engine designed for the utmost scalability across increased core counts; it can scale up to 512 threads. Here we can see that this massively parallel code scales well with EPYC’s leading core counts. But, as evidenced by the dual 24-core 7F72s effectively tying the 28-core Xeon 8280s, the benchmark also responds well to EPYC’s strong per-core performance. The dual 16-core 7F52 configuration also beat out both of the 16-core Intel comparables. Intel does pull off a win, though, as the eight-core 6250 processors beat the 7F32s.
We see similarly impressive performance in other molecular dynamics workloads, like the Gromacs water benchmark that simulates Newtonian equations of motion with hundreds of millions of particles and the NAS Parallel Benchmarks (NPB) suite. NPB characterizes Computational Fluid Dynamics (CFD) applications, and NASA designed it to measure performance from smaller CFD applications up to “embarrassingly parallel” operations. The BT.C test measures Block Tri-Diagonal solver performance, while the LU.C test measures performance with a lower-upper Gauss-Seidel solver.
Regardless of the workload, the EPYC processors deliver a brutal level of performance in highly-parallelized applications, and the Supermicro server handled the heat output without issue.
Rendering Benchmarks
Turning to more standard fare, provided you can keep the cores fed with data, most modern rendering applications also take full advantage of the compute resources. Given the well-known strengths of EPYC’s core-heavy approach, it isn’t surprising to see the 64-core EPYC 7742 processors carve out a commanding lead in the C-Ray and Blender benchmarks. Still, it is impressive to see the 7Fx2 models beat the competing Xeon processors with similar core counts nearly across the board.
The performance picture changes somewhat with the Embree benchmarks, which test high-performance ray tracing libraries developed at Intel Labs. Naturally, the Xeon processors take the lead in the Asian Dragon renders, but the crown renders show that AMD’s EPYC can offer leading performance even with code that is heavily optimized for Xeon processors.
Encoding Benchmarks
Encoders tend to present a different type of challenge: As we can see with the VP9 libvpx benchmark, they often don’t scale well with increased core counts. Instead, they often benefit from per-core performance and other factors, like cache capacity.
However, newer encoders, like Intel’s SVT-AV1, are designed to leverage multi-threading more fully to extract faster performance for live encoding/transcoding video applications. Again, we can see the impact of EPYC’s increased core counts paired with its strong per-core performance as the EPYC 7742 and 7F72 post impressive wins.
Python and Sysbench Benchmarks
The Pybench and Numpy benchmarks are used as a general litmus test of Python performance, and as we can see, these tests don’t scale well with increased core counts. That allows the Xeon 6250, which has the highest boost frequency of the test pool at 4.5 GHz, to take the lead.
Compression and Security
Compression workloads also come in many flavors. The 7-Zip (p7zip) benchmark exposes the heights of theoretical compression performance because it runs directly from main memory, allowing both memory throughput and core counts to impact performance heavily. As we can see, this benefits the EPYC 7742 tremendously, but it is noteworthy that the 28-core Xeon 8280 offers far more performance than the 24-core 7F72 if we normalize throughput based on core counts. In contrast, the gzip benchmark, which compresses two copies of the Linux 4.13 kernel source tree, responds well to speedy clock rates, giving the eight-core Xeon 6250 the lead due to its 4.5 GHz boost clock.
The open-source OpenSSL toolkit uses SSL and TLS protocols to measure RSA 4096-bit performance. As we can see, this test favors the EPYC processors due to its parallelized nature, but offloading this type of workload to dedicated accelerators is becoming more common for environments with heavy requirements.
SPEC CPU 2017 Estimated Scores
We used the GCC compiler and the default Phoronix test settings for these SPEC CPU 2017 test results. SPEC results are highly contested and can be impacted heavily by various compilers and flags, so we’re sticking with a bog-standard configuration to provide as level a playing field as possible. It’s noteworthy that these results haven’t been submitted to the SPEC committee for verification, so they aren’t official. Instead, view the above tests as estimates based on our testing.
The multi-threaded portion of the SPEC CPU 2017 suite is of most interest for the purpose of our tests, which is to gauge the ability of the Supermicro platform to handle heavy extended loads. As expected, the EPYC processors post commanding leads in both the intrate and fprate subtests. And close monitoring of the platform didn’t find any thermal throttling during these extended duration tests. The Xeon 6250 and 8280 processors take the lead in the single-threaded intrate tests, while the AMD EPYC processors post impressively-strong single-core measurements in the fprate tests.
Conclusion
AMD has enjoyed a slow but steadily-increasing portion of the data center market, and much of its continued growth hinges on increasing adoption beyond hyperscale cloud providers to more standard enterprise applications. That requires a dual-pronged approach of not only offering a tangible performance advantage, particularly in workloads that are sensitive to per-core performance, but also having an ecosystem of fully-validated OEM platforms readily available on the market.
The Supermicro 1023US-TR4 server slots into AMD’s expanding constellation of OEM EPYC systems and also allows discerning customers to upgrade from the standard 7002 series processors to the high-frequency H- and F-series models as well. It also supports up to 8TB of ECC memory, which is an incredible amount of available capacity for memory-intensive workloads. Notably, the system comes with the PCIe 3.0 interface while the second-gen EPYC processors support PCIe 4.0, but this arrangement allows customers that don’t plan to use PCIe 4.0 devices to procure systems at a lower price point. As one would imagine, Supermicro has other offerings that support the faster interface.
Overall, we found the platform to be robust, and out-of-the-box installation was simple thanks to a tool-less rail kit and an easily-accessible IPMI interface that offers a cornucopia of management and monitoring capabilities. Our only minor complaints are that the front panel could use a few USB ports for easier physical connectivity, and that a faster embedded networking interface would free up an additional PCIe slot. Naturally, higher-end Supermicro platforms come with these features.
As seen throughout our testing, the Supermicro 1023US-TR4 server performed admirably and didn’t suffer from any thermal throttling issues regardless of the EPYC processors we used, which is an important consideration. Overall, the Supermicro 1023US-TR4 server packs quite the punch in a small form factor that enables incredibly powerful and dense compute deployments in cloud, virtualization, and enterprise applications.
PNY’s XLR8 Gaming Epic-X RGB DDR4-3200 C16 memory kit is a good partner for contemporary AMD and Intel processors that natively support DDR4-3200 memory.
For
Acceptable performance
RGB lighting doesn’t require proprietary software
Against
Too expensive
Limited overclocking potential
Nowadays, it feels like the norm that every computer hardware company has a dedicated gaming sub-brand. For PNY, that would be XLR8 Gaming, which currently competes in three major hardware markets: memory, gaming graphics cards, and SSDs. In terms of memory, the XLR8 Gaming branding is still a bit wet behind the ears, but the company has started to solidify its lineups. The Epic-X RGB series, in particular, is one of XLR8 Gaming’s latest additions to its memory portfolio.
The Epic-X RGB memory modules come with a black PCB and a matching aluminum heat spreader. The design is as simple as it gets, and that’s not a bad thing. The heat spreaders feature a few diagonal lines and the XLR8 logo in the middle. An RGB lightbar is positioned on top of the memory module to provide some flair. The memory measures 47mm (1.85 inches) tall, so it might get in the way of some CPU air coolers.
PNY didn’t develop a proprietary program to control the Epic-X RGB’s lighting, which will favor users who don’t want to install another piece of software on their system. Instead, PNY is handing the responsibility over to the motherboard. Fear not, because the Epic-X RGB has all its bases covered. The memory’s illumination is compatible with Asus Aura Sync, Gigabyte RGB Fusion, MSI Mystic Light Sync, and ASRock Polychrome Sync.
The Epic-X RGB memory kit consists of two 8GB DDR4 memory modules. They’re built on a 10-layer PCB and feature a single-rank design. Thaiphoon Burner was unable to identify the integrated circuits (ICs) inside the Epic-X RGB. However, given the primary timings, the memory is likely using Hynix C-die chips.
Predictably, the Epic-X RGB runs at DDR4-2133 with 15-15-15-36 timings by default. There’s a single XMP profile that brings the memory up to speed. In this case, it sets the memory modules to DDR4-3200 and the timings to 16-18-18-38. At this frequency, the memory draws 1.35V. For more on timings and frequency considerations, see our PC Memory 101 feature, as well as our How to Shop for RAM story.
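For a rough sense of what those timings mean in absolute terms, first-word latency can be estimated from the CAS latency and data rate. This is a generic back-of-the-envelope formula, not a PNY spec:

```python
# First-word latency in nanoseconds: CAS cycles / memory clock, where the
# memory clock in MHz is half the DDR data rate (MT/s).
def first_word_latency_ns(cas: int, data_rate_mts: int) -> float:
    return cas * 2000 / data_rate_mts

jedec = first_word_latency_ns(15, 2133)  # stock DDR4-2133 CL15
xmp = first_word_latency_ns(16, 3200)    # XMP DDR4-3200 CL16

print(f"DDR4-2133 CL15: {jedec:.2f} ns")  # ~14.06 ns
print(f"DDR4-3200 CL16: {xmp:.2f} ns")    # 10.00 ns
```

In other words, enabling XMP here improves both bandwidth and absolute latency over the JEDEC default.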
Comparison Hardware

| Memory Kit | Part Number | Capacity | Data Rate | Primary Timings | Voltage | Warranty |
|---|---|---|---|---|---|---|
| Team Group T-Force Xtreem ARGB | TF10D416G3600HC14CDC01 | 2 x 8GB | DDR4-3600 (XMP) | 14-15-15-35 (2T) | 1.45V | Lifetime |
| Gigabyte Aorus RGB Memory | GP-AR36C18S8K2HU416R | 2 x 8GB | DDR4-3600 (XMP) | 18-19-19-39 (2T) | 1.35V | Lifetime |
| PNY XLR8 Gaming Epic-X RGB | MD16GK2D4320016XRGB | 2 x 8GB | DDR4-3200 (XMP) | 16-18-18-38 (2T) | 1.35V | Lifetime |
| Lexar DDR4-2666 | LD4AU008G-R2666U x 2 | 2 x 8GB | DDR4-2666 | 19-19-19-43 (2T) | 1.20V | Lifetime |
Our Intel test system consists of an Intel Core i9-10900K and Asus ROG Maximus XII Apex on 0901 firmware. On the opposite side, the AMD testbed leverages an AMD Ryzen 5 3600 and ASRock B550 Taichi with 1.30 firmware. The MSI GeForce RTX 2080 Ti Gaming Trio handles the graphical duties on both platforms.
Intel Performance
[Gallery: 19 images]
Predictably, the Epic-X RGB didn’t beat the faster memory kits in our RAM benchmarks. Performance was consistent, with the Epic-X kit placing third overall on the application and gaming charts.
AMD Performance
[Gallery: 19 images]
Things didn’t change on the AMD platform, either. However, the Epic-X RGB did earn some merit, since the memory kit was the fastest in the Cinebench R20 and HandBrake x264 conversion tests. The margin of victory was slim, though, at less than 1%.
Overclocking and Latency Tuning
[Gallery: 3 images]
The Epic-X RGB isn’t the best overclocker that we’ve had in the labs. Nevertheless, we squeezed an extra 400 MHz out of the kit. We could hit DDR4-3600 at 1.45V after we relaxed the timings to 20-20-20-40.
Lowest Stable Timings

| Memory Kit | DDR4-2666 (1.45V) | DDR4-3200 (1.45V) | DDR4-3600 (1.45V) | DDR4-3900 (1.45V) | DDR4-4200 (1.45V) |
|---|---|---|---|---|---|
| Team Group T-Force Xtreem ARGB | N/A | N/A | 13-14-14-35 (2T) | N/A | 19-19-19-39 (2T) |
| Gigabyte Aorus RGB Memory | N/A | N/A | 16-19-19-39 (2T) | 20-20-20-40 (2T) | N/A |
| PNY XLR8 Gaming Epic-X RGB | N/A | 15-18-18-38 (2T) | 20-20-20-40 (2T) | N/A | N/A |
| Lexar DDR4-2666 | 16-21-21-41 (2T) | N/A | N/A | 17-22-22-42 (2T) | N/A |
Sadly, we didn’t have the same level of luck optimizing the Epic-X RGB at DDR4-3200. Even with a 1.45V DRAM voltage, we could only get the CAS Latency down from 16 to 15 clocks. The other timings wouldn’t yield.
Bottom Line
In this day and age, enthusiasts are pursuing ever-faster memory kits. However, there’s always space for a standard memory kit, and the XLR8 Gaming Epic-X RGB DDR4-3200 C16 kit could very well find its place with users who want to stick to a processor’s officially supported memory frequency. Modern processors, such as AMD’s Zen 2 and Zen 3 chips and Intel’s looming Rocket Lake parts, support DDR4-3200 memory right out of the box. The XLR8 Gaming Epic-X RGB DDR4-3200 C16 would fit nicely in this situation since you can just enable XMP and never look back.
The XLR8 Gaming Epic-X RGB DDR4-3200 C16 only has a tiny flaw, and that’s pricing. The memory kit retails for $94.99 when the typical DDR4-3200 C16 kit starts at $74.99. Even the faster DDR4-3600 C18 memory kits sell for as low as $79.99. In PNY’s defense, the Epic-X RGB memory modules do look nice with the RGB lighting and whatnot, so we can probably chalk the extra cost up to the RGB tax.
News of intermittent USB connectivity issues on AMD Ryzen systems broke a few weeks ago, and the company has since announced that it is investigating the widespread reports to identify the root cause. AMD has not yet produced a full fix for the issue, but in the interim, its tech support has now issued its first official advice on steps that might help resolve the situation.
The issues seem confined to Ryzen 3000 and 5000 series CPUs in 500- and 400-series motherboards (i.e., X570, X470, B550, and B450) and consist of random dropouts for USB-connected devices. The complaints encompass several different types of USB devices, including unresponsive external capture devices, momentary keyboard connection drops, slow mouse responses, issues with VR headsets, external storage devices, and USB-connected CPU coolers.
The enthusiast community has been hard at work exploring several different workarounds that appear to improve the situation, and AMD’s tech support has emailed an impacted user outlining its suggestions (which were then posted to Reddit).
AMD has since confirmed to us that this is its current guidance on the matter. The advice comes as just that, advice, and it doesn’t appear that these steps are guaranteed to resolve the issue for all users. Here’s the short version of the advice, with emphasis added to the most relevant bits:
“Based on user feedback we have received; the suggestions below could improve or resolve general USB device stuttering issues.
1.) Verify that your motherboard is updated to the latest BIOS version and configured using optimized/factory default settings. Check your motherboard manufacturer’s website for BIOS update and download instructions.
2.) Check if your Windows 10 is on the latest build and fully up to date. For information on updating Windows 10, please refer to Microsoft article:
Update Windows 10
3.) Ensure that the Ryzen chipset driver from AMD is installed and up to date. The latest Ryzen chipset driver version is 2.13.27.501 and can be downloaded here.
If you continue to experience USB connectivity problems after following the suggestions above, you may consider using either of the following workarounds:
1.) Set PCIe mode from Gen4/Auto to Gen 3 in the BIOS
2.) Disable Global C-State in the BIOS. These settings are found in the BIOS. Please refer to the motherboard user manual if more information is needed.”

Most of the advice follows good common-sense troubleshooting tips, like making sure you’ve installed the latest BIOS, chipset driver, and version of Windows 10.
The most interesting bits cover the two suggested workarounds. Notably, these mirror some of the crowd-sourced fixes we’ve seen in enthusiast circles. AMD doesn’t include a few other popular suggestions, though, like manually uninstalling and reinstalling USB ports and root hubs, or disabling unused USB headers. Hence, it appears that AMD has at least narrowed down the potential solution to some extent.
The message also provides information on how you can submit a bug report and assures users that the company is actively investigating the issue and will provide an update once a proper fix is available. You can see the full text of the message here.
After almost a decade of total market dominance, Intel has spent the past few years on the defensive. AMD’s Ryzen processors continue to show improvement year over year, with the most recent Ryzen 5000 series taking the crown of best gaming processor: Intel’s last bastion of superiority.
Now, with a booming hardware market, Intel is preparing to retake some of that lost ground with the new 11th Gen Core Processors. Intel is claiming these new 11th Gen CPUs offer double-digit IPC improvements despite remaining on a 14 nm process. The top-end 8-core Intel Core i9-11900K may not be able to compete against its AMD rival Ryzen 9 5900X in heavily multi-threaded scenarios, but the higher clock speeds and alleged IPC improvements could be enough to take back the gaming crown. Along with the new CPUs, there is a new chipset to match, the Intel Z590. Last year’s Z490 chipset motherboards are also compatible with the new 11th Gen Core Processors, but Z590 brings some key advantages.
First, Z590 offers native PCIe 4.0 support from the CPU, which means the PCIe and M.2 slots powered off the CPU will offer PCIe 4.0 connectivity when an 11th Gen CPU is installed. The PCIe and M.2 slots controlled by the Z590 chipset are still PCIe 3.0. While many high-end Z490 motherboards advertised this capability, it was not a standard feature for the platform. In addition to PCIe 4.0 support, Z590 offers USB 3.2 Gen 2×2 from the chipset. The USB 3.2 Gen 2×2 standard offers speeds of up to 20 Gb/s. Finally, Z590 boasts native support for DDR4-3200 memory. With these upgrades, Intel’s Z series platform has feature parity with AMD’s B550. On paper, Intel is catching up to AMD, but only testing will tell if these new Z590 motherboards are up to the challenge.
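To put the 20 Gb/s figure in perspective, here is a rough conversion to real-world throughput. This assumes the 128b/132b line encoding used by USB 3.2 Gen 2 signaling and ignores protocol overhead, so treat it as an upper bound:

```python
# Rough theoretical throughput of USB 3.2 Gen 2x2: two 10 Gb/s lanes with
# 128b/132b line encoding (assumption); protocol overhead not modeled.
line_rate_gbps = 20
encoding_efficiency = 128 / 132
throughput_gbs = line_rate_gbps * encoding_efficiency / 8  # bytes, not bits
print(f"{throughput_gbs:.2f} GB/s")  # ~2.42 GB/s
```

In practice, sustained transfers land somewhat below that ceiling once protocol framing and device limits are factored in.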
The AORUS line from Gigabyte spans a broad range of products: laptops, peripherals, and core components. Across the enthusiast spectrum, the AORUS name denotes Gigabyte’s gaming-focused products, with the AORUS motherboard range featuring a consistent naming scheme that includes the Pro, Elite, Ultra, Master, and Extreme motherboards. Within this lineup, the Master serves as the high-end mainstream option offering prime features at a high but attainable price point.
The Gigabyte Z590 AORUS Master features a monster 19-phase VRM utilizing 90 A power stages and Gigabyte’s signature finned cooling solution. Both Q-Flash and a dual BIOS have been included, providing a redundant safety net for ambitious overclocking. The Gigabyte Z590 AORUS Master also offers a full-coverage aluminum backplate for added rigidity and additional VRM cooling. Additionally, Gigabyte has included a 10 Gb/s LAN controller from Aquantia. All of the features are in order, so let’s see how the Gigabyte Z590 AORUS Master stacks up against the competition.
1x Q-Flash Plus button
1x Clear CMOS button
2x SMA antenna connectors
1x DisplayPort
1x USB Type-C® port, with USB 3.2 Gen 2×2
5x USB 3.2 Gen 2 Type-A ports (red)
4x USB 3.2 Gen 1 ports
1x RJ-45 port
1x optical S/PDIF Out connector
5x audio jacks
Audio:
1x Realtek ALC1220 Codec
Fan Headers:
9x 4-pin
Form Factor:
ATX; 12.0 x 9.6 in. (30.5 x 24.4 cm)
Exclusive Features:
APP Center
@BIOS
EasyTune
Fast Boot
Game Boost
RGB Fusion
Smart Backup
System Information Viewer
USB TurboCharger
Support for Q-Flash Plus
Support for Q-Flash
Support for Xpress Install
Testing for this review was conducted using a 10th Gen Intel Core i9-10900K. Stay tuned for an 11th Gen update when the new processors launch!
During the announcement of the Radeon RX 6700 XT graphics card, AMD also confirmed that it will bring Smart Access Memory support to the Ryzen 3000 series desktop processors, excluding the Ryzen 5 3400G and the Ryzen 3 3200G.
It seems AMD has taken the community’s feedback into consideration and decided to enable SAM (Smart Access Memory) on Zen2-based processors. The confirmation was made during the “Where Gaming Begins: Episode 3” stream yesterday. AMD also confirmed that SAM will bring about the same performance increase (up to 16%) on systems using Ryzen 3000 CPUs as it does on the ones using Ryzen 5000 CPUs.
AMD Smart Access Memory is a technology based on the PCIe Resizable BAR standard. With SAM/PCIe Resizable BAR enabled, the CPU can address the GPU’s entire video memory at once rather than through a small fixed window, increasing effective bandwidth, which can translate into a performance increase.
This is a move we saw coming. We have previously shared screenshots of Asus and MSI motherboards with SAM enabled while using Zen and Zen2 processors, meaning that the ability to enable it was more of a restriction from AMD than an incompatibility.
Nvidia also recently announced that it will bring PCIe Resizable BAR support to its RTX 30 series graphics cards. Nvidia mentioned that AMD 400-series motherboards would be compatible, but only when using Zen3 processors. Considering AMD’s announcement, Nvidia might extend it to Zen2 processors as well.
KitGuru says: Do you own an AMD Ryzen 3000 series processor? Will you enable SAM once AMD has released the appropriate BIOS for your motherboard?
While we still don’t have an Intel Rocket Lake-S Core i9-11900K CPU to use for testing, Intel Z590 boards have been rolling in. So while we await benchmark results, we’ll be walking in detail through the features of these brand-new boards. First up on our bench was the ASRock Z590 Steel Legend 6E Wi-Fi, followed by the Gigabyte Z590 Aorus Master and Gigabyte’s Z590 Vision G. Today, we take a close look at the MSI MEG Z590 Ace. We’ll have to wait for benchmark results, though, to see if it deserves a spot on our best motherboards list.
The latest version of the Ace board features robust power delivery, four M.2 sockets, a premium audio codec and more. The new Ace also has updated styling on the heatsink and shrouds while still keeping the black with gold highlights theme from the previous generation. Emblazoned on the rear IO is the MSI Dragon (with RGB LEDs) and the Ace name (no lighting). We don’t have an exact price for the MEG Z590 Ace. However, the Z490’s MSRP was $399, so we expect the Z590 version to cost the same or slightly more.
MSI’s current Z590 product stack consists of 11 models, with most falling into the MEG (high-end), MPG (mid-range) and MAG (budget) lineups. We’re greeted by several familiar SKUs and a couple of new ones. Starting at the top is the flagship MEG Z590 Godlike, the Ace we’re looking at now, and a Mini ITX MEG Z590I Unify. The mid-range MPG line consists of four boards (Carbon EK X, Gaming Edge WiFi, Gaming Carbon WiFi and Gaming Force), while the less expensive MAG lineup consists of two boards (Z590 Tomahawk WiFi and Torpedo). Wrapping up the current product stack are two ‘Pro’ boards in the Z590 Pro WiFi and Z590-A Pro. The only thing missing out of the gate is a Micro ATX board, but it’s likely we’ll see one or two down the line.
We can’t talk about Rocket Lake-S performance yet — not that we have a CPU at this time to test boards with anyway. All we’ve seen at this point are rumors and a claim from Intel of a significant increase to IPC. But the core count was lowered from 10 cores/20 threads in Comet Lake (i9-10900K) to 8 cores/16 threads in the yet-to-be-released i9-11900K. To that end, we’ll stick with specifications and features, adding a full review that includes benchmarking, overclocking and power consumption shortly.
MSI’s MEG Z590 Ace includes all the bits you expect from a premium motherboard. The board has a stylish appearance, very capable power delivery (16-phase 90A Vcore) and the flagship Realtek ALC4082 audio codec with included DAC. We’ll cover these features and much more in detail below. First, here are the full specs from MSI.
(1) Intel Wi-Fi 6E AX210 (MU-MIMO, 2.4/5/6GHz, BT 5.2)
USB Controllers
??
HD Audio Codec
Realtek ALC4082
DDL/DTS Connect
✗ / DTS:X Ultra
Warranty
3 Years
The accessories included with the board are reasonably comprehensive, including most of what you need to get started. Below is a full list.
Manual
Quick Installation Guide
USB drive (Drivers)
Cleaning brush
Screwdrivers
Stickers (MEG/Cable)
(4) SATA cables
(4) Screws/standoff sets for M.2 sockets
Thermistor cable
1 to 2 RGB LED Y cable, Corsair RGB LED cable, Rainbow RGB LED cable
DP to mini DP cable
[Gallery: 3 images]
Looking at the Z590 Ace for the first time, we see the black PCB along with black heatsinks and shrouds covering most of the board. MSI stenciled on identifying language such as the MEG Ace name and the MSI Gaming Dragon in gold, setting this SKU apart from the rest. The VRM heatsinks are both made from a solid block of aluminum with lines cut out. Additionally, the shroud is made of metal and connected to the heat pipes, increasing surface area significantly. Also worth noting is the VRM heatsinks share the load connected via heatpipe. RGB LED lighting is minimal here, with a symbol on the chipset shining through a mesh cover on the chipset heatsink and the MSI dragon above the rear IO. While tastefully done, some may want more. With its mostly black appearance, the board won’t have trouble fitting in most build themes.
Focusing on the top half of the board, we get a better look at what’s going on with the VRM heatsinks and other features in this area. In the upper-left corner, we spot two 8-pin EPS connectors, one of which is required for operation. Just below this is the shroud covering the rear IO bits and part of the VRM heatsink. On it is a carbon-fiber pattern along with the MSI Gaming Dragon illuminated by RGB LEDs. The socket area is relatively clean, with only a few caps visible.
Just above the VRM heatsink is the first of eight fan headers. All fan headers on the board are the 4-pin type and support PWM- and DC-controlled fans and pumps. The CPU_FAN1 header supports up to 2A/24W and auto-detects the attached device type. The PUMP_FAN1 supports up to 3A/36W. The rest of the system fan headers support up to 1A/12W. This configuration offers plenty of support for most cooling systems. That said, I would like to have seen all pump headers auto-detect PWM/DC modes instead of only CPU_FAN1.
To the right of the socket are four reinforced DRAM slots. The Z590 Ace supports up to 128GB of RAM with speeds listed up to DDR4 5600 (for one stick with one rank). The highest supported speed with two DIMMs is DDR4 4400+, which is plenty fast enough for an overwhelming majority of users.
Moving down the right edge of the board, we see the 2-character debug LED up top, a system fan header, five voltage read points (Vcore/DRAM/SA/IO/IO2), a 4-LED debug readout, the 24-pin ATX connector, and finally, a USB 3.2 Gen2 Type-C front panel header. Between both debug tools and the voltage read points, you’ll have an accurate idea of what’s going on with your PC.
With the MEG Z590 Ace near the top of the product stack, you’d expect well-built power delivery, and you wouldn’t be wrong. MSI lists the board as 16+2+1 (Vcore/GT/SA), using a Renesas ISL69269 (X+Y+Z = 8+2+1) PWM controller that feeds eight phase doublers (Renesas ISL6617A) and, in turn, 16 90A Renesas ISL99390B power stages for Vcore. This configuration yields 1,440A of current capacity for the CPU, which is plenty for ambient and sub-ambient/extreme overclocking. It won’t be this board holding you back in any overclocking adventures, that’s for sure.
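The 1,440A figure is simple arithmetic on the stage count and per-stage rating; actual sustained delivery is lower and thermally limited:

```python
# Back-of-the-envelope Vcore current budget: stage count x per-stage rating.
# This is a peak-rating sum, not a measured sustained capability.
stages = 16
rating_amps = 90
total_amps = stages * rating_amps
print(total_amps)  # 1440
```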
As we focus on the bottom half, we’ll take a closer look at the integrated audio, PCIe slot configuration and storage. Starting with the audio bits on the left side, under the shroud is Realtek’s latest premium codec, the ALC4082. Additionally, the Z590 Ace includes an ESS Sabre 9018Q2C combo DAC, a dedicated headphone amplifier (up to 600 Ohm) and high-quality Chemicon audio capacitors. This audio solution should be more than adequate for most users.
In the middle of the board are four M.2 sockets and five PCIe slots. All three full-length PCIe slots are reinforced to prevent shearing and EMI, while the two PCIe x1 slots go without reinforcement. The top slot supports PCIe 4.0 x16 speeds, with the second and third slots limited to PCIe 3.0. The full-length slots break down as x16/x0/x4, x8/x8/x4, or x8/x4+x4/x4. This configuration supports 2-Way Nvidia SLI and 2-Way AMD Crossfire technologies. All x1 slots and the full-length bottom slot are fed from the chipset, while the top two full-length slots source their lanes from the CPU.
M.2 storage on the Z590 Ace consists of four onboard sockets supporting various speeds and module lengths. The top slot, M2_1, supports PCIe 4.0 x4 modules up to 110mm. Worth noting on this socket is that it only works with an 11th Gen Intel CPU installed. M2_2, M2_3, M2_4 are fed from the chipset, with M2_2 and M2_3 supporting SATA- and PCIe-based modules up to 80mm, while M2_4 supports PCIe only. M2_2/3/4 are all PCIe 3.0 x4.
The way this is wired, you will lose some SATA ports and PCIe bandwidth depending on the configuration. For example, SATA2 is unavailable when using a SATA-based SSD in the M2_2 socket. SATA 5/6 are unavailable when using the M2_3 socket with any type of device. Finally, the bandwidth on M2_4 switches from x4 to x2 when PCI_E5 (bottom x1 slot) is used. The M.2 sockets support RAID 0/1 for those who would like additional speed or redundancy.
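The sharing rules above can be sketched as a small lookup, purely as an illustration of the text (port names as stated in this review, not MSI's official mapping table):

```python
# Illustrative sketch of the Z590 Ace's M.2/SATA sharing rules as described
# in the text above. Hypothetical helper, not vendor documentation.
def available_sata_ports(m2_2_has_sata_ssd: bool, m2_3_populated: bool) -> set:
    ports = {1, 2, 3, 4, 5, 6}
    if m2_2_has_sata_ssd:   # SATA2 is lost to a SATA-based SSD in M2_2
        ports.discard(2)
    if m2_3_populated:      # SATA 5/6 are lost to any device in M2_3
        ports -= {5, 6}
    return ports

# Worst case: all four M.2 sockets populated, with a SATA drive in M2_2.
print(sorted(available_sata_ports(True, True)))  # [1, 3, 4] -> 3 ports left
```

Even fully loaded with M.2 drives, the board keeps three SATA ports free, which matches the worst-case scenario discussed later in this review.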
Finally, along the right edge of the board are six horizontally oriented SATA ports. The Z590 Ace supports RAID 0, 1 and 10 on the SATA ports. Just be aware you lose a couple of ports on this board if you’re using some of the M.2 sockets. Above these ports is a USB 3.2 Gen1 front panel header along with another 4-pin system fan header.
Across the board’s bottom edge are several headers, including more USB ports, fan headers, and more. Below is the full list, from left to right:
Front Panel Audio
aRGB and RGB headers
(3) System Fan headers
Supplemental PCIe power
Tuning controller connector
Temperature sensor
(2) USB 2.0 headers
LED switch
BIOS selector switch
OC Retry jumper
TPM header
Power and Reset buttons
Slow mode jumpers
Front panel connectors
Moving to the rear IO area, we see the integrated IO plate sporting a black background with gold writing matching the board theme. There are eight USB Type-A ports (two USB 3.2 Gen2, four USB 3.2 Gen1 and two USB 2.0 ports). On the Type-C front, the Z590 Ace includes two Thunderbolt 4 ports capable of speeds up to 40 Gbps. Just to the right of those are Mini-DisplayPort inputs for running video through the Thunderbolt connection(s). Handling the video output for the CPU’s integrated graphics is a single HDMI (2.0b) port. We also spy here the Wi-Fi antenna connections, 5-plug plus SPDIF audio stack, Intel 2.5 GbE and finally, a Clear CMOS button and BIOS Flashback button that can be used without a CPU.
Software
For Z590, MSI has changed up its software offerings. We used to have several individual programs to adjust the system, but MSI has moved to an all-in-one application called MSI Center with this board. The new software is a central repository for the dozen utilities MSI offers. These include Mystic Light (RGB control), AI Cooling (fan-speed adjustment), LAN Manager (NIC control), Speed Up (for storage), and Gaming Mode (auto-tunes games), among several others (see the screenshots below for details). The User Scenario application has a couple of presets for system performance and is where you manually adjust settings, including CPU clock speeds and voltage, RAM timings, and more. Overall, I like the move to a single application. The user interface is easy to read and get around in. However, sometimes loading these applications takes longer than I would like. But MSI Center does an excellent job of pulling everything in.
[Gallery: 10 images]
Firmware
To give you a taste of the firmware, we’ve gathered screenshots showing most BIOS screens. MSI’s BIOS differs from other board partners’ in that the headings aren’t at the top but are split out to the sides. In each section, all the frequently used options are easy to find and not buried deep within menus. Overall, MSI didn’t change much here in moving from Z490 to Z590, and its BIOS continues to be easy to use.
[Gallery: 23 images]
Future Tests and Final Thoughts
With Z590 boards arriving but no Rocket Lake-S CPUs yet, we’re in an odd place. We know most of these boards should perform similarly to our previous Z490 motherboard reviews. And while there will be exceptions, they’re likely mostly at the bottom of the product stack. To that end, we’re posting these as detailed previews until we can get data using a Rocket Lake processor.
Once we receive a Rocket Lake CPU and as soon as any embargos have expired, we’ll fill in the data points, including the benchmarking/performance results, as well as overclocking/power and VRM temperatures.
We’ll also be updating our test system hardware to include a PCIe 4.0 video card and storage, so we can utilize the platform to its fullest using the fastest protocols it supports. We will also update to the latest Windows 10 64-bit OS (20H2) with all threat mitigations applied, and use the newest video card driver available when we start this testing. We use the latest non-beta motherboard BIOS available to the public unless otherwise noted. While we don’t have performance results from the yet-to-be-released Rocket Lake CPU, we’re confident the 90A VRMs will handle the i9-11900K processor without issue. A quick test with the i9-10900K found the board quite capable with that CPU, easily allowing the 5.2 GHz overclock we set. For now, we’ll focus on features, price and appearance until we gather performance data from the new CPU.
The MSI MEG Z590 Ace is a premium motherboard adorned with several high-end features, including a very robust VRM capable of handling 10th and 11th generation flagship Intel processors at both stock speeds and overclocked. Additionally, the board includes four M.2 sockets, 2.5 GbE and integrated Wi-Fi 6E, and two Thunderbolt 4 ports for increased bandwidth and peripheral flexibility.
The MEG Z590 Ace’s 16-phase 90A VRM handled our i9-10900K without issue, even overclocked to 5.2 GHz. We’ll retest once we receive our Rocket Lake-based i9-11900K, but so long as the BIOS is right, it shouldn’t pose any problems for this board. Although it has four M.2 sockets, using them causes SATA ports to drop (unlike on the Gigabyte Z590 Vision G), because more lanes are tied to the chipset on this board. That said, even in the worst-case scenario, you can run four M.2 modules and still have three SATA ports left over. Most users should find this acceptable.
As far as potential drawbacks go, the price point of $400-plus will be out of reach for some users. Another concern for some may be the lack of RGB elements on the board. The MSI dragon and chipset heatsink light up with RGB LEDs, but that’s it. If you like a lot of RGB LED bling, you can add it via the four aRGB/RGB headers located around the board. The other drawback is the lack of a USB 3.2 Gen2x2 Type-C port, but the faster Thunderbolt 4 ports certainly make up for that.
Direct competitors at this price point are the Asus ROG Strix Z590-E Gaming, Gigabyte Z590 Aorus Master, and the ASRock Z590 Taichi. All of these boards are plenty capable, with the differences residing in VRMs (Gigabyte gets the nod here), M.2 storage (MSI and Gigabyte both have four sockets) and audio (the Ace has the most premium codec). Beauty is in the eye of the beholder, but if you forced me to pick among these, the Taichi would be the board I’d want to show off the most. That said, none of these boards is a turnoff, and each has its own benefits over the others.
The Ace’s appearance, including the brushed aluminum and carbon fiber-like finish, really gives it a premium look and feel, while easily blending in with your build theme. If your budget allows for a ~$400 motherboard and you’re looking for a lot of M.2 storage and enjoy a premium audio experience, the MEG Z590 Ace is an excellent option near that price point. Stay tuned for benchmarking, overclocking, and power results using the new Rocket Lake CPU.
Intel’s 11th Generation Rocket Lake processors aren’t due until March 30. However, some retailers are already shipping out orders. One user from the Chiphell forums has gotten his hands on a retail Core i7-11700K, and it would appear that Intel is using a similar memory overclocking concept as AMD’s Infinity Fabric Clock (FCLK), but with Rocket Lake chips.
If you’re not familiar with AMD’s Ryzen processors, many of which sit on our best CPUs list, the FCLK dictates the frequency of the Infinity Fabric, which serves as an interconnect across the chiplets. Adjusting this value allows you to hit higher memory frequency overclocks. By default, the FCLK is synchronized with the unified memory controller clock (UCLK) and memory clock (MEMCLK). Obviously, you can run the FCLK in asynchronous mode, but doing so will induce a latency penalty that negatively impacts performance.
Since Rocket Lake isn’t officially out yet, we’re not completely sure how Intel’s take on FCLK-style memory overclocking will work. The BIOS screenshot shows two operational modes for the CPU’s IMC (integrated memory controller) and the DRAM clock on MSI’s Z490I Unify. Apparently, Gear 1 runs both at a 1:1 ratio, while Gear 2 runs the IMC at half the memory clock. It’s similar to how the FCLK works on Ryzen processors.
According to the author of the forum post, his retail Core i7-11700K seems to hit a wall at DDR4-3733, suggesting that DDR4-3733 is the limit at which Rocket Lake’s IMC can run in a 1:1 ratio with the memory clock. For comparison, the majority of AMD’s Zen 2 processors scale to an 1,800 MHz FCLK (DDR4-3600), with some samples hitting a 1,900 MHz FCLK (DDR4-3800). If Rocket Lake has the same limits, it’s going to lose points, since AMD’s latest Zen 3 processors have peaked as high as a 2,000 MHz FCLK (DDR4-4000) before breaking synchronous operation.
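The clock arithmetic behind these numbers is straightforward: the DDR data rate is double the memory clock, and Gear 1 ties the IMC to that clock while Gear 2 halves it. A minimal sketch, using the gear terminology from the screenshot:

```python
# DDR data rate (MT/s) -> required IMC clock (MHz) under each gear mode.
# Gear 1: IMC runs at the memory clock; Gear 2: at half the memory clock.
def imc_clock_mhz(ddr_rate_mts: int, gear: int) -> float:
    memclk = ddr_rate_mts / 2
    return memclk if gear == 1 else memclk / 2

print(imc_clock_mhz(3733, 1))  # 1866.5 MHz -- the reported Gear 1 wall
print(imc_clock_mhz(4000, 2))  # 1000.0 MHz
```

This is why a DDR4-3733 Gear 1 wall maps neatly onto AMD's ~1,800-1,900 MHz FCLK limits: both describe the fastest clock the controller can hold in synchronous operation.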
[Gallery: 2 images]
It’s too soon to pass judgment on whether DDR4-3733 is a hard cap built into the Rocket Lake silicon itself or merely a product of early, unoptimized microcode. We should point out that the user did his testing on a MEG Z490I Unify motherboard, so proper firmware is required for Rocket Lake to operate correctly. The Chiphell forum user provided some RAM benchmarks that reportedly show the performance impact.
With a DDR4-4000 memory kit at 18-20-20-40 1T timings in asynchronous mode, the user got a latency of 61.3 nanoseconds in AIDA64. Switching over to a DDR4-3600 kit at 14-14-14-34 2T allowed him to decrease the latency to 50.2 nanoseconds, which represents an 18.1% reduction. However, we have to take certain points into consideration. For one, the DDR4-4000 kit’s loose timings contribute to its higher latency. Furthermore, the user evidently overclocked the Core i7-11700K’s uncore frequency to 4,100 MHz on the DDR4-3600 run, which probably skewed the results in that configuration’s favor as well.
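A quick check of the quoted numbers confirms the stated reduction:

```python
# Verifying the reported latency improvement: 61.3 ns (async DDR4-4000)
# down to 50.2 ns (synchronous DDR4-3600).
before_ns, after_ns = 61.3, 50.2
reduction_pct = (before_ns - after_ns) / before_ns * 100
print(f"{reduction_pct:.1f}%")  # 18.1%
```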
We’ll have to wait until the Rocket Lake processors are available to investigate the matter thoroughly. So far, a DDR4-3733 limit certainly doesn’t bode well for Rocket Lake, especially when some of the really pricey Z590 motherboards are advertising memory support above DDR4-5000. In all fairness, Rocket Lake only natively supports memory up to DDR4-3200 so anything higher is technically overclocking in Intel’s book.
The bar has just been lowered (in a good way!) for Resizable BAR, the PCI-Express graphics feature that lets CPUs directly access a GPU’s onboard memory to improve gaming frame rates. That’s because AMD just announced it’s bringing the tech to its last-gen Ryzen 3000 series processors, not just the new Ryzen 5000 chips that initially launched with the feature.
AMD originally debuted the feature as “AMD Smart Access Memory,” and you specifically needed an AMD Ryzen 5000 CPU and an AMD Radeon RX 6000 graphics card to make it work. That wasn’t a particularly easy sell, considering both have been incredibly hard to find at retail since they first debuted.
But it was an easy sell for Nvidia and Intel, which announced in January that they’d be adopting Resizable BAR, with initial support for Nvidia’s RTX 3000-series laptop GPUs and a later rollout to RTX 3000-series desktop cards when paired with AMD CPUs or a selection of 11th Gen and 10th Gen Intel CPUs. Nvidia just launched support for the new RTX 3060 desktop graphics card last week, with its other GPUs coming in late March (though you’ll need a motherboard BIOS update, too).
With Ryzen 3000, AMD is actually promising up to 16 percent more performance, compared with the 10 percent both AMD and Nvidia previously touted, though the gains will really depend on the game. TechSpot discovered that some games could see a 20 percent boost with an AMD CPU and AMD GPU combination, while other games actually had reduced performance.
Because of the possible downsides, Nvidia decided to only turn it on for certain games where there’s a benefit, with the first wave including Assassin’s Creed Valhalla, Battlefield V, Borderlands 3, Forza Horizon 4, Gears 5, Metro Exodus, Red Dead Redemption 2, and Watch Dogs: Legion.