ASRock Z590 Steel Legend WiFi 6E Review

Introduction

After almost a decade of total market dominance, Intel has spent the past few years on the defensive. AMD’s Ryzen processors continue to improve year over year, with the most recent Ryzen 5000 series taking the crown of best gaming processor, Intel’s last bastion of superiority.

Now, with a booming hardware market, Intel is preparing to make up some of that lost ground with the new 11th Gen Intel Core Processors. Intel claims these new 11th Gen CPUs offer double-digit IPC improvements despite remaining on a 14 nm process. The top-end 8-core Intel Core i9-11900K may not be able to compete with its AMD rival, the Ryzen 9 5900X, in heavily multi-threaded scenarios, but the higher clock speeds and alleged IPC improvements could be enough to take back the gaming crown. Along with the new CPUs comes a new chipset to match, the Intel Z590. Last year’s Z490 chipset motherboards are also compatible with the new 11th Gen Intel Core Processors, but Z590 introduces some key advantages.

First, Z590 offers native PCIe 4.0 support from the CPU, which means the PCIe and M.2 slots powered off the CPU offer PCIe 4.0 connectivity when an 11th Gen CPU is installed. The PCIe and M.2 slots controlled by the Z590 chipset are still PCIe 3.0. While many high-end Z490 motherboards advertised this capability, it was not a standard feature for the platform. In addition to PCIe 4.0 support, Z590 offers USB 3.2 Gen 2×2 from the chipset, a standard that provides speeds of up to 20 Gb/s. Finally, Z590 boasts native support for 3200 MHz DDR4 memory. With these upgrades, Intel’s Z series platform reaches feature parity with AMD’s B550. On paper, Intel is catching up to AMD, but only testing will tell whether these new Z590 motherboards are up to the challenge.

The ASRock Z590 Steel Legend WiFi 6E aims to be a durable, dependable platform for the mainstream market. It features a respectable 14-phase VRM built around 50 A power stages from Vishay, and ASRock has also included a 2.5 Gb/s LAN controller from Realtek as well as the latest Wi-Fi 6E connectivity. The ASRock Z590 Steel Legend WiFi 6E packs in all the mainstream features most users need at a reasonable price. All that is left is to see how it stacks up against the competition!

Specifications

CPU Support: 10th / 11th Gen Intel Core Processors
Power Design: CPU Power: 14-phase
Memory Power: 2-phase
Chipset: Intel Z590
Integrated Graphics: Dependent on installed CPU
Memory: 4x DIMM, dual-channel DDR4 up to 4800+ MHz (OC)
BIOS: AMI UEFI BIOS
Expansion Slots: 2x PCIe x16 Slots (x16/x4)
3x PCIe 3.0 x1 Slots
Storage: 6x SATA 6 Gb/s Ports
3x M.2 Ports* (SATA3/PCIe 3.0 x4)
Networking: 1x Realtek RTL8125BG 2.5 Gb/s LAN
1x Intel Wi-Fi 6E AX210
Rear Ports: 2x Antenna Ports
1x PS/2 Mouse/Keyboard Port
1x HDMI Port
1x DisplayPort 1.4
1x Optical SPDIF Out Port
1x USB 3.2 Gen2 Type-A Port
1x USB 3.2 Gen2 Type-C Port
2x USB 3.2 Gen1 Ports
2x USB 2.0 Ports
1x RJ-45 LAN Port
5x HD Audio Jacks
Audio: 1x Realtek ALC897 Codec
Fan Headers: 7x 4-pin
Form Factor: ATX; 12.0 x 9.6 in. (30.5 x 24.4 cm)
Exclusive Features:
  • ASRock Super Alloy
  • XXL Aluminium Alloy Heatsink
  • Premium Power Choke
  • 50A Dr.MOS
  • Nichicon 12K Black Caps
  • I/O Armor
  • Shaped PCB Design
  • Matte Black PCB
  • High Density Glass Fabric PCB
  • 2oz copper PCB
  • 2.5G LAN
  • Intel® 802.11ax Wi-Fi 6E
  • ASRock Steel Slot
  • ASRock Full Coverage M.2 Heatsink
  • ASRock Hyper M.2 (PCIe Gen4x4)
  • ASRock Ultra USB Power
  • ASRock Full Spike Protection
  • ASRock Live Update & APP Shop

Testing for this review was conducted using a 10th Gen Intel Core i9-10900K. Stay tuned for an 11th Gen update when the new processors launch!


GPU Test System Update March 2021

Introduction

TechPowerUp is one of the most highly cited graphics card review sources on the web, and we strive to keep our testing methods, game selection, and, most importantly, test bench up to date. Today, I am pleased to announce our newest March 2021 VGA test system, which brings several firsts for TechPowerUp. This is our first graphics card test bed powered by an AMD CPU: we are using the Ryzen 7 5800X 8-core processor based on the “Zen 3” architecture. The new test setup fully supports the PCI-Express 4.0 x16 bus interface to maximize performance of the latest generation of graphics cards from both NVIDIA and AMD. The platform also enables the Resizable BAR feature defined by the PCI-SIG, which allows the processor to see the whole video memory as a single addressable block and could potentially improve performance.

A new test system means completely re-testing every single graphics card used in our performance graphs. It allows us to kick out some of the older graphics cards and game tests to make room for newer cards and games. It also allows us to refresh our OS and testing tools, update games to their latest versions, and explore new game settings, such as real-time raytracing and newer APIs.

A VGA rebench is a monumental task for TechPowerUp. This time, I’m testing 26 graphics cards in 22 games at 3 resolutions, or 66 game tests per card, which works out to 1,716 benchmark runs in total. In addition, we have doubled our raytracing testing from two to four titles. We also made some changes to our power consumption testing, which is now more detailed and more in-depth than ever.
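For those curious how that figure comes together, the arithmetic is simply cards × games × resolutions; here is a trivial Python sketch using only the numbers quoted above:

```python
# Benchmark workload described above: 26 cards, 22 games, 3 resolutions.
cards = 26
games = 22
resolutions = 3

tests_per_card = games * resolutions   # 66 game tests per card
total_runs = cards * tests_per_card    # 1,716 benchmark runs in total

print(f"{tests_per_card} tests per card, {total_runs} runs in total")
```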

In this article, I’ll share some thoughts on what was changed and why, while giving you a first look at the performance numbers obtained on the new test system.

Hardware

Below are the hardware specifications of the new March 2021 VGA test system.

Test System – VGA 2021.1
Processor: AMD Ryzen 7 5800X @ 4.8 GHz
(Zen 3, 32 MB Cache)
Motherboard: MSI B550-A Pro
BIOS 7C56vA5 / AGESA 1.2.0.0
Memory: Thermaltake TOUGHRAM, 16 GB DDR4
@ 4000 MHz 19-23-23-42 1T
Infinity Fabric @ 2000 MHz (1:1)
Cooling: Corsair iCue H100i RGB Pro XT
240 mm AIO
Storage: Crucial MX500 2 TB SSD
Power Supply: Seasonic Prime Ultra Titanium 850 W
Case: darkFlash DLX22
Operating System: Windows 10 Professional 64-bit
Version 20H2 (October 2020 Update)
Drivers: AMD: 21.2.3 Beta
NVIDIA: 461.72 WHQL

The AMD Ryzen 7 5800X has emerged as the fastest processor we can recommend to gamers for play at any resolution. We could have gone with the 12-core Ryzen 9 5900X or even maxed out this platform with the 16-core 5950X, but neither would be faster at gaming, and both would be significantly more expensive. AMD certainly wants to sell you the more expensive (overpriced?) CPU, but the Ryzen 7 5800X is actually the fastest option for gaming because of its single-CCD architecture. Our goal with GPU test systems over the past decade has consistently been to use the fastest mainstream-desktop processor. Over the years, this meant a $300-something Core i7 K-series LGA115x chip making room for the $500 i9-9900K. The 5900X doesn’t sell for anywhere close to this mark, and we’d rather not use an overpriced processor just because we can. You’ll also notice that we skipped upgrading from the older i9-9900K to the 10-core “Comet Lake” Core i9-10900K because it offered only negligible gaming performance gains, especially considering the large overclock on our i9-9900K. The additional two cores do squat for nearly all gaming situations, which is the second reason besides pricing that had us decide against the Ryzen 9 5900X.

We continue using our trusted Thermaltake TOUGHRAM 16 GB dual-channel memory kit that has served us well for many years. 32 GB isn’t anywhere close to needed for gaming, so I didn’t want to hint that it is, especially to less experienced readers checking out the test system. We’re running the most desirable memory configuration for Zen 3 to reduce latencies inside the processor: Infinity Fabric at 2000 MHz and memory clocked at DDR4-4000, in 1:1 sync with the Infinity Fabric clock. Timings are at a standard CL19 configuration that’s easily found on affordable memory modules; spending extra for super-tight timings is usually overkill and not worth it for the added performance.
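For readers unfamiliar with the 1:1 terminology: DDR4 transfers data twice per clock, so DDR4-4000 corresponds to a 2000 MHz memory clock, which can then run in lockstep with a 2000 MHz Infinity Fabric clock. A minimal Python sketch of that relationship, using the values from the table above:

```python
# DDR4 is double data rate: the real memory clock is half the transfer rate.
ddr_transfer_rate = 4000          # DDR4-4000, in MT/s
fclk_mhz = 2000                   # Infinity Fabric clock set in the BIOS

mclk_mhz = ddr_transfer_rate / 2  # 2000 MHz memory clock
ratio = fclk_mhz / mclk_mhz       # 1.0 means the desirable 1:1 (coupled) mode

print(f"MCLK = {mclk_mhz:.0f} MHz, FCLK:MCLK = {ratio:g}:1")
```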

The MSI B550-A PRO was an easy choice for a motherboard. We wanted a cost-effective board for the Ryzen 7 5800X and don’t care at all about RGB or other bling. The board can handle the CPU and memory settings we wanted for this test bed, and the VRM barely gets warm. It also doesn’t come with any PCIe gymnastics: a simple PCI-Express 4.0 x16 slot is wired to the CPU without any lane switches along the way. The slot is metal-reinforced and looks like it can take quite some abuse over time. Even though I admittedly swap cards hundreds of times each year, probably even 1000+ times, that has never been an issue; the insertion force just gets a bit softer, which I actually find nice.

Software and Games

  • Windows 10 was updated to 20H2
  • The AMD graphics driver used for all testing is now 21.2.3 Beta
  • All NVIDIA cards use 461.72 WHQL
  • All existing games have been updated to their latest available version

The following titles were removed:

  • Anno 1800: old, not that popular, CPU limited
  • Assassin’s Creed Odyssey: old, DX11, replaced by Assassin’s Creed Valhalla
  • Hitman 2: old, replaced by Hitman 3
  • Project Cars 3: not very popular, DX11
  • Star Wars: Jedi Fallen Order: horrible EA Denuvo makes hardware changes a major pain, DX11 only, Unreal Engine 4, of which we have several other titles
  • Strange Brigade: old, not popular at all

The following titles were added:

  • Assassin’s Creed Valhalla
  • Cyberpunk 2077
  • Hitman 3
  • Star Wars Squadrons
  • Watch Dogs: Legion

I considered Horizon Zero Dawn, but rejected it because it uses the same game engine as Death Stranding. World of Warcraft and Call of Duty won’t be tested because of their always-online nature, which enforces game patches that mess with performance—at any time. Godfall is a bad game, an Epic exclusive, and a commercial flop.

The full list of games now consists of Assassin’s Creed Valhalla, Battlefield V, Borderlands 3, Civilization VI, Control, Cyberpunk 2077, Death Stranding, Detroit Become Human, Devil May Cry 5, Divinity Original Sin 2, DOOM Eternal, F1 2020, Far Cry 5, Gears 5, Hitman 3, Metro Exodus, Red Dead Redemption 2, Sekiro, Shadow of the Tomb Raider, Star Wars Squadrons, The Witcher 3, and Watch Dogs: Legion.

Raytracing

We previously tested raytracing using Metro Exodus and Control. For this round of retesting, I added Cyberpunk 2077 and Watch Dogs Legion. While Cyberpunk 2077 does not support raytracing on AMD, I still felt it was one of the most important titles to test raytracing with.

While Godfall and DIRT 5 support raytracing, too, neither has had sufficient commercial success to warrant inclusion in the test suite.

Power Consumption Testing

The power consumption testing changes have been live for a couple of reviews already, but I still wanted to detail them a bit more in this article.

After our first Big Navi reviews, I realized that something was odd about the power consumption testing method I’d been using for years without issue. It seemed the Radeon RX 6800 XT was just SO much more energy efficient than NVIDIA’s RTX 3080. It definitely is more efficient because of the 7 nm process and AMD’s monumental improvements in the architecture, but the lead just didn’t look right. After further investigation, I realized that the RX 6800 XT was getting CPU bottlenecked in Metro: Last Light even at the higher resolutions, whereas the NVIDIA card ran without a bottleneck. This of course meant NVIDIA’s card consumed more power in this test because it could run faster.

The problem here is that I used the power consumption numbers from Metro for the “Performance per Watt” results under the assumption that the test loaded the card to the max. The underlying reason for the discrepancy is AMD’s higher DirectX 11 overhead, which only manifested itself enough to make a difference once AMD actually had cards able to compete in the high-end segment.

While our previous physical measurement setup was better than what most other reviewers use, I always wanted something with a higher sampling rate, better data recording, and a more flexible analysis pipeline. Previously, we recorded at 12 samples per second, but could only store minimum, maximum, and average. Starting and stopping the measurement process was a manual operation, too.

The new data acquisition system also uses professional lab equipment and collects data at 40 samples per second, which is four times faster than even NVIDIA’s PCAT. Every single data point is recorded digitally and stashed away for analysis. Just like before, all our graphics card power measurement is “card only”, not the “whole system” or “GPU chip only” (the number displayed in the AMD Radeon Settings control panel).
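To illustrate what can be done with a fully recorded trace, here is a rough Python sketch of how per-sample card-power readings might be reduced to the kind of figures we report. This is a hypothetical helper for illustration only, not our actual tooling; the sample values are made up, and at 40 samples per second each reading covers 25 ms.

```python
# Hypothetical post-processing of a logged power trace; not TechPowerUp's actual tooling.
# Input: card-only power readings in watts, captured at 40 samples per second (25 ms apart).

def summarize(samples_w):
    """Reduce a per-sample power log to an average and a peak value."""
    average_w = sum(samples_w) / len(samples_w)  # typical sustained draw
    peak_w = max(samples_w)                      # highest single 25 ms reading
    return average_w, peak_w

# Made-up readings purely for illustration, e.g. around a power-limit excursion.
trace = [210.0, 215.0, 400.0, 630.0, 340.0, 220.0]
avg, peak = summarize(trace)
print(f"average: {avg:.1f} W, peak: {peak:.1f} W")
```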

Having all data recorded means we can finally chart power consumption over time, which makes for a nice overview. Below is an example data set for the RTX 3080.

The “Performance per Watt” chart has been simplified to “Energy Efficiency” and is now based on the actual power and FPS achieved during our “Gaming” power consumption testing run (Cyberpunk 2077 at 1440p, see below).
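Put differently, the new metric is average FPS divided by average card power during that run, which works out to frames per joule. A short sketch with purely illustrative numbers:

```python
# Energy efficiency: average FPS divided by average card-only power.
# FPS per watt equals frames per joule, since 1 W = 1 J/s.
avg_fps = 95.0       # illustrative value for a Cyberpunk 2077 1440p run
avg_power_w = 250.0  # illustrative card-only average power during the same run

efficiency = avg_fps / avg_power_w
print(f"{efficiency:.2f} frames per joule")
```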

The individual power tests have also been refined:

  • “Idle” testing is now measured at 1440p, whereas it previously used 1080p. This follows the increasing adoption of high-resolution monitors.
  • “Multi-monitor” is now 2560×1440 over DP + 1920×1080 over HDMI—to test how well power management works with mixed resolutions over mixed outputs.
  • “Video Playback” records power usage of a 4K30 FPS video that’s encoded with H.264 AVC at 64 Mbps bitrate—similar enough to most streaming services. I considered using something like madVR to further improve video quality, but rejected it because I felt it to be too niche.
  • “Gaming” power consumption is now using Cyberpunk 2077 at 1440p with Ultra settings—this definitely won’t be CPU bottlenecked. Raytracing is off, and we made sure to heat up the card properly before taking data. This is very important for all GPU benchmarking—in the first seconds, you will get unrealistic boost rates, and the lower temperature has the silicon operating at higher efficiency, which screws with the power consumption numbers.
  • “Maximum” uses Furmark at 1080p, which pushes all cards into their power limiter—another important data point.
  • Somewhat as a bonus, and I really wasn’t sure how useful it would be, I added another run of Cyberpunk at 1080p, capped to 60 FPS, to simulate a “V-Sync” usage scenario. Running at V-Sync not only removes tearing, but also reduces the power consumption of the graphics card, which is perfect for slower single-player titles where you don’t need the highest FPS and would rather conserve some energy and have less heat dumped into your room. Just to clarify, we’re technically running a 60 FPS soft cap so that weaker cards that can’t hit 60 FPS (GTX 1650S and GTX 1660) won’t drop to 60/30/20 FPS V-Sync steps, but instead run as fast as they can.
  • Last but not least, a “Spikes” measurement was added, which reports the highest 20 ms spike recorded in this whole test sequence. This spike usually appears at the start of Furmark, before the card’s power limiting circuitry can react to the new conditions. On RX 6900 XT, I measured well above 600 W, which can trigger the protections of certain power supplies, resulting in the machine suddenly turning off. This happened to me several times with a different PSU than the Seasonic, so it’s not a theoretical test.

Radeon VII Fail

Since we’re running with Resizable BAR enabled, we also have to boot with UEFI instead of CSM. When it was time to retest the Radeon VII, I got no POST, and it seemed the card was dead. Since there’s plenty of drama around Radeon VII cards suddenly dying, I already started looking for a replacement, but wanted to give it another chance in another machine, where it worked perfectly fine. WTF?

After some googling, I found our article detailing the lack of UEFI support on the Radeon VII. So that was the problem: the card simply didn’t have the BIOS update AMD released after our article. Well, FML, the page with that BIOS update no longer exists on AMD’s website.

Really? Someone on their web team made the decision to just delete the pages that contain an important fix to get the product working, a product that’s not even two years old? (The card launched Feb 7, 2019; the page was removed no later than Nov 8, 2020.)

Luckily, I found the updated BIOS in our VGA BIOS collection, and the card is working perfectly now.

Performance results are on the next page. If you have more questions, please do let us know in the comments section of this article.


Twitter tries to fix problematic image crops by not cropping pictures anymore

Twitter has devised a potential solution to its problematic image cropping issue: no more cropping. The company said on Wednesday it’s now testing a “what you see is what you get” image preview within the tweet compose box and experimenting with displaying full-frame images. That way, images will show up in the Twitter timeline looking just as they did when the user was composing the tweet.

“Now testing on Android and iOS: when you Tweet a single image, how the image appears in the Tweet composer is how it will look on the timeline –– bigger and better,” the company wrote in its announcement tweet on the new feature test. Twitter also says it’s testing new 4K image uploading on Android and iOS as part of a broader push “to improve how you can share and view media on Twitter.”

Now testing on Android and iOS: when you Tweet a single image, how the image appears in the Tweet composer is how it will look on the timeline –– bigger and better. pic.twitter.com/izI5S9VRdX

— Twitter Support (@TwitterSupport) March 10, 2021

With the new image preview change, there should be fewer algorithmic surprises — like the ones several users brought attention to last fall, which showed how the company’s automated cropping tool quite often favored white faces over Black ones. In many of those cases, irregularly sized images shared on Twitter were automatically cropped behind the scenes by an AI-powered algorithm, but in ways that raised some troubling questions about how the software prioritized skin color and other factors.

Twitter at the time said the neural network it uses for automated image cropping had been tested for racial bias, and the company claimed it found none. But it also admitted it needed to perform more analysis and refine its approach to avoid situations like this, where even the appearance of bias was a possibility.

“It’s clear that we’ve got more analysis to do. We’ll open source our work so others can review and replicate,” wrote Twitter communications lead Liz Kelley in the aftermath of the controversy going viral. “Just because a system shows no statistical bias, doesn’t mean it won’t cause harm.” Kelley said Twitter would rely “less on auto-cropping so more often the photo you see in the Tweet composer is what it will look like in the Tweet.”

we are going to rely less on auto-cropping so more often the photo you see in the Tweet composer is what it will look like in the Tweet

— liz kelley (@lizkelley) October 1, 2020

Twitter’s Parag Agrawal, the company’s chief technology officer, later wrote a blog post delving into the issue at length, saying at the time that Twitter would be conducting “additional analysis to add further rigor to our testing” and that it was “committed to sharing our findings and… exploring ways to open-source our analysis so that others can help keep us accountable.”

Now, it looks like Twitter’s proposed solution is here, at least in a test phase. While tweets in standard aspect ratios will be identical when previewed in the compose window and displayed in the timeline, Twitter’s design chief Dantley Davis says extra-wide or tall images will be center cropped for those included in the test. Twitter has not shared a concrete timeline for when this change may be pushed live for all users.

With this test, we hope to learn if this new approach is better and what changes we need to make to provide a “what you see is what you get” experience for Tweets with images.

— Dantley Davis (@dantley) March 10, 2021


Pico4ML Brings Machine Learning To the Raspberry Pi Pico


The Raspberry Pi Pico wouldn’t be the first board that comes to mind for machine learning, but it seems that the $4 board may be a viable platform for machine learning projects. The Pico4ML from Arducam is an RP2040-based board with an onboard camera, screen, and microphone that looks to be the same size as the Raspberry Pi Pico.

Arducam is probably better known for its range of cameras for the Raspberry Pi and Nvidia Jetson boards, but since the release of the Raspberry Pi Pico, the company has been tinkering with machine learning projects powered by the Pico. The Arducam Pico4ML is its first RP2040-based board and the first such board to combine an onboard camera, a microphone that can be used for “wake word” detection, a screen, and an Inertial Measurement Unit (IMU) that can detect gestures.

The Pico4ML is intended for machine learning and artificial intelligence projects based around Tiny Machine Learning (TinyML). The TensorFlow Lite Micro library has been ported to the RP2040, opening up a whole new world of projects for the $4 microcontroller. The Arducam Pico4ML is at its heart still a Raspberry Pi Pico, and so it should be compatible with accessories designed for the Pico.
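To give a flavor of the TinyML workflow a board like this relies on, below is a rough Python sketch of how a small TensorFlow model might be converted into an int8-quantized TensorFlow Lite flatbuffer before being embedded into RP2040 firmware. This is a generic example, not Arducam’s actual build process; the model path, the 96x96 grayscale input shape, and the representative-data generator are placeholder assumptions.

```python
import tensorflow as tf

# Placeholder path to a trained model; not an Arducam artifact.
converter = tf.lite.TFLiteConverter.from_saved_model("person_detection_saved_model")

# Full integer quantization keeps the model small enough for a microcontroller.
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.int8
converter.inference_output_type = tf.int8

def representative_data_gen():
    # Placeholder calibration data; a real project feeds genuine sample images here.
    for _ in range(100):
        yield [tf.random.uniform((1, 96, 96, 1), minval=0.0, maxval=1.0)]

converter.representative_dataset = representative_data_gen
tflite_model = converter.convert()

with open("person_detect.tflite", "wb") as f:
    f.write(tflite_model)

# The resulting .tflite file is typically converted to a C array (e.g. with `xxd -i`)
# and compiled into the firmware together with the TensorFlow Lite Micro runtime.
```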

In the latest demo video, we see the Pico4ML detecting two subjects in real time: a real person and a Mario action figure. The board reports a percentage value to show how certain it is that the subject in the frame is a person while providing a live camera feed.


At this time, we don’t know how much this board will cost or when it will be available, but we do know that Tom’s Hardware will receive one for review in the next few weeks.