Neo Forza is a relatively young manufacturer of DRAM memory modules and flash memory products. The Taiwanese company was founded in 2018 as the enthusiast-focused brand of Goldkey, a well-established producer of computer hardware that had focused on OEM manufacturing until recently.
Today’s review covers the Neo Forza eSports M.2 NVMe SSD, also known as the NFP075. “eSports” is not a product range, but the name of this specific drive; a future Gen 4 drive would be called “Esports4x4,” according to Neo Forza. Under the hood, the NFP075 is powered by a Phison PS5012-E12S controller paired with 3D TLC NAND from Chinese state-backed flash memory maker Yangtze Memory Technologies Co (YMTC)—the first YMTC flash I’ve ever reviewed! A DRAM chip from Kingston is included, too, and PCI-Express 3.0 x4 serves as the host interface.
The Neo Forza eSports is available in capacities of 256 GB, 512 GB, 1 TB, and 2 TB. Endurance for these models is set at 420 TBW, 890 TBW, 1350 TBW, and 1550 TBW respectively. Neo Forza provides a three-year warranty for the eSports SSD.
Specifications: Neo Forza eSports NFP075 1 TB
Brand: Neo Forza
Model: NFP075PCI1T-3400200
Capacity: 1024 GB (953 GB usable), no additional overprovisioning
In a bid to sell more products to their loyal customers, many hardware makers these days are adding new product categories. Earlier this year at CES, MSI outlined plans to start offering SSDs under its newly introduced Spatinum brand.
At the time the company only announced its flagship model featuring a PCIe 4.0 x4 interface rated for a 7,000 MB/s read speed as well as a 6,900 MB/s write speed, but it turns out MSI has readied a range of drives.
MSI has registered dozens of Spatinum SSD models with the Eurasian Economic Commission (EEC) in a bid to supply them to countries that belong to the Eurasian Economic Union, as discovered by PC Gamer. Not all products registered with the EEC actually reach the market, but at least some of them do. If MSI proceeds with what it registered, its lineup will include three product families comprising eight model series in total, reports ComputerBase:
The Spatinum M400: top-of-the-range SSDs with a PCIe 4.0 interface. Expected to include M480, M471, and M470 models for different sub-segments of the market. Capacities set to range from 500GB to 2TB.
The Spatinum M300: mainstream SSDs with a PCIe 3.0 interface and capacity points from 256GB to 2TB. Projected to feature M381, M380, M371, and M370 model ranges.
The Spatinum S200: entry-level drives in a 2.5-inch form factor with a SATA interface that will start at 240GB and top out at 1TB.
We don’t yet know the specs of MSI’s drives, but PC and hardware makers typically choose off-the-shelf designs from companies like Phison and Silicon Motion, which reduces risk and allows them to quickly roll out a comprehensive product family. MSI has reportedly started to offer its Spatinum M370 drives to its partner CyberPowerPC in the U.S.
Considering that MSI sells not only motherboards but also desktops, it makes sense for the company to offer a range of SSDs as well. Still, it is unlikely that the company will proceed with all eight model series: when you join the ranks of over 200 SSD suppliers, you’d better keep your product line lean.
In recent years a number of PC makers have introduced small form-factor (SFF) and ultra-compact form-factor (UCFF) computers based on AMD’s latest accelerated processing units (APUs), but few of those systems are as tiny as Intel’s NUCs. Asus is one of the few manufacturers whose AMD-powered Mini PC PN-series UCFF machines are just as compact as NUCs, and it has just updated them with AMD Ryzen 5000-series APUs.
Asus’s freshly introduced Mini PC PN51 packs AMD’s Ryzen 5000U-series mobile processors with up to eight Zen 3 cores and up to Radeon Vega 7 graphics. The APU can be paired with a maximum of 32 GB of DDR4-3200 memory. Storage is provided by an M.2-2280 SSD of up to 1 TB capacity plus a 2.5-inch 1 TB 7,200-RPM hard drive. Despite all this, the system measures just 115×115×49 mm and has a 0.62-liter volume.
(Image credit: Asus)
The diminutive size of the Asus Mini PC PN51 does not impact the choice of ports on offer. The desktop computer is equipped with an Intel Wi-Fi 6 + Bluetooth 5.0 module (or a Wi-Fi 5 + BT 4.2), a 2.5 GbE or a GbE connector, three USB 3.2 Gen 1 Type-A ports, two USB 3.2 Gen 2 Type-C receptacles (supporting DisplayPort mode), an HDMI output with CEC support, a 3-in-1 card reader, a configurable port (which can be used for Ethernet, DisplayPort, D-Sub or COM ports), and a combo audio jack.
(Image credit: Asus)
Asus seems to position the Mini PC PN51 as a universal PC suitable for both home and office. The configurable I/O port that can add an Ethernet connector or a COM header is obviously aimed at corporate and business users, and the PC also has a TPM module on board. Meanwhile, the system includes an IR receiver (something many modern UCFF and SFF PCs dropped, following Apple’s Mac Mini) and a Microsoft Cortana-compatible microphone array, both of which will be particularly useful for home users who run the PN51 as an HTPC.
(Image credit: Asus)
As far as power consumption and noise levels are concerned, the PN51 consumes as little as 9W at idle, produces 21.9 dBA of noise at idle, and 34.7 dBA at full load.
(Image credit: Asus)
The Asus Mini PC PN51 will be available shortly. Pricing will depend on configuration as Asus plans to offer numerous models based on the AMD Ryzen 3 5300U, AMD Ryzen 5 5500U, and AMD Ryzen 7 5700U processors with various memory and storage configurations.
Seagate has announced that it surpassed a shipment milestone this March: over the course of its history, the company has now shipped three zettabytes (ZB) of hard drive storage.
Seagate’s math behind its 3ZB achievement is interesting in itself. Three zettabytes equal 30 billion 4K movies, 60 billion video games, 7.5 trillion MP3 songs, or 1.5 quadrillion selfies, according to the company. In SI terms, one zettabyte is a thousand exabytes, each exabyte is a thousand petabytes, and each petabyte is a thousand terabytes, so 3ZB equals three billion TB. That’s a lot of hard drives.
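The unit conversion is easy to verify. A quick sketch in Python (our own check, using the decimal SI prefixes drive makers use):

```python
# Decimal (SI) units, as drive makers use: 1 ZB = 10**21 bytes, 1 TB = 10**12 bytes.
ZB = 10**21
TB = 10**12
EB = 10**18

total_bytes = 3 * ZB
print(total_bytes // TB)  # 3,000,000,000 -> three billion terabytes
print(total_bytes // EB)  # 3,000 exabytes
```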
Seagate was founded in 1979, more than 41 years ago, and has shipped hundreds of millions of hard drives over that span. It is particularly noteworthy that 2ZB of the 3ZB total were shipped in the last couple of years, which indicates that the world now generates more data than ever.
From a business perspective, Seagate’s history looks no less impressive: the company has outlived more than 200 other HDD makers and is currently one of only three remaining suppliers of hard drives, including some that compete with the best external hard drives. What is particularly notable is that the company is arguably only getting started, as to remain relevant, it will have to do even more than it has done so far.
(Image credit: Seagate)
A number of important events in the storage market in recent years have accelerated sales of high-capacity hard drives. First, laptops have shrunk to the point that many can no longer accommodate a 2.5-inch hard drive and instead rely on an SSD, such as one of the best SSDs, plus cloud storage.
Secondly, cloud services have become ubiquitous, and all of them use tens of thousands of HDDs to store data. Thirdly, since data centers now consume more hard drives than ever, they favor high-capacity models, which is why Seagate and its competitors now offer drives that can store up to 20TB of data, far more than an average person needs today. All of these factors allowed Seagate and its peers to significantly increase the total capacity they ship even though HDD unit shipments have dropped in recent years.
(Image credit: Seagate)
Demand for data storage will increase once again soon. Consumers and businesses will continue to expand cloud storage usage, so the appropriate services will have to use more drives.
As major Hollywood studios launch their own streaming services, they (or rather their data center partners) will naturally need more storage, too. But end-users, businesses, and streaming services will still be responsible for only a fraction of that data that will have to be stored several years from now. Smart cities, smart factories, smart devices, autonomous vehicles, and robots will generate more data than the whole of humanity combined throughout its history.
(Image credit: Seagate)
As a result, Seagate and its rivals will sell tens of zettabytes of hard drive storage in the coming years. Last year Seagate and IDC estimated that the amount of data generated globally will grow exponentially to 175ZB by 2025. To store that data, Seagate and other HDD makers will need to offer higher-capacity drives, and this is where technologies like heat-assisted magnetic recording (HAMR) come into play. Seagate expects the capacity of its HDDs to increase to 40TB ~ 50TB by 2025 ~ 2026 and to 100TB in 2030.
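To put that 175ZB projection in perspective against the 20TB drives mentioned earlier, here is a back-of-the-envelope sketch (our own illustration; it ignores replication, compression, and the fact that not all generated data actually gets stored):

```python
# Rough illustration: how many of today's largest 20TB drives would it
# take to hold the 175 ZB that Seagate and IDC project for 2025?
ZB = 10**21
TB = 10**12

projected_2025 = 175 * ZB
drive_capacity = 20 * TB

print(projected_2025 // drive_capacity)  # 8,750,000,000 drives
```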
Hard drives provide a good balance between storage capacity and performance. Meanwhile, loads of cold data will be stored on tape, so 1PB tapes from Fujifilm and IBM will come in quite handy. And since there will also be loads of ‘hot’ data that must always be available, makers of NAND flash as well as dozens of SSD manufacturers will certainly not be short of work.
Alienware is releasing its first laptop with an AMD CPU since 2007. Its parent company, Dell, today announced the Alienware m15 Ryzen Edition R5, alongside a lower-end Dell G15 Ryzen Edition and a Dell G15 refresh with Intel chips.
The Alienware m15 Ryzen Edition R5 will use AMD’s Ryzen 5000 H-series chips paired with Nvidia GeForce RTX 30-series GPUs. Like the Asus ROG Zephyrus Duo 15 SE, it will go up to a Ryzen 9 5900HX, which might give it enough power for consideration on our list of the best gaming laptops (we’ll have to review it first, of course).
The last Alienware laptop to pair with an AMD CPU and an Nvidia GPU was the Aurora mALX, last seen in 2007. That line went up to 19 inches and featured an AMD Turion 64 ML-44 and two Nvidia GeForce Go 7900 GTX cards in SLI.
The new m15 Ryzen Edition R5 will also be the first 15-inch Alienware laptop to move to DDR4-3200 memory, and that memory will be user-replaceable.
Specs
| | Alienware m15 Ryzen Edition R5 | Dell G15 Ryzen Edition | Dell G15 |
| --- | --- | --- | --- |
| CPU | Up to AMD Ryzen 9 5900HX | Up to AMD Ryzen 7 5800H | Up to Intel Core i7-10870H |
| GPU | Up to Nvidia GeForce RTX 3070 | Up to Nvidia GeForce RTX 3060 | Up to Nvidia GeForce RTX 3060 |
| RAM | Up to 32GB DDR4-3200, user-replaceable | Up to 32GB DDR4-3200 | Up to 16GB DDR4-2933 |
| Storage | Up to 4TB (2x 2TB PCIe M.2 SSD) | Up to 2TB (PCIe NVMe M.2 SSD) | Up to 2TB (PCIe NVMe M.2 SSD) |
| Display | 15.6 inches: FHD at 165 Hz, QHD at 240 Hz, or FHD at 360 Hz | 15.6 inches: FHD at 120 Hz or 165 Hz | 15.6 inches: FHD at 120 Hz or 165 Hz |
| Release Date | April 20 (United States), April 7 (China), May 4 (Global) | May 4 (Global), April 30 (China) | April 13 (Global), March 5 (China) |
| Starting Price | $2,229.99 | $899.99 | $899.99 |
The m15 Ryzen Edition will come only in the black “dark side of the moon” paint job, as Dell put it, and features a new two-tone finish, marking the first real change to Alienware’s “Legend” design language. Inside, the laptop uses what Alienware refers to as “Silky-Smooth High-Endurance” paint, which it claims resists stains and feels more premium.
Alienware’s AMD machine will also benefit from the optional Cherry MX keyboard it recently introduced on the m15 R4. Additionally, the m15 Ryzen Edition features Alienware’s proprietary cooling, dubbed “Cryo-Tech.”
(Image credit: Dell)
There are three display options on the new Alienware: a 1080p (1920 x 1080) 360 Hz display for esports aficionados, a 1440p (2560 x 1440) 240 Hz panel, and a 1080p screen at 165 Hz.
The Alienware m15 Ryzen Edition R5 will start at $2,229.99 and go on sale first in China on April 7, then in the U.S. on April 20, with a global release on May 4.
New Dell G15 Gaming Laptops
The two new Dell G15 models use the redesigned chassis that the company introduced in China in March, with more aggressive angles and some new colors. The Dell G15 Ryzen Edition model will go up to an AMD Ryzen 7 5800H CPU and an Nvidia GeForce RTX 3060, while the Intel version will go up to a 10th Gen Intel Core i7-10870H (the Ryzen version uses the faster RAM, while the Intel version does not).
Both G15s will offer 15.6-inch displays with 1920 x 1080 resolution at either 120 Hz or 165 Hz.
(Image credit: Dell)
The two Dell G15 laptops will start at $899.99, with the Intel version launching globally on April 13 and the AMD option hitting on May 4.
Intel’s long-delayed 10nm+ third-gen Xeon Scalable Ice Lake processors mark an important step forward for the company as it attempts to fend off intense competition from AMD’s 7nm EPYC Milan processors that top out at 64 cores, a key advantage over Intel’s existing 14nm Cascade Lake Refresh that tops out at 28 cores. The 40-core Xeon Platinum 8380 serves as the flagship model of Intel’s revamped lineup, which the company says features up to a 20% IPC uplift on the strength of the new Sunny Cove core architecture paired with the 10nm+ process.
Intel has already shipped over 200,000 units to its largest customers since the beginning of the year, but today marks the official public debut of its newest lineup of data center processors, so we get to share benchmarks. The Ice Lake chips drop into dual-socket Whitley server platforms, while the previously-announced Cooper Lake slots in for quad- and octo-socket servers. Intel has slashed Xeon pricing up to 60% to remain competitive with EPYC Rome, and with EPYC Milan now shipping, the company has reduced per-core pricing again with Ice Lake to remain competitive as it targets high-growth markets, like the cloud, enterprise, HPC, 5G, and the edge.
The new Xeon Scalable lineup comes with plenty of improvements, like support for up to eight memory channels running at a peak of DDR4-3200 with two DIMMs per channel, a notable improvement over Cascade Lake’s six channels at DDR4-2933 and a match for EPYC’s eight channels of memory. Ice Lake also supports up to 6TB of combined DRAM and Optane per socket (4TB of DRAM), with up to 4TB of Optane Persistent Memory DIMMs per socket (8TB in dual-socket). Unlike Intel’s past practice, Ice Lake also supports the full memory and Optane capacity on all models with no additional upcharge.
Intel has also moved forward from 48 lanes of PCIe 3.0 connectivity to 64 lanes of PCIe 4.0 (128 lanes in dual-socket), improving both I/O bandwidth and increasing connectivity to match AMD’s 128 available lanes in a dual-socket server.
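Those channel and lane counts translate into concrete peak numbers. A back-of-the-envelope sketch (theoretical maximums we computed ourselves; real-world throughput is lower):

```python
# Theoretical per-socket bandwidth, Cascade Lake vs. Ice Lake (rough peak figures).

def ddr4_bw_gbs(channels: int, mts: int) -> float:
    """Peak DDR4 bandwidth: channels x transfers/s x 8 bytes per 64-bit transfer."""
    return channels * mts * 8 / 1000  # MT/s -> GB/s

def pcie_bw_gbs(lanes: int, gen: int) -> int:
    """Approximate usable PCIe bandwidth per direction:
    ~1 GB/s per Gen3 lane, ~2 GB/s per Gen4 lane."""
    return lanes * (1 if gen == 3 else 2)

print(ddr4_bw_gbs(6, 2933))  # Cascade Lake memory: ~140.8 GB/s
print(ddr4_bw_gbs(8, 3200))  # Ice Lake memory:     204.8 GB/s
print(pcie_bw_gbs(48, 3))    # Cascade Lake PCIe:   ~48 GB/s
print(pcie_bw_gbs(64, 4))    # Ice Lake PCIe:       ~128 GB/s
```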
Intel says that these additions, coupled with a range of new SoC-level optimizations, a focus on improved power management, and support for new instructions, yield an average of 46% more performance across a wide range of data center workloads. Intel also claims a 50% uplift in latency-sensitive applications, like HammerDB, Java, MySQL, and WordPress, and up to 57% more performance in heavily-threaded workloads, like NAMD, signaling that the company could return to a competitive footing in what has become one of AMD’s strongholds: heavily threaded workloads. We’ll put that to the test shortly. First, let’s take a closer look at the lineup.
Intel Third-Gen Xeon Scalable Ice Lake Pricing and Specifications
We have quite the list of chips below, but we’ve actually filtered out the downstream Intel parts, focusing instead on the high-end ‘per-core scalable’ models. All told, the Ice Lake family spans 42 SKUs, with many of the lower-TDP (and thus lower-performance) models falling into the ‘scalable performance’ category.
Intel also has specialized SKUs targeted at maximum SGX enclave capacity, cloud-optimized for VMs, liquid-cooled, networking/NFV, media, long-life and thermal-friendly, and single-socket optimized parts, all of which you can find in the slide a bit further below.
| Model | Cores / Threads | Base / Boost – All Core (GHz) | L3 Cache (MB) | TDP (W) | 1K Unit Price / RCP |
| --- | --- | --- | --- | --- | --- |
| EPYC Milan 7763 | 64 / 128 | 2.45 / 3.5 | 256 | 280 | $7,890 |
| EPYC Rome 7742 | 64 / 128 | 2.25 / 3.4 | 256 | 225 | $6,950 |
| EPYC Milan 7663 | 56 / 112 | 2.0 / 3.5 | 256 | 240 | $6,366 |
| EPYC Milan 7643 | 48 / 96 | 2.3 / 3.6 | 256 | 225 | $4,995 |
| Xeon Platinum 8380 | 40 / 80 | 2.3 / 3.2 – 3.0 | 60 | 270 | $8,099 |
| Xeon Platinum 8368 | 38 / 76 | 2.4 / 3.4 – 3.2 | 57 | 270 | $6,302 |
| Xeon Platinum 8360Y | 36 / 72 | 2.4 / 3.5 – 3.1 | 54 | 250 | $4,702 |
| Xeon Platinum 8362 | 32 / 64 | 2.8 / 3.6 – 3.5 | 48 | 265 | $5,448 |
| EPYC Milan 7F53 | 32 / 64 | 2.95 / 4.0 | 256 | 280 | $4,860 |
| EPYC Milan 7453 | 28 / 56 | 2.75 / 3.45 | 64 | 225 | $1,570 |
| Xeon Gold 6348 | 28 / 56 | 2.6 / 3.5 – 3.4 | 42 | 235 | $3,072 |
| Xeon Platinum 8280 | 28 / 56 | 2.7 / 4.0 – 3.3 | 38.5 | 205 | $10,009 |
| Xeon Gold 6258R | 28 / 56 | 2.7 / 4.0 – 3.3 | 38.5 | 205 | $3,651 |
| EPYC Milan 74F3 | 24 / 48 | 3.2 / 4.0 | 256 | 240 | $2,900 |
| Xeon Gold 6342 | 24 / 48 | 2.8 / 3.5 – 3.3 | 36 | 230 | $2,529 |
| Xeon Gold 6248R | 24 / 48 | 3.0 / 4.0 | 35.75 | 205 | $2,700 |
| EPYC Milan 7443 | 24 / 48 | 2.85 / 4.0 | 128 | 200 | $2,010 |
| Xeon Gold 6354 | 18 / 36 | 3.0 / 3.6 – 3.6 | 39 | 205 | $2,445 |
| EPYC Milan 73F3 | 16 / 32 | 3.5 / 4.0 | 256 | 240 | $3,521 |
| Xeon Gold 6346 | 16 / 32 | 3.1 / 3.6 – 3.6 | 36 | 205 | $2,300 |
| Xeon Gold 6246R | 16 / 32 | 3.4 / 4.1 | 35.75 | 205 | $3,286 |
| EPYC Milan 7343 | 16 / 32 | 3.2 / 3.9 | 128 | 190 | $1,565 |
| Xeon Gold 5317 | 12 / 24 | 3.0 / 3.6 – 3.4 | 18 | 150 | $950 |
| Xeon Gold 6334 | 8 / 16 | 3.6 / 3.7 – 3.6 | 18 | 165 | $2,214 |
| EPYC Milan 72F3 | 8 / 16 | 3.7 / 4.1 | 256 | 180 | $2,468 |
| Xeon Gold 6250 | 8 / 16 | 3.9 / 4.5 | 35.75 | 185 | $3,400 |
At 40 cores, the Xeon Platinum 8380 reaches new heights over its predecessors that topped out at 28 cores, striking higher in AMD’s Milan stack. The 8380 comes in at $202 per core, which is well above the $130-per-core price tag of the previous-gen flagship, the 28-core Xeon 6258R. However, it’s far less expensive than the $357-per-core pricing of the Xeon 8280, which had a $10,009 price tag before AMD’s EPYC upset Intel’s pricing model and forced drastic price reductions.
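The per-core figures above can be reproduced directly from the 1K-unit list prices in the table:

```python
# Per-core list pricing, computed from the 1K-unit prices in the table above.
chips = {
    "Xeon Platinum 8380": (8099, 40),   # Ice Lake flagship
    "Xeon Gold 6258R":    (3651, 28),   # previous-gen flagship, post price cuts
    "Xeon Platinum 8280": (10009, 28),  # Cascade Lake flagship, pre price cuts
}

for name, (price, cores) in chips.items():
    print(f"{name}: ${price / cores:,.0f} per core")
# -> roughly $202, $130, and $357 per core, respectively
```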
With peak clock speeds of 3.2 GHz, the 8380 has a much lower peak clock rate than the previous-gen 28-core 6258R’s 4.0 GHz. Even dipping down to the new 28-core Ice Lake 6348 only finds peak clock speeds of 3.5 GHz, which still trails the Cascade Lake-era models. Intel obviously hopes to offset those reduced clock speeds with other refinements, like increased IPC and better power and thermal management.
On that note, Ice Lake tops out at 3.7 GHz on a single core, and you’ll have to step down to the eight-core model to access these clock rates. In contrast, Intel’s previous-gen eight-core 6250 had the highest clock rate, 4.5 GHz, of the Cascade Lake stack.
Surprisingly, AMD’s EPYC Milan models actually have higher peak frequencies than the Ice Lake chips at any given core count, but remember, AMD’s frequencies are only guaranteed on one physical core. In contrast, Intel specs its chips to deliver peak clock rates on any core. Both approaches have their merits, but AMD’s more refined boost tech paired with the 7nm TSMC process could pay dividends for lightly-threaded work. Conversely, Intel does have solid all-core clock rates that peak at 3.6 GHz, whereas AMD has more of a sliding scale that varies based on the workload, making it hard to suss out the winners by just examining the spec sheet.
Ice Lake’s TDPs stretch from 85W up to 270W. Surprisingly, despite the lowered base and boost clocks, Ice Lake’s TDPs have increased gen-on-gen for the 18-, 24- and 28-core models. Intel is obviously pushing higher on the TDP envelope to extract the most performance out of the socket possible, but it does have lower-power chip options available (listed in the graphic below).
AMD has a notable hole in its Milan stack at both the 12- and 18-core mark, a gap that Intel has filled with its Gold 5317 and 6354, respectively. Milan still holds the top of the hierarchy with 48-, 56- and 64-core models.
(Image credit: Intel)
The Ice Lake Xeon chips drop into Whitley server platforms with Socket LGA4189-4/5. The FC-LGA14 package measures 77.5mm x 56.5mm and has an LGA interface with 4189 pins. The die itself is estimated to measure ~600mm2, though Intel no longer shares details about die sizes or transistor counts. In dual-socket servers, the chips communicate with each other via three UPI links operating at 11.2 GT/s, an increase from 10.4 GT/s with Cascade Lake. The processor interfaces with the C620A chipset via four DMI 3.0 links, meaning it communicates at roughly PCIe 3.0 speeds.
The C620A chipset also doesn’t support PCIe 4.0; instead, it supports up to 20 lanes of PCIe 3.0, ten USB 3.0, and fourteen USB 2.0 ports, along with 14 ports of SATA 6 Gbps connectivity. Naturally, that’s offset by the 64 PCIe 4.0 lanes that come directly from the processor. As before, Intel offers versions of the chipset with its QuickAssist Technology (QAT), which boosts performance in cryptography and compression/decompression workloads.
(Image credit: Intel)
Intel’s focus on its platform adjacencies business is a key part of its messaging around the Ice Lake launch — the company wants to drive home its message that coupling its processors with its own differentiated platform additives can expose additional benefits for Whitley server platforms.
The company introduced new PCIe 4.0 solutions, including the new 200 GbE Ethernet 800 Series adaptors that sport a PCIe 4.0 x16 connection and support RDMA iWARP and RoCEv2, and the Intel Optane SSD P5800X, a PCIe 4.0 SSD that uses ultra-fast 3D XPoint media to deliver stunning performance results compared to typical NAND-based storage solutions.
Intel also touts its PCIe 4.0 SSD D5-P5316, which uses the company’s 144-Layer QLC NAND for read-intensive workloads. These SSDs offer up to 7GBps of throughput and come in capacities stretching up to 15.36 TB in the U.2 form factor, and 30.72 TB in the E1.L ‘Ruler’ form factor.
Intel’s Optane Persistent Memory 200-series offers memory-addressable persistent memory in a DIMM form factor. This tech can radically boost memory capacity up to 4TB per socket in exchange for higher latencies that can be offset through software optimizations, thus yielding more performance in workloads that are sensitive to memory capacity.
The “Barlow Pass” Optane Persistent Memory 200 series DIMMs promise 30% more memory bandwidth than the previous-gen Apache Pass models. Capacity remains at a maximum of 512GB per DIMM with 128GB and 256GB available, and memory speeds remain at a maximum of DDR4-2666.
Intel has also expanded its portfolio of Market Ready and Select Solutions offerings, which are pre-configured servers for various workloads that are available in over 500 designs from Intel’s partners. These simple-to-deploy servers are designed for edge, network, and enterprise environments, but Intel has also seen uptake with cloud service providers like AWS, which uses these solutions for its ParallelCluster HPC service.
(Image credit: Intel)
Like the benchmarks you’ll see in this review, the majority of performance measurements focus on raw throughput. However, in real-world environments, a combination of throughput and responsiveness is key to deliver on latency-sensitive SLAs, particularly in multi-tenant cloud environments. Factors such as loaded latency (i.e., the amount of performance delivered to any number of applications when all cores have varying load levels) are key to ensuring performance consistency across multiple users. Ensuring consistency is especially challenging with diverse workloads running on separate cores in multi-tenant environments.
Intel says it focused on performance consistency in these types of environments through a host of compute, I/O, and memory optimizations. The cores, naturally, benefit from increased IPC, new ISA instructions, and scaling up to higher core counts via the density advantages of 10nm, but Intel also beefed up its I/O subsystem to 64 lanes of PCIe 4.0, which improves both connectivity (up from 48 lanes) and throughput (up from PCIe 3.0).
Intel says it designed the caches, memory, and I/O, not to mention power levels, to deliver consistent performance during high utilization. As seen in slide 30, the company claims these alterations result in improved application performance and latency consistency by reducing long tail latencies to improve worst-case performance metrics, particularly for memory-bound and multi-tenant workloads.
(Image credit: Intel)
Ice Lake brings a big realignment of the company’s die that provides cache, memory, and throughput advances. The coherent mesh interconnect returns with a similar arrangement of horizontal and vertical rings present on the Cascade Lake-SP lineup, but with a realignment of the various elements, like cores, UPI connections, and the eight DDR4 memory channels that are now split into four dual-channel controllers. Here we can see that Intel shuffled around the cores on the 28-core die and now has two execution cores on the bottom of the die clustered with I/O controllers (some I/O is now also at the bottom of the die).
Intel redesigned the chip to support two new sideband fabrics, one controlling power management and the other used for general-purpose management traffic. These provide telemetry data and control to the various IP blocks, like execution cores, memory controllers, and PCIe/UPI controllers.
The die includes a separate peer-to-peer (P2P) fabric to improve bandwidth between cores, and the I/O subsystem was also virtualized, which Intel says offers up to three times the fabric bandwidth compared to Cascade Lake. Intel also split one of the UPI blocks into two, creating a total of three UPI links, all with fine-grained power control of the UPI links. Now, courtesy of dedicated PLLs, all three UPIs can modulate clock frequencies independently based on load.
Densely packed AVX instructions augment performance in properly-tuned workloads at the expense of higher power consumption and thermal load. Intel’s Cascade Lake CPUs drop to lower frequencies (reductions of roughly 600 to 900 MHz) during AVX-, AVX2-, and AVX-512-optimized workloads, which has hindered broader adoption of AVX code.
To reduce the impact, Intel has recharacterized its AVX power limits, thus yielding (unspecified) higher frequencies for AVX-512 and AVX-256 operations. This is done in an adaptive manner based on three different power levels for varying instruction types. This nearly eliminates the frequency delta between AVX and SSE for 256-heavy and 512-light operations, while 512-heavy operations have also seen significant uplift. All Ice Lake SKUs come with dual 512b FMAs, so this optimization will pay off across the entire stack.
Intel also added support for a host of new instructions to boost cryptography performance, like VPMADD52, GFNI, SHA-NI, Vector AES, and Vector Carry-Less multiply instructions, and a few new instructions to boost compression/decompression performance. All rely heavily upon AVX acceleration. The chips also support Intel’s Total Memory Encryption (TME) that offers DRAM encryption through AES-XTS 128-bit hardware-generated keys.
Intel also made plenty of impressive steps forward in the microarchitecture, with improvements to every level of the pipeline allowing Ice Lake’s 10nm Sunny Cove cores to deliver far higher IPC than 14nm Cascade Lake’s Skylake-derivative architecture. Key improvements to the front end include larger reorder, load, and store buffers, along with larger reservation stations. Intel increased the L1 data cache from 32 KiB, the capacity it had used in its chips for a decade, to 48 KiB, and moved from 8-way to 12-way associativity. The L2 cache moves from 4-way to 8-way associativity and is also larger, but the capacity depends on the specific type of product — for Ice Lake server chips, it weighs in at 1.25 MB per core.
Intel expanded the micro-op cache (UOP) from 1.5K to 2.25K micro-ops, the second-level translation lookaside buffer (TLB) from 1536 entries to 2048, and moved from a four-wide allocation to five-wide to allow the in-order portion of the pipeline (front end) to feed the out-of-order (back end) portion faster. Additionally, Intel expanded the Out of Order (OoO) Window from 224 to 352. Intel also increased the number of execution units to handle ten operations per cycle (up from eight with Skylake) and focused on improving branch prediction accuracy and reducing latency under load conditions.
The store unit can now process two store data operations for every cycle (up from one), and the address generation units (AGU) also handle two loads and two stores each cycle. These improvements are necessary to match the increased bandwidth from the larger L1 data cache, which does two reads and two writes every cycle. Intel also tweaked the design of the sub-blocks in the execution units to enable data shuffles within the registers.
Intel also added support for its Software Guard Extensions (SGX) feature that debuted with the Xeon E lineup, and increased capacity to 1TB (maximum capacity varies by model). SGX creates secure enclaves in an encrypted portion of the memory that is exclusive to the code running in the enclave – no other process can access this area of memory.
Test Setup
We have a glaring hole in our test pool: Unfortunately, we do not have AMD’s recently-launched EPYC Milan processors available for this round of benchmarking, though we are working on securing samples and will add competitive benchmarks when available.
We do have test results for AMD’s frequency-optimized Rome 7Fx2 processors, which represent AMD’s performance with its previous-gen chips. As such, we should view this round of tests largely through the prism of Intel’s gen-on-gen Xeon performance improvement, and not as a measure of the current state of play in the server chip market.
We use the Xeon Platinum 8280 as a stand-in for the less expensive Xeon Gold 6258R. These two chips are otherwise identical and provide the same level of performance, with the difference boiling down to the more expensive 8280 supporting quad-socket servers, while the Xeon Gold 6258R tops out at dual-socket support.
(Image credit: Tom’s Hardware)
Intel provided us with a 2U Server System S2W3SIL4Q Software Development Platform with the Coyote Pass server board for our testing. This system is designed primarily for validation purposes, so it doesn’t have too many noteworthy features. The system is heavily optimized for airflow, with the eight 2.5″ storage bays flanked by large empty bays that allow for plenty of air intake.
The system comes armed with dual redundant 2100W power supplies, a 7.68TB Intel SSD P5510, an 800GB Optane SSD P5800X, and an E810-CQDA2 200GbE NIC. We used the Intel SSD P5510 for our benchmarks and cranked up the fans for maximum performance in our benchmarks.
We tested with the pre-installed 16x 32GB DDR4-3200 DIMMs, but Intel also provided sixteen 128GB Optane Persistent Memory DIMMs for further testing. Due to time constraints, we haven’t yet had time to test the Optane DIMMs, but stay tuned for a few demo workloads in a future article. As we’re not entirely done with our testing, we don’t want to risk prying the 8380 out of the socket yet for pictures — the large sockets from both vendors are becoming more finicky after multiple chip reinstalls.
Server | Memory | Tested Processors
Intel S2W3SIL4Q | 16x 32GB SK hynix ECC DDR4-3200 | Intel Xeon Platinum 8380
Supermicro AS-1023US-TR4 | 16x 32GB Samsung ECC DDR4-3200 | EPYC 7742, 7F72, 7F52
Dell/EMC PowerEdge R460 | 12x 32GB SK hynix DDR4-2933 | Intel Xeon 8280, 6258R, 5220R, 6226R
To assess performance with a range of different potential configurations, we used a Supermicro AS-1023US-TR4 server with three different EPYC Rome configurations. We outfitted this server with 16x 32GB Samsung ECC DDR4-3200 memory modules, ensuring the chips had all eight memory channels populated.
We used a Dell/EMC PowerEdge R460 server to test the Xeon processors in our test group. We equipped this server with 12x 32GB SK hynix DDR4-2933 modules, again ensuring that each Xeon chip’s six memory channels were populated.
We used the Phoronix Test Suite for benchmarking. This automated test suite simplifies running complex benchmarks in the Linux environment. The suite is maintained by Phoronix; it installs all needed dependencies, and its test library includes 450 benchmarks and 100 test suites (and counting). Phoronix also maintains openbenchmarking.org, an online repository for uploading test results into a centralized database.
We used Ubuntu 20.04 LTS to maintain compatibility with our existing test results, and leverage the default Phoronix test configurations with the GCC compiler for all tests below. We also tested all platforms with all available security mitigations.
Naturally, newer Linux kernels, software, and targeted optimizations can yield improvements for any of the tested processors, so take these results as generally indicative of performance in compute-intensive workloads, but not as representative of highly-tuned deployments.
Linux Kernel, GCC and LLVM Compilation Benchmarks
AMD’s EPYC Rome processors took the lead over the Cascade Lake Xeon chips at any given core count in these benchmarks, but here we can see that the 40-core Ice Lake Xeon 8380 has tremendous potential for these types of workloads. The dual 8380 processors complete the Linux compile benchmark, which builds the Linux kernel at default settings, in 20 seconds, edging out the 64-core EPYC Rome 7742 by one second. Naturally, we expect AMD’s Milan flagship, the 7763, to take the lead in this benchmark. Still, the implication is clear: Ice Lake-SP has significantly improved performance, reducing the delta between Xeon and competing chips.
We can also see a marked improvement in the LLVM compile, with the 8380 reducing the time to completion by ~20% over the prior-gen 8280.
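For reference, the gen-on-gen deltas quoted in these compile benchmarks are simple reductions in time to completion. A quick sketch of that math, using hypothetical numbers rather than our measured results:

```python
def pct_reduction(old_seconds: float, new_seconds: float) -> float:
    """Percentage reduction in time-to-completion between two runs."""
    return (old_seconds - new_seconds) / old_seconds * 100

# Hypothetical illustration (not our measured values): if the 8280
# finished an LLVM build in 500 s and the 8380 in 400 s, that is a
# 20% reduction in time to completion.
print(pct_reduction(500, 400))  # 20.0
```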
Molecular Dynamics and Parallel Compute Benchmarks
NAMD is a parallel molecular dynamics code designed to scale well with additional compute resources; it scales up to 500,000 cores and is one of the premier benchmarks used to quantify performance with simulation code. The Xeon 8380s notch a 32% improvement in this benchmark, slightly beating the Rome chips.
Stockfish is a chess engine designed for the utmost in scalability across increased core counts — it can scale up to 512 threads. Here we can see that this massively parallel code scales well with EPYC’s leading core counts. The EPYC Rome 7742 retains its leading position at the top of the chart, but the 8380 offers more than twice the performance of the previous-gen Cascade Lake flagship.
We see similarly impressive performance uplifts in other molecular dynamics workloads, like the Gromacs water benchmark, which simulates Newtonian equations of motion with hundreds of millions of particles. Here Intel’s dual 8380s take the lead over the EPYC Rome 7742 while pushing out nearly twice the performance of the 28-core 8280.
We see a similarly impressive generational improvement in the LAMMPS molecular dynamics workload, too. Again, AMD’s Milan will likely be faster than the 7742 in this workload, so it isn’t a given that the 8380 has taken the definitive lead over AMD’s current-gen chips, though it has tremendously improved Intel’s competitive positioning.
The NAS Parallel Benchmarks (NPB) suite characterizes Computational Fluid Dynamics (CFD) applications, and NASA designed it to measure performance from smaller CFD applications up to “embarrassingly parallel” operations. The BT.C test measures Block Tri-Diagonal solver performance, while the LU.C test measures performance with a lower-upper Gauss-Seidel solver. The EPYC Rome 7742 still dominates in this workload, showing that Ice Lake’s broad spate of generational improvements still doesn’t allow Intel to take the lead in all workloads.
Rendering Benchmarks
Turning to more standard fare: provided you can keep the cores fed with data, most modern rendering applications take full advantage of available compute resources. Given the well-known strengths of EPYC’s core-heavy approach, it isn’t surprising to see the 64-core EPYC 7742 processors retain the lead in the C-Ray benchmark, and that applies to most of the Blender benchmarks, too.
Encoding Benchmarks
Encoders tend to present a different type of challenge: As we can see with the VP9 libvpx benchmark, they often don’t scale well with increased core counts. Instead, they often benefit from per-core performance and other factors, like cache capacity. AMD’s frequency-optimized 7F52 retains its leading position in this benchmark, but Ice Lake again reduces the performance delta.
Newer software encoders, like the Intel- and Netflix-designed SVT-AV1, are designed to leverage multi-threading more fully to extract faster performance for live encoding/transcoding video applications. EPYC Rome’s increased core counts paired with its strong per-core performance beat Cascade Lake handily in this benchmark, but the step up to forty 10nm+ cores propels Ice Lake to the top of the charts.
Compression, Security and Python Benchmarks
The Pybench and Numpy benchmarks are used as a general litmus test of Python performance, and as we can see, these tests typically don’t scale linearly with increased core counts, instead prizing per-core performance. Despite its somewhat surprisingly low clock rates, the 8380 takes the win in the Pybench benchmark and improves Xeon’s standing in Numpy as it takes a close second to the 7F52.
Compression workloads also come in many flavors. The 7-Zip (p7zip) benchmark exposes the heights of theoretical compression performance because it runs directly from main memory, allowing both memory throughput and core counts to heavily impact performance. As we can see, this benefits the core-heavy chips, which easily dispatch the chips with lower core counts. The Xeon 8380 takes the lead in this test, but other independent benchmarks show that AMD’s EPYC Milan would lead this chart.
In contrast, the gzip benchmark, which compresses two copies of the Linux 4.13 kernel source tree, responds well to speedy clock rates, giving the 16-core 7F52 the lead. Here we see that 8380 is slightly slower than the previous-gen 8280, which is likely at least partially attributable to the 8380’s much lower clock rate.
The open-source OpenSSL toolkit implements the SSL and TLS protocols; we use it to measure RSA 4096-bit performance. As we can see, this test favors the EPYC processors due to its parallelized nature, but the 8380 has again made big strides on the strength of its higher core count. Offloading this type of workload to dedicated accelerators is becoming more common, and Intel also offers its QAT acceleration built into chipsets for environments with heavy requirements.
Conclusion
Admittedly, due to our lack of EPYC Milan samples, our testing today of the Xeon Platinum 8380 is more of a demonstration of Intel’s gen-on-gen performance improvements rather than a holistic view of the current competitive landscape. We’re working to secure a dual-socket Milan server and will update when one lands in our lab.
Overall, Intel’s third-gen Xeon Scalable is a solid step forward for the Xeon franchise. AMD has steadily chewed away data center market share from Intel on the strength of its EPYC processors that have traditionally beaten Intel’s flagships by massive margins in heavily-threaded workloads. As our testing, and testing from other outlets shows, Ice Lake drastically reduces the massive performance deltas between the Xeon and EPYC families, particularly in heavily threaded workloads, placing Intel on a more competitive footing as it faces an unprecedented challenge from AMD.
AMD will still hold the absolute performance crown in some workloads with Milan, but despite EPYC Rome’s commanding lead in the past, progress hasn’t been as swift as some projected. Much of that boils down to the staunchly risk-averse customers in the enterprise and data center; these customers prize a mix of factors beyond the standard measuring stick of performance and price-to-performance ratios, instead focusing on areas like compatibility, security, supply predictability, reliability, serviceability, engineering support, and deeply-integrated OEM-validated platforms.
AMD has improved drastically in these areas and now has a full roster of systems available from OEMs, along with broadening uptake with CSPs and hyperscalers. However, Intel benefits from its incumbency and all the advantages that entails, like wide software optimization capabilities and platform adjacencies like networking, FPGAs, and Optane memory.
Although Ice Lake doesn’t lead in all metrics, it does improve the company’s positioning as it moves forward toward the launch of its Sapphire Rapids processors that are slated to arrive later this year to challenge AMD’s core-heavy models. Intel still holds the advantage in several criteria that appeal to the broader enterprise market, like pre-configured Select Solutions and engineering support. That, coupled with drastic price reductions, has allowed Intel to reduce the impact of a fiercely-competitive adversary. We can expect the company to redouble those efforts as Ice Lake rolls out to the more general server market.
Bloomberg today reported that a shortage of inexpensive display driver chips has delayed production of the LCD panels used in, well, pretty much every product category you can think of. Displays are ubiquitous, and many devices can’t function without them. But for the displays to work, they require a display driver — no, not Nvidia or AMD display drivers, those are software. We’re talking about a tiny chip that sends instructions and signals to the display.
That’s a fairly simple function, at least compared to those performed by the vastly more powerful components inside the device proper, which is why many display drivers cost $1. But a component’s price doesn’t always reflect its importance, as anyone who’s built a high-end PC, bought one of the best gaming monitors, and then realized they forgot to get a compatible cable can attest. That missing link is both cheap and vital.
All of which means that a display driver shortage can cause delays for smartphones, laptops, and game consoles; automobiles, airplanes, and navigation systems; and various appliances, smart home devices, and other newly be-screened products.
“I have never seen anything like this in the past 20 years since our company’s founding,” Himax CEO Jordan Wu told Bloomberg. He should know — Himax claims to be the “worldwide market leader in display driver ICs” for many product categories.
Himax’s share price has risen alongside demand for display drivers. Yahoo Finance data puts its opening share price for May 1, 2020 at $3.53; it opened at $14.06 on Monday. The market, at least, is acutely aware of display drivers’ importance.
Unfortunately there isn’t much Himax can do to improve the availability of display drivers, Wu told Bloomberg, because it’s a fabless company that relies on TSMC for production. TSMC simply can’t keep up with all the demand it’s experiencing.
Companies will have to sit on otherwise-ready displays (assuming panel production improves) until that changes. This probably seems familiar to manufacturers waiting for SSD controller supply to rebound after the February disruption of a Samsung fab.
That’s only part of the problem, of course, as the global chip shortage affects practically every aspect of the electronics industry. It’s a matter of improving the availability of CPUs, GPUs, mobile processors, chipsets, display panels, single board computers, and who-knows-how-many other components. No biggie.
for the PC version on Steam. The sequel follows the saga of Ethan Winters, this time with some apparently very large vampire ladies. Based on what we’ve seen, you’ll benefit from having one of the best graphics cards along with something from our list of the best CPUs for gaming when the game arrives on May 7.
The eighth entry in the series (the VIII hides in “VIllage”), this will be the first Resident Evil to feature ray tracing technology. The developers have tapped AMD to help with the ray tracing implementation, however, so it’s not clear whether it will run on Nvidia’s RTX cards at launch or only after a patch. It’s also unlikely to get DLSS support, though it could make for a stunning showcase for AMD’s FidelityFX Super Resolution if AMD can pull some strings.
We’ve got about a month to wait before the official launch. In the meantime, here are the official system requirements.
Minimum System Requirements for Resident Evil Village
Capcom notes that in either case, the game targets 1080p at 60 fps, though the framerate “might drop in graphics-intensive scenes.” While the minimum requirements specify using the “Prioritize Performance” setting, it’s not clear what settings are used for the recommended system.
The Resident Evil Village minimum system requirements are also for running the game without ray tracing, with a minimum requirement of an RTX 2060 (and likely future AMD GPUs like Navi 23), and a recommendation of at least an RTX 2070 or RX 6700 XT if you want to enable ray tracing. There’s no mention of installation size yet, so we’ll have to wait and see just how much of our SSD the game wants to soak up.
The CPU specs are pretty tame, and it’s very likely you can use lower spec processors. For example, the Ryzen 3 1200 is the absolute bottom of the entire Ryzen family stack, with a 4-core/4-thread configuration running at up to 3.4GHz. The Core i5-7500 also has a 4-core/4-thread configuration, but runs at up to 3.8GHz, and it’s generally higher in IPC than first generation Ryzen.
You should be able to run the game on even older/slower CPUs, though perhaps not at 60 fps. The recommended settings are a decent step up in performance potential, moving to 6-core/12-thread CPUs for both AMD and Intel, which are fairly comparable processors.
The graphics card will almost certainly play a bigger role in performance than the CPU, and while the baseline GTX 1050 Ti and RX 560 4GB are relatively attainable (the game apparently requires 4GB or more of VRAM), we wouldn’t be surprised if that’s with some form of dynamic resolution scaling enabled. Crank up the settings and the GTX 1070 and RX 5700 are still pretty modest cards, though the AMD card is significantly faster; not that you can find either in stock at acceptable prices these days, as we show in our GPU pricing index. But if you want to run the full-fat version of Resident Evil Village, with all the DXR bells and whistles at 1440p or 4K, you’re almost certainly going to need something far more potent.
Full size images: RE Village RT On / RE Village RT Off
AMD showed a preview of the game running with and without ray tracing during its Where Gaming Begins, Episode 3 presentation in early March. The pertinent section of the video starts at the 9:43 mark, though we’ve snipped the comparison images above for reference. The improved lighting and reflections are clearly visible in the RT enabled version, but critically we don’t know how well the game runs with RT enabled.
We’re looking forward to testing Resident Evil Village on a variety of GPUs and CPUs next month when it launches on PC, Xbox, and PlayStation. Based on what we’ve seen from other RT-enabled games promoted by AMD (e.g. Dirt 5), we expect frame rates will take a significant hit.
But like we said, this may also be the debut title for FidelityFX Super Resolution, and if so, that’s certainly something we’re eager to test. What we’d really like to see is a game that supports both FidelityFX Super Resolution and DLSS, just so we could do some apples-to-apples comparisons, but it may be a while before such a game appears.
use? It’s an important question, and while the performance we show in our GPU benchmarks hierarchy is useful, one of the true measures of a GPU is how efficient it is. To determine GPU power efficiency, we need to know both performance and power use. Measuring performance is relatively easy, but measuring power can be complex. We’re here to press the reset button on GPU power measurements and do things the right way.
There are various ways to determine power use, with varying levels of difficulty and accuracy. The easiest approach is via software like GPU-Z, which will tell you what the hardware reports. Alternatively, you can measure power at the outlet using something like a Kill-A-Watt power meter, but that only captures total system power, including PSU inefficiencies. The best and most accurate means of measuring the power use of a graphics card is to measure power draw in between the power supply (PSU) and the card, but it requires a lot more work.
We’ve used GPU-Z in the past, but it had some clear inaccuracies. Depending on the GPU, it can be off by anywhere from a few watts to potentially 50W or more. Thankfully, the latest generation AMD Big Navi and Nvidia Ampere GPUs tend to report relatively accurate data, but we’re doing things the right way. And by “right way,” we mean measuring in-line power consumption using hardware devices. Specifically, we’re using Powenetics software in combination with various monitors from TinkerForge. You can read our Powenetics project overview for additional details.
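In-line measurement ultimately boils down to sampling voltage and current on every supply rail feeding the card and summing V × I across them. Here is a minimal sketch of that bookkeeping; the rail names and sample values are hypothetical illustrations, not the actual Powenetics API or our measured data:

```python
from dataclasses import dataclass

@dataclass
class RailSample:
    volts: float  # measured rail voltage at one instant
    amps: float   # measured rail current at the same instant

def card_power(samples: dict[str, RailSample]) -> float:
    """Total board power for one sample: sum of V*I over every rail."""
    return sum(s.volts * s.amps for s in samples.values())

# Hypothetical single sample: PCIe slot 12V and 3.3V rails, plus two
# 8-pin PEG power leads (the rails our in-line hardware taps).
reading = {
    "slot_12v": RailSample(12.1, 4.5),
    "slot_3v3": RailSample(3.3, 1.0),
    "peg1_12v": RailSample(12.0, 9.0),
    "peg2_12v": RailSample(12.0, 8.5),
}
print(card_power(reading))  # total watts for this one sample
```

Logging such samples at a fixed rate over a benchmark run, then averaging, yields the figures in the charts below.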
Tom’s Hardware GPU Testbed
After assembling the necessary bits and pieces — some soldering required — the testing process is relatively straightforward. Plug in a graphics card and the power leads, boot the PC, and run some tests that put a load on the GPU while logging power use.
We’ve done that with all the legacy GPUs we have from the past six years or so, and we do the same for every new GPU launch. We’ve updated this article with the latest data from the GeForce RTX 3090, RTX 3080, RTX 3070, RTX 3060 Ti, and RTX 3060 12GB from Nvidia; and the Radeon RX 6900 XT, RX 6800 XT, RX 6800, and RX 6700 XT from AMD. We use the reference models whenever possible, which means only the EVGA RTX 3060 is a custom card.
If you want to see power use and other metrics for custom cards, all of our graphics card reviews include power testing. So for example, the RX 6800 XT roundup shows that many custom cards use about 40W more power than the reference designs, thanks to factory overclocks.
Test Setup
We’re using our standard graphics card testbed for these power measurements, and it’s what we’ll use on graphics card reviews. It consists of an MSI MEG Z390 Ace motherboard, Intel Core i9-9900K CPU, NZXT Z73 cooler, 32GB Corsair DDR4-3200 RAM, a fast M.2 SSD, and the other various bits and pieces you see to the right. This is an open test bed, because the Powenetics equipment essentially requires one.
There’s a PCIe x16 riser card (which is where the soldering came into play) that slots into the motherboard, and then the graphics cards slot into that. This is how we accurately capture actual PCIe slot power draw, from both the 12V and 3.3V rails. There are also 12V kits measuring power draw for each of the PCIe Graphics (PEG) power connectors — we cut the PEG power harnesses in half and run the cables through the power blocks. RIP, PSU cable.
Powenetics equipment in hand, we set about testing and retesting all of the current and previous generation GPUs we could get our hands on. You can see the full list of everything we’ve tested in the list to the right.
From AMD, all of the latest generation Big Navi / RDNA2 GPUs use reference designs, as do the previous gen RX 5700 XT and RX 5700 cards, Radeon VII, Vega 64, and Vega 56. AMD doesn’t do ‘reference’ models on most other GPUs, so we’ve used third party designs to fill in the blanks.
For Nvidia, all of the Ampere GPUs are Founders Edition models, except for the EVGA RTX 3060 card. With Turing, everything from the RTX 2060 and above is a Founders Edition card (which includes the 90 MHz overclock and slightly higher TDP on the non-Super models), while the other Turing cards are all AIB partner cards. Older GTX 10-series and GTX 900-series cards use reference designs as well, except where indicated.
Note that all of the cards are running ‘factory stock,’ meaning there’s no manual overclocking or undervolting involved. Yes, the various cards might run better with some tuning and tweaking, but this is the way the cards will behave if you just pull them out of their box and install them in your PC. (RX Vega cards in particular benefit from tuning, in our experience.)
Our testing uses the Metro Exodus benchmark looped five times at 1440p ultra (except on cards with 4GB or less VRAM, where we loop 1080p ultra — that uses a bit more power). We also run Furmark for ten minutes. These are both demanding tests, and Furmark can push some GPUs beyond their normal limits, though the latest models from AMD and Nvidia both tend to cope with it just fine. We’re only focusing on power draw for this article, as the temperature, fan speed, and GPU clock results continue to use GPU-Z to gather that data.
(Image credit: Tom’s Hardware)
GPU Power Use While Gaming: Metro Exodus
Due to the number of cards being tested, we have multiple charts. The average power use charts show average power consumption during the approximately 10-minute test. These charts do not include the time between test runs, where power use dips for about 9 seconds, so it’s a realistic view of the sort of power use you’ll see when playing a game for hours on end.
Besides the bar chart, we have separate line charts segregated into groups of up to 12 GPUs, and we’ve grouped cards from similar generations into each chart. These show real-time power draw over the course of the benchmark using data from Powenetics. The 12 GPUs per chart limit is to try and keep the charts mostly legible, and the division of what GPU goes on which chart is somewhat arbitrary.
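Excluding the between-run dips from the average can be done by gating out samples that fall back toward idle draw. This is a simplified sketch of the idea, with a hypothetical idle cutoff and made-up trace values, not the actual logging tool’s logic:

```python
def average_load_power(samples: list[float], idle_cutoff: float = 50.0) -> float:
    """Mean of power samples taken under load, ignoring the brief
    between-run dips where draw falls back toward idle levels."""
    load = [w for w in samples if w >= idle_cutoff]
    return sum(load) / len(load)

# Hypothetical trace: two benchmark passes separated by a short idle dip.
trace = [220.0, 225.0, 218.0, 30.0, 28.0, 223.0, 219.0]
print(average_load_power(trace))  # 221.0
```

A fixed cutoff is crude (a low-power card’s load draw could approach a big card’s idle draw), so per-card thresholds or timestamp-based trimming would be more robust in practice.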
Kicking things off with the latest generation GPUs, the overall power use is relatively similar. The 3090 and 3080 use the most power (for the reference models), followed by the three Navi 21 cards. The RTX 3070, RTX 3060 Ti, and RX 6700 XT are all pretty close, with the RTX 3060 dropping power use by around 35W. AMD does lead Nvidia in pure power use when looking at the RX 6800 XT and RX 6900 XT compared to the RTX 3080 and RTX 3090, but then Nvidia’s GPUs are a bit faster, so it mostly equals out.
Step back one generation to the Turing GPUs and Navi 1x, and Nvidia had far more GPU models available than AMD. There were 15 Turing variants — six GTX 16-series and nine RTX 20-series — while AMD only had five RX 5000-series GPUs. Comparing similar performance levels, Nvidia Turing generally comes in ahead of AMD, despite using a 12nm process compared to 7nm. That’s particularly true when looking at the GTX 1660 Super and below versus the RX 5500 XT cards, though the RTX models are closer to their AMD counterparts (while offering extra features).
It’s pretty obvious how far AMD fell behind Nvidia prior to the Navi generation GPUs. The various Vega and Polaris AMD cards use significantly more power than their Nvidia counterparts. RX Vega 64 was particularly egregious, with the reference card using nearly 300W. If you’re still running an older generation AMD card, this is one good reason to upgrade. The same is true of the legacy cards, though we’re missing many models from these generations of GPU. Perhaps the less said, the better, so let’s move on.
(Image credit: Tom’s Hardware)
GPU Power with FurMark
FurMark, as we’ve frequently pointed out, is basically a worst-case scenario for power use. Some of the GPUs tend to be more aggressive about throttling with FurMark, while others go hog wild and dramatically exceed official TDPs. Few if any games can tax a GPU quite like FurMark, though things like cryptocurrency mining can come close with some algorithms (but not Ethereum’s Ethash, which tends to be limited by memory bandwidth). The chart setup is the same as above, with average power use charts followed by detailed line charts.
The latest Ampere and RDNA2 GPUs are relatively evenly matched, with all of the cards using a bit more power in FurMark than in Metro Exodus. One thing we’re not showing here is average GPU clocks, which tend to be far lower than in gaming scenarios — you can see that data, along with fan speeds and temperatures, in our graphics card reviews.
The Navi / RDNA1 and Turing GPUs start to separate a bit more, particularly in the budget and midrange segments. AMD didn’t really have anything to compete against Nvidia’s top GPUs, as the RX 5700 XT only matched the RTX 2070 Super at best. Note the gap in power use between the RTX 2060 and RX 5600 XT, though. In gaming, the two GPUs were pretty similar, but in FurMark the AMD chip uses nearly 30W more power. Actually, the 5600 XT used more power than the RX 5700, but that’s probably because the Sapphire Pulse we used for testing has a modest factory overclock. The RX 5500 XT cards also draw more power than any of the GTX 16-series cards.
With the Pascal, Polaris, and Vega GPUs, AMD’s GPUs fall toward the bottom. The Vega 64 and Radeon VII both use nearly 300W, and considering the Vega 64 competes with the GTX 1080 in performance, that’s pretty awful. The RX 570 4GB (an MSI Gaming X model) actually exceeds the official power spec for an 8-pin PEG connector with FurMark, pulling nearly 180W. That’s thankfully the only GPU to go above spec, for the PEG connector(s) or the PCIe slot, but it does illustrate just how bad things can get in a worst-case workload.
The legacy charts are even worse for AMD. The R9 Fury X and R9 390 go well over 300W with FurMark, though perhaps that’s more of an issue with the hardware not throttling to stay within spec. Anyway, it’s great to see that AMD no longer trails Nvidia as badly as it did five or six years ago!
Analyzing GPU Power Use and Efficiency
It’s worth noting that we’re not showing or discussing GPU clocks, fan speeds or GPU temperatures in this article. Power, performance, temperature and fan speed are all interrelated, so a higher fan speed can drop temperatures and allow for higher performance and power consumption. Alternatively, a card can drop GPU clocks in order to reduce power consumption and temperature. We dig into this in our individual GPU and graphics card reviews, but we just wanted to focus on the power charts here. If you see discrepancies between previous and future GPU reviews, this is why.
The good news is that, using these testing procedures, we can properly measure the real graphics card power use and not be left to the whims of the various companies when it comes to power information. It’s not that power is the most important metric when looking at graphics cards, but if other aspects like performance, features and price are the same, getting the card that uses less power is a good idea. Now bring on the new GPUs!
Here’s the final high-level overview of our GPU power testing, showing relative efficiency in terms of performance per watt. The power data listed is a weighted geometric mean of the Metro Exodus and FurMark power consumption, while the FPS comes from our GPU benchmarks hierarchy and uses the geometric mean of nine games tested at six different settings and resolution combinations (so 54 results, summarized into a single fps score).
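The weighting and relative-efficiency math described above can be sketched as follows. The 75/25 weighting toward the gaming workload and the sample numbers are hypothetical assumptions for illustration; the article doesn’t publish its exact weights:

```python
import math

def weighted_geomean(values: list[float], weights: list[float]) -> float:
    """Weighted geometric mean: exp(sum(w_i * ln(x_i)) / sum(w_i))."""
    total_w = sum(weights)
    return math.exp(sum(w * math.log(v) for v, w in zip(values, weights)) / total_w)

def perf_per_watt(fps: float, metro_watts: float, furmark_watts: float,
                  metro_weight: float = 0.75, furmark_weight: float = 0.25) -> float:
    # Hypothetical weighting, skewed toward the gaming (Metro) workload.
    power = weighted_geomean([metro_watts, furmark_watts],
                             [metro_weight, furmark_weight])
    return fps / power

# Scale every card relative to the most efficient one, as in the table.
cards = {
    "card_a": perf_per_watt(100.0, 220.0, 250.0),  # made-up fps and watts
    "card_b": perf_per_watt(90.0, 230.0, 280.0),
}
best = max(cards.values())
relative = {name: round(v / best * 100, 1) for name, v in cards.items()}
print(relative)  # the top card scores 100.0
```

Using a geometric rather than arithmetic mean keeps one outlier workload (FurMark, typically) from dominating the combined power figure.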
This table combines the performance data for all of the tested GPUs with the power use data discussed above, sorts by performance per watt, and then scales all of the scores relative to the most efficient GPU (currently the RX 6800). It’s a telling look at how far behind AMD was, and how far it’s come with the latest Big Navi architecture.
Efficiency isn’t the only important metric for a GPU, and performance definitely matters. Also of note is that all of the performance data does not include newer technology like ray tracing and DLSS.
The most efficient GPUs are a mix of AMD’s Big Navi GPUs and Nvidia’s Ampere cards, along with some first generation Navi and Nvidia Turing chips. AMD claims the top spot with the Navi 21-based RX 6800, and Nvidia takes second place with the RTX 3070. Seven of the top ten spots are occupied by either RDNA2 or Ampere cards. However, Nvidia’s GDDR6X-equipped GPUs, the RTX 3080 and 3090, rank 17th and 20th, respectively.
Given the current GPU shortages, finding a new graphics card in stock is difficult at best. By the time things settle down, we might even have RDNA3 and Hopper GPUs on the shelves. If you’re still hanging on to an older-generation GPU, upgrading might be problematic, but at some point it will be the smart move, considering the added performance and efficiency of more recent offerings.
Thermaltake’s Divider 300TG is attractive, but lacks the quality and performance needed to stand out in today’s market. It didn’t perform well thermally or acoustically in our testing, making it tough to recommend.
For
+ Unusual, but fresh design
+ Complete front IO
Against
– Thermally disappointing
– Intake fans have little effect on temps, are noisy, and speed cannot be controlled
– Material quality lacking
– Glass frame is closer to turquoise than white
– 5.7-inch max CPU cooler height
– Difficult to remove sticker on glass side panel
– Frustrating side panel installation
– No support for top-mounted radiators
Features and Specifications
The vast majority of new ATX cases these days come with large slabs of tempered glass as side panels. The alternative seems to be a solid steel panel, but what if you want something in the middle?
That’s the idea behind Thermaltake’s Divider 300TG. Specifically, today on our test bench is the Divider 300TG ARGB Snow Edition. This chassis has both tempered glass and steel for its side panel, creatively slicing both in half for a fresh look. Pricing is set at $115 for this Snow Edition (or about $5 less for the black model) with all the bells and whistles, which sets expectations high.
So without further ado, let’s dig in to find out whether it’s worthy of a spot on our Best PC Cases list.
Thermaltake Divider 300TG Specifications
Type
Mid-Tower ATX
Motherboard Support
Mini-ITX, Micro-ATX, ATX
Dimensions (HxWxD)
18.7 x 8.7 x 18.1 inches (475 x 220 x 461 mm)
Max GPU Length
14.2 inches (360 mm) with front radiator, 15.4 inches (390 mm) without
CPU Cooler Height
5.7 inches (145 mm)
Max PSU Length
7.1 inches (180 mm), 8.7 inches (220 mm) without HDD cage
External Bays
✗
Internal Bays
2x 3.5-inch
5x 2.5-inch
Expansion Slots
7x
Front I/O
2x USB 3.0, USB-C, 3.5 mm Audio + Mic
Other
2x Tempered Glass Panel, Fan/RGB Controller
Front Fans
3x 120mm (Up to 3x 120mm)
Rear Fans
1x 120mm (Up to 1x 120mm)
Top Fans
None (Up to 1x 120mm)
Bottom Fans
None
Side Fans
Up to 2x 120mm
RGB
Yes
Damping
No
Warranty
3 Years (2 years for fans)
Thermaltake Divider 300TG Features
Image 1 of 4
(Image credit: Tom’s Hardware)
Image 2 of 4
(Image credit: Tom’s Hardware)
Image 3 of 4
(Image credit: Tom’s Hardware)
Image 4 of 4
(Image credit: Tom’s Hardware)
Touring around the outside of the chassis, two things immediately stand out: One is of course the slashed side panel, but on the other side you’ll spot an air intake. As we’ll see later, you can mount two extra 120mm fans here or mount an all-in-one liquid cooler.
(Image credit: Tom’s Hardware)
However, while all may look okay in the photos, the quality of the materials is quite disappointing. The sheet metal is thin, and the glass’s frame isn’t actually white – it’s closer to turquoise, which is a bit odd given that the chassis is named ‘snow edition,’ and it’s not a great look contrasting with the actual white of the rest of the chassis.
(Image credit: Tom’s Hardware)
The case’s I/O resides at the top, cut through the steel panel. Here you’ll spot two USB 3.0 ports, a USB Type-C port, and discrete microphone and headphone jacks – a complete set that’s much appreciated. You’ll also spot the power and reset switches. But as we’ll find out later, the reset switch doesn’t serve as a reset button.
Image 1 of 2
(Image credit: Tom’s Hardware)
Image 2 of 2
(Image credit: Tom’s Hardware)
To remove the case’s paneling, you first remove the steel part of the slashed side panel, then the glass. The steel part comes off by undoing two thumbscrews at the back, after which it awkwardly falls out of place. The same goes for the side panel on the other side: undo two screws and it falls out of the chassis – and re-installation is just as clunky, as the screws don’t line up nicely with the threads. The glass panels are clamped in place by a handful of push-pins, so removal and re-installation are as easy as pulling the panels off or pushing them back into place.
Thermaltake Divider 300TG Internal Layout
(Image credit: Tom’s Hardware)
With the chassis stripped down, you’ll spot a fairly standard layout with room for up to an ATX-size motherboard. The only unusual thing about the main compartment is the cover on the right, which either houses three 2.5-inch drives or can be removed to make space for two extra intake fans and an AIO.
Image 1 of 2
(Image credit: Tom’s Hardware)
Image 2 of 2
(Image credit: Tom’s Hardware)
Switch to the other side of the chassis, and you’ll spot the fan bracket we spoke of, along with two 2.5-inch SSD mounts behind the motherboard tray. In the PSU area there is also room for two 3.5-inch drives.
Thermaltake Divider 300TG Cooling
(Image credit: Tom’s Hardware)
While there wasn’t much to talk about regarding the case’s general features, there is plenty to discuss when it comes to cooling. From the factory, the chassis comes with a total of four fans installed, which seems quite lavish. The front intake fans are three 120mm RGB spinners, while the rear exhaust fan is a simple 3-pin spinner without any lighting features.
(Image credit: Tom’s Hardware)
But behind the motherboard tray there is also a fan controller hub, where you can spot the reset switch header plugging in at the bottom. All four fans can be plugged into this hub, though the front trio comes connected from the factory with very unusual connectors. As we’ll detail later, the RGB is controlled through the reset switch, and the fans offer no speed control.
The hub is SATA-powered. It has an LED-out header and an M/B-in header for connecting the RGB to your motherboard with the included cable. The RGB effects built into the chassis’ controller are quite jumpy, with infrequent changes, so it’s nice that it can tie into your motherboard’s control system.
The exhaust fan can be plugged into the motherboard, as it’s a 3-pin spinner, but other than that, it’s safe to say the chassis’ intake fan speeds cannot be controlled, which is a real let-down as they’re quite noisy.
Graphics cards can be up to 14.2 inches (360 mm) long with a front radiator in place, or 15.4 inches (390 mm) without one. This is plenty, but the space isn’t very wide: CPU coolers can only be up to 5.7 inches (145 mm) tall due to the side panel design, which isn’t much. Our Noctua cooler barely fit, so you’ll want to be careful with wide GPUs and tall CPU coolers.
(Image credit: Tom’s Hardware)
For liquid cooling, it’s tight, but there is space for a front-mounted 360mm radiator or a side-mounted 240mm radiator – you’ll have to pick one or the other. Also, be careful with side-mounted radiators, as they’ll likely bump into long GPUs. Most standard-length GPUs shouldn’t have an issue here, but if you’re using a bigger GPU, you’re probably better off using the front mount, as counterintuitive as that might seem.
Razer has been a loyal supporter of Team Blue. However, the tech giant may have finally bitten the bullet and joined up with Team Red. If the recently discovered 3DMark submissions (via _rogame) are accurate, Razer will release the company’s first-ever AMD-powered gaming laptop soon.
The mysterious laptop emerged as the Razer PI411. There is speculation that the codename may allude to the Razer Blade 14, which debuted back in 2013. The last time Razer updated the Razer Blade 14 was in 2016, so a well-deserved refresh is due. Nevertheless, we can’t rule out the possibility that PI411 is just a codename for some other Razer device.
The Razer PI411 features AMD’s top-tier Ryzen 9 5900HX (Cezanne) processor. The Ryzen 9 5900HX is AMD’s first overclockable mobile processor, and the chipmaker designed it to take the fight to Intel’s HK-series of mobile chips, such as the Core i9-10900HK or the looming Core i9-11980HK.
Armed with eight Zen 3 cores and 16MB of L3 cache, the Ryzen 9 5900HX comes with a 3.3 GHz base clock and a 4.6 GHz boost clock. It has a generous cTDP (configurable thermal design power) between 35W and 54W. The last Razer Blade 14 (2016) employed the Core i7-6700HQ, a 45W processor from the Skylake days. The gaming laptop is no stranger to housing hot chips. If Razer wants to work the Ryzen 9 5900HX into the Razer Blade 14, the new iteration will likely have to rely on a more robust cooling solution than its predecessors to leave enough thermal headroom for manual overclocking.
Image 1 of 2
Razer PI411 (Image credit: _rogame/Twitter)
Image 2 of 2
Razer PI411 (Image credit: _rogame/Twitter)
The Razer PI411 is also equipped with 16GB of DDR4-3200 memory and a 512GB NVMe SSD. However, it’s probably just an engineering sample, so the final product could arrive with more memory and a bigger SSD. So far, we’ve seen the Razer PI411 with two discrete graphics card options from Nvidia. As a quick reminder, the chipmaker’s latest mobile GeForce RTX 3000 (Ampere) offerings are available at different TDP limits, which adds a lot of confusion if the vendor doesn’t specifically list the value.
The first Razer PI411 unit employs a GeForce RTX 3060. The 14 Gbps memory confirms that the Razer PI411 uses the GeForce RTX 3060 Mobile or Max-P variant as opposed to the Max-Q variant. The 900 MHz base clock points to the 80W version.
The second and most recent Razer PI411 unit, on the other hand, leverages the more powerful GeForce RTX 3070. The memory is clocked at 12 Gbps, meaning it’s the Max-Q variant. This particular GeForce RTX 3070 Max-Q sports a 780 MHz base clock, which likewise points to the 80W version.
The 3DMark submissions aren’t conclusive evidence that Razer is sold on the idea. We hope Razer does go through with it, though, since the laptop market could use another high-end AMD-based laptop.
With a robust design and major improvements to nCache 4.0, WD’s Black SN850 is among the most responsive drives we’ve seen. If you’re after the absolute best performance, we pick the 2TB WD Black SN850 over the 2TB Samsung 980 Pro.
For
+ Competitive performance
+ Large, fast-recovering dynamic SLC cache
+ Attractive design
+ Software package
+ 5-year warranty
Against
– Can get hot under heavy load
– High idle power consumption on desktops
– Lacks AES 256-bit hardware encryption
Features and Specifications
April 1, 2021 Update: We’ve updated this article with new testing for the 2TB WD Black SN850 M.2 NVMe SSD on page 2.
Original Review published December 11, 2020:
Boasting bleeding-edge PCIe Gen4 performance and available in capacities of up to 2TB, WD’s Black SN850 is a beast of an SSD that rivals Samsung’s 980 PRO for the title of best SSD. If you have the cash, it’s a great choice for gamers and enthusiasts looking for top-tier, quality storage.
WD’s Black product line has evolved quite a bit over the years. In the company’s mechanical HDD line, Black traditionally meant uncompromising performance and reliability. But when it comes to the company’s SSDs, the Black product line emphasizes gaming above all. That doesn’t mean the company forgot about those who need consistent prosumer storage for their applications, however.
The previous WD Black SN750 was a data-writing powerhouse, with sustained write speeds that could make nearly any other SSD jealous, making it perfect for video editors and those who often move large data sets around. But its read performance lagged behind most of the competition in our application benchmarks. The new WD Black SN850 aims to put on a much better showing, with much of the company’s focus on optimizing the new SSD’s read speed as much as improving its already-strong write speed.
Specifications
Product
Black SN850 500GB
Black SN850 1TB
Black SN850 2TB
Pricing
$149.99
$229.99
$449.99
Capacity (User / Raw)
500GB / 512GB
1000GB / 1024GB
2000GB / 2048GB
Form Factor
M.2 2280
M.2 2280
M.2 2280
Interface / Protocol
PCIe 4.0 x4 / NVMe 1.4
PCIe 4.0 x4 / NVMe 1.4
PCIe 4.0 x4 / NVMe 1.4
Controller
WD_BLACK G2
WD_BLACK G2
WD_BLACK G2
DRAM
DDR4
DDR4
DDR4
Memory
BiCS4 96L TLC
BiCS4 96L TLC
BiCS4 96L TLC
Sequential Read
7,000 MBps
7,000 MBps
7,000 MBps
Sequential Write
4,100 MBps
5,300 MBps
5,100 MBps
Random Read
800,000 IOPS
1,000,000 IOPS
1,000,000 IOPS
Random Write
570,000 IOPS
720,000 IOPS
710,000 IOPS
Endurance (TBW)
300 TB
600 TB
1,200 TB
Part Number
WDS500G1X0E
WDS100T1X0E
WDS200T1X0E
Warranty
5-Years
5-Years
5-Years
With peak sequential throughput of up to 7/5.3 GBps for reads/writes and upwards of 1,000,000/720,000 random read/write IOPS, the WD Black SN850 delivers top-tier performance over the PCIe 4.0 x4 NVMe 1.4 interface. WD’s Black SN850 is available in capacities of 500GB, 1TB, and 2TB, with street pricing at $150, $230, and $450, respectively. If you want the model with a heatsink, it will cost an extra $20.
WD’s Black SN850 features a revamped SLC caching implementation, nCache 4.0. It now uses a hybrid SLC caching scheme that’s similar to Samsung’s TurboWrite but larger in capacity, much like we’re accustomed to with SSDs powered by Phison’s latest controllers. The dynamic SLC cache spans roughly one-third of the available capacity (300GB on our 1TB sample), complemented by a small, quick-to-recover static SLC cache (12GB on our 1TB sample) that’s designed to provide the best performance and endurance.
With a multi-gear Low-Density Parity-Check (LDPC) ECC engine, RAID-like protection for full multi-page recovery, internal SRAM ECC, and end-to-end data path protection, along with 9% over-provisioning, WD’s Black SN850 comes equipped with plenty of mechanisms to ensure your data stays safe on the flash. WD backs the Black SN850 with a five-year warranty and rates it to endure up to 300 TB of writes per 500GB of capacity, or up to 1.2PB of writes on the 2TB model.
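Those TBW ratings work out to the same drive-writes-per-day (DWPD) figure at every capacity. A quick back-of-the-envelope conversion (the `dwpd` helper is just illustrative):

```python
def dwpd(tbw_tb, capacity_gb, warranty_years=5):
    """Drive writes per day implied by a TBW rating over the warranty period."""
    capacity_tb = capacity_gb / 1000
    return tbw_tb / (capacity_tb * warranty_years * 365)

# WD Black SN850 endurance ratings from the spec table above
for capacity_gb, tbw_tb in [(500, 300), (1000, 600), (2000, 1200)]:
    print(f"{capacity_gb}GB: {dwpd(tbw_tb, capacity_gb):.2f} DWPD")  # ~0.33 each
```

At roughly 0.33 DWPD, that’s typical territory for a consumer drive; enterprise drives are commonly rated for 1 DWPD or more.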
We were able to quickly and securely wipe WD’s Black SN850 by initiating a secure erase from within our Asus X570 Crosshair VIII Hero (WiFi) motherboard’s UEFI. But while it supports secure erase, the SSD lacks a now-common security feature that Samsung has supported on its drives for years: hardware-accelerated AES 256-bit full drive encryption. The Black SN850 does support Trim and S.M.A.R.T. data reporting, as well as Active State Power Management (ASPM), Autonomous Power State Transition (APST), and the PCIe L1.2 power state for low power draw at idle on mobile platforms, drawing less than 5mW.
Software and Accessories
Image 1 of 4
(Image credit: Tom’s Hardware)
Image 2 of 4
(Image credit: Tom’s Hardware)
Image 3 of 4
(Image credit: Tom’s Hardware)
Image 4 of 4
(Image credit: Tom’s Hardware)
WD supports the Black SN850 with WD Dashboard, the company’s SSD toolbox that includes analysis tools, a firmware updater, and RGB lighting control on the heatsink model. WD also provides customers with Acronis True Image WD Edition for cloning and data backup.
A Closer Look
Image 1 of 2
(Image credit: Tom’s Hardware)
Image 2 of 2
(Image credit: Tom’s Hardware)
WD is tightlipped when it comes to revealing information about the hardware that powers the Black SN850 and didn’t divulge many details about its next-gen controller when questioned. Still, we could deduce a few things based on the scraps and crumbs we were given.
From our external analysis, we can see the drive comes in a single-sided M.2 2280 form factor with an NVMe flash controller, a DRAM chip, and just two flash packages; the large 17 x 17mm controller package takes up most of the PCB space.
Image 1 of 2
(Image credit: Tom’s Hardware)
Image 2 of 2
(Image credit: Tom’s Hardware)
To power the SSD, WD uses a proprietary Arm-based, multi-core, eight-channel PCIe 4.0 x4 NVMe controller that leverages a Micron DDR4 DRAM chip to deliver responsive performance. WD refers to the controller as the WD_BLACK G2. Outfitting the WD Black SN850 with a faster Gen4 PHY is great for performance, but with that much bandwidth, power draw and heat output would have been a concern at 28nm. Thus, like competing manufacturers, WD opted to build the WD_BLACK G2 on a newer process node, TSMC’s 16nm FinFET technology, to better control those variables.
WD paired the second-generation controller with the company’s newer Kioxia BiCS4 96L TLC flash operating at Toggle DDR3.0 speeds of 800 MTps. Both the 500GB and 1TB models leverage 256Gb dies while the 2TB leverages 512Gb dies. This flash has two planes (regions of independent access) for better performance than just a single plane, but it’s not quite as fast as the company’s next-generation quad-plane BiCS5 112L flash that we will see become more prevalent next year. The new flash has twice the performance along with a Circuit Under Array (CUA) implementation.
Image 1 of 1
(Image credit: Tom’s Hardware)
In contrast to BiCS3, BiCS4 flash is not only faster, thanks to a string-based start bias control scheme and smart Vt-tracking for improved reads, but also more efficient than its predecessor, with a low-pre-charge sense-amplifier bus scheme that runs at just 1.2V instead of 1.8V. WD’s Black SN850 leverages even-odd row decoding and shielded BL current sensing with this flash to enhance read throughput, too.
To scale the flash to new heights, the manufacturing process string-stacks two 48-wordline-layer arrays on top of one another. While this is an easy way to increase cell array size, inefficiencies stem from the additional circuitry and wasted dummy layers, and there is a risk of low yield due to stack misalignments. Scaling up to 96 wordline layers means using a total of 109 layers, including dummy gates and selectors in this instance, which is less efficient than Samsung’s V-NAND, which has yet to resort to string-stacking even at up to 128 wordline layers (136 total layers).
The Semiconductor Industry Association (SIA) and Boston Consulting Group (BCG) published a report today detailing the, well, semiconductor industry’s weaknesses as the entire world attempts to figure out how to respond to the ongoing chip shortage.
Many enthusiasts probably know the biggest issue: The semiconductor industry relies on many companies around the globe, but most operate within a very specific niche, giving them significant influence over their domain. This results in a supply chain that is both dangerously small and dazzlingly large, geographically speaking.
“There are more than 50 points across the supply chain where one region holds more than 65% of the global market share,” SIA and BCG said in their report, adding that manufacturing is “a major focal point when it comes to the resilience of the global semiconductor supply chain.” They went on to explain:
“About 75% of semiconductor manufacturing capacity, as well as many suppliers of key materials — such as silicon wafers, photoresist, and other specialty chemicals — are concentrated in China and East Asia, a region significantly exposed to high seismic activity and geopolitical tensions. Furthermore, all of the world’s most advanced semiconductor manufacturing capacity — in nodes below 10 nanometers — is currently located in South Korea (8%) and Taiwan (92%). These are single points of failure that could be disrupted by natural disasters, infrastructure shutdowns, or internal conflicts, and may cause severe interruptions in the supply of chips.”
We’ve seen numerous examples of those dangers in the last few months. Natural disasters? See the December 10 earthquake that shut down two Micron fabs in Taiwan or the February storm that shut down a Samsung fab in Texas. Infrastructure shutdowns? See the ongoing water rationing in Taiwan caused by record droughts.
Those examples alone have already resulted in supply issues for flash memory, in Micron’s case, as well as SSD controllers in Samsung’s. The droughts in Taiwan have threatened production related to CPUs, GPUs, single-board computers, and display panels, among other things, despite manufacturers’ efforts to mitigate their effects.
There are significant barriers to reducing that risk, SIA and BCG said, one of the most important being that these companies occupy highly specialized niches. They explained in the report:
“Specialization across the supply chain allows the deep focus required to innovate, often pushing the boundaries of science. There are more than 30 types of semiconductor product categories, each optimized for a particular function in an electronic subsystem. Developing a modern chip requires deep technical expertise in both hardware and software, and relies on advanced design tools and intellectual property (IP) provided by specialized firms. Fabrication then typically requires as many as 300 different inputs, including raw wafers, commodity chemicals, specialty chemicals, and bulk gases. These inputs are processed by more than 50 classes of highly engineered precision equipment. Most of this equipment, such as lithography and metrology tools, incorporates hundreds of technology subsystems such as modules, lasers, mechatronics, control chips, and optics. The highly specialized suppliers involved in semiconductor design and fabrication are often based in different countries. Chips then zigzag across the world in a global journey.”
The Argument Against Self-Sufficiency
The chip shortage has prompted governments around the world to question their reliance on this global network. The European Commission said in December that it planned to invest $170 billion (€145 billion) to increase production, for example, and U.S. President Joe Biden ordered a review of critical supply chains in February.
China has also pushed its chip industry towards independence and enjoyed a series of wins despite U.S. restrictions on chip-making equipment meant to impede its progress. In recent months it’s announced its first DDR4 memory, first domestic SSDs, and first 7nm data center GPU; it’s also made progress on a chip fabbing tool.
Yet true self-sufficiency is nearly unattainable, per SIA and BCG, at least for governments that aren’t willing to spend far more than they are now. Just check out the estimate SIA and BCG shared in the exhibit below:
(Image credit: SIA-BCG)
That’s a global upfront investment of somewhere between $900 billion and $1.2 trillion, accompanied by an incremental annual cost between $45 billion and $125 billion. SIA and BCG said the annual cost alone “would all but wipe out the profits of the industry, which amounted to $126 billion across the entire value chain in 2019.”
SIA and BCG said this could lead to “an average increase of 35-65% in the price of semiconductors” if the manufacturers’ higher costs are fully passed on to their customers. That would almost certainly lead to higher costs for consumers, too, so you still wouldn’t be able to buy the best CPUs or best graphics cards on the cheap.
But the real cost would be even higher. “Furthermore, it is also likely that siloed domestic industries shielded from foreign competition and deprived of global scale would lose in efficiency and ability to innovate,” SIA and BCG said. “Ultimately, it would reverse the decades-long trend of making increasingly powerful and more affordable electronic devices accessible to consumers around the world.”
And that’s assuming various governments simply wanted to meet the demand for chips in 2019. SIA and BCG estimated that “the industry will have to almost double its capacity by 2030 to keep up with the expected 4% to 5% average annual growth in semiconductor demand.” That would make self-sufficient supplies even more costly.
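That capacity estimate is easy to sanity-check with compound growth, assuming the 4% to 5% annual rate applies from 2019 through 2030 (11 years):

```python
# Cumulative demand multiple after 11 years of compound annual growth
for rate in (0.04, 0.05):
    multiple = (1 + rate) ** 11
    print(f"{rate:.0%}/yr -> {multiple:.2f}x")  # ~1.54x and ~1.71x
```

The “almost double” framing thus sits at the upper end of that range, presumably with extra headroom on top since fabs can’t run at 100% utilization indefinitely.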
A Proposed Solution
SIA and BCG proposed an alternative to fully self-sufficient supplies for major regions: targeted investments. They called for the U.S. to implement a $20 to $50 billion program, for example, that would support domestic production of semiconductors used in devices critical to national security and other vital areas.
They also said that “governments with significant national security concerns related to control over semiconductor technology should establish a stable framework for restrictions on semiconductor trade” that clearly defines policy goals, restrictions, and “the expected second-order impacts on industry players” that could result.
Their final plea was for policy makers to “significantly step up the efforts to address the looming shortage of high-skill talent that threatens to constrain the semiconductor industry’s ability to keep the current pace of innovation and growth.” It turns out that we haven’t yet developed an autonomous chip industry — and probably never will — which means the human factor can’t simply be ignored.
Will any of those solutions help in the short term? Not really. Increasing production capacity is an incredibly expensive process that also takes time to complete. TSMC didn’t make a plan to spend $100 billion over the next three years for no reason; if it could reduce either the financial investment or the length of time it needs, it would.
But at least now it’s clearer than ever why this chip shortage is happening, why it’s not going to be solved overnight, and how industry players think it can be addressed in the near future.
MSI, a company best known for bold and flashy gaming laptops, has announced two additions to its new Summit Series business line. The Summit E13 Flip Evo and Summit E16 Flip are convertible notebooks powered by Intel’s Tiger Lake processors.
The big news is that the new Flips come with 16:10 displays. MSI says the new aspect ratio will provide 10 percent more visible screen space than a similarly sized 16:9 screen. That’s a good sign for business users — it means less scrolling and more room to multitask. Both models are also compatible with MSI’s proprietary MPP 2.0 stylus (the MSI pen), which the company says has 4,096 pressure levels.
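How much extra space you actually get depends on the basis of comparison: at an equal diagonal, a 16:10 panel has only about 5% more area than a 16:9 one, while at equal width it offers roughly 11% more vertical room. A quick sketch (the 13.4-inch diagonal is just an illustrative value):

```python
import math

def dims(diagonal, w_ratio, h_ratio):
    """Width and height of a panel with the given diagonal and aspect ratio."""
    scale = diagonal / math.hypot(w_ratio, h_ratio)
    return w_ratio * scale, h_ratio * scale

w10, h10 = dims(13.4, 16, 10)   # hypothetical 13.4-inch 16:10 panel
w9, h9 = dims(13.4, 16, 9)      # same-diagonal 16:9 panel
print(round(100 * (w10 * h10 / (w9 * h9) - 1), 1))  # extra area: prints 5.2
print(round(100 * (10 / 9 - 1), 1))                 # extra height at equal width: 11.1
```

Either way, the practical benefit for productivity work is the added vertical space, which is what MSI’s “more visible screen” pitch is really about.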
Like the rest of the Summit Series, the two models include a number of features designed for remote meetings. These include a “noise-reduction” camera (with a physical shutter as well as a keyboard kill switch), and audio noise cancellation. MSI claims the notebooks will get 20 hours of battery life, which would certainly be a step up from the five-hour lifespan I got out of the Summit B15.
The Summit E16 is stylus-compatible. Image: MSI
The E13 Flip Evo is, as its name implies, certified through Intel’s Evo program. This means it’s met Intel’s various standards for top-performing Tiger Lake laptops, including responsive performance, quick boot time, all-day battery life, and other modern amenities like Thunderbolt 4 and Wi-Fi 6. MSI claims it “performs 10% higher than other 2-in-1 laptops of the same tier.” (I’ll have to test that claim for myself when I get my hands on a unit, of course.)
The E16 Flip looks to be more of a workstation device. MSI says it will include “the latest Nvidia graphics card” to lend a hand with content-creation tasks. It also comes with four microphones for conference calls.
Image: MSI
Both models support Wi-Fi 6E and Bluetooth 5.2, as well as PCIe 4.0 NVMe SSD storage.
Pricing and availability are still to be announced. The current Summit E13 Flip costs $1,599.99, so it wouldn’t be surprising to see these two models somewhere above that range.
It might be time to add another supply issue to the list. Unizyx CEO Gordon Yang said the company is suffering the worst networking chip shortage in 30 years, DigiTimes today reported, and that it will likely have to raise its prices as a result.
Unizyx offers a wide array of networking products via its Zyxel and MitraStar DMS brands. Right now it’s seeing increased demand because several next-generation technologies—5G networks, Wi-Fi 6, and Wi-Fi 6E—are all becoming more popular.
Yang told DigiTimes that Unizyx can’t source enough networking chips to meet that demand. Even if it could, rising component and transportation costs would likely result in higher prices for Unizyx products anyway, according to the report.
That ought to sound familiar by now. Not just the networking chip shortage, although the global chip shortage is front-of-mind for many companies, but also the fact that transportation costs have risen as a result of the COVID-19 pandemic.
The coronavirus made it much harder for the shipping industry to ferry raw materials, components, and finished products around the world. That difficulty naturally resulted in higher transportation costs for… pretty much everyone.
That might change this year. MSI chairman Joseph Hsu said in March that he expected transportation costs to fall as the shipping industry recovers from the pandemic, and those savings could eventually be passed on to consumers.
But for now, the networking chip situation Yang described sounds a lot like many other parts of the industry, from CPUs and GPUs to mobile processors and SSD controllers, along with other components we simply haven’t been able to cover.
Unlike those issues, however, it’s not clear how much the networking chip shortage will affect consumers in the near term. It’s nearly impossible to find the best CPUs or the best graphics cards, and the best SSDs are likely to follow in the near future. Gauging the effect this could have on 5G wireless or Wi-Fi 6 and 6E rollouts is harder.