In recent years a number of PC makers have introduced small form-factor (SFF) and ultra-compact form-factor (UCFF) computers based on AMD’s latest accelerated processing units (APUs), but none of those systems are as tiny as Intel’s NUCs. Asus is one of the few manufacturers that actually offers AMD-powered UCFF machines as compact as NUCs with its Mini PC PN series, and it has just updated the line to feature AMD Ryzen 5000-series APUs.
Asus’s freshly introduced Mini PC PN51 packs AMD’s Ryzen 5000U-series mobile processors with up to eight Zen 3 cores as well as up to Radeon Vega 7 graphics. The APU can be paired with a maximum of 32 GB of DDR4-3200 memory. Storage is handled by an M.2-2280 SSD with up to 1 TB of capacity plus a 2.5-inch 1 TB 7200-RPM hard drive. Meanwhile, the system still measures 115×115×49 mm and has a 0.62-liter volume.
The diminutive size of the Asus Mini PC PN51 does not impact the choice of ports on offer. The desktop computer is equipped with an Intel Wi-Fi 6 + Bluetooth 5.0 module (or a Wi-Fi 5 + BT 4.2), a 2.5 GbE or a GbE connector, three USB 3.2 Gen 1 Type-A ports, two USB 3.2 Gen 2 Type-C receptacles (supporting DisplayPort mode), an HDMI output with CEC support, a 3-in-1 card reader, a configurable port (which can be used for Ethernet, DisplayPort, D-Sub or COM ports), and a combo audio jack.
Asus seems to position the Mini PC PN51 as a universal PC suitable for both home and office. The configurable I/O port that can add an Ethernet connector or a COM header is obviously aimed at corporate and business users, and the PC also has a TPM module on board. Meanwhile, the system retains an IR receiver (something many modern UCFF and SFF PCs have dropped, following Apple’s Mac Mini) and a Microsoft Cortana-compatible microphone array, both of which will be particularly beneficial for home users who plan to use the PN51 as an HTPC.
As far as power consumption and noise levels are concerned, the PN51 consumes as little as 9W at idle, produces 21.9 dBA of noise at idle, and 34.7 dBA at full load.
The Asus Mini PC PN51 will be available shortly. Pricing will depend on configuration as Asus plans to offer numerous models based on the AMD Ryzen 3 5300U, AMD Ryzen 5 5500U, and AMD Ryzen 7 5700U processors with various memory and storage configurations.
An unidentified Radeon Pro graphics card has emerged in China. The graphics card, which appeared on the Chiphell forums, could be one of AMD’s forthcoming Big Navi Radeon Pro offerings, but take this leak with a grain of salt.
Like AMD’s other Radeon Pro SKUs, the mysterious graphics card retains the dual-slot design and a shroud with the characteristic blue and silver theme. Given the silver stripe in the middle of the shroud, it should be a Radeon Pro W-series card as opposed to a Radeon Pro WX-series model. The cooler itself doesn’t resemble the existing designs on the Radeon Pro W5700 or W5500. Therefore, it’s safe for us to assume that the enigmatic graphics card may be a next-generation RDNA 2 Radeon Pro graphics card. The sticker clearly states that this particular unit is an engineering sample so the final design could be completely different.
The Chiphell forum user covered the serial number for obvious reasons, and the barcode is too small to read. However, AMD’s Radeon Pro graphics cards typically take after their mainstream counterparts. So far the chipmaker has released the Radeon RX 6900 XT, RX 6800 XT and RX 6700 XT, so the graphics card in question is possibly based on one of the three launched models. If we had to take a guess, the graphics card is probably the Radeon Pro W6800. If so, then it should leverage the Navi 21 silicon, which is the die that AMD utilizes for the current Radeon RX 6900 XT and Radeon RX 6800 (XT).
Another sticker on the back of the graphics card points to Samsung 16GB, which alludes to the memory chips that are inside the graphics card. Sadly, this tiny bit of information doesn’t help us decode the exact silicon that the graphics card is based around. The Radeon RX 6900 XT, RX 6800 XT and RX 6700 XT all employ Samsung 16 Gbps GDDR6 memory chips. There is also mention of “Full Secure TT GLXL A1 ASIC,” which we haven’t been able to decipher.
The user’s photographs don’t reveal the graphics card’s power connectors or display outputs. AMD has been known to mix things up so the graphics card may offer standard DisplayPort or mini-DisplayPort outputs. However, the photographs show that AMD has finally endowed the Radeon Pro graphics card with a backplate. Unfortunately for us, it also blocks the back of the PCB so we can’t dig deeper into the memory chips.
With memory prices continuing to plummet, now is a great time to be looking for memory upgrades as there is competition from both Intel and AMD, and the Red brand has thoroughly fixed the memory issues of generations past. No longer do users have to worry about memory compatibility or shopping for expensive AMD-branded kits. With 3200 MHz natively supported on the new Ryzen platform, options for enthusiasts have never been more open.
Team Group is a Taiwan-based manufacturer largely rooted in the manufacture of flash memory products. The brand is not as well known here in North America, but it has been gaining a lot of ground over the past few years with its gaming-centric “T-Force” product line. T-Force offers a whole roster of gaming memory with a focus on striking designs and high performance. While most of the T-Force lineup follows the current trend of large, bright RGB LED light bars, the brand has not forgotten the virtues of simplicity and efficiency.
The Team Group T-Force Dark Z is a callback to a simpler time, when bold colors were all the rage before the LED revolution. The Dark Z FPS comes in speeds of up to 4000 MHz and capacities of up to 16 GB (2x 8 GB), with a focus on Ryzen system stability. The stamped aluminium heat spreaders are anodized black.
The Team Group T-Force Dark Z FPS I have for testing today is a 16 GB (2x 8 GB) 4000 MHz kit at 16-18-18-38 and 1.45 V. This spec could be a great option for both Intel and AMD systems, given the potential for a 4000 MHz 1:1 Infinity Fabric overclock on new Ryzen processors. This kit is a twin to the Dark Z kits I reviewed previously. How does the Team Group T-Force Dark Z FPS differ from its Dark Z cousins? Let’s dig in and find out!
Alienware is releasing its first laptop with an AMD CPU since 2007. Its parent company, Dell, today announced the Alienware m15 Ryzen Edition R5, alongside a lower-end Dell G15 Ryzen Edition and a Dell G15 refresh with Intel chips.
The Alienware m15 Ryzen Edition R5 will use AMD’s Ryzen 5000 H-series chips paired with Nvidia GeForce RTX 30-series GPUs. Like the Asus ROG Zephyrus Duo 15 SE, it will go up to a Ryzen 9 5900HX, which might give it enough power for consideration on our list of the best gaming laptops (we’ll have to review it first, of course).
The last Alienware laptop to pair an AMD CPU with an Nvidia GPU was the Aurora mALX, last seen in 2007. That line went up to 19 inches and featured an AMD Turion 64 ML-44 and two Nvidia GeForce Go 7900 GTX cards in SLI.
The new m15 Ryzen Edition R5 will also be the first 15-inch Alienware laptop to move to DDR4-3200 memory, and that memory will be user-replaceable.
| Specs | Alienware m15 Ryzen Edition R5 | Dell G15 Ryzen Edition | Dell G15 |
| --- | --- | --- | --- |
| CPU | Up to AMD Ryzen 9 5900HX | Up to AMD Ryzen 7 5800H | Up to Intel Core i7-10870H |
| GPU | Up to Nvidia GeForce RTX 3070 | Up to Nvidia GeForce RTX 3060 | Up to Nvidia GeForce RTX 3060 |
| RAM | Up to 32GB DDR4-3200, user-replaceable | Up to 32GB DDR4-3200 | Up to 16GB DDR4-2933 |
| Storage | Up to 4TB (2x 2TB PCIe M.2 SSD) | Up to 2TB (PCIe NVMe M.2 SSD) | Up to 2TB (PCIe NVMe M.2 SSD) |
| Display | 15.6 inches: FHD at 165 Hz, QHD at 240 Hz or FHD at 360 Hz | 15.6 inches: FHD at 120 Hz or 165 Hz | 15.6 inches: FHD at 120 Hz or 165 Hz |
| Release Date | April 20 (United States), April 7 (China), May 4 (Global) | May 4 (Global), April 30 (China) | April 13 (Global), March 5 (China) |
| Starting Price | $2,229.99 | $899.99 | $899.99 |
The m15 Ryzen Edition will come only in the black “Dark Side of the Moon” paint job, as Dell put it, and feature a new two-toned finish, marking the first real change to Alienware’s “Legend” design language. Inside, the laptop uses what Alienware refers to as “Silky-Smooth High-Endurance” paint, which it claims resists stains and gives the machine a more premium feel.
Alienware’s AMD machine will also benefit from the option of the Cherry MX keyboard the company recently introduced on the m15 R4. Additionally, the m15 Ryzen Edition features Alienware’s proprietary cooling, dubbed “Cryo-Tech.”
There are three display options on the new Alienware: a 1080p (1920 x 1080) display at 360 Hz for esports aficionados, a 1440p (2560 x 1440) panel at 240 Hz, and a 1080p screen at 165 Hz.
The Alienware m15 Ryzen Edition R5 will start at $2,229.99 and go on sale first in China on April 7, then in the U.S. on April 20, with a global release on May 4.
New Dell G15 Gaming Laptops
The two new Dell G15 models use the redesigned chassis that the company introduced in China in March, with more aggressive angles and some new colors. The Dell G15 Ryzen Edition model will go up to an AMD Ryzen 7 5800H CPU and an Nvidia GeForce RTX 3060, while the Intel version will go up to a 10th Gen Intel Core i7-10870H (the Ryzen version uses the faster RAM, while the Intel version does not).
Both G15s will offer 15.6-inch displays with 1920 x 1080 resolution at either 120 Hz or 165 Hz.
The two Dell G15 laptops will start at $899.99, with the Intel version launching globally on April 13 and the AMD option hitting on May 4.
According to a leak reported by VideoCardz, AMD plans to release a special black edition of its Radeon RX 6800 XT at 6 am PST today.
AMD’s Radeon RX 6800 XT Midnight Black edition graphics board is based on the Navi 21 GPU featuring 4608 stream processors, 288 texture units (TUs), and 128 render output units (ROPs) that is paired with 16 GB of GDDR6 memory. As the name suggests, the Midnight Black edition is supposed to be all black, so expect it to look different from AMD’s usual Radeon RX 6800 XT reference design.
There is a catch about AMD’s Radeon RX 6800 XT Midnight Black though: it will be available only from AMD.com to members of the AMD Red Team community for a limited time and while supplies last. The product will be available starting from 6am PST/9am EST April 7, 2021.
“Based on community feedback and popular demand, we have created a select quantity of AMD Radeon RX 6800 XT Midnight Black graphics cards featuring the same great performance of the widely popular AMD Radeon RX 6800 XT,” a statement by AMD published by VideoCardz reads. “This is an exclusive advance notice to members of the AMD Red Team community and this offer has limited availability, while supplies last.”
At this point it is unclear whether the Radeon RX 6800 XT Midnight Black will cost $649, like other reference-design RX 6800 XT boards, or more, since it is an exclusive product. Furthermore, it is unknown how many of these graphics cards will be made available.
Intel last week debuted the 11th Gen Core “Rocket Lake” desktop processor family, and we had launch-day reviews of the Core i9-11900K flagship and the mid-range Core i5-11600K. Today we bring you the Core i5-11400F—probably the most interesting model in the whole stack. Often overlooked among Intel desktop processors over the past many generations, the Core i5-xx400 tier is also the most popular among gamers. Popular chips of this kind included the i5-8400, the i5-9400F, and the i5-10400F.
These chips offer the entire Core i5 feature set at prices below $200, albeit at lower clock speeds and with overclocking locked. Even within these, Intel introduced a sub-segment of chips that lack integrated graphics, denoted by an “F” in the model number, which shaves a further $15-20 off the price. The Core i5-11400F starts at just $160, which is an impressive value proposition for gamers who use graphics cards and don’t need the iGPU anyway.
The new “Rocket Lake” microarchitecture brings four key changes that make it the company’s first major innovation for client desktop in several years. First, Intel is introducing the new “Cypress Cove” CPU core, which promises an IPC gain of up to 19% over the previous generation. Next up is the new UHD 750 integrated graphics powered by the Intel Xe LP graphics architecture, promising up to a 50% performance uplift over the UHD 630 Gen9.5 iGPU of the previous generation. Thirdly, there’s a much-needed update to the processor’s I/O, including PCI-Express 4.0 for graphics and a CPU-attached NVMe slot; and lastly, an updated memory controller that allows much higher memory overclocking potential, thanks to the introduction of a Gear 2 mode.
The Core i5-11400F comes with a permanently disabled iGPU and a locked multiplier. Intel has still enabled support for memory frequencies of up to DDR4-3200, which is now possible even on the mid-tier H570 and B560 motherboard chipsets. The i5-11400F is a 6-core/12-thread processor clocked at 2.60 GHz, with up to a 4.40 GHz Turbo Boost frequency. Each of the processor’s six “Cypress Cove” CPU cores includes 512 KB of dedicated L2 cache, and the cores share 12 MB of L3 cache. Intel rates the processor’s TDP at 65 W, just like the other non-K SKUs, although it is possible to tweak these power limits—adjusting PL1 and PL2 is not considered “overclocking” by Intel, so it is not locked.
At $170, the Core i5-11400F has no real competitor from AMD. The Ryzen 5 3600 starts around $200, and the company didn’t bother (yet?) with cheaper Ryzen 5 SKUs based on “Zen 3”. In this review, we take the i5-11400F for a spin to show you if this is really all you need for a mid-priced contemporary gaming rig.
We present several data sets in our Core i5-11400F review: “Gear 1” and “Gear 2” show performance results for the processor operating at stock, with the default power limit setting active, respecting the 65 W TDP. Next up we have two runs with the power limit raised to maximum: “Max Power Limit / Gear 1” and “Max Power Limit / Gear 2”. Last but not least, signifying the maximum performance you can possibly achieve on this CPU, we have a “Max Power + Max BCLK” run, which operates at a 102.9 MHz BCLK (the maximum allowed by the processor) with Gear 1 DDR4-3733, the highest the memory controller will run.
Fujifilm has announced the newest addition to the Instax Mini line of instant cameras, the Mini 40. Much like the Instax Mini 11, which was released last March, the Mini 40 is an entry-level instant film camera with only two settings and two buttons. But what sets this camera apart is its vintage film camera look, complete with a plastic faux leather body and metallic-looking plastic rails. It’s a $100 toy camera that instantly creates printed memories — and of course, it’s a blast to play with.
Beyond the vintage camera look, the Mini 40 has the same mechanics as the $70 Mini 11. Pushing the large silver button under the lens compartment will pop the lens out and turn the camera on. Selfie mode is activated by pulling the outermost part of the lens out about half an inch more. And when you’re ready to pack it away, push the lens back into the camera to turn it off. The camera’s all-plastic housing makes it very light and easy to take anywhere.
There are two shooting modes on the Instax Mini 40: normal and selfie. Selfie mode adjusts the focal distance of the camera to allow subjects closer to the lens to be in focus. Beyond that, you have very little control. The flash will fire with every shutter press, and an Instax Mini film sheet will roll out to a mechanical hum. The results are unpredictable beyond knowing the printed photo will be slightly soft, with high contrast, and bound within the iconic Polaroid-style frame. The magic comes when you place the print on a table, forget about it, and are reminded of a great memory no less than a minute and a half later.
When using any Instax camera, I can’t help but notice the amount of plastic used in each one of the 10-photo film cartridges. Although there is a recycling logo on the cartridge, it is in Japanese, and I am unable to tell what number plastic it is made from. In the US, many municipalities have specific plastic numbers they can and cannot recycle, and without this number clearly labeled on these photo cartridges, I was unable to know if I would be able to recycle them here in Brooklyn, New York. I reached out to Fujifilm for more information and will update this article if I get it.
Play both informs my creative style and relieves me of stress, two things that are hard to always come by when using a camera as someone tasked with reviewing them. But the Mini 40, much like the Mini 11, has so few options, a very lightweight feel, and, at times, such unpredictable results that I can sit back and just have fun when using it. Any further thought about photographic theory while using the Mini 40 is excessive and rarely yielded me better results.
At $100, the Mini 40 is a tad more expensive than the almost-identical Mini 11. Besides its new vintage look, there would be little reason to spend the extra $30. But if looking the film photographer part is important, the Mini 40’s design will stand out. Once Fujifilm addresses the amount of plastic used in every one of the 10-shot film packs, I will really be able to have a carefree experience with this camera.
Intel’s long-delayed 10nm+ third-gen Xeon Scalable Ice Lake processors mark an important step forward for the company as it attempts to fend off intense competition from AMD’s 7nm EPYC Milan processors that top out at 64 cores, a key advantage over Intel’s existing 14nm Cascade Lake Refresh that tops out at 28 cores. The 40-core Xeon Platinum 8380 serves as the flagship model of Intel’s revamped lineup, which the company says features up to a 20% IPC uplift on the strength of the new Sunny Cove core architecture paired with the 10nm+ process.
Intel has already shipped over 200,000 units to its largest customers since the beginning of the year, but today marks the official public debut of its newest lineup of data center processors, so we get to share benchmarks. The Ice Lake chips drop into dual-socket Whitley server platforms, while the previously-announced Cooper Lake slots in for quad- and octo-socket servers. Intel has slashed Xeon pricing up to 60% to remain competitive with EPYC Rome, and with EPYC Milan now shipping, the company has reduced per-core pricing again with Ice Lake to remain competitive as it targets high-growth markets, like the cloud, enterprise, HPC, 5G, and the edge.
The new Xeon Scalable lineup comes with plenty of improvements, like support for up to eight memory channels running at a peak of DDR4-3200 with two DIMMs per channel, a notable improvement over Cascade Lake’s six channels at DDR4-2933 and a match for EPYC’s eight memory channels. Ice Lake also supports up to 6TB of combined DRAM and Optane per socket (up to 4TB of DRAM), with up to 4TB of Optane Persistent Memory DIMMs per socket (8TB in a dual-socket system). Unlike Intel’s past practices, Ice Lake also supports the full memory and Optane capacity on all models with no additional upcharge.
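The arithmetic behind those peak memory bandwidth gains is straightforward; the short sketch below is purely illustrative, using only the channel counts and data rates quoted above (8 bytes per 64-bit channel transfer), not measured throughput.

```python
# Illustrative peak DRAM bandwidth math (not a benchmark): channels x MT/s x 8 bytes per transfer.
def peak_bandwidth_gbs(channels: int, mt_per_s: int) -> float:
    """Theoretical peak per-socket DRAM bandwidth in GB/s."""
    return channels * mt_per_s * 8 / 1000

ice_lake = peak_bandwidth_gbs(8, 3200)   # 8 channels of DDR4-3200
cascade  = peak_bandwidth_gbs(6, 2933)   # 6 channels of DDR4-2933
epyc     = peak_bandwidth_gbs(8, 3200)   # EPYC: 8 channels of DDR4-3200

print(f"Ice Lake-SP: {ice_lake:.1f} GB/s, Cascade Lake: {cascade:.1f} GB/s, EPYC: {epyc:.1f} GB/s")
# Ice Lake-SP: 204.8 GB/s, Cascade Lake: 140.8 GB/s, EPYC: 204.8 GB/s
```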
Intel has also moved forward from 48 lanes of PCIe 3.0 connectivity to 64 lanes of PCIe 4.0 (128 lanes in dual-socket), improving I/O bandwidth and increasing connectivity to match AMD’s 128 available lanes in a dual-socket server.
Intel says that these additives, coupled with a range of new SoC-level optimizations, a focus on improved power management, along with support for new instructions, yield an average of 46% more performance in a wide range of data center workloads. Intel also claims a 50% uplift to latency-sensitive applications, like HammerDB, Java, MySQL, and WordPress, and up to 57% more performance in heavily-threaded workloads, like NAMD, signaling that the company could return to a competitive footing in what has become one of AMD’s strongholds — heavily threaded workloads. We’ll put that to the test shortly. First, let’s take a closer look at the lineup.
Intel Third-Gen Xeon Scalable Ice Lake Pricing and Specifications
We have quite the list of chips below, but we’ve actually filtered out the downstream Intel parts, focusing instead on the high-end ‘per-core scalable’ models. All told, the Ice Lake family spans 42 SKUs, with many of the lower-TDP (and thus lower-performance) models falling into the ‘scalable performance’ category.
Intel also has specialized SKUs targeted at maximum SGX enclave capacity, cloud-optimized for VMs, liquid-cooled, networking/NFV, media, long-life and thermal-friendly, and single-socket optimized parts, all of which you can find in the slide a bit further below.
| Processor | Cores / Threads | Base / Boost – All-Core (GHz) | L3 Cache (MB) | TDP (W) | 1K Unit Price / RCP |
| --- | --- | --- | --- | --- | --- |
| EPYC Milan 7763 | 64 / 128 | 2.45 / 3.5 | 256 | 280 | $7,890 |
| EPYC Rome 7742 | 64 / 128 | 2.25 / 3.4 | 256 | 225 | $6,950 |
| EPYC Milan 7663 | 56 / 112 | 2.0 / 3.5 | 256 | 240 | $6,366 |
| EPYC Milan 7643 | 48 / 96 | 2.3 / 3.6 | 256 | 225 | $4,995 |
| Xeon Platinum 8380 | 40 / 80 | 2.3 / 3.2 – 3.0 | 60 | 270 | $8,099 |
| Xeon Platinum 8368 | 38 / 76 | 2.4 / 3.4 – 3.2 | 57 | 270 | $6,302 |
| Xeon Platinum 8360Y | 36 / 72 | 2.4 / 3.5 – 3.1 | 54 | 250 | $4,702 |
| Xeon Platinum 8362 | 32 / 64 | 2.8 / 3.6 – 3.5 | 48 | 265 | $5,448 |
| EPYC Milan 75F3 | 32 / 64 | 2.95 / 4.0 | 256 | 280 | $4,860 |
| EPYC Milan 7453 | 28 / 56 | 2.75 / 3.45 | 64 | 225 | $1,570 |
| Xeon Gold 6348 | 28 / 56 | 2.6 / 3.5 – 3.4 | 42 | 235 | $3,072 |
| Xeon Platinum 8280 | 28 / 56 | 2.7 / 4.0 – 3.3 | 38.5 | 205 | $10,009 |
| Xeon Gold 6258R | 28 / 56 | 2.7 / 4.0 – 3.3 | 38.5 | 205 | $3,651 |
| EPYC Milan 74F3 | 24 / 48 | 3.2 / 4.0 | 256 | 240 | $2,900 |
| Xeon Gold 6342 | 24 / 48 | 2.8 / 3.5 – 3.3 | 36 | 230 | $2,529 |
| Xeon Gold 6248R | 24 / 48 | 3.0 / 4.0 | 35.75 | 205 | $2,700 |
| EPYC Milan 7443 | 24 / 48 | 2.85 / 4.0 | 128 | 200 | $2,010 |
| Xeon Gold 6354 | 18 / 36 | 3.0 / 3.6 – 3.6 | 39 | 205 | $2,445 |
| EPYC Milan 73F3 | 16 / 32 | 3.5 / 4.0 | 256 | 240 | $3,521 |
| Xeon Gold 6346 | 16 / 32 | 3.1 / 3.6 – 3.6 | 36 | 205 | $2,300 |
| Xeon Gold 6246R | 16 / 32 | 3.4 / 4.1 | 35.75 | 205 | $3,286 |
| EPYC Milan 7343 | 16 / 32 | 3.2 / 3.9 | 128 | 190 | $1,565 |
| Xeon Gold 5317 | 12 / 24 | 3.0 / 3.6 – 3.4 | 18 | 150 | $950 |
| Xeon Gold 6334 | 8 / 16 | 3.6 / 3.7 – 3.6 | 18 | 165 | $2,214 |
| EPYC Milan 72F3 | 8 / 16 | 3.7 / 4.1 | 256 | 180 | $2,468 |
| Xeon Gold 6250 | 8 / 16 | 3.9 / 4.5 | 35.75 | 185 | $3,400 |
At 40 cores, the Xeon Platinum 8380 reaches new heights over its predecessors that topped out at 28 cores, striking higher up in AMD’s Milan stack. The 8380 comes in at $202 per core, which is well above the $130-per-core price tag of the previous-gen flagship, the 28-core Xeon 6258R. However, it’s far less expensive than the $357-per-core pricing of the Xeon 8280, which had a $10,009 price tag before AMD’s EPYC upset Intel’s pricing model and forced drastic price reductions.
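Those per-core figures follow directly from the 1K-unit list prices and core counts in the table above; a quick illustrative sketch of the arithmetic:

```python
# Per-core pricing = 1K-unit list price / core count (figures taken from the table above).
chips = {
    "Xeon Platinum 8380 (Ice Lake)":     (8099, 40),
    "Xeon Gold 6258R (Cascade Lake)":    (3651, 28),
    "Xeon Platinum 8280 (Cascade Lake)": (10009, 28),
}

for name, (price_usd, cores) in chips.items():
    print(f"{name}: ${price_usd / cores:.0f} per core")
# Xeon Platinum 8380 (Ice Lake): $202 per core
# Xeon Gold 6258R (Cascade Lake): $130 per core
# Xeon Platinum 8280 (Cascade Lake): $357 per core
```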
With peak clock speeds of 3.2 GHz, the 8380 has a much lower peak clock rate than the previous-gen 28-core 6258R’s 4.0 GHz. Even dipping down to the new 28-core Ice Lake 6348 only finds peak clock speeds of 3.5 GHz, which still trails the Cascade Lake-era models. Intel obviously hopes to offset those reduced clock speeds with other refinements, like increased IPC and better power and thermal management.
On that note, Ice Lake tops out at 3.7 GHz on a single core, and you’ll have to step down to the eight-core model to access these clock rates. In contrast, Intel’s previous-gen eight-core 6250 had the highest clock rate, 4.5 GHz, of the Cascade Lake stack.
Surprisingly, AMD’s EPYC Milan models actually have higher peak frequencies than the Ice Lake chips at any given core count, but remember, AMD’s frequencies are only guaranteed on one physical core. In contrast, Intel specs its chips to deliver peak clock rates on any core. Both approaches have their merits, but AMD’s more refined boost tech paired with the 7nm TSMC process could pay dividends for lightly-threaded work. Conversely, Intel does have solid all-core clock rates that peak at 3.6 GHz, whereas AMD has more of a sliding scale that varies based on the workload, making it hard to suss out the winners by just examining the spec sheet.
Ice Lake’s TDPs stretch from 85W up to 270W. Surprisingly, despite the lowered base and boost clocks, Ice Lake’s TDPs have increased gen-on-gen for the 18-, 24- and 28-core models. Intel is obviously pushing higher on the TDP envelope to extract the most performance out of the socket possible, but it does have lower-power chip options available (listed in the graphic below).
AMD has a notable hole in its Milan stack at both the 12- and 18-core mark, a gap that Intel has filled with its Gold 5317 and 6354, respectively. Milan still holds the top of the hierarchy with 48-, 56- and 64-core models.
The Ice Lake Xeon chips drop into Whitley server platforms with Socket LGA4189-4/5. The FC-LGA14 package measures 77.5mm x 56.5mm and has an LGA interface with 4189 pins. The die itself is estimated to measure ~600mm2, though Intel no longer shares details about die sizes or transistor counts. In dual-socket servers, the chips communicate with each other via three UPI links that operate at 11.2 GT/s, an increase from 10.4 GT/s with Cascade Lake. The processor interfaces with the C620A chipset via four DMI 3.0 links, meaning it communicates at roughly PCIe 3.0 speeds.
The C620A chipset also doesn’t support PCIe 4.0; instead, it supports up to 20 lanes of PCIe 3.0, ten USB 3.0, and fourteen USB 2.0 ports, along with 14 ports of SATA 6 Gbps connectivity. Naturally, that’s offset by the 64 PCIe 4.0 lanes that come directly from the processor. As before, Intel offers versions of the chipset with its QuickAssist Technology (QAT), which boosts performance in cryptography and compression/decompression workloads.
Intel’s focus on its platform adjacencies business is a key part of its messaging around the Ice Lake launch — the company wants to drive home its message that coupling its processors with its own differentiated platform additives can expose additional benefits for Whitley server platforms.
The company introduced new PCIe 4.0 solutions, including the new 200 GbE Ethernet 800 Series adaptors that sport a PCIe 4.0 x16 connection and support RDMA iWARP and RoCEv2, and the Intel Optane SSD P5800X, a PCIe 4.0 SSD that uses ultra-fast 3D XPoint media to deliver stunning performance results compared to typical NAND-based storage solutions.
Intel also touts its PCIe 4.0 SSD D5-P5316, which uses the company’s 144-Layer QLC NAND for read-intensive workloads. These SSDs offer up to 7GBps of throughput and come in capacities stretching up to 15.36 TB in the U.2 form factor, and 30.72 TB in the E1.L ‘Ruler’ form factor.
Intel’s Optane Persistent Memory 200-series offers memory-addressable persistent memory in a DIMM form factor. This tech can radically boost memory capacity up to 4TB per socket in exchange for higher latencies that can be offset through software optimizations, thus yielding more performance in workloads that are sensitive to memory capacity.
The “Barlow Pass” Optane Persistent Memory 200 series DIMMs promise 30% more memory bandwidth than the previous-gen Apache Pass models. Capacity remains at a maximum of 512GB per DIMM with 128GB and 256GB available, and memory speeds remain at a maximum of DDR4-2666.
Intel has also expanded its portfolio of Market Ready and Select Solutions offerings, which are pre-configured servers for various workloads that are available in over 500 designs from Intel’s partners. These simple-to-deploy servers are designed for edge, network, and enterprise environments, but Intel has also seen uptake with cloud service providers like AWS, which uses these solutions for its ParallelCluster HPC service.
Like the benchmarks you’ll see in this review, the majority of performance measurements focus on raw throughput. However, in real-world environments, a combination of throughput and responsiveness is key to delivering on latency-sensitive SLAs, particularly in multi-tenant cloud environments. Factors such as loaded latency (i.e., the amount of performance delivered to any number of applications when all cores have varying load levels) are key to ensuring performance consistency across multiple users. Ensuring consistency is especially challenging with diverse workloads running on separate cores in multi-tenant environments.
Intel says it focused on performance consistency in these types of environments through a host of compute, I/O, and memory optimizations. The cores, naturally, benefit from increased IPC, new ISA instructions, and scaling up to higher core counts via the density advantages of 10nm, but Intel also beefed up its I/O subsystem to 64 lanes of PCIe 4.0, which improves both connectivity (up from 48 lanes) and throughput (up from PCIe 3.0).
Intel says it designed the caches, memory, and I/O, not to mention power levels, to deliver consistent performance during high utilization. As seen in slide 30, the company claims these alterations result in improved application performance and latency consistency by reducing long tail latencies to improve worst-case performance metrics, particularly for memory-bound and multi-tenant workloads.
Ice Lake brings a big realignment of the company’s die that provides cache, memory, and throughput advances. The coherent mesh interconnect returns with a similar arrangement of horizontal and vertical rings present on the Cascade Lake-SP lineup, but with a realignment of the various elements, like cores, UPI connections, and the eight DDR4 memory channels that are now split into four dual-channel controllers. Here we can see that Intel shuffled around the cores on the 28-core die and now has two execution cores on the bottom of the die clustered with I/O controllers (some I/O is now also at the bottom of the die).
Intel redesigned the chip to support two new sideband fabrics, one controlling power management and the other used for general-purpose management traffic. These provide telemetry data and control to the various IP blocks, like execution cores, memory controllers, and PCIe/UPI controllers.
The die includes a separate peer-to-peer (P2P) fabric to improve bandwidth between cores, and the I/O subsystem was also virtualized, which Intel says offers up to three times the fabric bandwidth compared to Cascade Lake. Intel also split one of the UPI blocks into two, creating a total of three UPI links, all with fine-grained power control of the UPI links. Now, courtesy of dedicated PLLs, all three UPIs can modulate clock frequencies independently based on load.
Densely packed AVX instructions augment performance in properly-tuned workloads at the expense of higher power consumption and thermal load. Intel’s Cascade Lake CPUs drop to lower frequencies (by ~600 to 900 MHz) during AVX-, AVX2-, and AVX-512-optimized workloads, which has hindered broader adoption of AVX code.
To reduce the impact, Intel has recharacterized its AVX power limits, thus yielding (unspecified) higher frequencies for AVX-512 and AVX-256 operations. This is done in an adaptive manner based on three different power levels for varying instruction types. This nearly eliminates the frequency delta between AVX and SSE for 256-heavy and 512-light operations, while 512-heavy operations have also seen significant uplift. All Ice Lake SKUs come with dual 512b FMAs, so this optimization will pay off across the entire stack.
Intel also added support for a host of new instructions to boost cryptography performance, like VPMADD52, GFNI, SHA-NI, Vector AES, and Vector Carry-Less multiply instructions, and a few new instructions to boost compression/decompression performance. All rely heavily upon AVX acceleration. The chips also support Intel’s Total Memory Encryption (TME) that offers DRAM encryption through AES-XTS 128-bit hardware-generated keys.
Intel also made plenty of impressive steps forward on the microarchitecture, with improvements to every level of the pipeline allowing Ice Lake’s 10nm Sunny Cove cores to deliver far higher IPC than 14nm Cascade Lake’s Skylake-derivative architecture. Key improvements include larger reorder, load, and store buffers, along with larger reservation stations. Intel increased the L1 data cache from 32 KiB, the capacity it has used in its chips for a decade, to 48 KiB, and moved from 8-way to 12-way associativity. The L2 cache moves from 4-way to 8-way associativity and is also larger, but the capacity is dependent upon each specific type of product — for Ice Lake server chips, it weighs in at 1.25 MB per core.
Intel expanded the micro-op cache (UOP) from 1.5K to 2.25K micro-ops, the second-level translation lookaside buffer (TLB) from 1536 entries to 2048, and moved from a four-wide allocation to five-wide to allow the in-order portion of the pipeline (front end) to feed the out-of-order (back end) portion faster. Additionally, Intel expanded the Out of Order (OoO) Window from 224 to 352. Intel also increased the number of execution units to handle ten operations per cycle (up from eight with Skylake) and focused on improving branch prediction accuracy and reducing latency under load conditions.
The store unit can now process two store data operations for every cycle (up from one), and the address generation units (AGU) also handle two loads and two stores each cycle. These improvements are necessary to match the increased bandwidth from the larger L1 data cache, which does two reads and two writes every cycle. Intel also tweaked the design of the sub-blocks in the execution units to enable data shuffles within the registers.
Intel also added support for its Software Guard Extensions (SGX) feature that debuted with the Xeon E lineup, and increased capacity to 1TB (maximum capacity varies by model). SGX creates secure enclaves in an encrypted portion of the memory that is exclusive to the code running in the enclave – no other process can access this area of memory.
Test Setup
We have a glaring hole in our test pool: Unfortunately, we do not have AMD’s recently-launched EPYC Milan processors available for this round of benchmarking, though we are working on securing samples and will add competitive benchmarks when available.
We do have test results for AMD’s frequency-optimized Rome 7Fx2 processors, which represent AMD’s performance with its previous-gen chips. As such, we should view this round of tests largely through the prism of Intel’s gen-on-gen Xeon performance improvement, and not as a measure of the current state of play in the server chip market.
We use the Xeon Platinum 8280 as a stand-in for the less expensive Xeon Gold 6258R. These two chips are identical and provide the same level of performance, with the difference boiling down to the more expensive 8280 coming with support for quad-socket servers, while the Xeon Gold 6258R tops out at dual-socket support.
Intel provided us with a 2U Server System S2W3SIL4Q Software Development Platform with the Coyote Pass server board for our testing. This system is designed primarily for validation purposes, so it doesn’t have too many noteworthy features. The system is heavily optimized for airflow, with the eight 2.5″ storage bays flanked by large empty bays that allow for plenty of air intake.
The system comes armed with dual redundant 2100W power supplies, a 7.68TB Intel SSD P5510, an 800GB Optane SSD P5800X, and an E810-CQDA2 200GbE NIC. We used the Intel SSD P5510 for our benchmarks and cranked up the fans for maximum performance in our benchmarks.
We tested with the pre-installed 16x 32GB DDR4-3200 DIMMs, but Intel also provided sixteen 128GB Optane Persistent Memory DIMMs for further testing. Due to time constraints, we haven’t yet had time to test the Optane DIMMs, but stay tuned for a few demo workloads in a future article. As we’re not entirely done with our testing, we don’t want to risk prying the 8380 out of the socket yet for pictures — the large sockets from both vendors are becoming more finicky after multiple chip reinstalls.
| Server | Memory | Tested Processors |
| --- | --- | --- |
| Intel S2W3SIL4Q | 16x 32GB SK hynix ECC DDR4-3200 | Intel Xeon Platinum 8380 |
| Supermicro AS-1023US-TR4 | 16x 32GB Samsung ECC DDR4-3200 | EPYC 7742, 7F72, 7F52 |
| Dell/EMC PowerEdge R460 | 12x 32GB SK hynix DDR4-2933 | Intel Xeon 8280, 6258R, 5220R, 6226R |
To assess performance with a range of different potential configurations, we used a Supermicro AS-1023US-TR4 server with three different EPYC Rome configurations. We outfitted this server with 16x 32GB Samsung ECC DDR4-3200 memory modules, ensuring the chips had all eight memory channels populated.
We used a Dell/EMC PowerEdge R460 server to test the Xeon processors in our test group. We equipped this server with 12x 32GB SK hynix DDR4-2933 modules, again ensuring that each Xeon chip’s six memory channels were populated.
We used the Phoronix Test Suite for benchmarking. This automated test suite simplifies running complex benchmarks in the Linux environment. The suite is maintained by Phoronix; it installs all needed dependencies, and its test library includes 450 benchmarks and 100 test suites (and counting). Phoronix also maintains openbenchmarking.org, an online repository for uploading test results into a centralized database.
We used Ubuntu 20.04 LTS to maintain compatibility with our existing test results, and we leverage the default Phoronix test configurations with the GCC compiler for all tests below. We also tested all platforms with all available security mitigations enabled.
Naturally, newer Linux kernels, software, and targeted optimizations can yield improvements for any of the tested processors, so take these results as generally indicative of performance in compute-intensive workloads, but not as representative of highly-tuned deployments.
Linux Kernel, GCC and LLVM Compilation Benchmarks
AMD’s EPYC Rome processors took the lead over the Cascade Lake Xeon chips at any given core count in these benchmarks, but here we can see that the 40-core Ice Lake Xeon 8380 has tremendous potential for these types of workloads. The dual 8380 processors complete the Linux compile benchmark, which builds the Linux kernel at default settings, in 20 seconds, edging out the 64-core EPYC Rome 7742 by one second. Naturally, we expect AMD’s Milan flagship, the 7763, to take the lead in this benchmark. Still, the implication is clear — Ice Lake-SP has significantly improved performance, thus reducing the delta between Xeon and competing chips.
We can also see a marked improvement in the LLVM compile, with the 8380 reducing the time to completion by ~20% over the prior-gen 8280.
Molecular Dynamics and Parallel Compute Benchmarks
NAMD is a parallel molecular dynamics code designed to scale well with additional compute resources; it scales up to 500,000 cores and is one of the premier benchmarks used to quantify performance with simulation code. The Xeon 8380s notch a 32% improvement in this benchmark, slightly beating the Rome chips.
Stockfish is a chess engine designed for the utmost in scalability across increased core counts — it can scale up to 512 threads. Here we can see that this massively parallel code scales well with EPYC’s leading core counts. The EPYC Rome 7742 retains its leading position at the top of the chart, but the 8380 offers more than twice the performance of the previous-gen Cascade Lake flagship.
We see similarly impressive performance uplifts in other molecular dynamics workloads, like the Gromacs water benchmark, which simulates Newtonian equations of motion with hundreds of millions of particles. Here Intel’s dual 8380s take the lead over the EPYC Rome 7742 while pushing out nearly twice the performance of the 28-core 8280.
We see a similarly impressive generational improvement in the LAMMPS molecular dynamics workload, too. Again, AMD’s Milan will likely be faster than the 7742 in this workload, so it isn’t a given that the 8380 has taken the definitive lead over AMD’s current-gen chips, though it has tremendously improved Intel’s competitive positioning.
The NAS Parallel Benchmarks (NPB) suite characterizes Computational Fluid Dynamics (CFD) applications, and NASA designed it to measure performance from smaller CFD applications up to “embarrassingly parallel” operations. The BT.C test measures Block Tri-Diagonal solver performance, while the LU.C test measures performance with a lower-upper Gauss-Seidel solver. The EPYC Rome 7742 still dominates in this workload, showing that Ice Lake’s broad spate of generational improvements still doesn’t allow Intel to take the lead in all workloads.
Rendering Benchmarks
Turning to more standard fare, provided you can keep the cores fed with data, most modern rendering applications also take full advantage of the compute resources. Given the well-known strengths of EPYC’s core-heavy approach, it isn’t surprising to see the 64-core EPYC 7742 processors retain the lead in the C-Ray benchmark, and that applies to most of the Blender benchmarks, too.
Encoding Benchmarks
Encoders tend to present a different type of challenge: As we can see with the VP9 libvpx benchmark, they often don’t scale well with increased core counts. Instead, they often benefit from per-core performance and other factors, like cache capacity. AMD’s frequency-optimized 7F52 retains its leading position in this benchmark, but Ice Lake again reduces the performance delta.
Newer software encoders, like the Intel-Netflix designed SVT-AV1, are designed to leverage multi-threading more fully to extract faster performance for live encoding/transcoding video applications. EPYC Rome’s increased core counts paired with its strong per-core performance beat Cascade Lake in this benchmark handily, but the step up to forty 10nm+ cores propels Ice Lake to the top of the charts.
Compression, Security and Python Benchmarks
The Pybench and Numpy benchmarks are used as a general litmus test of Python performance, and as we can see, these tests typically don’t scale linearly with increased core counts, instead prizing per-core performance. Despite its somewhat surprisingly low clock rates, the 8380 takes the win in the Pybench benchmark and improves Xeon’s standing in Numpy as it takes a close second to the 7F52.
Compression workloads also come in many flavors. The 7-Zip (p7zip) benchmark exposes the heights of theoretical compression performance because it runs directly from main memory, allowing both memory throughput and core counts to heavily impact performance. As we can see, this benefits the core-heavy chips, which easily dispatch the chips with fewer cores. The Xeon 8380 takes the lead in this test, but other independent benchmarks show that AMD’s EPYC Milan would lead this chart.
In contrast, the gzip benchmark, which compresses two copies of the Linux 4.13 kernel source tree, responds well to speedy clock rates, giving the 16-core 7F52 the lead. Here we see that 8380 is slightly slower than the previous-gen 8280, which is likely at least partially attributable to the 8380’s much lower clock rate.
The open-source OpenSSL toolkit uses SSL and TLS protocols to measure RSA 4096-bit performance. As we can see, this test favors the EPYC processors due to its parallelized nature, but the 8380 has again made big strides on the strength of its higher core count. Offloading this type of workload to dedicated accelerators is becoming more common, and Intel also offers its QAT acceleration built into chipsets for environments with heavy requirements.
Conclusion
Admittedly, due to our lack of EPYC Milan samples, our testing today of the Xeon Platinum 8380 is more of a demonstration of Intel’s gen-on-gen performance improvements rather than a holistic view of the current competitive landscape. We’re working to secure a dual-socket Milan server and will update when one lands in our lab.
Overall, Intel’s third-gen Xeon Scalable is a solid step forward for the Xeon franchise. AMD has steadily chewed away data center market share from Intel on the strength of its EPYC processors that have traditionally beaten Intel’s flagships by massive margins in heavily-threaded workloads. As our testing, and testing from other outlets shows, Ice Lake drastically reduces the massive performance deltas between the Xeon and EPYC families, particularly in heavily threaded workloads, placing Intel on a more competitive footing as it faces an unprecedented challenge from AMD.
AMD will still hold the absolute performance crown in some workloads with Milan, but despite EPYC Rome’s commanding lead in the past, progress hasn’t been as swift as some projected. Much of that boils down to the staunchly risk-averse customers in the enterprise and data center; these customers prize a mix of factors beyond the standard measuring stick of performance and price-to-performance ratios, instead focusing on areas like compatibility, security, supply predictability, reliability, serviceability, engineering support, and deeply-integrated OEM-validated platforms.
AMD has improved drastically in these areas and now has a full roster of systems available from OEMs, along with broadening uptake with CSPs and hyperscalers. However, Intel benefits from its incumbency and all the advantages that entails, like wide software optimization capabilities and platform adjacencies like networking, FPGAs, and Optane memory.
Although Ice Lake doesn’t lead in all metrics, it does improve the company’s positioning as it moves forward toward the launch of its Sapphire Rapids processors that are slated to arrive later this year to challenge AMD’s core-heavy models. Intel still holds the advantage in several criteria that appeal to the broader enterprise market, like pre-configured Select Solutions and engineering support. That, coupled with drastic price reductions, has allowed Intel to reduce the impact of a fiercely-competitive adversary. We can expect the company to redouble those efforts as Ice Lake rolls out to the more general server market.
We have with us the ASRock Radeon RX 6700 XT Phantom Gaming D 12 GB OC graphics card. ASRock, the latest entrant to the custom-design graphics card space, debuted with the RX 5000 series and has since established itself as a serious design house for premium custom Radeon RX graphics cards. The Phantom Gaming D is the company’s top RX 6700 XT product so far. A successor to the Radeon RX 5700 XT, which stirred things up in the sub-$500 segment last year, the new RX 6700 XT is based on the RDNA2 graphics architecture and provides full DirectX 12 Ultimate support, including real-time raytracing. It’s being offered as a maxed-out 1440p gaming product, and AMD claims competitiveness with not only the GeForce RTX 3060 Ti, but also the RTX 3070.
The new RDNA2 graphics architecture powers not just AMD’s Radeon RX 6000 series, but also next-generation game consoles. This makes it easier for game developers to optimize for the RX 6000. AMD’s approach to real-time raytracing involves Ray Accelerators, special hardware for ray intersection computation, while compute shaders are used for almost every other raytracing aspect, including denoising. To achieve this, AMD has had to significantly increase the SIMD performance of its new generation GPUs through not just higher IPC for the new RDNA2 compute units, but also significantly higher engine clocks. A side-effect of this approach is that these GPUs offer high levels of performance on the majority of conventional raster 3D games.
With the RX 6700 XT, AMD has increased the standard memory amount for this segment to 12 GB, up from 8 GB on the RX 5700 XT, but the memory bus is narrower, at 192-bit. AMD has attempted to shore up the memory bus width deficit by using the fastest JEDEC-standard 16 Gbps memory chips and Infinity Cache, a fast 96 MB on-die cache that speeds up the memory sub-system.
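For a sense of what the narrower bus means before Infinity Cache enters the picture, raw GDDR6 bandwidth is simply bus width times per-pin data rate. The sketch below is purely illustrative; the RX 5700 XT figures (256-bit at 14 Gbps) come from that card’s public specifications rather than from this review.

```python
# Raw GDDR6 bandwidth in GB/s = (bus width in bits / 8) * per-pin data rate in Gbps.
def gddr6_bandwidth_gbs(bus_width_bits: int, gbps_per_pin: float) -> float:
    return bus_width_bits / 8 * gbps_per_pin

rx_6700_xt = gddr6_bandwidth_gbs(192, 16)  # 384 GB/s, backed by 96 MB of Infinity Cache
rx_5700_xt = gddr6_bandwidth_gbs(256, 14)  # 448 GB/s, no Infinity Cache

print(f"RX 6700 XT: {rx_6700_xt:.0f} GB/s, RX 5700 XT: {rx_5700_xt:.0f} GB/s")
```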
The ASRock RX 6700 XT Phantom Gaming D comes with a powerful triple-slot, triple-fan cooling solution that doesn’t shy away from copious amounts of RGB LED bling. It also has an ARGB header you may use to synchronize the rest of your lighting to the card. ASRock is also packing a factory overclock of up to 2548 MHz Game Clock (vs. 2424 MHz reference).
Sennheiser’s latest premium wired earbuds certainly have their strengths, but class-leading insight isn’t one of them
For
Excellent build and comfort
Impressive bass depth
Lush, full midrange
Against
Lacks class-leading subtlety
Rhythmically and dynamically outclassed
No in-line remote
In a headphones market that revels in innovative features such as true wirelessness, active noise-cancelling and voice control, the spec sheet for a pair of wired earbuds can seem rather prosaic – like a Henry vacuum cleaner in a field of cordless Dysons.
But such features have little to do with sound quality, and in the context of performance-per-pound value, wired models usually have the advantage over their wireless peers.
The Sennheiser IE 300 are the latest wired earbuds from the German brand, and despite not ticking some of the boxes in terms of popular features, there’s plenty to talk about where technical design and performance is concerned.
Build and comfort
Sennheiser says every component inside these lightweight (4g without cable) in-ear headphones has been carefully tuned to produce optimal performance, from a low resonance membrane foil to the resonator chamber, which is designed to compensate for the masking effects of trapped air in the ear canal with the buds in use.
Sennheiser IE 300 tech specs
Finishes x1
Cable length 1.25m
Eartips 3x memory foam, 3x silicone
Weight 4g (without cable)
The company has refined the 7mm ‘Extra Wide Band’ drive unit also found in the IE 800, as well as the chamber-within-a-chamber structure that helps manage airflow in an effort to produce a defined bass and equally satisfying midrange.
Its pro audio-inspired design is most obvious in the quality of its detachable cable; the inclusion of memory foam eartips in addition to silicone (three sizes of each are provided); and the availability of optional balanced (2.5mm or 4.4mm) cables. Everything from the earbud casing’s glitter-speckled finish to the thick 1.25m cable, which is reinforced with para-aramid fibre for durability, is an illustration of the IE 300’s build quality, too.
While the compact, hard-shell carry case doesn’t exude quite the same luxury as the earbuds, it is pocket-friendly and hardy, and undoubtedly a practical accessory. The same could be said for an in-line remote, the absence of which is a small negative mark here.
The earbud housings are practically compact too, and while a little fiddling is required to bend the thicker top part of the cable around your ears, when they’re in, they stay in – and you’d be more likely to remove them from your ears due to interruption from a natural disaster than because of any discomfort. When it comes to physical build and ergonomic comfort, Sennheiser’s formidable reputation has been maintained here.
Sound
What’s less certain with Sennheiser earbuds is the sound character we’re going to be met with. The company has nailed class-leading insight and tonal neutrality in countless models, but has also shown an inclination towards richer, fuller and not so universally appealing sonic signatures. Unfortunately with the IE 300, it’s more a case of the latter.
The IE 300 make a great first impression, producing the depth of bass, frequency-wide solidity and general scale of sound that is hard not to be impressed by from such tiny units. We play Mogwai’s Ceiling Granny and there’s meat and grubbiness behind the oppressive guitar lines, not to mention hefty sting to the blistering electric squeals. The Sennheiser’s low-end weight, which isn’t particularly agile but not exactly ponderous either, can really anchor a song that warrants it in ways earbuds rarely do, such as LNZNDRF’s Barton Springs At Dusk.
The lows are rather overstated, though, stealing the spotlight like a lead singer in a band and relegating the treble to the role of the drummer at the back of the stage. The varied instrumental that rides above the bedrock of deep, uncompromising bass in the 20-minute track – the keys, percussion and series of kaleidoscopic synths – is somewhat muted in comparison. Richness at the low-end comes at the expense of some midrange crispness, shedding excitement as a consequence.
Meanwhile, the class leaders at this level, the Award-winning Shure Aonic 3 (£179, $199), set a better example, trading the Sennheiser’s impressive richness for a more agreeable neutral tonality. They may not match the IE 300’s bass depth or the midrange solidity – these Sennheisers sound wonderfully full and smooth with voices – but they offer greater clarity, crispness, agility and, in turn, a snappier presentation.
Play Cassandra Jenkins’ vocal-led Michelangelo, and while the Sennheisers showboat with a bold soundfield filled with lush-sounding acoustic guitars and warm, solid vocals, the intricacies of Jenkins’ delivery and the fine instrumental textures are overlooked in comparison.
The Shures are notably more rhythmically and dynamically proficient. Playing Instrumental, the first track of Black Country, New Road’s debut album, they offer the more compelling listen; cymbals are convincingly cutting as opposed to softened, there’s greater texture to the oboe and trombone melodies, and the drum rhythm is tighter and faster, propelling the track’s rightful frenetic energy.
They focus on every musical strand, even as new ones come in. It would be unfair to call the Sennheisers boring or vague, but they are notably less astute and more subdued than their more talented, more affordable rival.
Verdict
Without having to concern itself with contemporary features, Sennheiser has been able to prioritise the fundamentals for the IE 300 – build and sound quality. It nails the former, and there’s plenty to like in the latter: they’re bolder and more authoritative-sounding than most, with majestic bass depth and wonderfully rich vocals among the highlights.
However, they aren’t convincing all-rounders, and just fall short of the transparency and entertainment of the more affordable, class-leading competition at this level.
Asus is apparently preparing what could be the ultimate AMD gaming laptop. According to a CPU-Z posting, the new iteration of the ROG Strix G15 will arrive with a lethal combination of AMD’s Ryzen 9 5900HX (Cezanne) processor and Radeon RX 6800M graphics cards.
The unreleased laptop (via Tum_Apisak) sports the G513QY model number. Unless Asus is working on a new gaming laptop, the G513 corresponds to the brand’s ROG Strix G15 G513, which was previously only available with discrete graphics options from Nvidia.
For starters, the G513QY will be based on the flagship Ryzen 9 5900HX processor; Asus already offers the ROG Strix G15 with that chip. The octa-core Zen 3 part features a 3.3 GHz base clock and a 4.6 GHz boost clock. The Ryzen 9 5900HX also supports overclocking and a cTDP of up to 54W, so there is enough wiggle room to push it further.
In terms of discrete graphics, the G513QY will rely on the forthcoming Radeon RX 6800M, which is the mobile version of the Radeon RX 6800. AMD hasn’t officially announced the mobile RDNA 2 (Big Navi) graphics cards yet, so the specifications are unknown. However, the CPU-Z submission points to the Radeon RX 6800M having up to 12GB of GDDR6 memory, only 4GB less than the desktop counterpart.
Having an AMD processor and graphics card in the same device obviously brings benefits. The fusion will enable the G513QY to leverage AMD’s SmartShift technology that balances the power between the processor and graphics card according to the workload. AMD touts a performance boost of up to 14% with SmartShift enabled. The technology debuted with Dell’s G5 15 SE, so it’s good to see other vendors going all-in with AMD.
The Radeon RX 6800M will logically be the bell cow of the mobile RDNA 2 army. Assuming that AMD will replace every mobile RDNA 1 part with an equivalent, we could be seeing up to three more SKUs, such as the Radeon RX 6700M, RX 6600M and maybe even a RX 6500M. AMD hasn’t given any clues when it will unleash its mobile RDNA 2 offerings though.
Apple’s computers have been notorious for their lack of upgradeability, particularly since the introduction of Apple’s M1 chip, which integrates memory directly into the package. But as spotted on Twitter, boosting the memory and storage of an M1 Mac may still be possible, given money, skill, time and some real determination: the trick is to remove the existing DRAM and NAND chips and solder on more capacious versions, much like enthusiasts have done when adding extra VRAM to graphics cards.
With the ongoing transition to custom Apple system-on-chips (SoCs), it will get even harder to upgrade Apple PCs. But one Twitter user points to “maintenance engineers” that did just that.
By any definition, such modifications void the warranty, so we strongly recommend against attempting them on your own: it obviously takes a certain level of skill, and patience, to pull off this type of modification.
Armed with a soldering station (a consumer-grade unit is not that expensive at around $60), DRAM chips and NAND flash chips (which are close to impossible to buy at the consumer level), the engineers reportedly upgraded the Apple M1-based Mac Mini with 8GB of RAM and 256GB of storage to 16GB and 1TB, respectively, by de-soldering the existing components and adding more capacious chips. According to the post, no firmware modifications were necessary.
“Chinese maintenance engineers can already expand the capacity of the Apple M1. The 8GB memory has been expanded to 16GB, and the 256GB hard drive has been expanded to 1TB,” the tweet reads (pic.twitter.com/2Fyf8AZfJR, April 4, 2021).
Using their soldering station, the engineers removed the 8GB of LPDDR4X memory and installed chips with 16GB of total capacity. Removing the NAND chips from the motherboard using the same method was not a problem, and the chips were then replaced with higher-capacity devices.
The details behind the effort are slight, though the (very) roughly translated Chinese text in one of the images reads, “The new Mac M1 whole series the first time 256 and upgrade to 1TB, memory is 8L 16G, perfect! This is a revolutionary period the companies are being reshuffled. In the past, if you persevered, there was hope, but today, if you keep on the original way, a lot of them will disappear unless we change our way of thinking. We have to evolve, update it, and start again. Victory belongs to those who adapt; we have to learn to make ourselves more valuable.”
Of course, Apple is not the only PC maker to opt for SoCs and soldered components. Both Intel and AMD offer PC makers SoCs, and Intel even offers reference designs for building soldered down PC platforms.
How much power does your graphics card use? It’s an important question, and while the performance we show in our GPU benchmarks hierarchy is useful, one of the true measures of a GPU is how efficient it is. To determine GPU power efficiency, we need to know both performance and power use. Measuring performance is relatively easy, but measuring power can be complex. We’re here to press the reset button on GPU power measurements and do things the right way.
There are various ways to determine power use, with varying levels of difficulty and accuracy. The easiest approach is via software like GPU-Z, which will tell you what the hardware reports. Alternatively, you can measure power at the outlet using something like a Kill-A-Watt power meter, but that only captures total system power, including PSU inefficiencies. The best and most accurate means of measuring the power use of a graphics card is to measure power draw in between the power supply (PSU) and the card, but it requires a lot more work.
We’ve used GPU-Z in the past, but it had some clear inaccuracies. Depending on the GPU, it can be off by anywhere from a few watts to potentially 50W or more. Thankfully, the latest generation AMD Big Navi and Nvidia Ampere GPUs tend to report relatively accurate data, but we’re doing things the right way. And by “right way,” we mean measuring in-line power consumption using hardware devices. Specifically, we’re using Powenetics software in combination with various monitors from TinkerForge. You can read our Powenetics project overview for additional details.
Tom’s Hardware GPU Testbed
After assembling the necessary bits and pieces — some soldering required — the testing process is relatively straightforward. Plug in a graphics card and the power leads, boot the PC, and run some tests that put a load on the GPU while logging power use.
We’ve done that with all the legacy GPUs we have from the past six years or so, and we do the same for every new GPU launch. We’ve updated this article with the latest data from the GeForce RTX 3090, RTX 3080, RTX 3070, RTX 3060 Ti, and RTX 3060 12GB from Nvidia; and the Radeon RX 6900 XT, RX 6800 XT, RX 6800, and RX 6700 XT from AMD. We use the reference models whenever possible, which means only the EVGA RTX 3060 is a custom card.
If you want to see power use and other metrics for custom cards, all of our graphics card reviews include power testing. So for example, the RX 6800 XT roundup shows that many custom cards use about 40W more power than the reference designs, thanks to factory overclocks.
Test Setup
We’re using our standard graphics card testbed for these power measurements, and it’s what we’ll use on graphics card reviews. It consists of an MSI MEG Z390 Ace motherboard, Intel Core i9-9900K CPU, NZXT Z73 cooler, 32GB Corsair DDR4-3200 RAM, a fast M.2 SSD, and the other various bits and pieces you see to the right. This is an open test bed, because the Powenetics equipment essentially requires one.
There’s a PCIe x16 riser card (which is where the soldering came into play) that slots into the motherboard, and then the graphics cards slot into that. This is how we accurately capture actual PCIe slot power draw, from both the 12V and 3.3V rails. There are also 12V kits measuring power draw for each of the PCIe Graphics (PEG) power connectors — we cut the PEG power harnesses in half and run the cables through the power blocks. RIP, PSU cable.
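To make the arithmetic concrete, total board power is simply voltage times current summed over every monitored rail: the slot’s 12V and 3.3V rails plus each PEG connector. Below is a minimal Python sketch of that calculation over a logged run; the rail names and CSV column layout are illustrative assumptions, not the actual Powenetics output format.

```python
# Illustrative sketch, not the actual Powenetics tooling: sum per-rail power
# (volts * amps) for every sample in a logged run, then average it.
import csv
from statistics import mean

RAILS = ["slot_12v", "slot_3v3", "peg1_12v", "peg2_12v"]  # assumed rail names

def board_power(sample: dict) -> float:
    """Total card power in watts for one sample: V * I summed over all rails."""
    return sum(float(sample[f"{rail}_volt"]) * float(sample[f"{rail}_amp"])
               for rail in RAILS)

def average_power(csv_path: str) -> float:
    """Average board power across every sample in the log file."""
    with open(csv_path, newline="") as f:
        return mean(board_power(row) for row in csv.DictReader(f))

if __name__ == "__main__":
    print(f"Average board power: {average_power('metro_run.csv'):.1f} W")
```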
Powenetics equipment in hand, we set about testing and retesting all of the current and previous generation GPUs we could get our hands on. You can see the full list of everything we’ve tested in the list to the right.
From AMD, all of the latest generation Big Navi / RDNA2 GPUs use reference designs, as do the previous gen RX 5700 XT, RX 5700 cards, Radeon VII, Vega 64 and Vega 56. AMD doesn’t do ‘reference’ models on most other GPUs, so we’ve used third party designs to fill in the blanks.
For Nvidia, all of the Ampere GPUs are Founders Edition models, except for the EVGA RTX 3060 card. With Turing, everything from the RTX 2060 and above is a Founders Edition card — which includes the 90 MHz overclock and slightly higher TDP on the non-Super models — while the other Turing cards are all AIB partner cards. Older GTX 10-series and GTX 900-series cards use reference designs as well, except where indicated.
Note that all of the cards are running ‘factory stock,’ meaning there’s no manual overclocking or undervolting involved. Yes, the various cards might run better with some tuning and tweaking, but this is the way the cards will behave if you just pull them out of their box and install them in your PC. (RX Vega cards in particular benefit from tuning, in our experience.)
Our testing uses the Metro Exodus benchmark looped five times at 1440p ultra (except on cards with 4GB or less VRAM, where we loop 1080p ultra — that uses a bit more power). We also run Furmark for ten minutes. These are both demanding tests, and Furmark can push some GPUs beyond their normal limits, though the latest models from AMD and Nvidia both tend to cope with it just fine. We’re only focusing on power draw for this article, as the temperature, fan speed, and GPU clock results continue to use GPU-Z to gather that data.
GPU Power Use While Gaming: Metro Exodus
Due to the number of cards being tested, we have multiple charts. The average power use charts show average power consumption during the approximately 10 minute long test. These charts do not include the time in between test runs, where power use dips for about 9 seconds, so it’s a realistic view of the sort of power use you’ll see when playing a game for hours on end.
Besides the bar chart, we have separate line charts segregated into groups of up to 12 GPUs, and we’ve grouped cards from similar generations into each chart. These show real-time power draw over the course of the benchmark using data from Powenetics. The 12 GPUs per chart limit is to try and keep the charts mostly legible, and the division of what GPU goes on which chart is somewhat arbitrary.
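As a rough sketch of how those idle gaps can be kept out of the average, a simple load threshold works: drop any sample below a floor value before averaging. The 60W floor below is a placeholder assumption, not the exact cutoff used for these charts.

```python
# Hypothetical helper: average only the samples where the card is under load,
# excluding the brief idle dips between benchmark loops.
def gaming_average(power_samples, floor_watts=60.0):
    loaded = [p for p in power_samples if p >= floor_watts]
    return sum(loaded) / len(loaded) if loaded else 0.0

# Example: the low readings between runs are ignored.
print(gaming_average([310.2, 305.8, 42.0, 41.5, 298.4]))  # ~304.8 W
```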
Kicking things off with the latest generation GPUs, the overall power use is relatively similar. The 3090 and 3080 use the most power (for the reference models), followed by the three Navi 21 cards. The RTX 3070, RTX 3060 Ti, and RX 6700 XT are all pretty close, with the RTX 3060 dropping power use by around 35W. AMD does lead Nvidia in pure power use when looking at the RX 6800 XT and RX 6900 XT compared to the RTX 3080 and RTX 3090, but then Nvidia’s GPUs are a bit faster so it mostly equals out.
Step back one generation to the Turing GPUs and Navi 1x, and Nvidia had far more GPU models available than AMD. There were 15 Turing variants — six GTX 16-series and nine RTX 20-series — while AMD only had five RX 5000-series GPUs. Comparing similar performance levels, Nvidia Turing generally comes in ahead of AMD, despite using a 12nm process compared to 7nm. That’s particularly true when looking at the GTX 1660 Super and below versus the RX 5500 XT cards, though the RTX models are closer to their AMD counterparts (while offering extra features).
It’s pretty obvious how far AMD fell behind Nvidia prior to the Navi generation GPUs. The various Vega and Polaris AMD cards use significantly more power than their Nvidia counterparts. RX Vega 64 was particularly egregious, with the reference card using nearly 300W. If you’re still running an older generation AMD card, this is one good reason to upgrade. The same is true of the legacy cards, though we’re missing many models from these generations of GPU. Perhaps the less said, the better, so let’s move on.
GPU Power with FurMark
FurMark, as we’ve frequently pointed out, is basically a worst-case scenario for power use. Some of the GPUs tend to be more aggressive about throttling with FurMark, while others go hog wild and dramatically exceed official TDPs. Few if any games can tax a GPU quite like FurMark, though things like cryptocurrency mining can come close with some algorithms (but not Ethereum’s Ethash, which tends to be limited by memory bandwidth). The chart setup is the same as above, with average power use charts followed by detailed line charts.
The latest Ampere and RDNA2 GPUs are relatively evenly matched, with all of the cards using a bit more power in FurMark than in Metro Exodus. One thing we’re not showing here is average GPU clocks, which tend to be far lower than in gaming scenarios — you can see that data, along with fan speeds and temperatures, in our graphics card reviews.
The Navi / RDNA1 and Turing GPUs start to separate a bit more, particularly in the budget and midrange segments. AMD didn’t really have anything to compete against Nvidia’s top GPUs, as the RX 5700 XT only matched the RTX 2070 Super at best. Note the gap in power use between the RTX 2060 and RX 5600 XT, though. In gaming, the two GPUs were pretty similar, but in FurMark the AMD chip uses nearly 30W more power. Actually, the 5600 XT used more power than the RX 5700, but that’s probably because the Sapphire Pulse we used for testing has a modest factory overclock. The RX 5500 XT cards also draw more power than any of the GTX 16-series cards.
With the Pascal, Polaris, and Vega GPUs, AMD’s GPUs fall toward the bottom. The Vega 64 and Radeon VII both use nearly 300W, and considering the Vega 64 competes with the GTX 1080 in performance, that’s pretty awful. The RX 570 4GB (an MSI Gaming X model) actually exceeds the official power spec for an 8-pin PEG connector with FurMark, pulling nearly 180W. That’s thankfully the only GPU to go above spec, for the PEG connector(s) or the PCIe slot, but it does illustrate just how bad things can get in a worst-case workload.
The legacy charts are even worse for AMD. The R9 Fury X and R9 390 go well over 300W with FurMark, though perhaps that’s more of an issue with the hardware not throttling to stay within spec. Anyway, it’s great to see that AMD no longer trails Nvidia as badly as it did five or six years ago!
Analyzing GPU Power Use and Efficiency
It’s worth noting that we’re not showing or discussing GPU clocks, fan speeds or GPU temperatures in this article. Power, performance, temperature and fan speed are all interrelated, so a higher fan speed can drop temperatures and allow for higher performance and power consumption. Alternatively, a card can drop GPU clocks in order to reduce power consumption and temperature. We dig into this in our individual GPU and graphics card reviews, but we just wanted to focus on the power charts here. If you see discrepancies between previous and future GPU reviews, this is why.
The good news is that, using these testing procedures, we can properly measure the real graphics card power use and not be left to the whims of the various companies when it comes to power information. It’s not that power is the most important metric when looking at graphics cards, but if other aspects like performance, features and price are the same, getting the card that uses less power is a good idea. Now bring on the new GPUs!
Here’s the final high-level overview of our GPU power testing, showing relative efficiency in terms of performance per watt. The power data listed is a weighted geometric mean of the Metro Exodus and FurMark power consumption, while the FPS comes from our GPU benchmarks hierarchy and uses the geometric mean of nine games tested at six different settings and resolution combinations (so 54 results, summarized into a single fps score).
This table combines the performance data for all of the tested GPUs with the power use data discussed above, sorts by performance per watt, and then scales all of the scores relative to the most efficient GPU (currently the RX 6800). It’s a telling look at how far behind AMD was, and how far it’s come with the latest Big Navi architecture.
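To make the table’s math explicit, here is a minimal sketch of the ranking described above: a weighted geometric mean of the two power figures, a plain geometric mean of the fps results, fps per watt, and everything scaled relative to the most efficient card. The sample numbers and the 2:1 gaming-to-FurMark weighting are assumptions for illustration only.

```python
# Minimal sketch of the performance-per-watt ranking; all figures are placeholders.
from math import prod

def geomean(values):
    return prod(values) ** (1.0 / len(values))

def weighted_geomean(values, weights):
    total = sum(weights)
    return prod(v ** (w / total) for v, w in zip(values, weights))

cards = {
    # name: (metro_watts, furmark_watts, fps results across settings/resolutions)
    "RX 6800":  (218.0, 229.0, [96.0, 121.0, 71.0]),
    "RTX 3070": (221.0, 225.0, [93.0, 116.0, 68.0]),
}

rows = []
for name, (metro_w, furmark_w, fps_list) in cards.items():
    power = weighted_geomean([metro_w, furmark_w], [2, 1])  # assumed 2:1 weighting
    rows.append((name, geomean(fps_list) / power))

rows.sort(key=lambda r: r[1], reverse=True)
best = rows[0][1]
for name, efficiency in rows:
    print(f"{name:10s} {100 * efficiency / best:5.1f}% relative efficiency")
```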
Efficiency isn’t the only important metric for a GPU, and performance definitely matters. Also of note: the performance data does not include newer technologies like ray tracing and DLSS.
The most efficient GPUs are a mix of AMD’s Big Navi GPUs and Nvidia’s Ampere cards, along with some first generation Navi and Nvidia Turing chips. AMD claims the top spot with the Navi 21-based RX 6800, and Nvidia takes second place with the RTX 3070. Seven of the top ten spots are occupied by either RDNA2 or Ampere cards. However, Nvidia’s GDDR6X-equipped GPUs, the RTX 3080 and 3090, rank 17 and 20, respectively.
Given the current GPU shortages, finding a new graphics card in stock is difficult at best. By the time things settle down, we might even have RDNA3 and Hopper GPUs on the shelves. If you’re still hanging on to an older generation GPU, upgrading might be problematic, but at some point it will be the smart move, considering the added performance and efficiency offered by more recent products.
Today saw new test results being published for an as-yet-unannounced Intel CPU, and the results are low enough compared to what’s already on the market to make us wonder what Team Blue’s plan is here.
Puget Systems, a well-known maker of PCs and workstations that also publishes benchmark results for exotic hardware in real-world applications, has revealed some early test results of Intel’s unannounced Core i7-1195G7 processor. The CPU might be Intel’s ‘off-roadmap’ semi-custom offering available to select clients, or a member of its yet-to-be-unveiled Tiger Lake Refresh family.
The 11th Gen Core i7-1195G7 processor is a quad-core CPU based on the Willow Cove architecture and equipped with Intel’s Xe-LP GPU with 80 or 96 execution units. The chip has the same capabilities as the other Core i7-11x5Gx ‘Tiger Lake’ products with a TDP of up to 28W, but since it sits above the current flagship (the Core i7-1185G7), it likely has higher base and boost clock frequencies.
Puget Systems tested a PC based on the Intel Core i7-1195G7 clocked at 2.90 GHz, which is a bit of an odd frequency as the current flagship Core i7-1185G7 has a higher TDP-up frequency of 3.0 GHz. Therefore, it is not really surprising that the i7-1195G7-powered system notched a slightly lower score (859 overall, 94 active, 77.8 passive) than Puget’s i7-1185G7-based PC (868 overall, 93.4 active, 80.2 passive) in PugetBench for Lightroom Classic 0.92, running Lightroom Classic 10.2. Both systems were equipped with 16GB of LPDDR4X-4266 memory.
At this point, we don’t know whether Intel’s planning a full-blown Tiger Lake Refresh lineup with higher clocks and some additional features, or just plans to fill some gaps in the Tiger Lake family it has today. Last year, Intel planned to release versions of its Tiger Lake processors with LPDDR5 support, which would be beneficial for integrated graphics. But cutting clock speeds on such CPUs would be a strange choice.
From a manufacturing perspective, Intel can probably launch speedier versions of its TGL CPUs. Like other chipmakers, Intel performs continuous process improvements (CPI) through the means of statistical process control (SPC) to increase yields and reduce performance variations. With tens of millions of Tiger Lake processors sold, Intel has gathered enough information on how it can improve yields and reduce performance variability, which opens doors to frequency boosts. Furthermore, Intel has quite a few model numbers left unused in the 11th Gen lineup, so introducing new parts might be just what the company planned originally.
Since the Core i7-1195G7 has not yet been launched, Intel declined to comment on this part, even though it clearly exists in the labs of at least some PC makers.
AMD’s next-generation Ryzen Threadripper might be coming soon, according to the latest patch notes for the popular HWiNFO diagnostic suite.
Realix, the developer behind HWiNFO, said earlier today that the upcoming version of the software will improve support for AMD’s Ryzen Threadripper Pro as well as “next-generation Ryzen Threadripper” platforms. This is essentially one of the first public signs of AMD’s 4th Generation Threadripper, which is allegedly based on the Epyc ‘Milan’ design.
“Improved detection of AMD ThreadRipper Pro and next-generation ThreadRipper,” a line in the HWiNFO changelog reads.
Unfortunately, we’re still not certain if HWiNFO got word from AMD or is simply adapting its Milan knowledge to fit the new Threadripper.
That’s because, at this point, we don’t know much about AMD’s next-generation Ryzen Threadripper and what ‘improved detection’ means in its case. We are almost certain that the upcoming Ryzen Threadripper will be based on the 3rd Generation Epyc 7003-series ‘Milan’ design and will therefore feature Zen 3-based chiplets with a unified core complex and L3 cache architecture. We can also assume that these CPUs feature slightly different sensors, a new memory controller, and other changes. So, if HWiNFO can properly detect Epyc 7003-series, it should be able to detect most of the next-generation Threadripper’s features correctly without help from AMD.
Still, diagnostic software is also vital for hardware developers and enthusiasts that play with the latest parts. Therefore, hardware developers are eager to add support for their new and upcoming products to diagnostic software in a bid to make the lives of their partners a bit easier. That’s why it’s not uncommon to learn news about future products from various third-party software makers.
So, in the case of HWiNFO’s next-generation Threadripper announcement, we can’t confirm whether Realix got preliminary information from AMD or just learned how to use Epyc 7003-series ‘Milan’ information in context of the next-generation Ryzen Threadripper.
In any case, now that Milan is out, AMD’s 4th Generation Ryzen Threadripper is a bit closer to release too.