A user from the Chiphell forums has evaluated the impact of Resizable BAR on Nvidia’s flagship GeForce RTX 3090. While the results aren’t phenomenal, the extra performance comes free of charge, so we welcome it with open arms.
While Nvidia has pledged to bring Resizable BAR to its entire stack of Ampere graphics cards, only the more recent GeForce RTX 3060 comes with a vBIOS that’s primed for the feature. Other Ampere offerings will need an updated vBIOS to enjoy the same benefits. Nvidia and its partners are expected to release vBIOS updates for their corresponding graphics cards tomorrow. However, Galax and Gainward have already started deploying the new updates, which has enabled the Chiphell forum user to test the Resizable BAR feature ahead of everyone else.
Resizable BAR is only supported on Nvidia’s Ampere offerings. On the platform side, however, support includes AMD and Intel platforms, more specifically the 400- and 500-series chipsets from both chipmakers. In terms of processors, AMD’s Zen 3 and Intel’s Comet Lake and Rocket Lake are on the compatibility list. Many motherboard manufacturers have released new firmware to support Resizable BAR on Ampere, so there shouldn’t be any issues there.
Nvidia GeForce RTX 3090 Resizable BAR Benchmarks
The Chiphell user’s testbed was based on a Ryzen 9 5950X processor that was paired with 32GB of memory and a GeForce RTX 3090 Founders Edition graphics card. He did his testing at the 4K (3840 x 2160) resolution.
Currently, only a few titles support Resizable BAR with Ampere. The short list includes Assassin’s Creed Valhalla, Battlefield V, Borderlands 3, Forza Horizon 4, Gears 5, Metro Exodus, Red Dead Redemption 2 and Watch Dogs: Legion. The user tested all but Battlefield V, since it doesn’t come with a built-in benchmark tool.
| Game | Resizable BAR Off (fps) | Resizable BAR On (fps) | Difference |
|---|---|---|---|
| Assassin’s Creed Valhalla | 69.00 | 72.00 | 4.3% |
| Borderlands 3 | 80.09 | 81.60 | 1.9% |
| Forza Horizon 4 | 175.00 | 181.00 | 3.4% |
| Gears 5 | 86.30 | 89.20 | 3.4% |
| Watch Dogs: Legion | 62.00 | 65.00 | 4.8% |
| Red Dead Redemption 2 | 73.62 | 74.47 | 1.2% |
| Metro Exodus | 66.29 | 66.29 | 0% |
According to the results, Resizable BAR provides the GeForce RTX 3090 with performance boosts anywhere between 1.2% and 4.8%, depending on the game. If you want to put that into a single number, we’re looking at an average of 3.2%. Of course, there are some titles, such as Metro Exodus, that won’t benefit from Resizable BAR whatsoever.
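As a sanity check, the quoted percentages can be reproduced from the fps figures in the table. Here is a minimal Python sketch; the game names and values come straight from the table, while averaging only the titles that improved (i.e. excluding the flat Metro Exodus result) is our assumption for matching the 3.2% figure:

```python
# Per-game fps as (Resizable BAR Off, Resizable BAR On), from the table above
fps = {
    "Assassin's Creed Valhalla": (69.00, 72.00),
    "Borderlands 3": (80.09, 81.60),
    "Forza Horizon 4": (175.00, 181.00),
    "Gears 5": (86.30, 89.20),
    "Watch Dogs: Legion": (62.00, 65.00),
    "Red Dead Redemption 2": (73.62, 74.47),
    "Metro Exodus": (66.29, 66.29),
}

# Percentage gain: (on / off - 1) * 100
gains = {game: (on / off - 1) * 100 for game, (off, on) in fps.items()}

for game, gain in gains.items():
    print(f"{game}: {gain:+.1f}%")

# Averaging only the improved titles reproduces the ~3.2% figure
improved = [g for g in gains.values() if g > 0]
print(f"Average gain (improved titles): {sum(improved) / len(improved):.1f}%")
```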
As minor as the improvement may be, it’s free, so it doesn’t hurt to enable Resizable BAR even if it amounts to a placebo on some occasions. The Chiphell user performed his tests at 4K, so the performance gains could be higher at lower resolutions such as 1440p (2560 x 1440) or 1080p (1920 x 1080), since the graphics card will be less bottlenecked. We’ll be doing some testing of our own shortly, so don’t forget to check back.
The first benchmark results for Qualcomm’s 3rd Generation Snapdragon 8cx system-on-chip (SoC) for always-connected PCs have been posted to the Geekbench 5 database. The numbers show the Snapdragon 8cx Gen 3 beating its predecessors and even competing with Intel’s latest 11th Gen Core i7 “Tiger Lake” mobile chip in multi-threaded workloads.
Qualcomm has been fairly consistent in updating its Snapdragon 8cx family of SoCs for notebooks annually. This year, the company is expected to launch its third-generation Snapdragon 8cx chip, which is rumored to significantly change its architecture. Instead of integrating four high-performance CPU cores and four low-power ones, the Snapdragon 8cx Gen 3 is expected to pack eight high-performance cores working at different clock speeds, omitting low-power cores. This should improve performance, but it’s unclear whether the chip will match its predecessor’s 7W thermal envelope.
Qualcomm has yet to formally announce its Snapdragon 8cx Gen 3, but someone has already submitted test results of a Qualcomm Reference Design (QRD) platform running the new SoC to the Geekbench database, as spotted by NotebookCheck.
Just like other notebook development platforms, QRD platforms are meant for developers of hardware and software, so performance usually differs from that of retail products. Nonetheless, such platforms still tend to give a good hint of what to expect from new chips.
Qualcomm Snapdragon 8cx Gen 3 Benchmarks
| CPU | Single-Core | Multi-Core | Cores/Threads, uArch | Cache | Clocks | TDP |
|---|---|---|---|---|---|---|
| Qualcomm Snapdragon 8cx Gen 3* | 982 | 4,918 | 4C Kryo Gold+ + 4C Kryo Gold | ? MB | 2.69 GHz | ? |
| Qualcomm Snapdragon 8cx Gen 2 | 795 | 3,050 | 4C Kryo 495 Gold + 4C Kryo 495 Silver | ? MB | 3.15 GHz + 2.42 GHz | 7W |
| Qualcomm Snapdragon 8cx Gen 1 | 725 | 2,884 | 4C Kryo 495 Gold + 4C Kryo 495 Silver | ? MB | 2.84 GHz + 1.80 GHz | 7W |
| AMD Ryzen 9 5980HS | 1,540 | 8,225 | 8C/16T, Zen 3 | 16MB | 3.30 ~ 4.53 GHz | 35W |
| AMD Ryzen 9 4900H | 1,230 | 7,125 | 8C/16T, Zen 2 | 8MB | 3.30 ~ 4.44 GHz | 35~54W |
| Intel Core i7-1160G7 | 1,400 | 5,000 | 4C/8T, Willow Cove | 12MB | 2.10 ~ 4.40 GHz | 15W |
| Intel Core i7-1185G7 | 1,550 | 5,600 | 4C/8T, Willow Cove | 12MB | 3.0 ~ 4.80 GHz | 28W |
| Apple M1 | 1,710 | 7,660 | 4C Firestorm + 4C Icestorm | 12MB + 4MB | 3.20 GHz | 20~24W |

*Chip not confirmed by Qualcomm
The Snapdragon 8cx Gen 3 showed notably higher results in single-thread workloads when compared to previous generations. It was 35% faster than the 8cx Gen 1 and 24% faster than the 8cx Gen 2. We don’t yet know the frequency of the 8cx Gen 3’s cores for sure, but it appears that the 8cx Gen 3 packs something better than Qualcomm’s Kryo 495 Gold (a custom version of Arm’s Cortex-A76).
On the other hand, the Snapdragon 8cx Gen 3’s performance paled in comparison to AMD and Intel chips that compete with the best desktop CPUs. The latest Zen 3 and Willow Cove microarchitectures can run at higher clocks and consume more power. Meanwhile, Apple’s M1 beat Qualcomm’s Snapdragon 8cx Gen 3 (at least in its current form) in single-threaded workloads by 74%.
When it came to performance in multi-threaded workloads, the Snapdragon 8cx Gen 3 clearly benefited from the eight high-performance cores (albeit running at different clocks) inside. The new SoC outperformed the 8cx Gen 2 by over 60% and was on par with Intel’s four-core, eight-thread Core i7-1160G7, a 15W SoC.
The Snapdragon 8cx Gen 3 tested couldn’t compete with the higher-wattage Apple M1 and AMD’s Ryzen SoCs, but systems based on Qualcomm’s 8cx platforms are not really meant to compete against higher-end machines in terms of performance.
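The percentage leads quoted above follow directly from the Geekbench scores in the table. A quick Python check (scores copied from the table; the helper function is our own):

```python
# Percentage lead of score a over score b
def lead(a: float, b: float) -> float:
    return (a / b - 1) * 100

# Geekbench 5 scores from the table above
print(round(lead(982, 725)))    # Gen 3 vs. Gen 1, single-core -> 35
print(round(lead(982, 795)))    # Gen 3 vs. Gen 2, single-core -> 24
print(round(lead(1710, 982)))   # Apple M1 vs. Gen 3, single-core -> 74
print(round(lead(4918, 3050)))  # Gen 3 vs. Gen 2, multi-core -> 61 ("over 60%")
```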
Overall, the benchmark results show the Snapdragon 8cx Gen 3 demonstrating single-thread and multi-thread performance improvements in a synthetic benchmark. Of course, it remains to be seen how commercial devices based on the new SoC will stack up against rivals in real-world applications.
Finally, AMD’s motherboard partners have begun rolling out new BIOS updates to fix the widespread USB stability and connectivity issues. However, the current revisions are still in Beta form, with final firmware revisions due in April.
The firmware addresses widespread USB connectivity issues present on a number of Ryzen based systems equipped with Zen 2 or Zen 3 CPUs and 400- or 500-series motherboards. The problems center around random dropouts for USB-connected devices that impact several different types of devices, including unresponsive external capture devices, momentary keyboard connection drops, slow mouse responses, issues with VR headsets, external storage devices, and USB-connected CPU coolers.
The new BIOS patch appears to address the USB 2.0 controllers on 400- and 500-series motherboards. We still aren’t sure if other USB devices, like USB 3.0 headers connected to the CPU directly or other USB 3.0/3.1 controllers, were affected. Also, there is no information yet on whether or not the fix has any impact on performance.
When checking to see if your motherboard has the new fix, your board maker should address the USB 2.0 fixes in the description of the latest BIOS on the board partners’ web page. The fix’s presence is harder to detect because AMD did not update the AGESA code with a new version — instead, this fix still runs on the latest ComboV2 1.2.0.1 AGESA code.
But be patient; most boards still do not have a new BIOS ready with the new AGESA code, with only a few 500-series boards (and no 400-series boards) having the update at this time. Presumably, it will be a few weeks before all mainstream 400- and 500-series motherboards receive the update.
Performance results for Intel’s unreleased eight-core Tiger Lake-H parts are already being posted online. Benchleaks shared Geekbench 5 scores of the upcoming Core i7-11800H Tiger Lake-H CPU with impressive results.
Rumor has it that the Core i7-11800H will be one of Intel’s beefy 45W Tiger Lake-H parts featuring eight cores and 16 threads to compete with the likes of AMD’s Ryzen 7 5800H. Like Intel’s current U-series and H35 products, the eight-core Tiger Lake variants will feature Intel’s latest Willow Cove cores built on the 10nm SuperFin process, allowing for up to 20% higher clock speeds than the previous models.
Strangely, we have not just one but three Geekbench results for the i7-11800H. Presumably, this was done to attain a more representative Geekbench 5 result, as averaging multiple benchmark runs smooths out the variance of any single run.
When we average the three results together, the i7-11800H managed a single-threaded score of 1474 points and a multi-threaded score of 8116 points. That makes the i7-11800H around 15% faster than its predecessor, the Core i7-10875H, suggesting a healthy gen-on-gen performance improvement.
However, if we compare the i7-11800H to the best Ryzen 7 5800H Geekbench 5 scores, that puts the 5800H and 11800H within 2% of each other. That lands within the margin of error, so we can safely say that both chips offer similar performance in Geekbench 5.
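The averaging itself is simple to sketch. Note that the three per-run scores below are hypothetical placeholders (the raw listings aren't reproduced here), chosen only so the means land on the quoted 1474/8116 averages, and the margin-of-error helper is our own illustration:

```python
from statistics import mean

# Hypothetical per-run Geekbench 5 scores (placeholders, not the real listings)
single_runs = [1460, 1470, 1492]
multi_runs = [8050, 8120, 8178]

print(round(mean(single_runs)))  # averaged single-core score -> 1474
print(round(mean(multi_runs)))   # averaged multi-core score -> 8116

# "Within 2% of each other" check used when calling two chips a tie
def within_pct(a: float, b: float, pct: float) -> bool:
    return abs(a - b) / max(a, b) * 100 <= pct
```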
If these results are true, then Intel is poised to make a major comeback in the notebook segment, finally catching up to AMD’s impressive Zen 3 notebook processors.
AMD has announced its new Ryzen Pro 5000 series mobile processors, its competitor to Intel’s vPro platform. The company claims the chips, based on the same Zen 3 architecture as most of its consumer-focused Ryzen 5000 series, will provide “uncompromised performance and battery life” for thin-and-light business laptops. They’ll appear in a slate of business notebooks including Elitebooks, ProBooks, ThinkPads, and ThinkBooks throughout this year.
On paper, the chips look pretty similar to their Ryzen 5000 counterparts. The headliner is the Ryzen 7 Pro 5850U, with eight cores, 16 threads, 20MB cache, and base frequency of 1.9 GHz with boost up to 4.4 GHz. AMD’s Ryzen line currently contains the only processors for thin-and-light laptops that use “eight high-performing cores.” Intel’s Tiger Lake vPro line is all quad-core at the moment (though its H-series has an eight-core chip on the way, and that line does appear in ultraportables from time to time) and Apple’s M1 chip uses a combination of high-power and high-efficiency cores.
The line also includes the Ryzen 5 Pro 5650U (six cores, 12 threads) and the Ryzen 3 Pro 5450U (four cores, eight threads). The three chips are identical in specs to the Ryzen 7 5800U, Ryzen 5 5600U, and the Ryzen 3 5400U, respectively; all have 15W TDP. We’ll be testing a 5800U system shortly and will have a better sense of how these chips will perform after that.
What the new chips have to offer businesses specifically are some new security features. They include a new Shadow Stack (here’s an explainer if you’re curious) designed to protect against malware attacks. AMD says the chips also include “deep integration with Microsoft and OEMs” for better security, and that PCs will have FIPS encryption certification.
The chips also include AMD’s Pro Manageability platform, which is AMD’s competitor to Intel’s Active Management Technology, and include “full spectrum manageability features.” As the Ryzen Pro 4000 series did, the 5000 line supports Microsoft’s Endpoint Manager, a platform for IT workers to manage PCs, servers, and other devices in their organization.
AMD is moving its mobile Ryzen 5000 processors into business with Ryzen 5000 Pro, the company announced today. The new series consists of three chips, the Ryzen 7 Pro 5850U, Ryzen 5 Pro 5650U and Ryzen 3 Pro 5450U, and AMD claims the processors will show up in 63 laptop designs this year, including laptops from Lenovo and HP.
All three processors are on AMD’s Zen 3 architecture and 7nm process. (In fact, except for cache, their specs are almost exactly identical to those of the consumer-focused Ryzen 7 5800U, Ryzen 5 5600U and Ryzen 3 5400U.)
| Processor | Cores / Threads | Frequency | Architecture | Node | L2 + L3 Cache | TDP |
|---|---|---|---|---|---|---|
| Ryzen 7 Pro 5850U | 8 / 16 | 1.9 GHz base, up to 4.4 GHz | Zen 3 | 7nm | 20 MB | 15W |
| Ryzen 5 Pro 5650U | 6 / 12 | 2.3 GHz base, up to 4.2 GHz | Zen 3 | 7nm | 19 MB | 15W |
| Ryzen 3 Pro 5450U | 4 / 8 | 2.6 GHz base, up to 4.0 GHz | Zen 3 | 7nm | 10 MB | 15W |
In benchmarks released by the company, it compared the top-of-the-line, Cezanne-based AMD Ryzen 7 Pro 5850U to Intel’s 28W Core i7-1185G7 “Tiger Lake” part.
AMD admitted to a 3% loss against the Core i7 in single-threaded performance (measured in Cinebench R20) but showed 65% gains in Cinebench R20 multi-thread and Passmark 10 CPU Mark, as well as Geekbench 5’s multi-core (single-core scores weren’t listed). In these tests, Intel’s chip was housed in a Dell Latitude 5420 with 32GB of RAM at 3,200 MHz and a 512GB SSD from SK Hynix, while the Ryzen Pro was in a reference platform with 16GB of LPDDR4 RAM at 4,266 MHz and a 512GB Samsung 970 Pro SSD.
In productivity, the two tied in Microsoft Word and the Edge browser in AMD’s tests, but the Cezanne chip came out between 4% and 23% ahead in other productivity benchmarks. Those tests switched the Intel laptop to an MSI Prestige 14 Evo with a 28W TDP, 16GB of RAM at 4,267 MHz, and a Kingston SSD of unspecified size. The AMD machine remained the reference design.
Just to show off, AMD also picked some benchmarks comparing the Ryzen 5 Pro 5650U and the Core i7-1185G7, where its chip outperformed Intel in Passmark 10 CPU Mark (+25%), Geekbench 5 multi-core (+26%), PCMark 10 Apps (+4%) and PCMark 10 Benchmark (+20%). This round of testing also used the MSI Prestige 14 Evo and the reference design.
Compared to the Latitude with Intel Core i7-1185G7, AMD claims that the Ryzen 7 Pro 5850U is up to 10% faster while running a 49-participant Zoom call and running the PCMark 10 applications benchmark.
For battery life, AMD compared against previous-generation Ryzen Pro chips, suggesting the 7nm process helps the new Ryzen 7 reach 17.5 hours on MobileMark 2018’s general computing test.
The company is touting new security features for this year. AMD Shadow Stack is at the hardware level to prevent malware. It’s part of the Secured Core PC program, which Microsoft announced with Intel, AMD, and Qualcomm in late 2019, and also meets the United States National Institute of Standards and Technology’s Federal Information Processing Standards (FIPS).
To mark the launch, AMD is also showcasing six laptops coming from partners HP and Lenovo. The HP Probook Aero 635 G2 and HP Probook x360 435 G8 will be exclusive for 2021, and the Lenovo ThinkBook 16 is listed as an “AMD exclusive creator platform.” The company also listed the HP EliteBook 845 G8, ThinkPad T14S and ThinkBook 14S as highlighted notebooks.
AMD unveiled its EPYC 7003 ‘Milan’ processors today, claiming that the chips, which bring the company’s powerful Zen 3 architecture to the server market for the first time, take the lead as the world’s fastest server processor with its flagship 64-core 128-thread EPYC 7763. Like the rest of the Milan lineup, this chip comes fabbed on the 7nm process and is drop-in compatible with existing servers. AMD claims it brings up to twice the performance of Intel’s competing Xeon Cascade Lake Refresh chips in HPC, Cloud, and enterprise workloads, all while offering a vastly better price-to-performance ratio.
Milan’s agility lies in the Zen 3 architecture and its chiplet-based design. This microarchitecture brings many of the same benefits that we’ve seen with AMD’s Ryzen 5000 series chips that dominate the desktop PC market, like a 19% increase in IPC and a larger unified L3 cache. Those attributes, among others, help improve AMD’s standing against Intel’s venerable Xeon lineup in key areas, like single-threaded work, and offer a more refined performance profile across a broader spate of applications.
The other attractive features of the EPYC lineup are still present, too, like enhanced security, leading memory bandwidth, and the PCIe 4.0 interface. AMD also continues its general approach of offering all features with all of its chips, as opposed to Intel’s strict de-featuring that it uses to segment its product stack. As before, AMD also offers single-socket P-series models, while its standard lineup is designed for dual-socket (2P) servers.
The Milan launch promises to reignite the heated data center competition once again. Today marks the EPYC Milan processors’ official launch, but AMD actually began shipping the chips to cloud service providers and hyperscale customers last year. Overall, the EPYC Milan processors look to be exceedingly competitive against Intel’s competing Xeon Cascade Lake Refresh chips.
Like AMD, Intel has also been shipping to its largest customers; the company recently told us that it has already shipped 115,000 Ice Lake chips since the end of last year. Intel also divulged a few details about its Ice Lake Xeons at Hot Chips last year; we know the company has a 32-core model in the works, and it’s rumored that the series tops out at 40 cores. As such, Ice Lake will obviously change the competitive landscape when it comes to the market.
AMD has chewed away desktop PC and notebook market share at an amazingly fast pace, but the data center market is a much tougher market to crack. While this segment represents the golden land of high-volume and high-margin sales, the company’s slow and steady gains lag its radical advance in the desktop PC and notebook markets.
Much of that boils down to the staunchly risk-averse customers in the enterprise and data center; these customers prize a mix of factors beyond the standard measuring stick of performance and price-to-performance ratios, instead focusing on areas like compatibility, security, supply predictability, reliability, serviceability, engineering support, and deeply-integrated OEM-validated platforms. To cater to the broader set of enterprise customers, AMD’s Milan launch also carries a heavy focus on broadening AMD’s hardware and software ecosystems, including full-fledged enterprise-class solutions that capitalize on the performance and TCO benefits of the Milan processors.
AMD’s existing EPYC Rome processors already hold the lead in performance-per-socket and pricing, easily outstripping Intel’s Xeon at several key price points. Given AMD’s optimizations, Milan will obviously extend that lead, at least until the Ice Lake debut. Let’s see how the hardware stacks up.
AMD EPYC 7003 Series Milan Specifications and Pricing
| Processor | Cores / Threads | Base / Boost (GHz) | L3 Cache (MB) | TDP (W) | 1K Unit Price |
|---|---|---|---|---|---|
| EPYC Milan 7763 | 64 / 128 | 2.45 / 3.5 | 256 | 280 | $7,890 |
| EPYC Milan 7713 | 64 / 128 | 2.0 / 3.675 | 256 | 225 | $7,060 |
| EPYC Rome 7H12 | 64 / 128 | 2.6 / 3.3 | 256 | 280 | ? |
| EPYC Rome 7742 | 64 / 128 | 2.25 / 3.4 | 256 | 225 | $6,950 |
| EPYC Milan 7663 | 56 / 112 | 2.0 / 3.5 | 256 | 240 | $6,366 |
| EPYC Milan 7643 | 48 / 96 | 2.3 / 3.6 | 256 | 225 | $4,995 |
| EPYC Milan 75F3 | 32 / 64 | 2.95 / 4.0 | 256 | 280 | $4,860 |
| EPYC Milan 7453 | 28 / 56 | 2.75 / 3.45 | 64 | 225 | $1,570 |
| Xeon Gold 6258R | 28 / 56 | 2.7 / 4.0 | 38.5 | 205 | $3,651 |
| EPYC Milan 74F3 | 24 / 48 | 3.2 / 4.0 | 256 | 240 | $2,900 |
| EPYC Rome 7F72 | 24 / 48 | 3.2 / ~3.7 | 192 | 240 | $2,450 |
| Xeon Gold 6248R | 24 / 48 | 3.0 / 4.0 | 35.75 | 205 | $2,700 |
| EPYC Milan 7443 | 24 / 48 | 2.85 / 4.0 | 128 | 200 | $2,010 |
| EPYC Rome 7402 | 24 / 48 | 2.8 / 3.35 | 128 | 180 | $1,783 |
| EPYC Milan 73F3 | 16 / 32 | 3.5 / 4.0 | 256 | 240 | $3,521 |
| EPYC Rome 7F52 | 16 / 32 | 3.5 / ~3.9 | 256 | 240 | $3,100 |
| Xeon Gold 6246R | 16 / 32 | 3.4 / 4.1 | 35.75 | 205 | $3,286 |
| EPYC Milan 7343 | 16 / 32 | 3.2 / 3.9 | 128 | 190 | $1,565 |
| EPYC Rome 7302 | 16 / 32 | 3.0 / 3.3 | 128 | 155 | $978 |
| EPYC Milan 72F3 | 8 / 16 | 3.7 / 4.1 | 256 | 180 | $2,468 |
| EPYC Rome 7F32 | 8 / 16 | 3.7 / ~3.9 | 128 | 180 | $2,100 |
| Xeon Gold 6250 | 8 / 16 | 3.9 / 4.5 | 35.75 | 185 | $3,400 |
AMD released a total of 19 EPYC Milan SKUs today, but we’ve winnowed that down to key price bands in the table above. We have the full list of the new Milan SKUs later in the article.
As with the EPYC Rome generation, Milan spans from eight to 64 cores, while Intel’s Cascade Lake Refresh tops out at 28 cores. All Milan models come with simultaneous multi-threading, support for up to eight memory channels of DDR4-3200, 4TB of memory capacity, and 128 lanes of PCIe 4.0 connectivity. AMD supports both standard single- and dual-socket platforms, with the P-series chips slotting in for single-socket servers (we have those models in the expanded list below). The chips are drop-in compatible with the existing Rome socket.
AMD added frequency-optimized 16-, 24-, and 32-core F-series models to the Rome lineup last year, helping the company boost its performance in frequency-bound workloads, like databases, that Intel has typically dominated. Those models return with a heavy focus on higher clock speeds, cache capacities, and TDPs compared to the standard models. AMD also added a highly-clocked 64-core 7H12 model for HPC workloads to the Rome lineup, but simply worked that higher-end class of chip into its standard Milan stack.
As such, the 64-core 128-thread EPYC 7763 comes with a 2.45 / 3.5 GHz base/boost frequency paired with a 280W TDP. This flagship part also comes armed with 256MB of L3 cache and supports a configurable TDP that can be adjusted to accommodate any TDP from 225W to 280W.
The 7763 marks the peak TDP rating for the Milan series, but the company has a 225W 64-core 7713 model that supports a TDP range of 225W to 240W for more mainstream applications.
All Milan models come with a default TDP rating (listed above), but they can operate between a lower minimum (cTDP Min) and a higher maximum (cTDP Max) threshold, allowing quite a bit of configurability within the product stack. We have the full cTDP ranges for each model listed in the expanded spec list below.
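As a concrete illustration of the cTDP mechanism, here is a minimal sketch of how a platform tool might validate a configured TDP against a model's allowed window. The two ranges are taken from AMD's spec table; the dictionary and helper are hypothetical illustrations, not an AMD API:

```python
# cTDP (min, max) ranges in watts, from the Milan spec table
CTDP_RANGES = {
    "EPYC 7763": (225, 280),  # default TDP: 280W
    "EPYC 7713": (225, 240),  # default TDP: 225W
}

def validate_ctdp(model: str, watts: int) -> int:
    """Reject a configured TDP outside the model's cTDP window."""
    lo, hi = CTDP_RANGES[model]
    if not lo <= watts <= hi:
        raise ValueError(f"{model}: cTDP must be {lo}-{hi} W, got {watts} W")
    return watts

print(validate_ctdp("EPYC 7763", 240))  # within 225-280 W -> 240
```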
Milan’s adjustable TDPs now allow customers to tailor for different thermal ranges, and Forrest Norrod, AMD’s SVP and GM of the data center and embedded solutions group, says that the shift in strategy comes from the lessons learned from the first F- and H-series processors. These 280W processors were designed for systems with robust liquid cooling, which tends to add quite a bit of cost to the platform, but OEMs were surprisingly adept at engineering air-cooled servers that could fully handle the heat output of those faster models. As such, AMD decided to add a 280W 64-core model to the standard lineup and expanded the ability to manipulate TDP ranges across its entire stack.
AMD also added new 28- and 56-core options with the EPYC 7453 and 7663, respectively. Norrod explained that AMD had noticed that many of its customers had optimized their applications for Intel’s top-of-the-stack servers that come with multiples of 28 cores. Hence, AMD added new models that would mesh well with those optimizations to make it easier for customers to port over applications optimized for Xeon platforms. Naturally, AMD’s 28-core’s $1,570 price tag looks plenty attractive next to Intel’s $3,651 asking price for its own 28-core part.
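The per-core economics behind that 28-core comparison are easy to check (1K-unit list prices from the table above):

```python
# 1K-unit list price divided by core count for the two 28-core parts
epyc_7453_per_core = 1570 / 28
xeon_6258r_per_core = 3651 / 28

print(round(epyc_7453_per_core))   # -> 56 ($/core)
print(round(xeon_6258r_per_core))  # -> 130 ($/core)
```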
AMD made a few other adjustments to the product stack based on customer buying trends, like reducing three eight-core models to one F-series variant, and removing a 12-core option entirely. AMD also added support for six-way memory interleaving on all models to lower costs for workloads that aren’t sensitive to memory throughput.
Overall, Milan has similar TDP ranges, memory, and PCIe support at any given core count as its predecessors but comes with higher clock speeds, performance, and pricing.
Milan also comes with the performance uplift granted by the Zen 3 microarchitecture. Higher IPC and frequencies, not to mention more refined boost algorithms that extract the utmost performance within the thermal confines of the socket, help improve Milan’s performance in the lightly-threaded workloads where Xeon has long held an advantage. The higher per-core performance also translates to faster performance in threaded workloads, too.
Meanwhile, the larger unified L3 cache results in a simplified topology that ensures broader compatibility with standard applications, thus removing the lion’s share of the rare eccentricities that we’ve seen with prior-gen EPYC models.
The Zen 3 microarchitecture brings the same fundamental advantages that we’ve seen with the desktop PC and notebook models (you can read much more about the architecture here), like reduced memory latency, doubled INT8 and floating point performance, and higher integer throughput.
AMD also added support for memory protection keys, AVX2 support for VAES/VPCLMULQD instructions, bolstered security for hypervisors and VM memory/registers, added protection against return oriented programming attacks, and made a just-in-time update to the Zen 3 microarchitecture to provide in-silicon mitigation for the Spectre vulnerability (among other enhancements listed in the slides above). As before, Milan remains unimpacted by other major security vulnerabilities, like Meltdown, Foreshadow, and Spoiler.
The EPYC Milan SoC adheres to the same (up to) nine-chiplet design as the Rome models and is drop-in compatible with existing second-gen EPYC servers. Just like the consumer-oriented chips, Core Complex Dies (CCDs) based on the Zen 3 architecture feature eight cores tied to a single contiguous 32MB slice of L3 cache, which stands in contrast to Zen 2’s two four-core CCXes, each with two 16MB clusters. The new arrangement gives all eight cores direct access to the full 32MB of L3 cache, reducing latency.
This design also increases the amount of cache available to a single core, thus boosting performance in multi-threaded applications and enabling lower-core count Milan models to have access to significantly more L3 cache than Rome models. The improved core-to-cache ratio boosts performance in HPC and relational database workloads, among others.
Second-gen EPYC models supported either 8- or 4-channel memory configurations, but Milan adds support for 6-channel interleaving, allowing customers that aren’t memory bound to use less system RAM to reduce costs. The 6-channel configuration supports the same DDR4-3200 specification for single DIMM per channel (1DPC) implementations. This feature is enabled across the full breadth of the Milan stack, but AMD sees it as most beneficial for models with lower core counts.
Milan also features the same 32-bit AMD Secure Processor in the I/O Die (IOD) that manages cryptographic functionality, like key generation and management for AMD’s hardware-based Secure Memory Encryption (SME) and Secure Encrypted Virtualization (SEV) features. These are key advantages over Intel’s Cascade Lake processors, but Ice Lake will bring its own memory encryption features to bear. AMD’s Secure Processor also manages its hardware-validated boot feature.
AMD EPYC Milan Performance
AMD provided its own performance projections based on its internal testing. However, as with all vendor-provided benchmarks, we should view these with the appropriate level of caution. We’ve included the testing footnotes at the end of the article.
AMD claims the Milan chips are the fastest server processors for HPC, cloud, and enterprise workloads. The first slide outlines AMD’s progression compared to Intel in SPECrate2017_int_base over the last few years, highlighting its continued trajectory of significant generational performance improvements. The second slide outlines how SPECrate2017_int_base scales across the Milan product stack, with Intel’s best published scores for two key Intel models, the 28-core 6258R and 16-core 4216, added for comparison.
Moving on to a broader spate of applications, AMD says existing two-socket 7H12 systems already hold an easy lead over Xeon in the SPEC2017 floating point tests, but the Milan 7763 widens the gap to a 106% advantage over the Xeon 6258R. AMD uses this comparison for the two top-of-the-stack chips, but be aware that this is a bit lopsided: The 6258R carries a tray price of $3,651 compared to the 7763’s $7,890 asking price. AMD also shared benchmarks comparing the two in SPEC2017 integer tests, claiming a similar 106% speedup. In SPECJBB 2015 tests, which AMD uses as a general litmus for enterprise workloads, AMD claims 117% more performance than the 6258R.
The company also shared a few test results showing performance in the middle of its product stack compared to Intel’s 6258R, claiming that its 32-core part also outperforms the 6258R. All of this translates to improved TCO for customers: lower pricing and higher compute density mean fewer servers, lower space requirements, and lower overall power consumption.
Finally, AMD has a broad range of ecosystem partners with fully-validated platforms available from top-tier OEMs like Dell, HP, and Lenovo, among many others. These platforms are fed by a broad constellation of solutions providers as well. AMD also has an expansive list of instances available from leading cloud service providers like AWS, Azure, Google Cloud, and Oracle, to name a few.
| Model # | Cores / Threads | Base Freq (GHz) | Max Boost Freq (up to GHz) | Default TDP (W) | cTDP Min (W) | cTDP Max (W) | L3 Cache (MB) | DDR Channels | Max DDR Freq (1DPC) | PCIe 4 Lanes | 1Ku Pricing |
|---|---|---|---|---|---|---|---|---|---|---|---|
| 7763 | 64 / 128 | 2.45 | 3.50 | 280 | 225 | 280 | 256 | 8 | 3200 | x128 | $7,890 |
| 7713 | 64 / 128 | 2.00 | 3.68 | 225 | 225 | 240 | 256 | 8 | 3200 | x128 | $7,060 |
| 7713P | 64 / 128 | 2.00 | 3.68 | 225 | 225 | 240 | 256 | 8 | 3200 | x128 | $5,010 |
| 7663 | 56 / 112 | 2.00 | 3.50 | 240 | 225 | 240 | 256 | 8 | 3200 | x128 | $6,366 |
| 7643 | 48 / 96 | 2.30 | 3.60 | 225 | 225 | 240 | 256 | 8 | 3200 | x128 | $4,995 |
| 75F3 | 32 / 64 | 2.95 | 4.00 | 280 | 225 | 280 | 256 | 8 | 3200 | x128 | $4,860 |
| 7543 | 32 / 64 | 2.80 | 3.70 | 225 | 225 | 240 | 256 | 8 | 3200 | x128 | $3,761 |
| 7543P | 32 / 64 | 2.80 | 3.70 | 225 | 225 | 240 | 256 | 8 | 3200 | x128 | $2,730 |
| 7513 | 32 / 64 | 2.60 | 3.65 | 200 | 165 | 200 | 128 | 8 | 3200 | x128 | $2,840 |
| 7453 | 28 / 56 | 2.75 | 3.45 | 225 | 225 | 240 | 64 | 8 | 3200 | x128 | $1,570 |
| 74F3 | 24 / 48 | 3.20 | 4.00 | 240 | 225 | 240 | 256 | 8 | 3200 | x128 | $2,900 |
| 7443 | 24 / 48 | 2.85 | 4.00 | 200 | 165 | 200 | 128 | 8 | 3200 | x128 | $2,010 |
| 7443P | 24 / 48 | 2.85 | 4.00 | 200 | 165 | 200 | 128 | 8 | 3200 | x128 | $1,337 |
| 7413 | 24 / 48 | 2.65 | 3.60 | 180 | 165 | 200 | 128 | 8 | 3200 | x128 | $1,825 |
| 73F3 | 16 / 32 | 3.50 | 4.00 | 240 | 225 | 240 | 256 | 8 | 3200 | x128 | $3,521 |
| 7343 | 16 / 32 | 3.20 | 3.90 | 190 | 165 | 200 | 128 | 8 | 3200 | x128 | $1,565 |
| 7313 | 16 / 32 | 3.00 | 3.70 | 155 | 155 | 180 | 128 | 8 | 3200 | x128 | $1,083 |
| 7313P | 16 / 32 | 3.00 | 3.70 | 155 | 155 | 180 | 128 | 8 | 3200 | x128 | $913 |
| 72F3 | 8 / 16 | 3.70 | 4.10 | 180 | 165 | 200 | 256 | 8 | 3200 | x128 | $2,468 |
Thoughts
AMD’s general launch today gives us a good picture of the company’s data center chips moving forward, but we won’t know the full story until Intel releases the formal details of its 10nm Ice Lake processors.
The volume ramp for both AMD’s EPYC Milan and Intel’s Ice Lake has been well underway for some time, and both lineups have been shipping to hyperscalers and CSPs for several months. The HPC and supercomputing space also tends to receive early silicon, so it serves as a solid general litmus for the future of the market. AMD’s EPYC Milan has already enjoyed brisk uptake in those segments, and given that Intel’s Ice Lake hasn’t been at the forefront of as many HPC wins, it’s easy to assume, by a purely subjective measure, that Milan could hold some advantages over Ice Lake.
Intel has already slashed its pricing on server chips to remain competitive with AMD’s EPYC onslaught. It’s easy to imagine that the company will lean on its incumbency and all the advantages that entails, like its robust Server Select platform offerings, wide software optimization capabilities, platform adjacencies like networking, FPGA, and Optane memory, along with aggressive pricing to hold the line.
AMD has obviously prioritized its supply of server processors during the pandemic-fueled supply chain disruptions and explosive demand that we’ve seen over the last several months. It’s natural to assume that the company has been busy building Milan inventory for the general launch. We spoke with AMD’s Forrest Norrod, and he tells us that the company is taking steps to ensure that it has an adequate supply for its customers with mission-critical applications.
One thing is clear, though. Both x86 server vendors benefit from a rapidly expanding market, but ARM-based servers have become more prevalent than we’ve seen in the recent past. For now, the bulk of the ARM uptake seems limited to cloud service providers, like AWS with its Graviton 2 chips. In contrast, uptake is slow in the general data center and enterprise due to the complexity of shifting applications to the ARM architecture. Continuing and broadening uptake of ARM-based platforms could begin to change that paradigm in the coming years, though, as x86 faces its most potent threat in recent history. Both x86 vendors will need a steady cadence of big performance improvements in the future to hold the ARM competition at bay.
Unfortunately, we’ll have to wait for Ice Lake to get a true view of the competitive x86 landscape over the next year. That means the jury is still out on just what the data center will look like as AMD works on its next-gen Genoa chips and Intel readies Sapphire Rapids.
AMD will unveil its EPYC 7003 Milan processors during a live webcast that you can watch here on March 15, 2021, at 11am ET (8am PT), marking the company’s first release of processors for the data center based on the Zen 3 architecture. The live stream will include presentations from AMD CEO Lisa Su, CTO Mark Papermaster, and SVP and GM of the data center group, Forrest Norrod.
Update: The NDA has expired. You can see our full breakdown and analysis here, which covers the finer details of the live stream below.
Beyond an accidentally posted presentation in 2019, AMD hasn’t officially revealed many details about its Milan lineup. However, it recently teased a performance benchmark at CES 2021, and a vendor recently posted specifications and pricing for several models.
Early indications suggest that, as with the current-gen EPYC Rome processors, AMD fabs the EPYC Milan chips with the 7nm process, and they top out at 64 cores. The most significant change to the series comes with the infusion of the Zen 3 microarchitecture, which delivers a 19% instructions-per-cycle (IPC) throughput improvement through several changes, like a unified L3 cache and better thermal management that allows the chip to extract more performance within any given TDP range.
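As a toy model (our illustration, not AMD's numbers), per-core throughput scales with IPC times clock, so an IPC gain alone lifts performance even at unchanged frequency:

```python
# Illustrative only: per-core throughput ~ IPC x clock frequency.
# The 19% figure is AMD's claimed Zen 3 IPC uplift; any clock gain is hypothetical.
def relative_throughput(ipc_gain, clock_gain=0.0):
    """Relative per-core throughput vs. the previous generation."""
    return (1.0 + ipc_gain) * (1.0 + clock_gain)

# At identical clocks, a 19% IPC uplift alone yields ~1.19x throughput.
print(round(relative_throughput(0.19), 2))  # 1.19
```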
Even though we’ve seen shortages on the consumer side of AMD’s business, the company has obviously prioritized server chip production. As a result, it has continued to slowly whittle away at Intel’s commanding lead in the data center. Faced with unrelenting pressure from a surprisingly nimble competitor, Intel has significantly reduced gen-on-gen pricing with the debut of its Cascade Lake Refresh Xeon models, by 60% in some cases, slightly adjusting the capabilities of the chips so that what is essentially a price cut arrives in the guise of new models.
To counter, AMD bulked up its EPYC Rome lineup with its workload-optimized 7F and 7H parts, which come with higher power consumption and thermals than the standard 7002 series chips but feature higher frequencies, allowing AMD to challenge Intel’s traditional lead in per-core performance.
But now the landscape will change once again. The Milan launch, not to mention Intel’s pending 10nm Ice Lake launch, promises to reignite the heated data center competition. You can watch the presentation here live, but be sure to check out our full analysis after the announcement.
AMD still has its Zen 3 desktop APUs under wraps, but a Chinese eBay merchant already started selling engineering samples. The AMD Ryzen 3 5300G, which was previously sold for $176.99, is no longer available on eBay, but we still have the benchmarks that were listed.
The Zen 3 microarchitecture powers AMD’s latest 7nm processors, spanning from the mobile chips to the core-heavy server offerings. While the chipmaker has already released its Ryzen 5000 mobile (Cezanne) parts, the DIY market is still awaiting the desktop variants, which may be able to compete with the best CPUs. It’s expected that AMD’s next-generation APUs will leverage Zen 3 cores and slot into the AM4 CPU socket. Based on AMD’s history, the chips will likely come with Vega graphics but with a small generational uplift.
The Zen 3 processor listed on eBay carries the 100-000000262-30_Y designation, which is the orderable part number, and the poster listed it as a Ryzen 3 5300G. Without AMD’s confirmation though, we can’t know for sure. It’s possible the chip will come out as the Ryzen 3 Pro 5350G, with equal specs but bringing extra features around things like security. In any case, the chip listed should be the baby brother to the Ryzen 7 5700G or Ryzen 7 Pro 5750G.
AMD Ryzen 3 5300G Specifications
| Processor | Cores / Threads | Base / Boost Clock (GHz) | L2 Cache (MB) | L3 Cache (MB) | TDP (W) |
|---|---|---|---|---|---|
| Ryzen 3 5300G* | 4 / 8 | 3.5 / ? | 2 | 8 | 65 |
| Ryzen 3 3300X | 4 / 8 | 3.8 / 4.3 | 2 | 16 | 65 |
| Ryzen 3 Pro 4350G | 4 / 8 | 3.8 / 4.0 | 2 | 4 | 65 |
| Ryzen 3 3100 | 4 / 8 | 3.6 / 3.9 | 2 | 16 | 65 |
| Core i3-10100 | 4 / 8 | 3.6 / 4.3 | 1 | 6 | 65 |
*Specs not confirmed by AMD
Based on the eBay listing, the Ryzen 3 5300G will arrive as a quad-core, 7nm processor with simultaneous multithreading (SMT) enabled. The APU appears to have a 3.5 GHz base clock, but the boost clock wasn’t shared. It seemingly clocks in lower than its predecessors, but remember that Zen 3’s performance uplift comes from IPC advancements rather than high clock speeds. On top of that, the clock speeds should be taken with a grain of salt, since the processor in question is an engineering sample.
Cezanne offers twice as much L3 cache as the Renoir APUs, so it’s not surprising to see the Ryzen 3 5300G come equipped with an 8MB L3 cache. However, that’s still half of what’s found on Zen 2 desktop chips.
Given the model name, the Ryzen 3 5300G should be the successor to the Ryzen 3 4300G. Unfortunately, AMD decided to reserve desktop Renoir for pre-built OEM systems. You could still pick one up from the grey market, but it doesn’t come with any support or a warranty.
It’s uncertain if AMD will change its mind with desktop Cezanne. However, the rumors point to the possibility of the Zen 3 APUs arriving on the DIY market.
AMD Ryzen 3 5300G Benchmarks
| Processor | CPU-Z Single Thread | CPU-Z Multi Thread | Fritz Chess Benchmark | Cinebench R15 |
|---|---|---|---|---|
| Ryzen 3 5300G | 553.22 | 2,985.12 | 20,072 | 1,117 |
| Ryzen 3 3300X | 528 | 2,824 | 19,674 | 1,101 |
| Ryzen 3 Pro 4350G | 501 | 2,766 | 17,831.2 | 957.46 |
| Ryzen 3 3100 | 474 | 2,645 | 17,251 | 1,015 |
| Core i3-10100 | N/A | 2,461 | 16,037 | 1,001 |
In the CPU-Z benchmark shared on eBay, the Ryzen 3 5300G reportedly delivered 10.4% and 4.8% higher single-threaded performance than the Ryzen 3 Pro 4350G (Zen 2) and Ryzen 3 3300X (Zen 2), respectively. When it came to multi-threaded performance, the Ryzen 3 5300G was up to 7.9% faster than the Ryzen 3 Pro 4350G and up to 21.3% faster than the Core i3-10100 (Comet Lake).
The Ryzen 3 5300G’s dominance also extended to the other tests, including the Fritz Chess and Cinebench R15 benchmarks. In the former, the Zen 3 APU outperformed the Ryzen 3 Pro 4350G and Core i3-10100 by 12.6% and 25.2%, respectively.
In Cinebench R15, we can see the Ryzen 3 5300G rising above the Ryzen 3 Pro 4350G by 16.7% and the Core i3-10100 by 11.6%.
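The percentage uplifts quoted above can be reproduced directly from the scores in the table; a quick sanity check:

```python
# Verify the quoted uplifts from the raw benchmark scores in the table.
# Percentages follow (new / old - 1) * 100.
def uplift(new, old):
    return (new / old - 1.0) * 100.0

cpu_z_single = uplift(553.22, 501)    # 5300G vs. Ryzen 3 Pro 4350G -> ~10.4%
cpu_z_multi  = uplift(2985.12, 2461)  # 5300G vs. Core i3-10100     -> ~21.3%
fritz        = uplift(20072, 16037)   # 5300G vs. Core i3-10100     -> ~25.2%
cinebench    = uplift(1117, 957.46)   # 5300G vs. Ryzen 3 Pro 4350G -> ~16.7%

for name, val in [("CPU-Z 1T", cpu_z_single), ("CPU-Z nT", cpu_z_multi),
                  ("Fritz Chess", fritz), ("Cinebench R15", cinebench)]:
    print(f"{name}: +{val:.1f}%")
```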
| Game | 1080p, Low Settings | 1080p, Medium Settings | 1080p, High Settings |
|---|---|---|---|
| Battlefield V | 48 fps | 37 fps | 29 fps |
| Battlefield 4 | 95 fps | 82 fps | 47 fps |
While the Ryzen 3 5300G’s processing prowess is impressive, many will probably pick up the Zen 3 APU for its gaming potential. The Ryzen 3 5300G already appears to be a decent APU for gaming at 1080p resolution, but its 720p gaming performance should be even more spectacular.
At 1080p, the Ryzen 3 5300G’s Vega graphics engine reportedly pushed frame rates up to 48 frames per second (fps) on Battlefield V and 95 fps on Battlefield 4 with low settings. With medium settings, the APU’s listed frame rates dropped to 37 fps and 82 fps, respectively.
On high settings the Ryzen 3 5300G’s graphical performance took a hit. The APU ran Battlefield V at 29 fps, which is just 1 fps below what we consider playable, and Battlefield 4 at 47 fps.
It’s unclear why AMD is taking so long to announce desktop Cezanne. The engineering samples are evidently out in the wild already. With the current graphics card shortage, the Zen 3 APUs could be a legit option for gamers with tight budgets.
TechPowerUp is one of the most highly cited graphics card review sources on the web, and we strive to keep our testing methods, game selection, and, most importantly, test bench up to date. Today, I am pleased to announce our newest March 2021 VGA test system, which brings several firsts for TechPowerUp. This is our first graphics card test bed powered by an AMD CPU. We are using the Ryzen 7 5800X 8-core processor based on the “Zen 3” architecture. The new test setup fully supports the PCI-Express 4.0 x16 bus interface to maximize performance of the latest generation of graphics cards by both NVIDIA and AMD. The platform also enables the Resizable BAR feature by PCI-SIG, allowing the processor to see the whole video memory as a single addressable block, which could potentially improve performance.
A new test system heralds completely re-testing every single graphics card used in our performance graphs. It allows us to kick out some of the older graphics cards and game tests to make room for newer cards and games. It also allows us to refresh our OS, testing tools, update games to the latest version, and explore new game settings, such as real-time raytracing, and newer APIs.
A VGA rebench is a monumental task for TechPowerUp. This time, I’m testing 26 graphics cards in 22 games at 3 resolutions, or 66 game tests per card, which works out to 1,716 benchmark runs in total. In addition, we have doubled our raytracing testing from two to four titles. We also made some changes to our power consumption testing, which is now more detailed and more in-depth than ever.
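The workload math behind those totals breaks down like this:

```python
# The retest workload quoted above, broken down.
cards, games, resolutions = 26, 22, 3
tests_per_card = games * resolutions  # 66 game tests per card
total_runs = cards * tests_per_card   # 1,716 benchmark runs in total
print(tests_per_card, total_runs)     # 66 1716
```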
In this article, I’ll share some thoughts on what was changed and why, while giving you a first look at the performance numbers obtained on the new test system.
Hardware
Below are the hardware specifications of the new March 2021 VGA test system.
Windows 10 Professional 64-bit Version 20H2 (October 2020 Update)
Drivers:
AMD: 21.2.3 Beta
NVIDIA: 461.72 WHQL
The AMD Ryzen 7 5800X has emerged as the fastest processor we can recommend to gamers for play at any resolution. We could have gone with the 12-core Ryzen 9 5900X or even maxed out this platform with the 16-core 5950X, but neither would be faster at gaming, and both would be significantly more expensive. AMD certainly wants to sell you the more expensive (overpriced?) CPU, but the Ryzen 7 5800X is actually the fastest option because of its single CCD architecture. Our goal with GPU test systems over the past decade has consistently been to use the fastest mainstream-desktop processor. Over the years, this meant a $300-something Core i7 K-series LGA115x chip making room for the $500 i9-9900K. The 5900X doesn’t sell for anywhere close to this mark, and we’d rather not use an overpriced processor just because we can. You’ll also notice that we skipped upgrading from the older i9-9900K to the 10-core “Comet Lake” Core i9-10900K because we saw only negligible gaming performance gains, especially considering the large overclock on our i9-9900K. The additional two cores do squat for nearly all gaming situations, which is the second reason besides pricing that had us decide against the Ryzen 9 5900X.
We continue using our trusted Thermaltake TOUGHRAM 16 GB dual-channel memory kit that served us well for many years. 32 GB isn’t anywhere close to needed for gaming, so I didn’t want to hint at that, especially to less experienced readers checking out the test system. We’re running at the most desirable memory configuration for Zen 3 to reduce latencies inside the processor: Infinity Fabric at 2000 MHz, memory clocked at DDR4-4000, in 1:1 sync with the Infinity Fabric clock. Timings are at a standard CL19 configuration that’s easily found on affordable memory modules—spending extra for super-tight timings usually is overkill and not worth it for the added performance.
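The 1:1 relationship described above is simple arithmetic: DDR4 transfers data twice per clock, so the memory clock (MEMCLK) is half the data rate, and Zen 3 is happiest when Infinity Fabric (FCLK), UCLK, and MEMCLK all match. A quick sketch:

```python
# DDR4 is double data rate: MEMCLK is half the DDR4 data rate.
# Zen 3 runs with the lowest latency when FCLK == UCLK == MEMCLK.
def memclk_mhz(ddr4_data_rate):
    return ddr4_data_rate / 2

fclk = 2000  # MHz, as configured on this test bench
assert memclk_mhz(4000) == fclk  # DDR4-4000 keeps FCLK:MEMCLK at 1:1
print("DDR4-4000 runs synchronous 1:1 with a 2000 MHz Infinity Fabric")
```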
The MSI B550-A PRO was an easy choice for a motherboard. We wanted a cost-effective motherboard for the Ryzen 7 5800X and don’t care at all about RGB or other bling. The board can handle the CPU and memory settings we wanted for this test bed, and the VRM barely gets warm. It also doesn’t come with any PCIe gymnastics—a simple PCI-Express 4.0 x16 slot wired to the CPU without any lane switches along the way. The slot is metal-reinforced and looks like it can take quite some abuse over time. Even though I admittedly swap cards hundreds of times each year, probably even 1000+ times, it has never been an issue—insertion force just gets a bit softer, which I actually find nice.
Software and Games
Windows 10 was updated to 20H2
The AMD graphics driver used for all testing is now 21.2.3 Beta
All NVIDIA cards use 461.72 WHQL
All existing games have been updated to their latest available version
The following titles were removed:
Anno 1800: old, not that popular, CPU limited
Assassin’s Creed Odyssey: old, DX11, replaced by Assassin’s Creed Valhalla
Hitman 2: old, replaced by Hitman 3
Project Cars 3: not very popular, DX11
Star Wars: Jedi Fallen Order: horrible EA Denuvo makes hardware changes a major pain, DX11 only, Unreal Engine 4, of which we have several other titles
Strange Brigade: old, not popular at all
The following titles were added:
Assassin’s Creed Valhalla
Cyberpunk 2077
Hitman 3
Star Wars Squadrons
Watch Dogs: Legion
I considered Horizon Zero Dawn, but rejected it because it uses the same game engine as Death Stranding. World of Warcraft or Call of Duty won’t be tested because of their always-online nature, which enforces game patches that mess with performance—at any time. Godfall is a bad game, Epic exclusive, and commercial flop.
The full list of games now consists of Assassin’s Creed Valhalla, Battlefield V, Borderlands 3, Civilization VI, Control, Cyberpunk 2077, Death Stranding, Detroit Become Human, Devil May Cry 5, Divinity Original Sin 2, DOOM Eternal, F1 2020, Far Cry 5, Gears 5, Hitman 3, Metro Exodus, Red Dead Redemption 2, Sekiro, Shadow of the Tomb Raider, Star Wars Squadrons, The Witcher 3, and Watch Dogs: Legion.
Raytracing
We previously tested raytracing using Metro Exodus and Control. For this round of retesting, I added Cyberpunk 2077 and Watch Dogs: Legion. While Cyberpunk 2077 does not support raytracing on AMD, I still felt it’s one of the most important titles to test raytracing with.
While Godfall and DIRT 5 support raytracing, too, neither has had sufficient commercial success to warrant inclusion in the test suite.
Power Consumption Testing
The power consumption testing changes have been live for a couple of reviews already, but I still wanted to detail them a bit more in this article.
After our first Big Navi reviews I realized that something was odd about the power consumption testing method I’ve been using for years without issue. It seemed the Radeon RX 6800 XT was just SO much more energy efficient than NVIDIA’s RTX 3080. It definitely is more efficient because of the 7 nm process and AMD’s monumental improvements in the architecture, but the lead just didn’t look right. After further investigation, I realized that the RX 6800 XT was getting CPU bottlenecked in Metro: Last Light at even the higher resolutions, whereas the NVIDIA card ran without a bottleneck. This of course meant NVIDIA’s card consumed more power in this test because it could run faster.
The problem here is that I used the power consumption numbers from Metro for the “Performance per Watt” results under the assumption that the test loaded the card to the max. The underlying reason for the discrepancy is AMD’s higher DirectX 11 overhead, which only manifested itself enough to make a difference once AMD actually had cards able to compete in the high-end segment.
While our previous physical measurement setup was better than what most other reviewers use, I always wanted something with a higher sampling rate, better data recording, and a more flexible analysis pipeline. Previously, we recorded at 12 samples per second, but could only store minimum, maximum, and average. Starting and stopping the measurement process was a manual operation, too.
The new data acquisition system also uses professional lab equipment and collects data at 40 samples per second, which is four times faster than even NVIDIA’s PCAT. Every single data point is recorded digitally and stashed away for analysis. Just like before, all our graphics card power measurement is “card only”, not the “whole system” or “GPU chip only” (the number displayed in the AMD Radeon Settings control panel).
Having all data recorded means we can finally chart power consumption over time, which makes for a nice overview. Below is an example data set for the RTX 3080.
The “Performance per Watt” chart has been simplified to “Energy Efficiency” and is now based on the actual power and FPS achieved during our “Gaming” power consumption testing run (Cyberpunk 2077 at 1440p, see below).
The individual power tests have also been refined:
“Idle” testing now measures at 1440p instead of the 1080p used previously. This follows the increasing adoption of high-res monitors.
“Multi-monitor” is now 2560×1440 over DP + 1920×1080 over HDMI—to test how well power management works with mixed resolutions over mixed outputs.
“Video Playback” records power usage of a 4K 30 FPS video that’s encoded with H.264 AVC at a 64 Mbps bitrate—similar enough to most streaming services. I considered using something like madVR to further improve video quality, but rejected it because I felt it to be too niche.
“Gaming” power consumption is now using Cyberpunk 2077 at 1440p with Ultra settings—this definitely won’t be CPU bottlenecked. Raytracing is off, and we made sure to heat up the card properly before taking data. This is very important for all GPU benchmarking—in the first seconds, you will get unrealistic boost rates, and the lower temperature has the silicon operating at higher efficiency, which screws with the power consumption numbers.
“Maximum” uses Furmark at 1080p, which pushes all cards into its power limiter—another important data point.
Somewhat as a bonus, though I wasn’t sure how useful it would be, I added another run of Cyberpunk at 1080p, capped to 60 FPS, to simulate a “V-Sync” usage scenario. Running at V-Sync not only removes tearing, but also reduces the power consumption of the graphics card, which is perfect for slower single-player titles where you don’t need the highest FPS and would rather conserve some energy and have less heat dumped into your room. Just to clarify, we’re technically running a 60 FPS soft cap so that weaker cards that can’t hit 60 FPS (GTX 1650S and GTX 1660) won’t run 60/30/20 FPS V-Sync, but go as high as able.
Last but not least, a “Spikes” measurement was added, which reports the highest 20 ms spike recorded in this whole test sequence. This spike usually appears at the start of Furmark, before the card’s power limiting circuitry can react to the new conditions. On RX 6900 XT, I measured well above 600 W, which can trigger the protections of certain power supplies, resulting in the machine suddenly turning off. This happened to me several times with a different PSU than the Seasonic, so it’s not a theoretical test.
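A minimal sketch of what the logged-sample analysis above boils down to; the sample stream here is made up for illustration, and TechPowerUp's actual acquisition pipeline is not public:

```python
# Hypothetical reduction of a recorded power-sample stream (card-only watts).
# The readings and the 640.8 W spike are invented for this sketch.
SAMPLE_RATE_HZ = 40  # one reading every 25 ms
samples_w = [320.5, 331.2, 640.8, 298.4, 305.0]

average_w = sum(samples_w) / len(samples_w)
peak_w = max(samples_w)  # at 40 S/s each sample spans ~25 ms, so the single
                         # highest reading approximates the worst 20 ms spike
print(f"avg {average_w:.1f} W, spike {peak_w:.1f} W")
```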
Radeon VII Fail
Since we’re running with Resizable BAR enabled, we also have to boot with UEFI instead of CSM. When it was time to retest the Radeon VII, I got no POST, and it seemed the card was dead. Since there’s plenty of drama around Radeon VII cards suddenly dying, I already started looking for a replacement, but wanted to give it another chance in another machine, which had it working perfectly fine. WTF?
After some googling, I found our article detailing the lack of UEFI support on the Radeon VII. So that was the problem: the card simply didn’t have the BIOS update AMD released after our article. Well, FML, the page with the BIOS update no longer exists on AMD’s website.
Really? Someone on their web team made the decision to just delete the pages that contain an important fix to get the product working, a product that’s not even two years old? (launched Feb 7 2019, page was removed no later than Nov 8 2020).
Luckily, I found the updated BIOS in our VGA BIOS collection, and the card is working perfectly now.
Performance results are on the next page. If you have more questions, please do let us know in the comments section of this article.
Popular handheld gaming device maker GPD reportedly plans to create a new handheld built around AMD’s new Ryzen 7 5800U Zen 3 mobile processor, according to YouTuber Wild Lee. This will be GPD’s first use of an AMD CPU, and it will be by far the most powerful handheld they’ve made up to this point, competing directly with the Aya Neo.
GPD will be putting the 5800U in its Win Max chassis, so it’ll look very similar to GPD’s current Win Max devices, which feature both a keyboard and joysticks, letting you choose whether to play a game with a controller layout or with a mouse and keyboard.
Specs-wise, this should give the Zen 3 GPD device an edge over the Aya Neo, which features a previous-gen Ryzen 5 4500U with lower core and thread counts. The 5800U should be especially handy for emulators and AAA titles that rely heavily on CPU resources to maintain performance. While both devices pack integrated Vega graphics chips, the 5800U has 8 CUs clocked at up to 2GHz, while the 4500U only has 6 CUs clocked at up to 1.5GHz, so performance should also greatly favor the GPD handheld.
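A back-of-the-envelope comparison of the two iGPUs from those specs (assuming the standard Vega figures of 64 shaders per CU and 2 FLOPs per clock; this ignores memory bandwidth and power limits, so treat it as a paper ceiling, not a benchmark):

```python
# Rough peak FP32 throughput: CUs * 64 shaders/CU * 2 ops/clock * clock (GHz).
def vega_tflops(cus, clock_ghz, shaders_per_cu=64):
    return cus * shaders_per_cu * 2 * clock_ghz / 1000

r7_5800u = vega_tflops(8, 2.0)  # ~2.05 TFLOPS
r5_4500u = vega_tflops(6, 1.5)  # ~1.15 TFLOPS
print(f"{r7_5800u / r5_4500u:.2f}x")  # ~1.78x on paper
```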
There might be a third competitor in the space as well, from GPD’s own lineup. GPD just announced the Win 3, featuring an Intel Tiger Lake CPU and a form factor similar to the Nintendo Switch Lite, with a built-in keyboard. The use of a Tiger Lake CPU gives the Win 3 Intel’s latest Xe Graphics, which might be more powerful than AMD’s current Vega graphics — depending on the games used.
It will be interesting to see where the Ryzen 7 5800U equipped GPD device lands in the handheld market. Will gamers prefer a beefier CPU, or a potentially more powerful GPU, and how will the various options actually stack up? We’ll have to wait to find out.
The Milan launch, not to mention Intel’s pending 10nm Ice Lake launch, promises to reignite the heated data center competition once again. AMD has reiterated that EPYC Milan processors are on track for the formal launch in the first quarter of the year, and today’s announcement indicates those plans are on track. It’s noteworthy that the EPYC Milan chips already began shipping to select cloud and HPC customers in the last quarter of 2020, while the formal launch will mark availability for Tier 1 OEMs.
PNY’s XLR8 Gaming Epic-X RGB DDR4-3200 C16 memory kit is a good partner for contemporary AMD and Intel processors that natively support DDR4-3200 memory.
For
Acceptable performance
RGB lighting doesn’t require proprietary software
Against
Too expensive
Limited overclocking potential
Nowadays, it feels like the norm that every computer hardware company has a dedicated gaming sub-brand. For PNY, that would be XLR8 Gaming, which currently competes in three major hardware markets: memory, gaming graphics cards, and SSDs. In terms of memory, the XLR8 Gaming branding is still a bit wet behind the ears, but the company has started to solidify its lineups. The Epic-X RGB series, in particular, is one of XLR8 Gaming’s latest additions to its memory portfolio.
The Epic-X RGB memory modules come with a black PCB and a matching aluminum heat spreader. The design is as simple as it gets, and that’s not a bad thing. The heat spreaders feature a few diagonal lines and the XLR8 logo in the middle. An RGB lightbar is positioned on top of the memory module to provide some flair. The memory measures 47mm (1.85 inches) tall, so it might get in the way of some CPU air coolers.
PNY didn’t develop a proprietary program to control the Epic-X RGB’s lighting, which will favor users who don’t want to install another piece of software on their system. Instead, PNY is handing the responsibility over to the motherboard. Fear not, because the Epic-X RGB has all its bases covered. The memory’s illumination is compatible with Asus Aura Sync, Gigabyte RGB Fusion, MSI Mystic Light Sync, and ASRock Polychrome Sync.
The Epic-X RGB memory kit comprises two 8GB DDR4 memory modules. They’re built on a 10-layer PCB and feature a single-rank design. Thaiphoon Burner was unable to identify the integrated circuits (ICs) inside the Epic-X RGB. However, given the primary timings, the memory is likely using Hynix C-die chips.
Predictably, the Epic-X RGB runs at DDR4-2133 with 15-15-15-36 timings by default. There’s a single XMP profile that brings the memory up to speed. In this case, it sets the memory modules to DDR4-3200 and the timings to 16-18-18-38. At this frequency, the memory draws 1.35V. For more on timings and frequency considerations, see our PC Memory 101 feature, as well as our How to Shop for RAM story.
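One way to see why the XMP profile is an upgrade despite the higher CL number is to convert to absolute CAS latency in nanoseconds (CL divided by the memory clock):

```python
# True (absolute) CAS latency in nanoseconds: CL / (data rate / 2) * 1000.
def cas_ns(data_rate_mts, cl):
    memclk_mhz = data_rate_mts / 2
    return cl / memclk_mhz * 1000

jedec = cas_ns(2133, 15)  # ~14.06 ns at DDR4-2133 CL15
xmp   = cas_ns(3200, 16)  # 10.00 ns at DDR4-3200 CL16
print(f"{jedec:.2f} ns -> {xmp:.2f} ns")
```
So enabling XMP lowers the real first-word latency while also raising bandwidth.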
Comparison Hardware
| Memory Kit | Part Number | Capacity | Data Rate | Primary Timings | Voltage | Warranty |
|---|---|---|---|---|---|---|
| Team Group T-Force Xtreem ARGB | TF10D416G3600HC14CDC01 | 2 x 8GB | DDR4-3600 (XMP) | 14-15-15-35 (2T) | 1.45 Volts | Lifetime |
| Gigabyte Aorus RGB Memory | GP-AR36C18S8K2HU416R | 2 x 8GB | DDR4-3600 (XMP) | 18-19-19-39 (2T) | 1.35 Volts | Lifetime |
| PNY XLR8 Gaming Epic-X RGB | MD16GK2D4320016XRGB | 2 x 8GB | DDR4-3200 (XMP) | 16-18-18-38 (2T) | 1.35 Volts | Lifetime |
| Lexar DDR4-2666 | LD4AU008G-R2666U x 2 | 2 x 8GB | DDR4-2666 | 19-19-19-43 (2T) | 1.20 Volts | Lifetime |
Lifetime
Our Intel test system consists of an Intel Core i9-10900K and Asus ROG Maximus XII Apex on 0901 firmware. On the opposite side, the AMD testbed leverages an AMD Ryzen 5 3600 and ASRock B550 Taichi with 1.30 firmware. The MSI GeForce RTX 2080 Ti Gaming Trio handles the graphical duties on both platforms.
Intel Performance
Predictably, the Epic-X RGB didn’t beat the faster memory kits in our RAM benchmarks. Performance was consistent, with the Epic-X kit placing third overall on the application and gaming charts.
AMD Performance
Things didn’t change on the AMD platform, either. However, the Epic-X RGB did earn some merit, since the memory kit was the fastest in the Cinebench R20 and HandBrake x264 conversion tests. The margin of victory was slim, though, at less than 1%.
Overclocking and Latency Tuning
The Epic-X RGB isn’t the best overclocker that we’ve had in the labs. Nevertheless, we squeezed an extra 400 MHz out of the kit. We could hit DDR4-3600 at 1.45V after we relaxed the timings to 20-20-20-40.
Lowest Stable Timings
| Memory Kit | DDR4-2666 (1.45V) | DDR4-3200 (1.45V) | DDR4-3600 (1.45V) | DDR4-3900 (1.45V) | DDR4-4200 (1.45V) |
|---|---|---|---|---|---|
| Team Group T-Force Xtreem ARGB | N/A | N/A | 13-14-14-35 (2T) | N/A | 19-19-19-39 (2T) |
| Gigabyte Aorus RGB Memory | N/A | N/A | 16-19-19-39 (2T) | 20-20-20-40 (2T) | N/A |
| PNY XLR8 Gaming Epic-X RGB | N/A | 15-18-18-38 (2T) | 20-20-20-40 (2T) | N/A | N/A |
| Lexar DDR4-2666 | 16-21-21-41 (2T) | N/A | N/A | 17-22-22-42 (2T) | N/A |
Sadly, we didn’t have the same level of luck optimizing the Epic-X RGB at DDR4-3200. Even with a 1.45V DRAM voltage, we could only get the CAS Latency down from 16 to 15 clocks. The other timings wouldn’t yield.
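Converting those configurations to absolute CAS latency shows why the DDR4-3600 overclock isn't a free win: the looser CL20 actually raises the real latency versus the XMP profile.

```python
# Absolute CAS latency in ns = CL / (data rate / 2) * 1000.
def cas_ns(data_rate_mts, cl):
    return cl / (data_rate_mts / 2) * 1000

stock = cas_ns(3200, 16)  # XMP profile:      10.00 ns
tuned = cas_ns(3200, 15)  # 1.45V CL tune:    ~9.38 ns
oc    = cas_ns(3600, 20)  # DDR4-3600 OC:     ~11.11 ns, worse despite +400 MT/s
print(f"{stock:.2f} {tuned:.2f} {oc:.2f}")
```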
Bottom Line
In this day and age, enthusiasts are pursuing faster and faster memory kits. However, there’s always space for a standard memory kit, and the XLR8 Gaming Epic-X RGB DDR4-3200 C16 kit could very well find its place with users that want to stick to a processor’s official supported memory frequency. Today’s modern processors, such as AMD’s Zen 2 and Zen 3 processors and Intel’s looming Rocket Lake processors, support DDR4-3200 memory right out of the box. The XLR8 Gaming Epic-X RGB DDR4-3200 C16 would fit nicely in this situation since you can just enable XMP and never look back.
The XLR8 Gaming Epic-X RGB DDR4-3200 C16 has just one flaw, and that’s pricing. The memory kit retails for $94.99 when the typical DDR4-3200 C16 kit starts at $74.99. Even faster DDR4-3600 C18 memory kits sell for as low as $79.99. In PNY’s defense, the Epic-X RGB memory modules do look nice with the RGB lighting and whatnot, so we can probably chalk the extra cost up to the RGB tax.
Intel’s 11th Generation Rocket Lake processors aren’t due until March 30. However, some retailers are already shipping out orders. One user from the Chiphell forums has gotten his hands on a retail Core i7-11700K, and it would appear that Intel is bringing a memory overclocking concept similar to AMD’s Infinity Fabric Clock (FCLK) to its Rocket Lake chips.
If you’re not familiar with AMD’s Ryzen processors, many of which sit on our best CPUs list, the FCLK dictates the frequency of the Infinity Fabric, which serves as an interconnect across the chiplets. Adjusting this value allows you to hit higher memory frequency overclocks. By default, the FCLK is synchronized with the unified memory controller clock (UCLK) and memory clock (MEMCLK). Obviously, you can run the FCLK in asynchronous mode, but doing so will induce a latency penalty that negatively impacts performance.
Since Rocket Lake isn’t officially out yet, we’re not completely sure how Intel’s take on FCLK-style memory overclocking will work. The BIOS screenshot shows two operational modes for the CPU IMC (integrated memory controller) and the DRAM clock on MSI’s Z490I Unify. Apparently, Gear 1 runs the two in a 1:1 ratio, while Gear 2 runs the IMC at half the memory clock (a 1:2 ratio). It’s similar to how the FCLK works on Ryzen processors.
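Those ratios are easy to work out by hand. Here’s a rough sketch of what the two gear modes imply for the controller clock; the function name and exact behavior are our assumptions, since Intel hasn’t documented the feature yet:

```python
# Hypothetical sketch of Gear 1 vs. Gear 2, based on the ratios
# visible in the BIOS screenshot. Names and logic are illustrative,
# not Intel's documented behavior.

def imc_clock_mhz(dram_data_rate: int, gear: int) -> float:
    """Estimate the IMC clock for a DDR4 data rate and gear mode.

    DDR4 transfers twice per clock, so the memory clock (MEMCLK) is
    half the data rate. Gear 1 runs the IMC 1:1 with MEMCLK; Gear 2
    runs it at half MEMCLK (1:2), much like AMD's asynchronous FCLK.
    """
    memclk = dram_data_rate / 2          # e.g., DDR4-3733 -> 1866.5 MHz
    if gear == 1:
        return memclk                    # 1:1, lowest latency
    if gear == 2:
        return memclk / 2                # 1:2, enables higher data rates
    raise ValueError("gear must be 1 or 2")

print(imc_clock_mhz(3733, 1))  # 1866.5
print(imc_clock_mhz(4000, 2))  # 1000.0
```

This also makes the latency penalty intuitive: in Gear 2 the controller ticks half as often relative to the DRAM, so each transaction waits longer on average.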
According to the author of the forum post, his retail Core i7-11700K seems to hit a wall at DDR4-3733, suggesting that DDR4-3733 is the limit at which Rocket Lake’s IMC can run in a 1:1 ratio with the memory clock. For perspective, the majority of AMD’s Zen 2 processors scale to a 1,800 MHz FCLK (DDR4-3600), with some samples hitting a 1,900 MHz FCLK (DDR4-3800). If Rocket Lake has the same limits, it’s going to lose points since AMD’s latest Zen 3 processors have peaked as high as a 2,000 MHz FCLK (DDR4-4000) before breaking synchronous operation.
It’s too soon to pass judgment on whether DDR4-3733 is a hard cap built into the Rocket Lake silicon itself or merely a product of early, unoptimized microcode. We should point out that the user did his testing on a MEG Z490I Unify motherboard, so mature firmware is required for Rocket Lake to run correctly. The Chiphell forum user provided some RAM benchmarks that reportedly show the performance impact.
With a DDR4-4000 memory kit at 18-20-20-40 1T timings in asynchronous mode, the user got a latency of 61.3 nanoseconds in AIDA64. Switching over to a DDR4-3600 kit with 14-14-14-34 2T timings allowed him to decrease the latency to 50.2 nanoseconds, an 18.1% reduction. However, we have to take certain points into consideration. For one, the DDR4-4000 memory kit has very sloppy timings that contribute to the higher latency. Furthermore, the user evidently overclocked the Core i7-11700K’s uncore frequency to 4,100 MHz on the DDR4-3600 run, so that probably skewed the results in that run’s favor as well.
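For reference, the 18.1% figure is simply the relative reduction between the two AIDA64 runs:

```python
# Verifying the reported AIDA64 latency delta between the two runs.

def relative_reduction(before: float, after: float) -> float:
    """Percentage reduction from 'before' to 'after'."""
    return (before - after) / before * 100

# DDR4-4000 18-20-20-40 1T (async) vs. DDR4-3600 14-14-14-34 2T (sync)
delta = relative_reduction(61.3, 50.2)
print(f"{delta:.1f}% lower latency")  # 18.1% lower latency
```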
We’ll have to wait until the Rocket Lake processors are available to investigate the matter thoroughly. So far, a DDR4-3733 limit certainly doesn’t bode well for Rocket Lake, especially when some of the really pricey Z590 motherboards are advertising memory support above DDR4-5000. In all fairness, Rocket Lake only natively supports memory up to DDR4-3200 so anything higher is technically overclocking in Intel’s book.
We now have official specs for the AMD Radeon RX 6700 XT, yet another poorly kept secret in the land of GPUs you can’t actually buy. We’ve been expecting Navi 22 to join the ranks of the best graphics cards and land somewhere near the RTX 3060 Ti in our GPU benchmarks hierarchy for several months now, and it will officially arrive on March 18, 2021, at 9am Eastern. It will be completely sold out by 9:00:05, and based on recent events like the RTX 3060 12GB, we doubt more than a handful of people will manage to acquire one at whatever MSRP AMD sets.
Speaking of which, AMD revealed that it plans to launch the RX 6700 XT with a starting price of $479. Considering AMD expects it to be faster than the RTX 3070, never mind the RTX 3060 Ti, that’s a reasonable target. The die size also appears to be relatively large, thanks to a still-sizeable Infinity Cache. Here’s the full list of known specs:
The AMD Radeon RX 6700 XT comes in with the highest GPU clocks we’ve seen to date: 2424 MHz. The RX 6800 XT and RX 6900 XT both have 2250 MHz game clocks, though in actual benchmarks, we’ve seen speeds of more than 2500 MHz already; the Game Clock is more of a conservative boost clock. Even with a drop down to 40 CUs (from 60 CUs on the RX 6800), the higher clock speeds should prove relatively potent. Raw theoretical performance sits at 12.4 TFLOPS, and assuming AMD uses 16Gbps GDDR6 again (which is likely), it will have 384GBps of bandwidth. That’s less raw bandwidth than its bigger siblings, but it still has a honking 96MB L3 Infinity Cache to help compensate.
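If you’re curious where those headline numbers come from: RDNA 2 packs 64 stream processors per CU, each doing two FP32 operations per clock via FMA, and GDDR6 bandwidth is just per-pin data rate times bus width. A quick back-of-the-envelope check:

```python
# Deriving the RX 6700 XT's headline specs from the spec sheet.
# Assumes RDNA 2's 64 stream processors per CU and 2 FLOPs/clock (FMA).

def tflops(cus: int, clock_mhz: int, sp_per_cu: int = 64) -> float:
    """Peak FP32 throughput in TFLOPS."""
    return cus * sp_per_cu * 2 * clock_mhz * 1e6 / 1e12

def bandwidth_gbps(bus_bits: int, gddr6_gbps_per_pin: int) -> float:
    """Memory bandwidth in GBps: bits / 8 bytes, times per-pin rate."""
    return bus_bits / 8 * gddr6_gbps_per_pin

print(f"{tflops(40, 2424):.1f} TFLOPS")       # 12.4 TFLOPS
print(f"{bandwidth_gbps(192, 16):.0f} GBps")  # 384 GBps
```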
We were very curious about how far AMD would cut down the Infinity Cache from Navi 21. The answer appears to be “not very much.” The Biggest Navi chip has up to 80 CUs and 128MB of Infinity Cache, so AMD cut the computational resources in half but only lopped off a quarter of the cache. That should keep cache hit rates high, which means effective bandwidth — even from a 192-bit memory interface — should be much higher than Nvidia’s similarly-equipped RTX 3060 12GB.
Let’s go back to that TFLOPS number for a moment, though. 12.4 TFLOPS may not sound like much, but it’s a big jump from the previous gen 40 CU part. The RX 5700 XT had a theoretical 9.8 TFLOPS, and we know the Infinity Cache allows the GPU to get closer to that maximum level of performance in games. That means a 40-50 percent jump in performance might be possible. On the other hand, the RX 6800 with 60 CUs, even at lower clocks, is rated for 16.2 TFLOPS, a 31% increase in compute potential. It also has 33% more memory bandwidth, which means on average it should be at least 20% faster than the 6700 XT, for about 20% more money (well, if MSRP was anything but a fantasy right now).
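The 31% and 33% deltas quoted above fall straight out of the spec sheet (the 512GBps figure for the RX 6800 is our assumption, from its 256-bit bus at 16Gbps):

```python
# RX 6800 vs. RX 6700 XT deltas, as plain ratios.
rx6800_tflops, rx6700xt_tflops = 16.2, 12.4
rx6800_bw, rx6700xt_bw = 512, 384  # GBps: 256-bit vs. 192-bit at 16Gbps

print(f"+{(rx6800_tflops / rx6700xt_tflops - 1) * 100:.0f}% compute")
print(f"+{(rx6800_bw / rx6700xt_bw - 1) * 100:.0f}% bandwidth")
```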
There are other indications this will still be a performant card, like the 230W board power (just 20W lower than RX 6800). And then there’s the die shot comparison.
AMD didn’t reveal all of the specs, but based on that image, it looks like RX 6700 XT / Navi 22 will max out at 96 ROPs (Render Outputs), and the total die size looks to be in the neighborhood of 325mm square, with around 16-17 billion transistors (give or take 10%). That’s quite a bit smaller than Navi 21 (520mm square and 26.8 billion transistors), and perhaps the above images aren’t to scale, but clearly, there’s a lot of other circuitry besides the GPU cores that still needs to be present — the cores and cache only account for about half of the die area.
By way of comparison, Nvidia’s GA106 measures 276mm square with 12 billion transistors, while the GA104 has 17.4 billion transistors and a 393mm square die size. AMD’s Navi 22 should be competitive with GA104, but with a smaller size thanks to its TSMC N7 process technology. However, TSMC N7 costs more and is in greater demand, which leads back to the $479 price point.
Performance, as usual, will be the real deciding factor on how desirable the RX 6700 XT ends up being. AMD provided some initial benchmark results — using games and settings that generally favor its GPUs, naturally. Take these benchmarks with a grain of salt, in other words, but even reading between the lines, the 6700 XT looks pretty potent.
That’s eight games, three with definite AMD ties (Assassin’s Creed Valhalla, Borderlands 3, and Dirt 5) and two with Nvidia ties (Cyberpunk 2077 and Watch Dogs Legion). AMD says “max settings,” but we suspect that means max settings but without ray tracing effects. Still, there are a lot of games that don’t use RT, and of those that have it, the difference in visual quality isn’t even that great for a lot of them, so rasterization performance still reigns as the most important factor. Based on AMD’s data, it looks like the RX 6700 XT will trade blows with the RTX 3070.
AMD had a few other announcements today. It’s bringing resizable BAR support, called AMD Smart Access Memory, to Ryzen 3000 processors. That excludes the Ryzen 3 3200G and Ryzen 5 3400G APUs, which, of course, are technically Zen+ architecture and have a limited x8 PCIe link to the graphics card. AMD also didn’t mention any Ryzen 4000 mobile or desktop APUs (i.e., Renoir), so those may not be included either, but every Zen 2 and Zen 3 AMD CPU will have Smart Access Memory.
AMD didn’t discuss future Navi 22-derived graphics cards, but there will inevitably be more products built around the GPU. From what we can tell, RX 6700 XT uses the fully enabled chip with 40 CUs. Just as we’ve seen with Navi 21 and previous GPUs like Navi 10, not all chips are fully functional, and harvesting those partial dies is a key component of improving yields. We expect to see an RX 6700 (non-XT) at the very least, and there are opportunities for OEM-only variants as well (i.e., similar to the RX 5500 non-XT cards of the previous generation). We’ll probably see the RX 6700 (or whatever the final name ends up being) within the next month.
Again, pricing and availability are critical factors for any GPU launch, and while we have no doubt AMD will sell every RX 6700 XT it produces, we just hope it can produce more than a trickle of cards. When asked about this, AMD issued the following statement:
“We hear, and understand, the frustration from gamers right now due to the unexpectedly strong global demand for graphics cards. With the AMD Radeon RX 6700 XT launch, we are on track to have significantly more GPUs available for sale at launch. We continue to take additional steps to address the demand we see from the community. We are also refreshing stock of both AMD Radeon RX 6000 Series graphics cards and AMD Ryzen 5000 Series processors on AMD.com on a weekly basis, giving gamers and enthusiasts a direct option to purchase the latest Ryzen CPUs and Radeon GPUs at the suggested etail and retail price.”
That’s nice to hear, but we remain skeptical. We’ve been tracking general trends in the marketplace, and it’s clear Nvidia continues to sell far more graphics cards than AMD, and it’s still not coming anywhere close to meeting demand. Will Navi 22 buck that trend? Our Magic 8-Ball was cautiously optimistic, as you can see:
All joking aside, we’re looking forward to another likely frustrating GPU launch. There’s no indication that AMD will follow Nvidia’s example and try to limit mining performance on its future GPUs, but with or without high mining performance, the RX 6700 XT will inevitably sell out. There’s at least some good news in recent GPU mining profitability trends, however: Cards that were making $12–$15 per day last month are now mining in the $6–$8 range and dropping. That’s not going to stop mining completely, but hopefully it means fewer people trying to start up mining farms if the potential break-even point is more than a year away, rather than 3–4 months out.
The AMD Radeon RX 6700 XT officially launches on March 18. We’ll have a full review at that time. Given the pictures AMD sent along, we expect there will be dual-fan reference cards, but AMD will want to shift the bulk of cards over to its AIB partners. We should see various models from all the usual partners, and we’re eager to see how the GPU fares in independent testing. Check back on March 18 to find out.
Below is the full slide deck from AMD’s announcement today.
[Slide deck gallery: 35 images]