Flexxon, a Singapore-based security firm, has introduced an SSD with embedded AI-based security capabilities that the company says protect against traditional threats like malware and viruses, as well as physical tampering with the drive.
Modern SSD controllers rely on several Arm Cortex-R cores and are essentially high-performance systems-on-chip with fairly sophisticated compute capabilities. These capabilities, along with firmware enhancements, are what power Flexxon’s X-Phy SSD platform.
The platform relies on a technology that Flexxon calls AI One Core Quantum Engine and a special secured firmware. The company’s description of its technology is vague at best, so it is unclear whether its engine is a completely self-sufficient/isolated platform or a combination of software, hardware, and firmware.
This AI One Core Quantum Engine presumably runs on an NVMe 1.3-compliant SSD controller and monitors all the traffic. Once its algorithm detects a threat (a virus, malware, an intrusion), it can block it to protect the firmware and data integrity. The company also says the self-learning algorithm can detect abnormalities and flag them as threats, though it did not elaborate. Meanwhile, the drive comes with a special application. The X-Phy drive looks to be compatible with all major operating systems, based on an image published on the company’s website.
The SSD is also equipped with “a range of features including temperature sensors to detect unusual movements that occur” in a bid to protect against physical intrusion. If the device detects tampering, it will lock itself and alert the owner via email. It is unclear how the device can alert its owner via email if someone steals it from a PC that is shut down. Of course, there are ways to monitor drive activity when the PC is off and lock the SSD if it is removed, but there is no way to issue a notification about a physical intrusion if the OS isn’t running (unless, of course, the SSD is equipped with a modem).
Flexxon stresses that the X-Phy SSD does not replace traditional security measures and calls it ‘the last line of defense.’
Flexxon’s X-Phy SSD is currently in trials with “government agencies, medical and industrial clients” and the manufacturer expects it to be available in Q4 2021 or in early 2022. The drive will be available in 512GB and 1TB 3D NAND configurations in M.2-2280 and U.2 form-factors with a PCIe 3.0 x4 interface. The SSD will support LDPC ECC as well as dynamic and static wear leveling. Expected prices are unknown.
According to IC Insights, Samsung is expected to soon make a comeback in the semiconductor industry and once again overtake Intel as the largest semiconductor manufacturer in the world. The forecast puts this changeover right around Q2 2021, which isn’t far from now.
Intel has long been a dominant player in the semiconductor industry and currently holds the longest run as the number-one semiconductor manufacturer in the world, starting in 1993 and lasting all the way through 2016.
It took 23 years before Samsung finally displaced Intel from the top spot in 2017, thanks to booming memory sales during that time. It was a good moment for competition, and it finally proved that Intel could be beaten by another player in the semiconductor industry.
It should come as no surprise that Samsung was the company to beat Intel; over the past decade, Samsung has become a mega-corporation in the tech industry, becoming the world’s leading DRAM and NAND flash manufacturer, as well as producing many other devices such as TVs, phones, and smart home appliances.
But Samsung’s lead was short-lived: after just two quarters, the company suffered a 17% loss in revenue due to a sharp decline in memory sales, allowing Intel to regain the number-one position in 2018.
Luckily for Samsung, Intel’s sales have mostly flatlined since 2020, leading to a minor decline in revenue. This has allowed Samsung, with its slow but continuous revenue growth, to almost match Intel’s sales performance over the past few months.
If this trend continues, Samsung should once again displace Intel as the lead semiconductor manufacturer.
Lexar has made a name for itself in the portable storage market. The company is very well known for its SD cards and USB sticks, so it’s natural for it to expand into other areas of flash storage, like consumer SSDs. Lexar was founded as a subsidiary of Micron, but was sold to Longsys in 2017 and has been operating quite independently since.
The Lexar SL200 is a USB-C-based, portable SSD that uses the USB 3.1 interface with speeds of up to 500 MB/s. Traditionally, most large-capacity external storage has been based on hard drives, which come at very low cost per TB but have several drawbacks. First, since they contain mechanical components, they are sensitive to shock: if you drop one, it’s very likely broken. SSDs, on the other hand, are far more resistant to physical damage. Another plus of SSDs is that they don’t use any mechanical components to access data, so their access times are much lower than on HDDs, and transfer rates are higher, too.
Internally, the Lexar SL200 uses a Lexar DM918 controller paired with 3D TLC NAND flash and a USB-to-SATA bridge chip from ASMedia.
Gigabyte Aorus Gen4 7000s is a high-performance, premium-priced M.2 NVMe SSD that keeps cool under any workload due to its sleek pre-installed heatsink.
For
+ Competitive performance
+ Attractive design
+ Effective cooling
+ AES 256-bit encryption
+ 5-year warranty and high endurance ratings
Features and Specifications
Today, we have Gigabyte’s Aorus Gen4 7000s in the lab for review, with cooling fit for an SSD that can gulp down over 8.5 watts of power. With an extremely well-crafted heatsink decked out with tons of fins, it’s ready for the harshest of workloads and will add some bling to a high-end gaming build. Designed to compete with the best SSDs, the Aorus Gen4 7000s dishes out up to 7 GBps and, surprisingly, shows improvement in our testing over earlier Phison E18 NVMe SSD samples we’ve come across.
We’ve had our hands on many Phison PS5018-E18-based SSDs in the past few months and they all deliver very high performance. But with such high speed, these SSDs also have high power consumption compared to other SSDs such as Samsung’s 980 Pro and WD’s Black SN850, and consequently high heat output under heavy use, especially for the higher-capacity models like Sabrent’s 4TB Rocket 4 Plus.
As awesome as Sabrent’s recently reviewed 4TB Rocket 4 Plus is, when relying solely on its thin heat spreader to keep cool, it is still susceptible to throttling under massive write workloads. Because of this, many of these new Phison E18-powered SSDs are rolling out equipped with heatsinks to ensure throttle-free (or at least hopefully throttle-free) operation.
Corsair went as far as to develop both a heatsinked MP600 Pro as well as an MP600 Pro Hydro X edition for those with custom water-cooled rigs who demand completely throttle-free operation. In our hands, we now have Gigabyte’s Aorus Gen4 7000s, an interesting alternative that hasn’t gone to such drastic measures. Coming with a surface-area-maximized heatsink, the Aorus Gen4 7000s takes a more traditional design approach to tackle the minor heat problem, but what’s not so traditional is that it also incorporates a nanocarbon coating that is claimed to reduce temperatures by 20%.
Specifications
Product: Gen4 7000s 1TB | Gen4 7000s 2TB
Pricing: $209.99 | $389.99
Capacity (User / Raw): 1000GB / 1024GB | 2000GB / 2048GB
Form Factor: M.2 2280 | M.2 2280
Interface / Protocol: PCIe 4.0 x4 / NVMe 1.4 | PCIe 4.0 x4 / NVMe 1.4
Controller: Phison PS5018-E18 | Phison PS5018-E18
DRAM: DDR4 | DDR4
Memory: Micron 96L TLC | Micron 96L TLC
Sequential Read: 7,000 MBps | 7,000 MBps
Sequential Write: 5,500 MBps | 6,850 MBps
Random Read: 350,000 IOPS | 650,000 IOPS
Random Write: 700,000 IOPS | 700,000 IOPS
Security: AES 256-bit encryption | AES 256-bit encryption
Endurance (TBW): 700 TB | 1,400 TB
Part Number: GP-AG70S1TB | GP-AG70S2TB
Warranty: 5 years | 5 years
The Gigabyte Aorus Gen4 7000s is available in two capacities, 1TB and 2TB, priced at $210 and $390, respectively. Gigabyte rates each capacity to hit 7,000 MBps read, but the 1TB is rated to deliver 5,500 MBps write while the 2TB model can hit 6,850 MBps write. In terms of peak random performance, the SSD is rated capable of up to 650,000 / 700,000 random read/write IOPS at the highest capacity.
Gigabyte backs the Aorus Gen4 7000s with a 5-year warranty and each capacity comes with respectable write endurance ratings – up to 700TB per 1TB of capacity. Such high endurance is thanks to Phison’s fourth-generation LDPC and RAID ECC, wear leveling, and a bit of over-provisioning. Also, like Corsair’s MP600 Pro, the SSD supports AES 256-bit hardware encryption, perfect for those on the go who need to meet security compliance standards when handling sensitive data.
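To put such TBW figures in perspective, they can be converted into drive writes per day (DWPD) over the warranty period. The helper below is a simple back-of-the-envelope illustration, not a vendor tool; the function name is my own:

```python
# Convert a TBW endurance rating into drive-writes-per-day (DWPD)
# over the warranty period (5 years by default).
def tbw_to_dwpd(tbw_tb, capacity_tb, warranty_years=5):
    days = warranty_years * 365
    return tbw_tb / (capacity_tb * days)

# Aorus Gen4 7000s: 700 TB of writes per 1 TB of capacity
print(round(tbw_to_dwpd(700, 1), 2))  # ~0.38 DWPD
```

In other words, you could rewrite the entire drive roughly every three days for five years before exceeding the rating.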
Software and Accessories
Gigabyte provides a basic SSD Toolbox that can read the SSD’s health and S.M.A.R.T. data, as well as secure erase it (assuming it’s a secondary drive).
A Closer Look
Gigabyte’s Aorus Gen4 7000s comes in an M.2 2280 double-sided form factor. The included aluminum heatsink measures 11.5 x 23.5 x 76 mm and the black and silver two-tone looks fantastic, too. We’re not too sure how much the nanocarbon coating helps by itself, but based on the way this heatsink is designed, we’re fairly confident that there is plenty of surface area to dissipate all the heat it needs to without it. The SSD is sandwiched between two thick thermal pads that transfer heat from the PCB to the heatsink and baseplate.
As mentioned, Gigabyte’s Aorus Gen4 7000s is powered by Phison’s second-generation PCIe 4.0 x4 SSD controller, the PS5018-E18. It leverages DRAM and features a triple-core architecture paired with the company’s CoXProcessor 2.0 technology (an extra two R5 CPU cores) for fast and consistent response. The main CPU cores are Arm Cortex-R5s clocked at 1 GHz, up from 733 MHz on its predecessor, the PS5016-E16, while the CoXProcessor 2.0 cores are clocked slower for better efficiency.
Our 2TB sample comes with 2GB of DDR4 from SK hynix, split between two DRAM ICs, one on each side of the PCB. These chips interface with the controller at 1,600 MHz and operate at 1.2V. Additionally, there are eight NAND flash packages in total for storage, each containing 256GB of Micron’s 512Gb 96L TLC flash (32 dies in total). This NAND operates at fast speeds of up to 1,200 MTps and features a quad-plane architecture for fast, responsive performance.
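As a quick sanity check, the package and die counts above are internally consistent (the dies-per-package figure follows from 512Gb, i.e. 64GB, per die):

```python
# Flash configuration of the 2TB Aorus Gen4 7000s as described above.
packages = 8
gb_per_package = 256
gb_per_die = 512 // 8            # 512Gb die = 64GB
dies_per_package = gb_per_package // gb_per_die

assert packages * gb_per_package == 2048   # 2TB of raw NAND
assert packages * dies_per_package == 32   # 32 dies in total
```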
TeamGroup has announced the T-Create Expert PCIe 3.0 SSD, which is oriented towards both content creators and Chia farmers. The SSD carries the industry’s first 12-year limited warranty.
Available in 1TB and 2TB flavors, the T-Create Expert PCIe SSD boasts endurance ratings of 6,000 TBW and 12,000 TBW, respectively. Its performance, however, is limited to PCIe 3.0 x4 speeds. TeamGroup didn’t divulge which SSD controller and NAND are used inside the drive, though.
Regardless of the capacity, the T-Create Expert PCIe SSD offers sequential read and write speeds of up to 3,400 MBps and 3,000 MBps, respectively. The drive’s random performance is rated for 180,000 IOPS reads and 140,000 IOPS writes. The T-Create Expert PCIe SSD’s Chia farming performance is so far unproven. For reference, a single Chia plot can take up to 12 hours to complete, depending on the drive.
The T-Create Expert PCIe SSD’s greatest asset is obviously its durability because, performance-wise, there are far faster drives on the market. Typically, a Chia plot requires between 1.6TB and 1.8TB of writes. In theory, the 1TB and 2TB models can create up to 3,333 and 6,666 plots, respectively, before hitting their write limits.
The last time we checked, each Chia plot was selling for $3.50. Therefore, the 1TB drive can generate up to $11,665.50 in gross revenue and the 2TB up to $23,331. TeamGroup didn’t reveal pricing for the T-Create Expert PCIe SSD, though, so we can’t factor in the cost yet.
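The plot and revenue arithmetic works out as follows (a back-of-the-envelope sketch using the article’s figures; the function names are my own, and the plot count assumes the worst-case 1.8TB of writes per plot):

```python
# How many Chia plots a drive can create before exhausting its TBW rating,
# and the gross revenue at the quoted per-plot price.
def max_plots(tbw, writes_per_plot_tb=1.8):
    return int(tbw / writes_per_plot_tb)

def gross_revenue(tbw, price_per_plot=3.5):
    return max_plots(tbw) * price_per_plot

print(max_plots(6000))        # 3333 plots for the 1TB model (6,000 TBW)
print(gross_revenue(6000))    # 11665.5
print(gross_revenue(12000))   # 23331.0 for the 2TB model (12,000 TBW)
```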
The FuzeDrive P200 is a QLC-based hybrid SSD that defies the norm through clever tiering technology that delivers higher endurance, but the excessive pricing isn’t for everyone.
For
+ Large static and dynamic SLC caches
+ Competitive performance
+ Software package
+ 5-year warranty
+ High endurance ratings
Against
– High cost
– Capacity trade-off for SLC cache
– Low sustained write speed
– Initial software configuration
– Lacks AES 256-bit encryption
Features and Specifications
The Enmotus FuzeDrive P200 SSD takes an unconventional approach to increasing SSD performance and extending lifespan, leveraging the power of AI to deliver up to 3.4 GBps and class-leading endurance. According to the company, artificial intelligence isn’t just about robots and predicting future business trends: it can also enhance your SSD and tune it to your usage patterns, thus unlocking more performance and endurance.
Enmotus builds the FuzeDrive P200 using commodity hardware but says the drive delivers more than six times the endurance of most QLC-based SSDs through its sophisticated AI-boosted software and tiering techniques. In fact, a single 1.6TB drive is guaranteed to absorb an amazing 3.6 petabytes of writes throughout its warranty. The company’s FuzionX software also allows you to expand your storage volume up to 32TB by adding another SSD or HDD (just one). All of this will set you back about the same as a new Samsung 980 Pro with its faster PCIe interface, though, ultimately making this drive attractive only to a niche audience.
Innovative AI Storage
Traditional SSDs, like Sabrent’s Rocket Q, come with QLC flash that operates in a dynamic SLC mode. While this provides fast performance and high capacity, it has drawbacks that primarily manifest as low endurance.
QLC flash can operate in the full 16-level, low-endurance QLC mode or in a high-endurance SLC mode, and Enmotus’s FuzeDrive P200 SSD takes advantage of the latter. By operating part of Micron’s flash solely in high-endurance SLC mode, the flash’s endurance multiplies: its program-erase cycle rating increases from roughly 600–1,000 cycles to 30,000 cycles. The main reason is that in SLC mode the flash can be programmed in just one pass, whereas QLC takes three or more passes to fine-tune the cell charge.
The 1.6TB FuzeDrive P200 comes with 2TB of raw flash, but not all of it is available to the user. This is somewhat similar to Intel’s Optane Memory H10 and soon-to-be-released H20, but instead of the complication of relying on two separate controllers and storage mediums, the P200 uses only one controller and one type of flash. The FuzeDrive leverages the advantages of both dynamic and high-endurance SLC modes by splitting the device into two LBA zones. The first LBA range is the high-endurance zone; it sacrifices 512GB of the raw flash to provide 128GB of SLC goodness (4-bit QLC -> 1-bit SLC), but the user can’t access this area directly. The remaining QLC flash in the second LBA zone operates in dynamic SLC mode and is made available to the end user. The 900GB model comes with a smaller 24GB SLC cache.
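The capacity arithmetic here is simple: a QLC cell normally stores four bits, so running it as SLC (one bit per cell) yields a quarter of the raw capacity. A minimal check (function name is my own):

```python
# One QLC cell stores 4 bits; operated as SLC it stores 1 bit,
# so a region of raw QLC yields one quarter of its capacity as SLC.
def slc_capacity_gb(raw_qlc_gb, bits_per_cell=4):
    return raw_qlc_gb / bits_per_cell

print(slc_capacity_gb(512))  # 128.0 -> the P200's 128GB static SLC zone
print(slc_capacity_gb(96))   # 24.0  -> the 900GB model's 24GB SLC cache
```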
The company’s intelligent AI NVMe driver virtualizes the zones into a single volume and relocates data between them after analyzing the I/O. In this tiering configuration, a large RAM-based table (roughly 100MB) is set up in memory to track I/O behavior across the whole storage device. The most active and write-intensive data is automatically directed to the SLC zone, while inactive data is moved to the QLC portion with minimal CPU overhead compared to caching techniques. Movements happen only in the background, and only one copy of the data exists. The NVMe driver manages data placement, while the drive uses specially modified firmware to split it into two separate LBA zones.
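The hot/cold placement idea can be illustrated with a toy sketch. This is emphatically not Enmotus’s actual driver; all class, method, and variable names here are invented, and real tiering operates on block extents inside a kernel driver:

```python
# Toy model of write-weighted tiering: track per-extent I/O heat and,
# in a background pass, keep only the hottest extents in the SLC zone.
from collections import Counter

class TieredVolume:
    def __init__(self, slc_extents):
        self.slc_extents = slc_extents  # how many extents fit in the SLC zone
        self.heat = Counter()           # per-extent I/O heat score
        self.in_slc = set()             # extents currently placed in SLC

    def record_io(self, extent, is_write):
        # Writes weigh more than reads: the SLC zone exists to
        # absorb write-intensive data.
        self.heat[extent] += 2 if is_write else 1

    def rebalance(self):
        # Background pass: migrate the hottest extents into SLC;
        # everything else stays in (or moves back to) QLC.
        self.in_slc = {e for e, _ in self.heat.most_common(self.slc_extents)}

vol = TieredVolume(slc_extents=2)
for _ in range(10):
    vol.record_io(extent=7, is_write=True)   # hot, write-heavy extent
vol.record_io(extent=3, is_write=False)      # barely-touched extent
vol.rebalance()
print(7 in vol.in_slc)  # True: the write-hot extent lands in SLC
```

The real driver additionally has to move the data itself and keep the mapping table consistent; this sketch only captures the placement decision.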
Specifications
Product: FuzeDrive P200 900GB | FuzeDrive P200 1.6TB
Pricing: $199.99 | $349.99
Form Factor: M.2 2280 | M.2 2280
Interface / Protocol: PCIe 3.0 x4 / NVMe 1.3 | PCIe 3.0 x4 / NVMe 1.3
Controller: Phison PS5012-E12S | Phison PS5012-E12S
DRAM: DDR3L | DDR3L
Memory: Micron 96L QLC | Micron 96L QLC
Sequential Read: 3,470 MBps | 3,470 MBps
Sequential Write: 2,000 MBps | 3,000 MBps
Random Read: 193,000 IOPS | 372,000 IOPS
Random Write: 394,000 IOPS | 402,000 IOPS
Endurance (TBW): 750 TB | 3,600 TB
Part Number: P200-900/24 | P200-1600/128
Warranty: 5 years | 5 years
Enmotus’s FuzeDrive P200 comes in 900GB and 1.6TB capacities. Both fetch a pretty penny, priced at $200 and $350, respectively, roughly matching the price of the fastest Gen4 SSDs on the market. The FuzeDrive P200 comes with a Gen3 NVMe SSD controller, so Enmotus rates it for up to 3,470 / 3,000 MBps of sequential read/write throughput and up to 372,000 / 402,000 random read/write IOPS.
But, while Samsung’s 980 Pro may be faster, it only offers one-third the endurance of the P200. Enmotus rates the 900GB model to handle up to 750 TB of writes during its five-year warranty. The 1.6TB model is much more robust: it can handle up to 3.6 petabytes of writes within its warranty, meaning the P200 comes backed with the highest endurance rating we’ve seen for a QLC SSD of this capacity.
Software and Accessories
Enmotus provides Fuzion, a utility that monitors the SSD and enables other maintenance tasks, like updating firmware or secure erasing the SSD. The software is available from the Microsoft Store and will automatically install and update the driver for the device. The company also provides the Enmotus-branded Macrium Reflect Cloning Software to help migrate data to the new SSD, as well as the FuzionX software for more complex tiering capability.
When adding a third device into the mix, such as a high-capacity SATA SSD or HDD (NVMe support is under development), you can use the FuzionX software to integrate it into the P200’s virtual volume. The SLC portion of the P200 will retain the volume’s hot data, the QLC portion the warm data, while the HDD stores cold data.
A Closer Look
Enmotus’s FuzeDrive P200 SSD comes in an M.2 2280 form factor, and the 1.6TB model is double-sided solely to place a second DRAM IC on the back of the PCB. The company uses a copper heat spreader label to aid with heat dissipation. The controller supports ASPM, ASPT, and the L1.2 sleep mode to reduce power when the drive isn’t busy.
As mentioned, Enmotus builds the FuzeDrive P200 with commodity hardware: Phison’s mainstream E12S PCIe 3.0 x4 NVMe 1.3-compliant SSD controller and Micron QLC flash. The firmware, however, is specifically designed to split the drive into two distinct zones, one high-endurance and one low-endurance. The controller has dual Arm Cortex-R5 CPUs clocked at 666 MHz and a DRAM cache, and it interfaces with two Nanya 4Gb DDR3L DRAM ICs at 1,600 MHz for fast access to the FTL mapping tables.
There are four NAND packages on our sample (2TB of raw flash), each containing four 1Tb Micron 96-layer QLC dies. For responsive random performance and solid performance in mixed workloads, the flash has a four-plane architecture and interfaces with the eight-channel controller at speeds of up to 667 MTps. To ensure reliable operation and maintain data integrity over time, the controller implements Phison’s third-generation LDPC ECC and RAID ECC along with a DDR ECC engine and end-to-end data path protection.
Acer is a world-leading manufacturer of computer hardware. The company was founded in 1976 in Taiwan and is mostly famous for its laptops, desktop PCs, and monitors. It has now branched into solid-state storage with OEM partner BIWIN Storage, which also helps HP produce its SSDs.
The FA100 is part of the Acer SSD lineup, which was announced last week. Designed as a cost-effective, entry-level M.2 NVMe SSD, the FA100 offers performance greater than traditional SATA drives and is much faster than any mechanical HDD too, of course. Acer has built the FA100 using the Innogrit IG5216 controller paired with 3D TLC NAND flash. A DRAM cache chip is not included, probably to save on cost.
The Acer FA100 is available in capacities of 128 GB, 256 GB, 512 GB, 1 TB, and 2 TB. Endurance for these models is set at 70 TBW, 150 TBW, 300 TBW, 600 TBW, and 1,200 TBW, respectively. Pricing is unknown except for the 1 TB model in this review, which sells for $125. Acer includes a five-year warranty with the FA100.
Specifications: Acer FA100 1 TB
Brand: Acer
Model: FA100-1TB
Capacity: 1024 GB (953 GB usable), no additional overprovisioning
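The gap between 1024 GB advertised and 953 GB usable is the usual decimal-versus-binary units mismatch: drive makers count in decimal gigabytes (10^9 bytes), while operating systems report binary gibibytes (2^30 bytes). A quick illustration (function name is my own):

```python
# Convert a drive's advertised decimal-GB capacity into the
# binary "GB" (really GiB) figure an operating system displays.
def advertised_to_reported(gb):
    return gb * 10**9 / 2**30

print(int(advertised_to_reported(1024)))  # 953
```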
João Silva, Featured Tech News, SSD Drives
Samsung is back with another PCIe 4 SSD – the PM9A1. The latest SSD is aimed at OEMs, but Samsung’s specifications suggest that it should offer very similar performance to the pre-existing 980 Pro SSD.
Similar to the 980 Pro, the Samsung PM9A1 uses the Elpis controller, a DRAM cache, and V6 NAND memory (3D TLC). Available with up to 2TB of storage, it performs at about the same level as the 980 Pro, reaching speeds of 7,000 MB/s in sequential reads and up to 5,200 MB/s in sequential writes. The rated 1,000,000 random 4K read IOPS match the 980 Pro, but the rated random 4K write IOPS are slightly lower, at 850,000.
The PM9A1 SSD does not seem to come with a heat spreader, which is reasonable considering it has been designed to be used by OEMs. However, it features thermal control technology to prevent overheating and increase the drive’s lifespan.
Samsung’s new client SSD has been qualified by HP for its Z series of workstations, desktops, and laptops, and is already shipping in these devices. Other OEMs should follow in early Q2. It’s unclear if these SSDs will ever be released to the DIY market, but given that the similar 980 Pro is widely available, it seems unlikely.
KitGuru says: If the Samsung PM9A1 were to hit the DIY market, it would probably be cheaper than the 980 Pro. Would you consider Samsung’s PM9A1 SSD if it was available at the usual retailers?
Patriot will release a new DDR4 kit next month to compete with the best RAM in the budget category. During The Tom’s Hardware Show yesterday, Roger Shinmoto, Patriot VP of product, revealed the DDR4-4000 Viper Elite 2.
The brand already has DDR4-4000 kits available in its other Viper-branded products, such as the Patriot Viper Steel, but the Viper Elite lineup currently maxes out at DDR4-2400. The Viper Elite 2 will kick things up to DDR4-4000, while keeping with the more wallet-friendly pricing of the Elite series.
DDR4-4000 is a sweet spot for AMD platforms, but Shinmoto told us that the kits target both AMD and Intel builders.
But it’s not just about keeping your bank account happy. After six years of the Viper Elite being on the market, the new Viper Elite 2 is supposed to bring some new style, too.
“Engineering team decided it was time to give it a facelift, so they went out and designed a brand new heat spreader from the ground up,” Shinmoto said on The Tom’s Hardware Show. “It’s a really nice red and black design. Very aggressive styling.”
The exec pointed to the DDR4-4000 RAM as being a good fit for overclocking, as well as enthusiasts building a PC for the first time or builders simply seeking an upgrade that doesn’t cost a fortune.
We still don’t know the Viper Elite 2’s pricing, partially due to the memory market’s volatility.
“Just like NAND, DRAM pricing is so volatile,” Shinmoto explained. “We can quote a price today and it might change by May. So they’ll be competitive, but these aren’t the highest-end solutions we have. They’re geared more for the entry-level and price-minded sector.”
But although memory prices have been “going up for a couple months now” and “allocations have been tight,” according to Les Henry, Patriot’s VP of North America and South America sales, Shinmoto assured the Viper Elite 2’s pricing will be “very affordable.”
We currently see Patriot’s high-end Viper Steel DDR4-4000 (2x 8GB) going for about $145, so we hope the Viper Elite 2 is cheaper upon release.
The Tom’s Hardware Show livestream is every Thursday at 3 p.m. ET on YouTube, Facebook and Twitch, and is also available as a podcast.
Patriot’s Viper VP4300 is a high-end PCIe 4.0 NVMe M.2 SSD with all the features and performance you could want from an enthusiast-grade SSD, but you’ll pay a premium for the privilege.
For
+ Included heatsink and graphene label
+ Appealing aesthetics
+ AES 256-bit hardware encryption
+ Large hybrid SLC cache
+ High endurance
+ 5-year warranty
Against
– Pricing
– Lacks software package
– High idle power consumption
Features and Specifications
Patriot’s Viper VP4300 pumps out fast sequential speeds of up to 7.4 / 6.8 GBps read/write and features wicked good looks, making it a top contender for our best SSDs list. Whether you’re loading up the latest Call of Duty update or scrubbing 4K or 8K content, Patriot’s Viper VP4300 delivers responsive performance. And with two optional cooling solutions included, it will keep cool and look cool during the most strenuous tasks you can throw its way.
When PCIe 4.0 SSDs first hit the market, they all had one formula in common: they came powered by a Phison E16 SSD controller, which was merely a repurposed PCIe 3.0 design modified to work with the PCIe 4.0 interface, paired with BiCS4 flash. This pairing improved the end-user experience, but it lacked the oomph we now see from new clean-sheet controller designs that leverage the speedy PCIe 4.0 interface, like the Phison E18 and the controllers in the latest Samsung and WD SSDs.
Patriot’s Viper VP4300 now joins the list of new drives with completely new controllers. This SSD slithers its way onto our test bench with a new Rainier controller designed by InnoGrit. This new PCIe Gen4 NVMe SSD controller comes paired with a healthy helping of Micron’s 96-layer TLC flash to serve up fast performance.
The Viper VP4300 also comes with many of the features we expect from a high-end NVMe SSD, and even some we don’t. Patriot even throws in two cooling solutions – a sleek-looking 4mm thick aluminum heatsink and an ultra-thin graphene label for tighter-tolerance installations, like in notebooks. Add in the VP4300’s high endurance ratings, which even outstrip the Samsung 980 Pro and WD Black SN850, and it appears to be a very competitive drive. Let’s put it to the test.
Specifications
Product: Viper VP4300 1TB | Viper VP4300 2TB
Pricing: $254.99 | $499.99
Capacity (User / Raw): 1024GB / 1024GB | 2048GB / 2048GB
Form Factor: M.2 2280 | M.2 2280
Interface / Protocol: PCIe 4.0 x4 / NVMe 1.4 | PCIe 4.0 x4 / NVMe 1.4
Controller: InnoGrit IG5236 | InnoGrit IG5236
DRAM: DDR4 | DDR4
Memory: Micron 96L TLC | Micron 96L TLC
Sequential Read: 7,400 MBps | 7,400 MBps
Sequential Write: 6,800 MBps | 6,800 MBps
Random Read: 800,000 IOPS | 800,000 IOPS
Random Write: 800,000 IOPS | 800,000 IOPS
Security: AES 256-bit encryption | AES 256-bit encryption
Endurance (TBW): 1,000 TB | 2,000 TB
Part Number: VP4300-1TBM28H | VP4300-2TBM28H
Warranty: 5 years | 5 years
Patriot’s Viper VP4300 comes in just two capacities of 1TB and 2TB. Each is rated to deliver speeds of up to 7.4 / 6.8 GBps of sequential read/write throughput and sustain up to 800,000 random read/write IOPS. Priced at $255 for the 1TB model and $500 for the 2TB, the Viper VP4300 launches with high pricing that exceeds both the WD Black SN850 and Samsung 980 Pro.
The Viper VP4300 carries very robust endurance ratings, though. The 1TB model is rated to endure up to 1,000 TB of writes within its five-year warranty period, while the 2TB is rated for up to 2,000 TB. The VP4300 has very little factory overprovisioning (roughly 7% of the SSD’s capacity is dedicated to the task), and it uses InnoGrit’s proprietary 4K LDPC ECC along with end-to-end data path protection to ensure reliable performance over the lifespan of the product.
A Closer Look
Patriot’s Viper VP4300 comes in an M.2 2280 double-sided form factor and includes two optional thermal solutions (“heatshield options,” as they refer to them) to choose from. You can either install the slim yet aggressive-looking aluminum heatsink that measures roughly 72 x 22 x 4 mm, or you can use the very thin graphene sticker for installation into tighter spaces, like notebooks.
Like the Samsung 980 Pro and WD Black SN850, the Viper VP4300 leverages a high-end PCIe 4.0 x4 SSD controller and NAND flash to match. Codenamed Rainier, InnoGrit’s IG5236 is a multi-core NVMe 1.4-compliant SSD controller with a DRAM-based architecture.
Two 8Gb SK hynix DDR4 DRAM chips are present on the PCB, one on each side, that accelerate FTL accesses to ensure responsive performance. The controller is fabbed on TSMC’s 12nm FinFET process and uses multiple consumer-oriented power management techniques to maintain its cool and perform efficiently, too.
Patriot paired the controller with Micron’s fast 512Gb 96-layer TLC flash. The controller interfaces with this flash over eight NAND channels at speeds of up to 1,200 MTps, and there are 32 NAND dies spread among the four NAND packages. The flash has a quad-plane architecture for a high level of parallelism per die, and it’s also quite robust and efficient thanks to its CuA (circuitry under array) design and tile groups.
Adata’s XPG Gammix S70 is fast and features almost everything you could want from a high-end PCIe Gen4 NVMe SSD, but the heatsink is a bit restrictive and not quite as refined as our current best picks.
For
+ Very fast sequential performance
+ High endurance
+ AES 256-bit hardware encryption
+ Black PCB + Heatsink
+ 5-year warranty
Against
– It may be physically incompatible with some motherboards
– High idle power consumption on the desktop
– Slow write speeds after the SLC cache fills
– Pricey
Features and Specifications
Dishing out blisteringly fast sequential speeds of up to 7.4 / 6.4 GBps, the Gammix S70 touts some of the fastest performance ratings we have seen from an NVMe SSD. Yet it isn’t produced by Samsung or WD, and surprisingly, it isn’t even powered by a Phison controller. Instead, Adata’s XPG Gammix S70 uses a high-end NVMe SSD controller from InnoGrit, a much smaller fabless IC design company.
InnoGrit isn’t a big name when most think of flash controllers, at least not compared to Phison, Silicon Motion, and Marvell. However, the company is far from inexperienced in controller architecture design and engineering. In fact, its co-founders have years of experience in the industry and have created a compelling product line of SSD controllers since opening in 2016.
Thanks to InnoGrit’s IG5236, a robust PCIe 4.0 eight-channel NVMe SSD controller, the company secured a contract with Adata to create the XPG Gammix S70. With this beast of a controller at its core, the S70 could potentially be the fastest SSD on the market. But it faces tough competition from Samsung, WD, and other competitors that pack Phison’s competing E18 SSD controller, like the Corsair MP600 Pro and Sabrent Rocket 4 Plus, to name a few.
Specifications
Product: Gammix S70 1TB | Gammix S70 2TB
Pricing: $199.99 | $399.99
Capacity (User / Raw): 1024GB / 1024GB | 2048GB / 2048GB
Form Factor: M.2 2280 | M.2 2280
Interface / Protocol: PCIe 4.0 x4 / NVMe 1.4 | PCIe 4.0 x4 / NVMe 1.4
Controller: InnoGrit IG5236 | InnoGrit IG5236
DRAM: DDR4 | DDR4
Memory: Micron 96L TLC | Micron 96L TLC
Sequential Read: 7,400 MBps | 7,400 MBps
Sequential Write: 5,500 MBps | 6,400 MBps
Random Read: 350,000 IOPS | 650,000 IOPS
Random Write: 720,000 IOPS | 740,000 IOPS
Security: AES 256-bit encryption | AES 256-bit encryption
Endurance (TBW): 740 TB | 1,480 TB
Part Number: AGAMMIXS70-1T-C | AGAMMIXS70-2T-C
Warranty: 5 years | 5 years
Adata’s XPG Gammix S70 is available in capacities of 1TB and 2TB, priced at $200 and $400, respectively. The S70 is rated to deliver sequential performance of up to 7.4 / 6.4 GBps and to sustain up to 650,000 / 740,000 random read/write IOPS with the 2TB model. Like most modern SSDs, the S70 uses SLC caching to absorb the majority of inbound write requests, and in this case, the cache measures one-third of the available capacity.
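As a rough illustration of that cache sizing, here’s a minimal Python sketch (assuming the cache is a flat one-third of user capacity; real dynamic SLC caches shrink as the drive fills, so treat these as ceiling figures):

```python
def slc_cache_gb(user_capacity_gb: float) -> float:
    """Approximate dynamic SLC cache size, assuming one-third of user capacity."""
    return user_capacity_gb / 3

# The 1TB (1024 GB) and 2TB (2048 GB) models:
for capacity in (1024, 2048):
    print(f"{capacity} GB model -> ~{slc_cache_gb(capacity):.0f} GB of SLC cache")
```

That works out to roughly 341 GB and 683 GB of cache for the two capacities, which explains why write speeds only fall off after very large sustained transfers.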
The controller implements InnoGrit’s proprietary 4K LDPC ECC, end-to-end data protection, and even a RAID engine to ensure reliability and data integrity. As a result, the S70 can endure up to 1,480 TB of data writes within its five-year warranty. Additionally, the S70 supports AES 256-bit hardware-accelerated encryption for those who need both speed and data security.
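To put those endurance figures in everyday terms, the TBW ratings translate into drive writes per day (DWPD) over the five-year warranty. A quick sketch using the numbers from the spec table above:

```python
def dwpd(tbw: float, capacity_tb: float, warranty_years: float = 5) -> float:
    """Drive writes per day implied by a TBW rating over the warranty period."""
    return tbw / (capacity_tb * warranty_years * 365)

# 1TB model is rated for 740 TBW; 2TB model for 1,480 TBW
print(f"1TB model: {dwpd(740, 1):.2f} DWPD")
print(f"2TB model: {dwpd(1480, 2):.2f} DWPD")
```

Both models work out to roughly 0.4 DWPD, a typical figure for client-class drives.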
Adata has also said this drive will feature a fixed bill of materials, so the components, like the NAND and SSD controller, will remain the same throughout the life of the product.
A Closer Look
Image 1 of 4
Image 2 of 4
Image 3 of 4
Image 4 of 4
Adata’s XPG Gammix S70 comes in a double-sided M.2 2280 form factor and is equipped with a very large aluminum heatsink to keep “cool in the heat of battle,” as the company’s marketing department puts it. Adata claims that the heatsink reduces the SSD’s temperature by up to 30%. While potentially effective, the heatsink measures 24.3 x 70 x 15 mm, and its tall, wide footprint may lead to compatibility issues, as was the case with our Asus ROG X570 Crosshair VIII Hero (WiFi).
The S70’s heatsink prevents the SSD from fitting into the motherboard’s secondary M.2 slot and also prevents the PCIe slot latch below the first M.2 slot from locking to secure your add-in card (like a GPU). Furthermore, if placed in an M.2 slot directly below a PCIe slot, the S70’s thick heatsink may also prevent add-in cards from seating completely in that slot.
Making matters worse, the base of the heatsink is held onto the PCB with a very strong adhesive. If you plan to remove the heatsink for better compatibility, prying against that adhesive risks cracking the PCB in half. We don’t recommend trying.
Image 1 of 2
Image 2 of 2
Unlike Adata’s XPG Gammix S50 Lite, the S70 comes with a much faster NVMe SSD controller. The InnoGrit IG5236, dubbed Rainier, is a capable multi-core PCIe 4.0 x4 NVMe 1.4-compliant SSD controller that’s fabbed on TSMC’s 16/12nm FinFET process, which is important to help control power consumption when achieving multi-GB performance figures. It also features client-oriented power management schemes, and Adata claims it consumes as low as 2mW in the L1.2 sleep state.
Image 1 of 2
Image 2 of 2
To achieve its fast performance, the S70 leverages a DRAM-based architecture. The controller interfaces with two SK hynix DDR4-3200 DRAM ICs for FTL table mapping and Micron’s 96-layer TLC flash at NV-DDR3 speeds of up to 1,200 MTps spread over eight flash channels. Our 2TB sample contains 32 dies in total — each die has a four-plane architecture that responds very fast to random requests.
Micron’s flash architecture places the periphery circuitry under the flash cell arrays (CMOS under Array, or CuA), differing from Samsung’s V6 V-NAND and WD’s BiCS4, to enable high array efficiency and bit density. The CuA architecture also enables redundancies while splitting the page into multiple tiles and groups, enabling fast and efficient random read performance.
Neo Forza is a relatively young manufacturer of DRAM memory modules and flash memory products. The Taiwanese company was founded in 2018 as the enthusiast-focused brand of Goldkey, a well-established producer of computer hardware that had focused on OEM manufacturing until recently.
Today’s review covers the Neo Forza eSports M.2 NVMe SSD, which is also known as NFP075. “eSports” is not a range of products, but the name of this specific drive. A future Gen 4 drive would be called “Esports4x4”, according to Neo Forza. Under the hood, the NFP075 is powered by a Phison PS5012-E12S controller paired with 3D TLC NAND from Chinese state-backed flash memory maker Yangtze Memory Technologies Co (YMTC)—the first YMTC flash I’ve ever reviewed! A DRAM chip from Kingston is included, too. PCI-Express 3.0 x4 is used as the host interface.
The Neo Forza eSports is available in capacities of 256 GB, 512 GB, 1 TB, and 2 TB. Endurance for these models is set at 420 TBW, 890 TBW, 1350 TBW, and 1550 TBW respectively. Neo Forza provides a three-year warranty for the eSports SSD.
Specifications: Neo Forza eSports NFP075 1 TB
Brand:
Neo Forza
Model:
NFP075PCI1T-3400200
Capacity:
1024 GB (953 GB usable), no additional overprovisioning
Seagate has announced that it surpassed a shipment milestone this March: throughout its history, the company has shipped three zettabytes (ZB) of hard drive storage.
Seagate’s math behind its 3ZB achievement is pretty interesting by itself. Three zettabytes equal 30 billion 4K movies, 60 billion video games, 7.5 trillion MP3 songs, or 1.5 quadrillion selfies. If you prefer SI prefixes, one zettabyte is a thousand exabytes, one exabyte is a thousand petabytes, and one petabyte is a thousand terabytes. So 3ZB equals three billion TB. That’s a lot of hard drives.
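That unit chain is easy to verify in code; the per-movie figure below is derived from Seagate’s own comparison (decimal SI prefixes assumed throughout):

```python
# Decimal SI prefix chain: ZB -> EB -> PB -> TB, a factor of 1,000 each step
EB_PER_ZB = 1_000
PB_PER_EB = 1_000
TB_PER_PB = 1_000

def zb_to_tb(zb: float) -> float:
    """Convert zettabytes to terabytes using decimal SI prefixes."""
    return zb * EB_PER_ZB * PB_PER_EB * TB_PER_PB

total_tb = zb_to_tb(3)  # Seagate's lifetime shipments in TB
print(f"3 ZB = {total_tb:.0e} TB")

# Implied size per item in Seagate's 4K-movie comparison (30 billion movies):
movies = 30e9
print(f"~{zb_to_tb(3) * 1000 / movies:.0f} GB per 4K movie")
```

Working backward from the 30-billion-movie figure implies Seagate assumed about 100 GB per 4K film.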
Seagate was founded in 1979, more than 41 years ago. Throughout its history, the company shipped hundreds of millions of hard drives. It is particularly noteworthy that 2ZB out of 3ZB were shipped in the last couple of years, which indicates that the world now generates more data than ever.
From a business perspective, Seagate’s history looks no less impressive: the company has outlived more than 200 other HDD makers and is currently one of only three remaining suppliers of hard drives, including some that compete with the best external hard drives. But what is particularly impressive is that the company is only getting started; to remain relevant, it will have to do more than it has done so far.
A number of important events in the storage market in recent years have accelerated sales of high-capacity hard drives. First, laptops have shrunk to the point that many can no longer accommodate a 2.5-inch hard drive and instead rely on an SSD, such as one of the best SSDs, plus cloud storage.
Secondly, cloud services have become ubiquitous, and all of them use tens of thousands of HDDs to store data. Thirdly, since data centers now consume more hard drives than ever, they favor high-capacity HDDs, which is why Seagate and its competitors now offer drives that can store up to 20TB of data, more than an average person needs today. All of these factors allowed Seagate and its peers to significantly increase the storage capacity they ship, even as unit shipments of HDDs dropped in recent years.
Demand for data storage will increase once again soon. Consumers and businesses will continue to expand cloud storage usage, so the appropriate services will have to use more drives.
As major Hollywood studios launch their own streaming services, they (or rather their data center partners) will naturally need more storage, too. But end-users, businesses, and streaming services will still be responsible for only a fraction of the data that will have to be stored several years from now. Smart cities, smart factories, smart devices, autonomous vehicles, and robots will generate more data than the whole of humanity has generated throughout its history.
As a result, Seagate and its rivals will sell tens of zettabytes of hard drive storage in the coming years. Last year, Seagate and IDC estimated that the sum of data generated globally would grow to 175ZB by 2025. To store that data four years down the road, Seagate and other makers of HDDs will need to offer higher-capacity drives, and that is where technologies like heat-assisted magnetic recording (HAMR) will come into play. Seagate expects the capacities of its HDDs to increase to 40TB ~ 50TB by 2025 ~ 2026 and to 100TB by 2030.
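Those roadmap targets imply a steady compound growth rate in drive capacity. As a back-of-the-envelope sketch (the 2021 and 2030 endpoints are our own assumptions, anchored to the 20TB and 100TB figures above):

```python
def implied_cagr(start_tb: float, end_tb: float, years: int) -> float:
    """Compound annual growth rate needed to go from start_tb to end_tb."""
    return (end_tb / start_tb) ** (1 / years) - 1

# From today's 20TB flagships (2021) to Seagate's 100TB target (2030):
print(f"Implied capacity growth: {implied_cagr(20, 100, 9):.1%} per year")
```

That works out to just under 20% per year, a pace conventional perpendicular recording cannot sustain, hence the bet on HAMR.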
Hard drives provide a good balance between storage capacity and performance. Meanwhile, loads of cold data will be stored on tape drives, so 1PB tapes from Fujifilm and IBM will come in quite handy. Furthermore, as there will be loads of ‘hot’ data that must always be available, makers of NAND flash, as well as dozens of SSD manufacturers, will not be short of work.
Intel’s long-delayed 10nm+ third-gen Xeon Scalable Ice Lake processors mark an important step forward for the company as it attempts to fend off intense competition from AMD’s 7nm EPYC Milan processors that top out at 64 cores, a key advantage over Intel’s existing 14nm Cascade Lake Refresh that tops out at 28 cores. The 40-core Xeon Platinum 8380 serves as the flagship model of Intel’s revamped lineup, which the company says features up to a 20% IPC uplift on the strength of the new Sunny Cove core architecture paired with the 10nm+ process.
Intel has already shipped over 200,000 units to its largest customers since the beginning of the year, but today marks the official public debut of its newest lineup of data center processors, so we get to share benchmarks. The Ice Lake chips drop into dual-socket Whitley server platforms, while the previously-announced Cooper Lake slots in for quad- and octo-socket servers. Intel has slashed Xeon pricing up to 60% to remain competitive with EPYC Rome, and with EPYC Milan now shipping, the company has reduced per-core pricing again with Ice Lake to remain competitive as it targets high-growth markets, like the cloud, enterprise, HPC, 5G, and the edge.
The new Xeon Scalable lineup comes with plenty of improvements, like support for eight memory channels running at a peak of DDR4-3200 with two DIMMs per channel, a notable improvement over Cascade Lake’s support for six channels at DDR4-2933 and a match for EPYC’s eight channels of memory. Ice Lake also supports up to 6TB of combined DRAM and Optane per socket (4TB of DRAM), with up to 4TB of Optane Persistent Memory DIMMs per socket (8TB in dual-socket systems). Unlike Intel’s past practice, Ice Lake also supports the full memory and Optane capacity on all models with no additional upcharge.
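The channel and speed bump is straightforward to quantify: each DDR4 channel carries an 8-byte data bus, so peak theoretical bandwidth per socket works out as follows (a sketch; sustained bandwidth is always lower than this ceiling):

```python
def peak_bw_gbs(channels: int, mt_per_s: int, bus_bytes: int = 8) -> float:
    """Peak theoretical DRAM bandwidth in GB/s: channels x MT/s x bus width."""
    return channels * mt_per_s * bus_bytes / 1000

cascade_lake = peak_bw_gbs(6, 2933)   # six channels of DDR4-2933
ice_lake = peak_bw_gbs(8, 3200)       # eight channels of DDR4-3200
print(f"Cascade Lake: {cascade_lake:.1f} GB/s, Ice Lake: {ice_lake:.1f} GB/s")
print(f"Uplift: {ice_lake / cascade_lake - 1:.0%}")
```

That's roughly 141 GB/s versus 204.8 GB/s per socket, about a 45% jump in peak memory bandwidth on top of the capacity gains.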
Intel has also moved forward from 48 lanes of PCIe 3.0 connectivity to 64 lanes of PCIe 4.0 (128 lanes in dual-socket), improving both I/O bandwidth and increasing connectivity to match AMD’s 128 available lanes in a dual-socket server.
Intel says that these additions, coupled with a range of new SoC-level optimizations, improved power management, and support for new instructions, yield an average of 46% more performance in a wide range of data center workloads. Intel also claims a 50% uplift in latency-sensitive applications, like HammerDB, Java, MySQL, and WordPress, and up to 57% more performance in heavily-threaded workloads, like NAMD, signaling that the company could return to a competitive footing in what has become one of AMD’s strongholds: heavily threaded workloads. We’ll put that to the test shortly. First, let’s take a closer look at the lineup.
Intel Third-Gen Xeon Scalable Ice Lake Pricing and Specifications
We have quite the list of chips below, but we’ve filtered out the downstream Intel parts, focusing instead on the high-end ‘per-core scalable’ models. All told, the Ice Lake family spans 42 SKUs, with many of the lower-TDP (and thus lower-performance) models falling into the ‘scalable performance’ category.
Intel also has specialized SKUs targeted at maximum SGX enclave capacity, cloud-optimized for VMs, liquid-cooled, networking/NFV, media, long-life and thermal-friendly, and single-socket optimized parts, all of which you can find in the slide a bit further below.
Cores / Threads
Base / Boost – All Core (GHz)
L3 Cache (MB)
TDP (W)
1K Unit Price / RCP
EPYC Milan 7763
64 / 128
2.45 / 3.5
256
280
$7,890
EPYC Rome 7742
64 / 128
2.25 / 3.4
256
225
$6,950
EPYC Milan 7663
56 / 112
2.0 / 3.5
256
240
$6,366
EPYC Milan 7643
48 / 96
2.3 / 3.6
256
225
$4,995
Xeon Platinum 8380
40 / 80
2.3 / 3.2 – 3.0
60
270
$8,099
Xeon Platinum 8368
38 / 76
2.4 / 3.4 – 3.2
57
270
$6,302
Xeon Platinum 8360Y
36 / 72
2.4 / 3.5 – 3.1
54
250
$4,702
Xeon Platinum 8362
32 / 64
2.8 / 3.6 – 3.5
48
265
$5,448
EPYC Milan 7F53
32 / 64
2.95 / 4.0
256
280
$4,860
EPYC Milan 7453
28 / 56
2.75 / 3.45
64
225
$1,570
Xeon Gold 6348
28 / 56
2.6 / 3.5 – 3.4
42
235
$3,072
Xeon Platinum 8280
28 / 56
2.7 / 4.0 – 3.3
38.5
205
$10,009
Xeon Gold 6258R
28 / 56
2.7 / 4.0 – 3.3
38.5
205
$3,651
EPYC Milan 74F3
24 / 48
3.2 / 4.0
256
240
$2,900
Intel Xeon Gold 6342
24 / 48
2.8 / 3.5 – 3.3
36
230
$2,529
Xeon Gold 6248R
24 / 48
3.0 / 4.0
35.75
205
$2,700
EPYC Milan 7443
24 / 48
2.85 / 4.0
128
200
$2,010
Xeon Gold 6354
18 / 36
3.0 / 3.6 – 3.6
39
205
$2,445
EPYC Milan 73F3
16 / 32
3.5 / 4.0
256
240
$3,521
Xeon Gold 6346
16 / 32
3.1 / 3.6 – 3.6
36
205
$2,300
Xeon Gold 6246R
16 / 32
3.4 / 4.1
35.75
205
$3,286
EPYC Milan 7343
16 / 32
3.2 / 3.9
128
190
$1,565
Xeon Gold 5317
12 / 24
3.0 / 3.6 – 3.4
18
150
$950
Xeon Gold 6334
8 / 16
3.6 / 3.7 – 3.6
18
165
$2,214
EPYC Milan 72F3
8 / 16
3.7 / 4.1
256
180
$2,468
Xeon Gold 6250
8 / 16
3.9 / 4.5
35.75
185
$3,400
At 40 cores, the Xeon Platinum 8380 reaches new heights over its predecessors, which topped out at 28 cores, striking higher in AMD’s Milan stack. The 8380 comes in at $202 per core, well above the $130-per-core price tag of the previous-gen flagship, the 28-core Xeon 6258R. However, it’s far less expensive than the $357-per-core pricing of the Xeon 8280, which carried a $10,009 price tag before AMD’s EPYC upset Intel’s pricing model and forced drastic price reductions.
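The per-core figures quoted above are simple division of the 1K-unit tray prices by core counts; a quick sketch using the prices from the table earlier in the review:

```python
def price_per_core(price_usd: float, cores: int) -> float:
    """Dollars per physical core at the 1K-unit tray price."""
    return price_usd / cores

chips = {
    "Xeon Platinum 8380 (40C)": (8099, 40),
    "Xeon Gold 6258R (28C)":    (3651, 28),
    "Xeon Platinum 8280 (28C)": (10009, 28),
}
for name, (price, cores) in chips.items():
    print(f"{name}: ${price_per_core(price, cores):.0f}/core")
```

Rounded to the nearest dollar, these land on the $202, $130, and $357 per-core figures cited above.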
With peak clock speeds of 3.2 GHz, the 8380 has a much lower peak clock rate than the previous-gen 28-core 6258R’s 4.0 GHz. Even dipping down to the new 28-core Ice Lake 6348 only finds peak clock speeds of 3.5 GHz, which still trails the Cascade Lake-era models. Intel obviously hopes to offset those reduced clock speeds with other refinements, like increased IPC and better power and thermal management.
On that note, Ice Lake tops out at 3.7 GHz on a single core, and you’ll have to step down to the eight-core model to access these clock rates. In contrast, Intel’s previous-gen eight-core 6250 had the highest clock rate, 4.5 GHz, of the Cascade Lake stack.
Surprisingly, AMD’s EPYC Milan models actually have higher peak frequencies than the Ice Lake chips at any given core count, but remember, AMD’s frequencies are only guaranteed on one physical core. In contrast, Intel specs its chips to deliver peak clock rates on any core. Both approaches have their merits, but AMD’s more refined boost tech paired with the 7nm TSMC process could pay dividends for lightly-threaded work. Conversely, Intel does have solid all-core clock rates that peak at 3.6 GHz, whereas AMD has more of a sliding scale that varies based on the workload, making it hard to suss out the winners by just examining the spec sheet.
Ice Lake’s TDPs stretch from 85W up to 270W. Surprisingly, despite the lowered base and boost clocks, Ice Lake’s TDPs have increased gen-on-gen for the 18-, 24- and 28-core models. Intel is obviously pushing higher on the TDP envelope to extract the most performance out of the socket possible, but it does have lower-power chip options available (listed in the graphic below).
AMD has a notable hole in its Milan stack at both the 12- and 18-core mark, a gap that Intel has filled with its Gold 5317 and 6354, respectively. Milan still holds the top of the hierarchy with 48-, 56- and 64-core models.
Image 1 of 12
Image 2 of 12
Image 3 of 12
Image 4 of 12
Image 5 of 12
Image 6 of 12
Image 7 of 12
Image 8 of 12
Image 9 of 12
Image 10 of 12
Image 11 of 12
Image 12 of 12
The Ice Lake Xeon chips drop into Whitley server platforms with Socket LGA4189-4/5. The FC-LGA14 package measures 77.5mm x 56.5mm and has an LGA interface with 4189 pins. The die itself is estimated to measure ~600mm2, though Intel no longer shares details about die sizes or transistor counts. In dual-socket servers, the chips communicate with each other via three UPI links that operate at 11.2 GT/s, an increase from 10.4 GT/s with Cascade Lake. The processor interfaces with the C620A chipset via four DMI 3.0 links, meaning it communicates at roughly PCIe 3.0 speeds.
The C620A chipset also doesn’t support PCIe 4.0; instead, it supports up to 20 lanes of PCIe 3.0, ten USB 3.0, and fourteen USB 2.0 ports, along with 14 ports of SATA 6 Gbps connectivity. Naturally, that’s offset by the 64 PCIe 4.0 lanes that come directly from the processor. As before, Intel offers versions of the chipset with its QuickAssist Technology (QAT), which boosts performance in cryptography and compression/decompression workloads.
Image 1 of 12
Image 2 of 12
Image 3 of 12
Image 4 of 12
Image 5 of 12
Image 6 of 12
Image 7 of 12
Image 8 of 12
Image 9 of 12
Image 10 of 12
Image 11 of 12
Image 12 of 12
Intel’s focus on its platform adjacencies business is a key part of its messaging around the Ice Lake launch — the company wants to drive home its message that coupling its processors with its own differentiated platform additives can expose additional benefits for Whitley server platforms.
The company introduced new PCIe 4.0 solutions, including the new 200 GbE Ethernet 800 Series adaptors that sport a PCIe 4.0 x16 connection and support RDMA iWARP and RoCEv2, and the Intel Optane SSD P5800X, a PCIe 4.0 SSD that uses ultra-fast 3D XPoint media to deliver stunning performance results compared to typical NAND-based storage solutions.
Intel also touts its PCIe 4.0 SSD D5-P5316, which uses the company’s 144-Layer QLC NAND for read-intensive workloads. These SSDs offer up to 7GBps of throughput and come in capacities stretching up to 15.36 TB in the U.2 form factor, and 30.72 TB in the E1.L ‘Ruler’ form factor.
Intel’s Optane Persistent Memory 200-series offers memory-addressable persistent memory in a DIMM form factor. This tech can radically boost memory capacity up to 4TB per socket in exchange for higher latencies that can be offset through software optimizations, thus yielding more performance in workloads that are sensitive to memory capacity.
The “Barlow Pass” Optane Persistent Memory 200 series DIMMs promise 30% more memory bandwidth than the previous-gen Apache Pass models. Capacity remains at a maximum of 512GB per DIMM with 128GB and 256GB available, and memory speeds remain at a maximum of DDR4-2666.
Intel has also expanded its portfolio of Market Ready and Select Solutions offerings, which are pre-configured servers for various workloads that are available in over 500 designs from Intel’s partners. These simple-to-deploy servers are designed for edge, network, and enterprise environments, but Intel has also seen uptake with cloud service providers like AWS, which uses these solutions for its ParallelCluster HPC service.
Image 1 of 10
Image 2 of 10
Image 3 of 10
Image 4 of 10
Image 5 of 10
Image 6 of 10
Image 7 of 10
Image 8 of 10
Image 9 of 10
Image 10 of 10
Like the benchmarks you’ll see in this review, the majority of performance measurements focus on raw throughput. However, in real-world environments, a combination of throughput and responsiveness is key to deliver on latency-sensitive SLAs, particularly in multi-tenant cloud environments. Factors such as loaded latency (i.e., the amount of performance delivered to any number of applications when all cores have varying load levels) are key to ensuring performance consistency across multiple users. Ensuring consistency is especially challenging with diverse workloads running on separate cores in multi-tenant environments.
Intel says it focused on performance consistency in these types of environments through a host of compute, I/O, and memory optimizations. The cores, naturally, benefit from increased IPC, new ISA instructions, and scaling up to higher core counts via the density advantages of 10nm, but Intel also beefed up its I/O subsystem to 64 lanes of PCIe 4.0, which improves both connectivity (up from 48 lanes) and throughput (up from PCIe 3.0).
Intel says it designed the caches, memory, and I/O, not to mention power levels, to deliver consistent performance during high utilization. As seen in slide 30, the company claims these alterations result in improved application performance and latency consistency by reducing long tail latencies to improve worst-case performance metrics, particularly for memory-bound and multi-tenant workloads.
Image 1 of 12
Image 2 of 12
Image 3 of 12
Image 4 of 12
Image 5 of 12
Image 6 of 12
Image 7 of 12
Image 8 of 12
Image 9 of 12
Image 10 of 12
Image 11 of 12
Image 12 of 12
Ice Lake brings a big realignment of the company’s die that provides cache, memory, and throughput advances. The coherent mesh interconnect returns with a similar arrangement of horizontal and vertical rings as the Cascade Lake-SP lineup, but with a realignment of the various elements, like cores, UPI connections, and the eight DDR4 memory channels, which are now split into four dual-channel controllers. Intel also shuffled the cores around on the 28-core die, which now places two execution cores at the bottom of the die, clustered with some of the I/O controllers.
Intel redesigned the chip to support two new sideband fabrics, one controlling power management and the other used for general-purpose management traffic. These provide telemetry data and control to the various IP blocks, like execution cores, memory controllers, and PCIe/UPI controllers.
The die includes a separate peer-to-peer (P2P) fabric to improve bandwidth between cores, and the I/O subsystem was also virtualized, which Intel says offers up to three times the fabric bandwidth compared to Cascade Lake. Intel also split one of the UPI blocks into two, creating a total of three UPI links, all with fine-grained power control of the UPI links. Now, courtesy of dedicated PLLs, all three UPIs can modulate clock frequencies independently based on load.
Densely packed AVX instructions augment performance in properly-tuned workloads at the expense of higher power consumption and thermal load. Intel’s Cascade Lake CPUs drop to lower frequencies (~600 to 900 MHz) during AVX-, AVX2-, and AVX-512-optimized workloads, which has hindered broader adoption of AVX code.
To reduce the impact, Intel has recharacterized its AVX power limits, thus yielding (unspecified) higher frequencies for AVX-512 and AVX-256 operations. This is done in an adaptive manner based on three different power levels for varying instruction types. This nearly eliminates the frequency delta between AVX and SSE for 256-heavy and 512-light operations, while 512-heavy operations have also seen significant uplift. All Ice Lake SKUs come with dual 512b FMAs, so this optimization will pay off across the entire stack.
Intel also added support for a host of new instructions to boost cryptography performance, like VPMADD52, GFNI, SHA-NI, Vector AES, and Vector Carry-Less multiply instructions, and a few new instructions to boost compression/decompression performance. All rely heavily upon AVX acceleration. The chips also support Intel’s Total Memory Encryption (TME) that offers DRAM encryption through AES-XTS 128-bit hardware-generated keys.
Intel also made plenty of impressive steps forward on the microarchitecture, with improvements to every level of the pipeline allowing Ice Lake’s 10nm Sunny Cove cores to deliver far higher IPC than 14nm Cascade Lake’s Skylake-derivative architecture. Key improvements to the front end include larger reorder, load, and store buffers, along with larger reservation stations. Intel increased the L1 data cache from 32 KiB, the capacity it had used in its chips for a decade, to 48 KiB, and moved from 8-way to 12-way associativity. The L2 cache moves from 4-way to 8-way and is also larger; the capacity depends on the specific type of product, and for Ice Lake server chips, it weighs in at 1.25 MB per core.
Intel expanded the micro-op cache (UOP) from 1.5K to 2.25K micro-ops, the second-level translation lookaside buffer (TLB) from 1536 entries to 2048, and moved from a four-wide allocation to five-wide to allow the in-order portion of the pipeline (front end) to feed the out-of-order (back end) portion faster. Additionally, Intel expanded the Out of Order (OoO) Window from 224 to 352. Intel also increased the number of execution units to handle ten operations per cycle (up from eight with Skylake) and focused on improving branch prediction accuracy and reducing latency under load conditions.
The store unit can now process two store data operations for every cycle (up from one), and the address generation units (AGU) also handle two loads and two stores each cycle. These improvements are necessary to match the increased bandwidth from the larger L1 data cache, which does two reads and two writes every cycle. Intel also tweaked the design of the sub-blocks in the execution units to enable data shuffles within the registers.
Intel also added support for its Software Guard Extensions (SGX) feature that debuted with the Xeon E lineup, and increased capacity to 1TB (maximum capacity varies by model). SGX creates secure enclaves in an encrypted portion of the memory that is exclusive to the code running in the enclave – no other process can access this area of memory.
Test Setup
We have a glaring hole in our test pool: Unfortunately, we do not have AMD’s recently-launched EPYC Milan processors available for this round of benchmarking, though we are working on securing samples and will add competitive benchmarks when available.
We do have test results for the AMD’s frequency-optimized Rome 7Fx2 processors, which represent AMD’s performance with its previous-gen chips. As such, we should view this round of tests largely through the prism of Intel’s gen-on-gen Xeon performance improvement, and not as a measure of the current state of play in the server chip market.
We use the Xeon Platinum 8280 as a stand-in for the less expensive Xeon Gold 6258R. These two chips are identical and provide the same level of performance, with the difference boiling down to the more expensive 8280 coming with support for quad-socket servers, while the Xeon Gold 6258R tops out at dual-socket support.
Image 1 of 7
Image 2 of 7
Image 3 of 7
Image 4 of 7
Image 5 of 7
Image 6 of 7
Image 7 of 7
Intel provided us with a 2U Server System S2W3SIL4Q Software Development Platform with the Coyote Pass server board for our testing. This system is designed primarily for validation purposes, so it doesn’t have too many noteworthy features. The system is heavily optimized for airflow, with the eight 2.5″ storage bays flanked by large empty bays that allow for plenty of air intake.
The system comes armed with dual redundant 2100W power supplies, a 7.68TB Intel SSD P5510, an 800GB Optane SSD P5800X, and an E810-CQDA2 200GbE NIC. We used the Intel SSD P5510 for our benchmarks and cranked up the fans for maximum performance in our benchmarks.
We tested with the pre-installed 16x 32GB DDR4-3200 DIMMs, but Intel also provided sixteen 128GB Optane Persistent Memory DIMMs for further testing. Due to time constraints, we haven’t yet had time to test the Optane DIMMs, but stay tuned for a few demo workloads in a future article. As we’re not entirely done with our testing, we don’t want to risk prying the 8380 out of the socket yet for pictures — the large sockets from both vendors are becoming more finicky after multiple chip reinstalls.
Memory
Tested Processors
Intel S2W3SIL4Q
16x 32GB SK hynix ECC DDR4-3200
Intel Xeon Platinum 8380
Supermicro AS-1023US-TR4
16x 32GB Samsung ECC DDR4-3200
EPYC 7742, 7F72, 7F52
Dell/EMC PowerEdge R460
12x 32GB SK hynix DDR4-2933
Intel Xeon 8280, 6258R, 5220R, 6226R
To assess performance with a range of different potential configurations, we used a Supermicro AS-1023US-TR4 server with three different EPYC Rome configurations. We outfitted this server with 16x 32GB Samsung ECC DDR4-3200 memory modules, ensuring the chips had all eight memory channels populated.
We used a Dell/EMC PowerEdge R460 server to test the Xeon processors in our test group. We equipped this server with 12x 32GB Sk hynix DDR4-2933 modules, again ensuring that each Xeon chip’s six memory channels were populated.
We used the Phoronix Test Suite for benchmarking. This automated test suite simplifies running complex benchmarks in the Linux environment. The test suite is maintained by Phoronix, and it installs all needed dependencies and the test library includes 450 benchmarks and 100 test suites (and counting). Phoronix also maintains openbenchmarking.org, which is an online repository for uploading test results into a centralized database.
We used Ubuntu 20.04 LTS to maintain compatibility with our existing test results, and leverage the default Phoronix test configurations with the GCC compiler for all tests below. We also tested all platforms with all available security mitigations.
Naturally, newer Linux kernels, software, and targeted optimizations can yield improvements for any of the tested processors, so take these results as generally indicative of performance in compute-intensive workloads, but not as representative of highly-tuned deployments.
Linux Kernel, GCC and LLVM Compilation Benchmarks
Image 1 of 2
Image 2 of 2
AMD’s EPYC Rome processors took the lead over the Cascade Lake Xeon chips at any given core count in these benchmarks, but here we can see that the 40-core Ice Lake Xeon 8380 has tremendous potential for these types of workloads. The dual 8380 processors complete the Linux compile benchmark, which builds the Linux kernel at default settings, in 20 seconds, edging out the 64-core EPYC Rome 7742 by one second. Naturally, we expect AMD’s Milan flagship, the 7763, to take the lead in this benchmark. Still, the implication is clear: Ice Lake-SP has significantly improved performance, reducing the delta between Xeon and competing chips.
We can also see a marked improvement in the LLVM compile, with the 8380 reducing the time to completion by ~20% over the prior-gen 8280.
Molecular Dynamics and Parallel Compute Benchmarks
Image 1 of 6
Image 2 of 6
Image 3 of 6
Image 4 of 6
Image 5 of 6
Image 6 of 6
NAMD is a parallel molecular dynamics code designed to scale well with additional compute resources; it scales up to 500,000 cores and is one of the premier benchmarks used to quantify performance with simulation code. The Xeon 8380s notch a 32% improvement in this benchmark, slightly beating the Rome chips.
Stockfish is a chess engine designed for the utmost in scalability across increased core counts — it can scale up to 512 threads. Here we can see that this massively parallel code scales well with EPYC’s leading core counts. The EPYC Rome 7742 retains its leading position at the top of the chart, but the 8380 offers more than twice the performance of the previous-gen Cascade Lake flagship.
We see similarly impressive performance uplifts in other molecular dynamics workloads, like the Gromacs water benchmark that simulates Newtonian equations of motion with hundreds of millions of particles. Here Intel’s dual 8380s take the lead over the EPYC Rome 7742 while pushing out nearly twice the performance of the 28-core 8280.
We see a similarly impressive generational improvement in the LAMMPS molecular dynamics workload, too. Again, AMD’s Milan will likely be faster than the 7742 in this workload, so it isn’t a given that the 8380 has taken the definitive lead over AMD’s current-gen chips, though it has tremendously improved Intel’s competitive positioning.
The NAS Parallel Benchmarks (NPB) suite characterizes Computational Fluid Dynamics (CFD) applications, and NASA designed it to measure performance from smaller CFD applications up to “embarrassingly parallel” operations. The BT.C test measures Block Tri-Diagonal solver performance, while the LU.C test measures performance with a lower-upper Gauss-Seidel solver. The EPYC Rome 7742 still dominates in this workload, showing that Ice Lake’s broad spate of generational improvements still doesn’t allow Intel to take the lead in all workloads.
Rendering Benchmarks
Turning to more standard fare, provided you can keep the cores fed with data, most modern rendering applications also take full advantage of the compute resources. Given the well-known strengths of EPYC’s core-heavy approach, it isn’t surprising to see the 64-core EPYC 7742 processors retain the lead in the C-Ray benchmark, and that applies to most of the Blender benchmarks, too.
Encoding Benchmarks
Encoders tend to present a different type of challenge: As we can see with the VP9 libvpx benchmark, they often don’t scale well with increased core counts. Instead, they often benefit from per-core performance and other factors, like cache capacity. AMD’s frequency-optimized 7F52 retains its leading position in this benchmark, but Ice Lake again reduces the performance delta.
Newer software encoders, like the Intel-Netflix designed SVT-AV1, are designed to leverage multi-threading more fully to extract faster performance for live encoding/transcoding video applications. EPYC Rome’s increased core counts paired with its strong per-core performance beat Cascade Lake in this benchmark handily, but the step up to forty 10nm+ cores propels Ice Lake to the top of the charts.
Compression, Security and Python Benchmarks
The Pybench and Numpy benchmarks are used as a general litmus test of Python performance, and as we can see, these tests typically don’t scale linearly with increased core counts, instead prizing per-core performance. Despite its somewhat surprisingly low clock rates, the 8380 takes the win in the Pybench benchmark and improves Xeon’s standing in Numpy as it takes a close second to the 7F52.
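Python-based numeric tests like this tend to prize per-core throughput because the hot loop is a single dense kernel rather than a sea of independent threads. As a rough illustration of the kind of kernel such tests time (a hypothetical sketch, not the actual Numpy benchmark), this snippet measures a NumPy matrix multiply and reports achieved GFLOP/s:

```python
import time

import numpy as np

def matmul_gflops(n: int = 512, reps: int = 5) -> float:
    """Return best-case GFLOP/s for an n x n dense matrix multiply."""
    rng = np.random.default_rng(0)
    a = rng.random((n, n))
    b = rng.random((n, n))
    best = float("inf")
    for _ in range(reps):
        t0 = time.perf_counter()
        a @ b
        best = min(best, time.perf_counter() - t0)
    # A dense n x n matmul performs roughly 2 * n^3 floating-point ops.
    return (2 * n**3) / best / 1e9

if __name__ == "__main__":
    print(f"{matmul_gflops():.1f} GFLOP/s")
```

Because the multiply runs inside a single optimized BLAS call, per-core clock rate and vector width matter far more to the result than total core count.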
Compression workloads also come in many flavors. The 7-Zip (p7zip) benchmark exposes the heights of theoretical compression performance because it runs directly from main memory, allowing both memory throughput and core counts to heavily impact performance. As we can see, this benefits the core-heavy chips, which easily outpace the chips with lower core counts. The Xeon 8380 takes the lead in this test, but other independent benchmarks show that AMD’s EPYC Milan would lead this chart.
In contrast, the gzip benchmark, which compresses two copies of the Linux 4.13 kernel source tree, responds well to speedy clock rates, giving the 16-core 7F52 the lead. Here the 8380 is slightly slower than the previous-gen 8280, which is likely at least partially attributable to the 8380’s much lower clock rate.
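The clock-rate sensitivity comes from DEFLATE being a largely serial algorithm: each file is compressed by a single thread. A minimal Python sketch (using the stdlib zlib, which implements the same DEFLATE compression as gzip, and a repetitive stand-in payload rather than the actual kernel tree) shows how such single-threaded throughput is measured:

```python
import time
import zlib

def deflate_mb_s(data: bytes, level: int = 6) -> float:
    """Return single-threaded DEFLATE compression throughput in MB/s."""
    t0 = time.perf_counter()
    zlib.compress(data, level)
    return len(data) / (time.perf_counter() - t0) / 1e6

# Stand-in payload: repetitive text, loosely mimicking source code.
payload = b"static int foo(int x) { return x * 2; }\n" * 50_000

if __name__ == "__main__":
    print(f"{deflate_mb_s(payload):.0f} MB/s")
```

With only one thread doing the work, the score tracks per-core performance almost directly, which is why the frequency-optimized 7F52 wins here.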
The open-source OpenSSL toolkit implements the SSL and TLS protocols; here we use it to measure RSA 4096-bit performance. As we can see, this test favors the EPYC processors due to its parallelized nature, but the 8380 has again made big strides on the strength of its higher core count. Offloading this type of workload to dedicated accelerators is becoming more common, and Intel also offers its QAT acceleration built into chipsets for environments with heavy requirements.
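RSA signing parallelizes well because each signature is an independent big-integer modular exponentiation, so throughput scales almost linearly with cores. As a hypothetical stand-in (real measurements use OpenSSL’s own `speed` benchmark, not this snippet, and the modulus and exponent below are arbitrary values, not a real key), this sketch spreads modular exponentiations across worker processes:

```python
import time
from multiprocessing import Pool

# Arbitrary large odd modulus and exponent, standing in for an RSA key.
MOD = (1 << 2048) - 159
EXP = (1 << 2047) + 1

def modexp(base: int) -> int:
    """One signature-like operation: big-integer modular exponentiation."""
    return pow(base, EXP, MOD)

def ops_per_sec(workers: int, n_ops: int = 32) -> float:
    """Signatures-per-second analogue across `workers` processes."""
    bases = list(range(2, 2 + n_ops))
    t0 = time.perf_counter()
    with Pool(workers) as pool:
        pool.map(modexp, bases)
    return n_ops / (time.perf_counter() - t0)

if __name__ == "__main__":
    for w in (1, 2, 4):
        print(f"{w} worker(s): {ops_per_sec(w):.1f} ops/s")
```

Since each operation touches only its own data, there is no cross-core communication to limit scaling, which is why core-heavy EPYC parts traditionally dominate this chart.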
Conclusion
Admittedly, due to our lack of EPYC Milan samples, our testing today of the Xeon Platinum 8380 is more of a demonstration of Intel’s gen-on-gen performance improvements rather than a holistic view of the current competitive landscape. We’re working to secure a dual-socket Milan server and will update when one lands in our lab.
Overall, Intel’s third-gen Xeon Scalable is a solid step forward for the Xeon franchise. AMD has steadily chewed away at Intel’s data center market share on the strength of its EPYC processors, which have traditionally beaten Intel’s flagships by massive margins in heavily-threaded workloads. As our testing, and testing from other outlets, shows, Ice Lake drastically reduces the massive performance deltas between the Xeon and EPYC families, particularly in heavily threaded workloads, placing Intel on a more competitive footing as it faces an unprecedented challenge from AMD.
AMD will still hold the absolute performance crown in some workloads with Milan, but despite EPYC Rome’s commanding lead in the past, AMD’s market share gains haven’t come as swiftly as some projected. Much of that boils down to the staunchly risk-averse customers in the enterprise and data center: these customers prize a mix of factors beyond the standard measuring sticks of performance and price-to-performance, focusing instead on areas like compatibility, security, supply predictability, reliability, serviceability, engineering support, and deeply-integrated OEM-validated platforms.
AMD has improved drastically in these areas and now has a full roster of systems available from OEMs, along with broadening uptake with CSPs and hyperscalers. However, Intel benefits from its incumbency and all the advantages that entails, like wide software optimization capabilities and platform adjacencies like networking, FPGAs, and Optane memory.
Although Ice Lake doesn’t lead in all metrics, it does improve the company’s positioning as it moves forward toward the launch of its Sapphire Rapids processors that are slated to arrive later this year to challenge AMD’s core-heavy models. Intel still holds the advantage in several criteria that appeal to the broader enterprise market, like pre-configured Select Solutions and engineering support. That, coupled with drastic price reductions, has allowed Intel to reduce the impact of a fiercely-competitive adversary. We can expect the company to redouble those efforts as Ice Lake rolls out to the more general server market.
Apple’s computers have been notorious for their lack of upgradeability, particularly since the introduction of Apple’s M1 chip, which integrates memory directly into the package. But as spotted on Twitter, boosting the power of your Mac may be possible given money, skill, time, and some real desire: remove the soldered DRAM and NAND chips and replace them with more capacious versions, much like we’ve seen multiple times with enthusiasts soldering extra VRAM onto graphics cards.
With the ongoing transition to custom Apple system-on-chips (SoCs), it will get even harder to upgrade Apple PCs. But one Twitter user points to “maintenance engineers” who did just that.
By any definition, such modifications void the warranty, so we strongly recommend against attempting them on your own: It obviously takes a certain level of skill, and patience, to pull off this type of modification.
With a soldering station (a consumer variant is not that expensive at $60) and replacement DRAM and NAND flash memory chips (which are close to impossible to buy at the consumer level), the engineers reportedly upgraded an Apple M1-based Mac Mini from 8GB of RAM and 256GB of storage to 16GB and 1TB, respectively, by de-soldering the existing components and fitting more capacious chips. According to the post, no firmware modifications were necessary.
Chinese maintenance engineers can already expand the capacity of the Apple M1. The 8GB memory has been expanded to 16GB, and the 256GB hard drive has been expanded to 1TB. pic.twitter.com/2Fyf8AZfJR — April 4, 2021
Using their soldering station, the engineers removed the 8GB of LPDDR4X memory and installed chips with a 16GB capacity. Removing the NAND chips from the motherboard using the same method was not a problem, and they were likewise replaced with higher-capacity devices.
The details behind the effort are slight, though the (very) roughly translated Chinese text in one of the images reads, “The new Mac M1 whole series the first time 256 and upgrade to 1TB, memory is 8L 16G, perfect! This is a revolutionary period the companies are being reshuffled. In the past, if you persevered, there was hope, but today, if you keep on the original way, a lot of them will disappear unless we change our way of thinking. We have to evolve, update it, and start again. Victory belongs to those who adapt; we have to learn to make ourselves more valuable.”
Of course, Apple is not the only PC maker to opt for SoCs and soldered components. Both Intel and AMD offer PC makers SoCs, and Intel even offers reference designs for building soldered-down PC platforms.