Flexxon, a Singapore-based security firm, has introduced an SSD with embedded AI-based security capabilities that the company says protects against both traditional threats like malware and viruses and physical tampering with the drive.
Modern SSD controllers rely on several Arm Cortex R cores and are essentially high-performance system-on-chips with fairly sophisticated compute capabilities. These very capabilities, along with firmware enhancements, are what power Flexxon's X-Phy SSD platform.
The platform relies on a technology that Flexxon calls AI One Core Quantum Engine and a special secured firmware. The company’s description of its technology is vague at best, so it is unclear whether its engine is a completely self-sufficient/isolated platform or a combination of software, hardware, and firmware.
This AI One Core Quantum Engine presumably runs on an NVMe 1.3-compliant SSD controller and monitors all of the drive's traffic. Once its algorithm detects a threat (a virus, malware, an intrusion), it can block it to protect the firmware and data integrity. Furthermore, the self-learning algorithm can detect abnormalities and identify them as threats, the company said without elaborating. Meanwhile, the drive comes with a special application. The X-Phy drive looks to be compatible with all major operating systems, based on an image published on the company's website.
The SSD is also equipped with "a range of features including temperature sensors to detect unusual movements that occur" in a bid to protect against physical intrusion. If the device detects tampering, it will lock itself and alert the owner via email. It is unclear how the device can alert its owner via an email if someone steals it from a PC that is shut down. Of course, there are ways to monitor drive activity when the PC is off and lock the SSD if it is removed. Still, there isn't a way to issue a notification about a physical intrusion if the OS isn't running (unless, of course, the SSD is equipped with a modem).
Flexxon stresses that the X-Phy SSD does not replace traditional security measures and calls it ‘the last line of defense.’
Flexxon’s X-Phy SSD is currently in trials with “government agencies, medical and industrial clients” and the manufacturer expects it to be available in Q4 2021 or in early 2022. The drive will be available in 512GB and 1TB 3D NAND configurations in M.2-2280 and U.2 form-factors with a PCIe 3.0 x4 interface. The SSD will support LDPC ECC as well as dynamic and static wear leveling. Expected prices are unknown.
The new Chia cryptocurrency has already started making waves in the storage industry, as we reported back in April. With Chia trading now live, it looks set to become even more interesting in the coming months. The total netspace for Chia has already eclipsed 2 exabytes, and it's well on its way to double- and probably even triple-digit EiB levels if current trends continue. If you're looking to join the latest crypto bandwagon, here's how to get started farming Chia coin.
First, if you've dabbled in other cryptocurrencies before, know that Chia is a very different beast. Some of the fundamental blockchain concepts aren't radically different from what's gone before, but Chia ditches the Proof of Work algorithm for securing the blockchain and instead implements Proof of Space (technically Proof of Space and Time, but the space component appears to be the more pertinent factor). Rather than mining coins by dedicating large amounts of processing power to the task, Chia simply requires storage plots, but these plots need to be filled with the correct data.
The analogies with real-world farming are intentional. First you need to clear a field (i.e., delete any files on your storage devices that are taking up space), then you plough and seed the field (compute a plot for Chia), and then… well, you wait for the crops to grow, which can take quite a long time when those crops are Chia blocks.
Your chances of solving a Chia coin block are basically equal to your portion of the total network space (netspace). Right now, Chia's netspace sits at roughly 2.7 EiB (exbibytes, the binary unit, so 1 EiB equals 2^60 bytes, or 1,152,921,504,606,846,976 bytes). That means if you dedicate a full 10TB (10 trillion bytes) of storage to Chia plots, your odds of winning any given block are about 0.00035%, or 0.0000035 as a fraction. Those might sound like terrible odds (they're not great), but the catch is that approximately 4,608 Chia blocks are created every day (32 blocks per 10 minutes, or one block every 18.75 seconds), and any one of them could match your plots.
Simple math can then give you the average time to win, though Chia calculators make estimating this far easier than doing the math yourself. A completely full 10TB HDD can store 91 standard Chia plots (101.4 GiB each). Don't get lazy and forget to convert between tebibytes and terabytes, as the difference between binary and decimal units definitely matters. Anyway, 91 plots on a single 10TB HDD should win a block every two months or so, once every 68 days on average.
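That back-of-the-napkin math can be sanity-checked with a few lines of Python. The netspace, plot size, and block rate below are the snapshot figures quoted above, and they will be stale by the time you read this:

```python
# Rough Chia time-to-win estimate using the figures quoted above.
# Netspace grows constantly, so treat the result as a snapshot, not a promise.
PLOT_SIZE_GIB = 101.4        # standard k=32 plot
NETSPACE_EIB = 2.7           # total network space at the time of writing
BLOCKS_PER_DAY = 4608        # 32 blocks per 10 minutes

def days_between_wins(num_plots: int) -> float:
    """Expected days between block wins for a farm of num_plots plots."""
    farm_gib = num_plots * PLOT_SIZE_GIB
    netspace_gib = NETSPACE_EIB * 2**30              # 1 EiB = 2^30 GiB
    win_chance_per_block = farm_gib / netspace_gib   # your share of netspace
    wins_per_day = win_chance_per_block * BLOCKS_PER_DAY
    return 1 / wins_per_day

# One full 10TB HDD (91 plots): roughly every 68 days, matching the figure above.
print(round(days_between_wins(91)))   # → 68
```

Plugging in a larger farm (say, the 546 plots of the six-drive build discussed later) shortens the expected interval proportionally.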
Each Chia plot ends up being sort of like a massive, complex Bingo card. There’s lots of math behind it, but that analogy should suffice. Each time a block challenge comes up, the Chia network determines a winner based on various rules. If your plot matches and ‘wins’ the block, you get the block reward (currently 2 XCH, Chia’s coin abbreviation). That block reward is set to decrease every three years, for the first 12 years, after which the block reward will be static ad infinitum. The official FAQ lists the reward rate as 64 XCH per 10 minutes, and it will get cut in half every three years until it’s at 4 XCH per 10 minutes with a block reward of 0.125 XCH.
Of course, luck comes into play. It’s theoretically possible (though highly unlikely) to have just a few plots and win a block solution immediately. It’s also possible to have hundreds of plots and go for a couple of months without a single solution. The law of averages should equalize over time, though. Which means to better your chances, you’ll need more storage storing more Chia plots. Also, just because a plot wins once doesn’t mean it can’t win again, so don’t delete your plots after they win.
This is the standard cryptocurrency arms race that we’ve seen repeated over the past decade with hundreds of popular coins. The big miners — farmers in this case — want more of the total Chia pie, and rush out to buy more hardware and increase their odds of winning. Except, this time it’s not just a matter of buying more SSDs or HDDs. This time farmers need to fill each of those with plots, and based on our testing, that is neither a simple task nor something that can be done quickly.
Hardware Requirements for Chia Coin Farming
With Ethereum, once you have the requisite GPUs in hand, perhaps some of the best mining GPUs, all you have to do is get them running in a PC. Chia requires that whole ploughing and plotting business, and that takes time. How much time? Tentatively, about six or seven hours seems typical per plot, with a very fast Optane 905P SSD, though it’s possible to do multiple plots at once with the right hardware. You could plot directly to hard drive storage, but then it might take twice as long, and the number of concurrent plots you can do drops to basically one.
The best solution is to have a fast SSD — probably an enterprise grade U.2 drive with plenty of capacity — and then use that for the plotting and transfer the finished plots to a large HDD. Chia’s app will let you do that, but it can be a bit finicky, and if something goes wrong like exceeding the temp storage space, the plotting will crash and you’ll lose all that work. Don’t over schedule your plotting, in other words.
Each 101.4 GiB plot officially requires up to 350 GiB of temporary storage, though we've managed to do a single plot multiple times on a 260 GiB SSD. Average write speed during the plotting process varies: sometimes it exceeds 100MB/s, while at other times it drops closer to zero, which usually means more computational work and memory are being used. Plotting also requires 4 GiB of RAM per plot, so high-capacity memory is par for the course.
Ultimately, for fast SSDs, the main limiting factor will likely be storage capacity. If we use the official 350 GiB temp space requirement, a 2TB SSD (1,863 GiB) can handle at most five concurrent plots. Our own testing suggests that it can probably do six just fine, maybe even seven, but we'd stick with six to be safe. If you want to do more than that (and you probably will if you're serious about farming Chia), you'll need either a higher-capacity SSD or multiple SSDs. Each plot your PC is creating also needs 4GB of memory and two CPU threads, and there appear to be scaling limits.
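The capacity, memory, and thread limits combine into a simple minimum. Here's a hedged sketch of that calculation, using the official per-plot requirements cited above (real drives can often squeeze in a plot or two more, per our testing):

```python
# Estimate how many concurrent plots a machine can schedule, based on the
# official per-plot requirements: 350 GiB temp space, 4 GiB RAM, 2 CPU threads.
TEMP_GIB_PER_PLOT = 350
RAM_GIB_PER_PLOT = 4
THREADS_PER_PLOT = 2

def max_concurrent_plots(ssd_tb: float, ram_gib: int, cpu_threads: int) -> int:
    ssd_gib = ssd_tb * 1e12 / 2**30          # marketing TB -> binary GiB
    return min(int(ssd_gib // TEMP_GIB_PER_PLOT),
               ram_gib // RAM_GIB_PER_PLOT,
               cpu_threads // THREADS_PER_PLOT)

# 2TB SSD, 32GB RAM, 6-core/12-thread CPU: the SSD is the bottleneck.
print(max_concurrent_plots(2, 32, 12))       # → 5
# 3.84TB enterprise SSD, 64GB RAM, 10-core/20-thread CPU:
print(max_concurrent_plots(3.84, 64, 20))    # → 10
```

Whichever resource runs out first sets the limit, which is why the builds below pair bigger SSDs with more RAM and more CPU threads.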
Based on the requirements, here are two recommended builds — one for faster plotting (more concurrent plots) and one for slower plotting.
Our baseline Chia plotting PC uses a 6-core/12-thread CPU, and we’ve elected to go with Intel’s latest Core i5-11400 simply because it’s affordable, comes with a cooler, and should prove sufficiently fast. AMD’s Ryzen 5 5600X would be a good alternative, were it readily available — right now it tends to cost about twice as much as the i5-11400, plus it also needs a dedicated graphics card, and we all know how difficult it can be to find those right now.
For storage, we’ve selected a Sabrent Rocket 4 Plus 2TB that’s rated for 1400 TBW. That’s enough to create around 800–900 plots, at which point your Chia farm should be doing quite nicely and you’ll be able to afford a replacement SSD. Mass storage comes via a 10TB HDD, because that’s the most economical option — 12TB, 14TB, 16TB, and 18TB drives exist, but they all cost quite a bit more per GB of storage. Plus, you’ll probably want to move your stored plots to a separate machine when a drive is filled, but more on that below.
The other components are basically whatever seems like a reasonably priced option, with an eye toward decent quality. You could probably use a smaller case and motherboard, or a different PSU as well. You’ll also need to add more HDDs — probably a lot more — as you go. This PC should support up to six internal SATA HDDs, though finding space in the case for all the drives might be difficult.
At a rate of 18 plots per day, it would take about 30 days of solid plotting time to fill six 10TB HDDs. Meanwhile, the potential profit from 60TB of Chia plots (546 101.4 GiB plots) is currently… wow. Okay, we don’t really want to get your hopes up, because things are definitely going to change. There will be more netspace, the price could drop, etc. But right now, at this snapshot in time, you’d potentially solve a Chia block every 11 days and earn around $5,900 per month.
What's better than a PC that can do six plots at a time? Naturally, it's a PC that can do even more concurrent plots! This particular setup has a 10-core CPU, again from Intel because of pricing considerations. We've doubled the memory and opted for an enterprise-class 3.84TB SSD this time. That's sufficient for the desired ten concurrent plots, which will require nearly all of the drive's roughly 3.49 TiB of capacity. We've also added a second 10TB HDD, with the idea being that you do two sets of five plots at the same time, with the resulting plots going to different HDDs (so that HDD write speed doesn't cause a massive delay when plotting finishes for each batch).
Most of the remaining components are the same as before, though we swapped to a larger case for those who want to do all the farming and plotting on one PC. You should be able to put at least 10 HDDs into this case (using the external 5.25-inch bays). At a rate of 30 plots per day, it should take around 30 days again to fill ten 10TB drives (which aren’t included in the price, though we did put in two). As before, no promises on the profitability since it’s virtually guaranteed to be a lot lower than this, but theoretically such a setup should solve a Chia block every seven days and earn up to $9,800 per month.
Long-term Efficient Chia Farming
So far we've focused on the hardware needed to get plotting, which is the more difficult part of Chia farming. Once you're finished building your farm, though, you'll probably want to look at ways to efficiently keep the farm online. While it's possible to build out PCs with dozens of HDDs using PCIe SATA cards and extra power supplies, it's likely far easier and more efficient to skip all that and go with a Raspberry Pi. That's actually the recommended long-term farming solution from the Chia creators.
It's not possible to directly connect dozens of SATA drives to a Raspberry Pi, but USB-to-SATA adapters and USB hubs overcome that limitation. There's the added benefit of not overloading the 5V rail on a PSU, since the enclosures (or the USB hubs) should have their own power. And once you're finished building out a farm, the power cost to keep dozens of hard drives connected and running is relatively trivial: you could probably run 50 HDDs on the same amount of power as a single RTX 3080 mining Ethereum.
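As a quick sanity check on that power claim, with some assumed wattages (the 6W-per-drive and 320W GPU figures below are our rough assumptions, not measurements):

```python
# Back-of-the-envelope power comparison: a drive-only Chia farm vs. one GPU.
WATTS_PER_HDD = 6            # assumed draw for an idle-ish 3.5-inch drive
RTX_3080_MINING_WATTS = 320  # assumed draw while mining Ethereum

hdd_count = 50
farm_watts = hdd_count * WATTS_PER_HDD
print(f"{hdd_count} HDDs: ~{farm_watts}W vs. one RTX 3080: ~{RTX_3080_MINING_WATTS}W")
```

Under those assumptions, 50 spinning drives land at roughly 300W, in the same ballpark as a single mining GPU.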
How to Create Chia Plots
We’ve mostly glossed over the plot creation process so far. It’s not terribly complicated, but there are some potential pitfalls. One is that the plotting process can’t be stopped and restarted. You don’t want to do this on a laptop that may power off, though theoretically it should be possible to put a system to sleep and wake it back up, and then let it pick up where it left off. But if you overfill the temp storage, Chia will crash and you’ll lose all progress on any plots, and since it can take six or seven hours, that’s a painful loss.
The first step naturally is to install Chia. We’re using Windows, though it’s available on MacOS and can be compiled from source code for various Linux platforms. Once installed, you’ll need to let the blockchain sync up before you can get to work on farming. However, you can still create plots before the blockchain gets fully synced — that takes perhaps 10 hours, in our experience, but it will inevitably start to take longer as more blocks get added.
You'll need to create a new private key to get started; don't use a key that's been published anywhere (such as in our screenshots), as anyone else on the 'net could steal any coins you farm. Screenshot and write down your 24-word mnemonic, as that's the only way you can regain access to your wallet should your PC die. Store this in a safe and secure place!
Next, you’ll see the main page. As noted above, it can take quite a while to sync up, and any information displayed on this screen prior to having the full blockchain won’t be current. For example, the above screenshot was taken when the total netspace was only 1.51 EiB (sometime earlier this week). The Wallets and Farm tabs on the left won’t have anything useful right now, so head over to Plots and get started on the plotting process.
If you've previously generated plots, you can import the folder here, but your key has to match the key used to generate those plots. Even if you somehow gained access to someone else's plot files, without the key they'd do you no good. Again, don't lose your key, and don't share it online! When you're ready, hit the Add a Plot button.
Here’s where the ‘magic’ happens. We’ve specified six concurrent plots, with a ten minute delay between each plot starting. That should result in roughly a ten minute delay between plots finishing, which should be enough time for the program to move a finished plot to the final directory.
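For reference, the same plotting job can be launched from Chia's command line instead of the GUI. This is a sketch: the flag names come from the reference client's `chia plots create` command, but double-check them against your installed version, and the paths are placeholders:

```shell
# Placeholder paths: point these at your fast temp SSD and final HDD.
TMP_DIR=/mnt/fast_ssd/chia-tmp
FINAL_DIR=/mnt/hdd01/chia-plots

# Create six k=32 plots in sequence (-n 6), two threads each (-r 2),
# with a ~4GB memory buffer (-b 4000). Launching several of these commands
# staggered ~10 minutes apart approximates the GUI's parallel queue.
chia plots create -k 32 -n 6 -r 2 -b 4000 -t "$TMP_DIR" -d "$FINAL_DIR"
```

The CLI uses the same keys and directories as the GUI, so plots created either way end up on the same farm.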
The Temporary Directory will be your big and fast SSD drive. You could try for a smaller delay between plots starting, but six concurrent plots will certainly put a decent load on most SSDs. Note also that Chia says it needs 239 GiB of temporary storage per plot — it’s not clear (to us) if that’s in addition to the 101.4 GiB for the final plot, but the amount of used space definitely fluctuates during the course of plot creation.
Once everything is set, click the Create Plot button at the bottom, and walk away for the next 6–8 hours. If you come back in eight hours, hopefully everything will have finished without incident and you’ll now see active plots on your Chia farm. Queue up another set of six plots (or however many plots your PC can handle concurrently), and done properly you should be able to get around three cycles in per day.
Then you just leave everything online (or migrate full drives to a separate system that uses the same key), and eventually you should manage to solve a block, earn some XCH coin, and then you can hoard that and hope the price goes up, or exchange it for some other cryptocurrency. Happy farming!
Chia Farming: The Bottom Line
Just looking at that income potential should tell you one thing: More people are going to do this than we're currently seeing. That, or the price is going to implode. For the cost of an RTX 3080 off of eBay right now, you could break even in just a couple of weeks. Our short take: anyone looking for new hard drives or large SSDs could be in for a world of hurt as Chia causes a storage shortage.
During its first week of trading, Chia started with a price of around $1,600, climbed up to a peak of around $1,900, and then dropped to a minimum value of around $560. But then it started going up again and reached a relatively stable (which isn’t really stable at all) $1,000 or so on Friday. A couple more exchanges have joined the initial trio, with OKex accounting for around 67% of trades right now.
More important than price alone is trading volume. The first day saw only $11 million in trades, but Thursday and Friday chalked up over 10X as much action. It might be market manipulation, as cryptocurrencies are full of such shenanigans, but anyone who claimed Chia was going to fade away after the first 12 hours of trading clearly missed the boat.
Unlike other cryptocurrencies, Chia will take a lot more effort to bring more plots online, but we’re still seeing an incredibly fast ramp in allocated netspace. It’s currently at 2.7 EiB, which is a 55% increase just in the past four days. We’ll probably see that fast rate of acceleration for at least a few weeks, before things start to calm down and become more linear in nature.
There are still concerns with e-waste and other aspects of any cryptocurrency, but Chia at least does drastically cut back on the power requirements. Maybe that’s only temporary as well, though. 50 HDDs use as much power as a single high-end GPU, but if we end up with 50X as many HDDs farming Chia, we’ll be right back to square one. For the sake of the environment, let’s hope that doesn’t happen.
Lexar has made a name for itself in the portable storage market. The company is well known for its SD cards and USB sticks, so it's natural for it to expand into other areas of flash storage, like consumer SSDs. Lexar operated for years as a subsidiary of Micron, but was sold to Longsys in 2017 and has operated quite independently since.
The Lexar SL200 is a USB-C-based portable SSD that uses the USB 3.1 interface with speeds of up to 500 MB/s. Traditionally, most large-capacity external storage has been based on hard drives, which offer a very low cost per TB but have several drawbacks. First, since they contain mechanical components, they are sensitive to shock: drop one and it's quite likely broken. SSDs, on the other hand, are far more resistant to physical damage. Another plus is that SSDs don't use any mechanical components to transfer data, so their access times are much lower than on HDDs, and transfer rates are higher, too.
Internally, the Lexar SL200 uses a Lexar DM918 controller paired with 3D TLC NAND flash and a USB-to-SATA bridge chip from ASMedia.
Update: Walmart’s restock of the PS5 and Xbox Series X seems to be over. If you’re looking for some accessories to go with a console, we have some picks below.
Due to the global electronic component shortage, getting your hands on a PS5 or Xbox Series X has been challenging. Fortunately, if you want another shot at getting either next-gen console, Walmart has done a surprise drop right now for both systems, while supplies last.
As seen on Walmart's website, the PS5 Digital Edition and Xbox Series X consoles are available for purchase. We're noticing that stock is coming in and out, so stay on the page for a few minutes and keep refreshing to see if you can add one to your cart.
Once you get your PS5, if you are looking to buy some games, some of the hottest titles, such as Returnal and Spider-Man: Miles Morales, are available on the console. Resident Evil Village releases tomorrow, May 7th.
If you are worried about not having a ton of storage available to store all of your games, the latest PS5 update allows you to store PS5 games on an external hard drive. Although you cannot play these games on an external HDD, this is still good to help manage which games are currently stored on your SSD.
Xbox fans, once you have secured Microsoft’s next-gen console, if you are unsure what type of games to buy, a subscription to Xbox Game Pass Ultimate might be the best option for you. It includes a robust library of first- and third-party titles for you to download and play. And unlike the PS5, you can purchase a 1TB SSD expansion to add to the Xbox’s base storage (512GB on the Series S, 1TB on the X), though it is not cheap.
Xbox Series X
$500
Prices taken at time of publishing.
The Xbox Series X is Microsoft’s flagship console, serving as its most powerful (and biggest) option that costs $499.99. While the Series S is aimed at smooth 1440p performance, the Series X is focused on fast 4K gameplay.
$500
at Walmart
Xbox Series S
$300
Prices taken at time of publishing.
The Xbox Series S costs $299.99. Compared to the Series X, it’s far smaller, less powerful, and it has half the amount of SSD storage built in. It also lacks a disc drive.
Gigabyte Aorus Gen4 7000s is a high-performance, premium-priced M.2 NVMe SSD that keeps cool under any workload due to its sleek pre-installed heatsink.
For
+ Competitive performance
+ Attractive design
+ Effective cooling
+ AES 256-bit encryption
+ 5-year warranty and high endurance ratings
Features and Specifications
Today, we have Gigabyte’s Aorus Gen4 7000s in the lab for review with cooling that is fit for an SSD that can gulp down over 8.5 watts of power. With an extremely well-crafted heatsink that is decked out with tons of fins, it’s ready for the harshest of workloads and will add some bling to a high-end gaming build. Designed to compete with the best SSDs, the Aorus Gen4 7000s dishes out up to 7 GBps and surprisingly, from testing, shows improvement over earlier Phison E18 NVMe SSD samples we’ve come across.
We’ve had our hands on many Phison PS5018-E18-based SSDs in the past few months and they all deliver very high performance. But with such high speed, these SSDs also have high power consumption in comparison to other SSDs such as Samsung’s 980 Pro and WD’s Black SN850, and concurrently, this results in high heat output in heavy usage, especially for the higher capacity models like Sabrent’s 4TB Rocket 4 Plus.
As awesome as Sabrent's recently reviewed 4TB Rocket 4 Plus is, it is still susceptible to throttling under massive write workloads when relying solely on its thin heat spreader for cooling. Because of this, many of these new Phison E18-powered SSDs are rolling out equipped with heatsinks to ensure throttle-free (or at least hopefully throttle-free) operation.
Corsair went as far as to develop both a heatsinked MP600 Pro and an MP600 Pro Hydro X edition for those with custom water-cooled rigs who demand completely throttle-free operation. In our hands, we now have Gigabyte's Aorus Gen4 7000s, an interesting alternative that hasn't gone to such drastic measures. With a heatsink that maximizes surface area, the Aorus Gen4 7000s takes a more traditional design approach to the heat problem, but what's not so traditional is the nanocarbon coating, which is claimed to reduce temperatures by 20%.
Specifications
| Product | Gen4 7000s 1TB | Gen4 7000s 2TB |
| --- | --- | --- |
| Pricing | $209.99 | $389.99 |
| Capacity (User / Raw) | 1000GB / 1024GB | 2000GB / 2048GB |
| Form Factor | M.2 2280 | M.2 2280 |
| Interface / Protocol | PCIe 4.0 x4 / NVMe 1.4 | PCIe 4.0 x4 / NVMe 1.4 |
| Controller | Phison PS5018-E18 | Phison PS5018-E18 |
| DRAM | DDR4 | DDR4 |
| Memory | Micron 96L TLC | Micron 96L TLC |
| Sequential Read | 7,000 MBps | 7,000 MBps |
| Sequential Write | 5,500 MBps | 6,850 MBps |
| Random Read | 350,000 IOPS | 650,000 IOPS |
| Random Write | 700,000 IOPS | 700,000 IOPS |
| Security | AES 256-bit encryption | AES 256-bit encryption |
| Endurance (TBW) | 700 TB | 1,400 TB |
| Part Number | GP-AG70S1TB | GP-AG70S2TB |
| Warranty | 5 Years | 5 Years |
The Gigabyte Aorus Gen4 7000s is available in two capacities, 1TB and 2TB, priced at $210 and $390, respectively. Gigabyte rates each capacity to hit 7,000 MBps read, but the 1TB is rated to deliver 5,500 MBps write while the 2TB model can hit 6,850 MBps write. In terms of peak random performance, the SSD is rated capable of up to 650,000 / 700,000 random read/write IOPS at the highest capacity.
Gigabyte backs the Aorus Gen4 7000s with a 5-year warranty, and each capacity comes with respectable write endurance ratings of up to 700TB per 1TB of capacity. Such high endurance is thanks to Phison's fourth-generation LDPC and RAID ECC, wear leveling, and a bit of over-provisioning. Also, like Corsair's MP600 Pro, the SSD supports AES 256-bit hardware encryption, perfect for those on the go who need to meet security compliance standards when handling sensitive data.
Software and Accessories
Gigabyte provides a basic SSD Toolbox that can read the SSD’s health, S.M.A.R.T. data, as well as secure erase it (assuming it’s a secondary drive).
A Closer Look
Gigabyte’s Aorus Gen4 7000s comes in an M.2 2280 double-sided form factor. The included aluminum heatsink measures 11.5 x 23.5 x 76 mm and the black and silver two-tone looks fantastic, too. We’re not too sure how much the nanocarbon coating helps by itself, but based on the way this heatsink is designed, we’re fairly confident that there is plenty of surface area to dissipate all the heat it needs to without it. The SSD is sandwiched between two thick thermal pads that transfer heat from the PCB to the heatsink and baseplate.
As mentioned, Gigabyte’s Aorus Gen4 7000s is powered by Phison’s second-generation PCIe 4.0 x4 SSD controller, the PS5018-E18. It leverages DRAM and features a triple-core architecture that is paired with the company’s CoXProcessor 2.0 technology (an extra two R5 CPU cores) for fast and consistent response. The main CPU cores are Arm Cortex R5’s clocked at 1 GHz, up from 733MHz on its predecessor, the PS5016-E16, while the CoXProcessor 2.0 cores are clocked slower for better efficiency.
Our 2TB sample comes with 2GB of DDR4 from SK hynix, split between two DRAM ICs, one on each side of the PCB. These chips interface with the controller at 1,600 MHz and operate at 1.2V. Additionally, there are eight NAND flash packages in total, each containing 256GB of Micron's 512Gb 96L TLC flash (32 dies in total). This NAND operates at speeds of up to 1,200 MTps and features a quad-plane architecture for fast, responsive performance.
Reviews for Capcom’s Resident Evil Village have gone live, and we’re taking the opportunity to look at how the game runs on the best graphics cards. We’re running the PC version on Steam, and while patches and future driver updates could change things a bit, both AMD and Nvidia have provided Game Ready drivers for REV.
This installment in the Resident Evil series adds DirectX Raytracing (DXR) support for AMD's RX 6000-series RDNA2 cards and Nvidia's RTX cards, covering both the Ampere and Turing architectures. AMD is promoting Resident Evil Village, and it's on the latest-gen consoles as well, so there's no support for Nvidia's DLSS technology. We'll look at image quality in a moment, but first let's hit the official system requirements.
Capcom notes that in either case, the game targets 1080p at 60 fps, using the “Prioritize Performance” and presumably “Recommended” presets. Capcom does state that the framerate “might drop in graphics-intensive scenes,” but most mid-range and higher GPUs should be okay. We didn’t check lower settings, but we can confirm that 60 fps at 1080p will certainly be within reach of a lot of graphics cards.
The main pain point for anyone running a lesser graphics card will be VRAM, particularly at higher resolutions. With AMD pushing 12GB and 16GB on its latest RX 6000-series cards, it’s not too surprising that the Max preset uses 12GB VRAM. It’s possible to run 1080p Max on a 6GB card, and 1440p Max on an 8GB card, but 4K Max definitely wants more than 8GB VRAM — we experienced inconsistent frametimes in our testing. We’ve omitted results on cards where performance wasn’t reliable in the charts.
Anyway, let’s hit the benchmarks. Due to time constraints, we’re not going to run every GPU under the sun in these benchmarks, but will instead focus on the latest gen GPUs, plus the top and bottom RTX 20-series GPUs and a few others as we see fit. We used the ‘Max’ preset, with and without ray tracing, and most of the cards we tested broke 60 fps. Turning on ray tracing disables Ambient Occlusion, because that’s handled by the ray-traced GI and Reflection options, but every other setting is on the highest quality option (which means variable-rate shading is off for our testing).
Our test system consists of a Core i9-9900K CPU, 32GB of RAM, and a 2TB SSD, the same PC we've been using for our graphics card and gaming benchmarks for about two years now, because it continues to work well. With the current graphics card shortages, acquiring a new high-end GPU will be difficult; our GPU pricing index covers the details. Hopefully, you already have a capable GPU from pre-2021, back in the halcyon days when graphics cards were available at, and often below, MSRP. [Wistful sigh]
Granted, these are mostly high-end cards, but even the RTX 2060 still posted an impressive 114 fps in our test sequence, and it also nearly managed 60 fps with ray tracing enabled (see below). Everything else runs more than fast enough as well, with the old GTX 1070 bringing up the rear with a still more than acceptable 85 fps. Based on what we've seen with these GPUs and other games, it's a safe bet that cards like the GTX 1660, RX 5600 XT, and anything faster than those will do just fine in Resident Evil Village.
AMD’s RDNA2 cards all run smack into an apparent CPU limit at around 195 fps for our test sequence, while Nvidia’s fastest GPUs (2080 Ti and above) end up with a lower 177 fps limit. At 1080p, VRAM doesn’t appear to matter too much, provided your GPU has at least 6GB.
Turning on ray tracing drops performance, but the drop isn’t too painful on many of the cards. Actually, that’s not quite true — the penalty for DXR depends greatly on your GPU. The RTX 3090 only lost about 13% of its performance, and the RTX 3080 performance dropped by 20%. AMD’s RX 6900 XT and RX 6800 XT both lost about 30-35% of their non-RT performance, while the RTX 2080 Ti, RX 6800, RTX 3070, RTX 3060 Ti, and RTX 3060 plummeted by 40–45%. Meanwhile, the RX 6700 XT ended up running at less than half its non-DXR rate, and the RTX 2060 also saw performance chopped in half.
Memory and memory bandwidth seem to be major factors with ray tracing enabled, and the 8GB and lower cards were hit particularly hard. Turning down a few settings should help a lot, but for these initial results we wanted to focus on maxed-out graphics quality. Let us know in the comments what other tests you’d like to see us run.
The performance trends we saw at 1080p become more pronounced at higher resolutions. At 1440p Max, more VRAM and memory bandwidth definitely helped. The RX 6900 XT, RX 6800 XT, RTX 3090, and RTX 3080 only lost a few fps compared to 1080p when running without DXR enabled, and the RX 6800 dipped by 10%. All of the other GPUs dropped by around 20–30%, but the 6GB RTX 2060 plummeted by 55%. Only the RTX 2060 and GTX 1070 failed to average 60 fps or more.
1440p and ray tracing with max settings really needs more than 8GB VRAM — which probably explains why the Ray Tracing preset (which we didn’t use) opts for modest settings everywhere else. Anyway, the RTX 2060, 3060 Ti, and 3070 all started having problems at 1440p with DXR, which you can see in the numbers. Some runs were much better than we show here, others much worse, and after repeating each test a bunch of times, we still aren’t confident those three cards will consistently deliver a good experience without further tweaking the graphics settings.
On the other hand, cards with 10GB or more VRAM don’t show nearly the drop that we saw without ray tracing when moving from 1080p to 1440p. The RTX 3060 only lost 18% of its 1080p performance, and chugs along happily at just shy of 60 fps. The higher-end AMD and Nvidia cards were all around the 15% drop mark as well.
But enough dawdling. Let’s just kill everything with some 4K testing…
Well, ‘kill’ is probably too strong a word. Without ray tracing, most of the GPUs we tested still broke 60 fps, and those that came up short fell very short. The RTX 3060 is still generally playable, but Resident Evil Village appears to expect 30 fps or more, as dropping below that tends to cause the game to slow down. The RX 5700 XT should suffice in a pinch, even though it lost 67% of its 1440p performance, but the 1070 and 2060 would need lower settings to even take a crack at 4K.
Even with DXR, the RTX 2080 Ti and RX 6800 and above continue to deliver 60 fps or more. The RTX 3060 also still manages a playable 41 fps — this isn’t a twitch action game, so sub-60 frame rates aren’t the end of the world. Of course, we’re not showing the cards that dropped into the teens or worse — which is basically all the RTX cards with 8GB or less VRAM.
The point isn’t how badly some of the cards did at 4K Max (with or without DXR), but rather how fast a lot of the cards still remained. The DXR switch often imposed a massive performance hit at 1080p, but at 4K the Nvidia cards with at least 10GB VRAM only lost about 15% of their non-DXR performance. AMD’s GPUs took a larger 25% hit, but it was very consistent across all four GPUs.
Resident Evil Village Graphics Settings
[Gallery: 8 images]
You can see the various advanced settings available in the above gallery. Besides the usual resolution, refresh rate, vsync, and scaling options, there are 18 individual graphics settings, plus two more for ray tracing. Screen space reflections, volumetric lighting, and shadow quality are likely to have the biggest impact on performance, though the sum of the others can add up as well. If you have a reasonably high-end GPU, though, you should be able to play at close to max quality (minus ray tracing if you don’t have an appropriate GPU, naturally).
But how does the game look? Capturing screenshots with the various settings on and off is a pain, since there are only scattered save points (typewriters), and some settings appear to require a restart to take effect. Instead of worrying about all of the settings, let’s just look at how ray tracing improves things.
Resident Evil Village Image Quality: Ray Tracing On / Off
[Gallery: 18 images, RT off/on pairs]
Or doesn’t, I guess. Seriously, the effect is subtle at the best of times, and in many scenes, I couldn’t even tell you whether RT was on or off. If there’s a strong light source, it can make a difference. Sometimes a window or glass surface will change with RT enabled, but even then (e.g., in the images of the truck and van) it’s not always clearly better.
The above gallery should be ordered with RT off and RT on for each pair of images. You can click (on a PC) to get the full images, which I’ve compressed to JPGs (and they look visually almost the same as the original PNG files). Indoor areas tend to show the subtle lighting effects more than outside, but unless a patch dramatically changes the way RT looks, Resident Evil Village will be another entry in the growing list of ray tracing games where you could skip it and not really miss anything.
Resident Evil Village will release to the public on May 7. So far, reviews are quite favorable, and if you enjoyed Resident Evil 7, it’s an easy recommendation. Just don’t go in expecting ray tracing to make a big difference in the way the game looks or feels.
Resident Evil Village is the latest addition to the long-running horror series, and just like last year’s Resident Evil 3 remake, it is built on Capcom’s RE Engine. We test over 25 GPUs at 1080p, 1440p and 4K to find out what sort of hardware you need to run this game at maximum settings, while also looking at the performance and visual quality of the game’s ray tracing options.
Watch via our Vimeo channel (below) or over on YouTube at 2160p HERE
In terms of visual settings, there are a number of options in the display menu. Texture and texture filtering settings are on offer, as well as variable rate shading, resolution, shadows, and so on. There’s also a selection of quick presets; for our benchmarking today we opted for the Max preset, but with V-Sync and CAS disabled.
One interesting thing about the Max preset is the default ambient occlusion setting – FidelityFX CACAO, which stands for Combined Adaptive Compute Ambient Occlusion, a technology optimised for RDNA-based GPUs. To make sure this setting wouldn’t unfairly penalise Nvidia GPUs, we tested CACAO vs SSAO with both the RX 6800 and RTX 3070:
Both GPUs only lost 3% performance when using CACAO instead of SSAO, so we were happy to use the former setting for our benchmarking today.
Driver Notes
AMD GPUs were benchmarked with a pre-release driver provided by AMD for Resident Evil Village.
Nvidia GPUs were benchmarked with the 466.27 driver.
Test System
We test using a custom-built system from PCSpecialist, based on Intel’s Comet Lake-S platform. You can read more about it over HERE, and configure your own system from PCSpecialist HERE.
CPU: Intel Core i9-10900K (overclocked to 5.1GHz on all cores)
Motherboard: ASUS ROG Maximus XII Hero Wi-Fi
Memory: Corsair Vengeance DDR4 3600MHz (4 x 8GB), CL 18-22-22-42
Graphics Card: Varies
System Drive: 500GB Samsung 970 Evo Plus M.2
Games Drive: 2TB Samsung 860 QVO 2.5″ SSD
Chassis: Fractal Meshify S2 Blackout Tempered Glass
CPU Cooler: Corsair H115i RGB Platinum Hydro Series
Power Supply: Corsair 1200W HX Series Modular 80 Plus Platinum
Operating System: Windows 10 2004
Our 1-minute benchmark pass came from quite early in the game, as the player descends into the village for the first time. Over the hour or so that I played, the results do seem representative of wider gameplay, with the exception of intense combat scenes, which can be a bit more demanding. Those are much harder to benchmark accurately though, as there’s more variation from run to run, so I stuck with this outdoor scene.
1080p Benchmarks
1440p Benchmarks
2160p (4K) Benchmarks
Closing Thoughts
After looking at last year’s Resident Evil 3 remake, which is also built on Capcom’s RE Engine, I wasn’t too surprised to see that overall performance is pretty similar between the two games.
That’s certainly a good thing though, as the game plays very well across a wide range of hardware. At the lower end, weaker GPUs like the GTX 1650, or older cards like the GTX 1060 6GB, still deliver a very playable experience at 1080p max settings. Village also scales very well, so if you have a higher-end GPU, you will be rewarded with significantly higher frame rates.
AMD does see the benefit of its partnership with Capcom for this one, as RDNA-based GPUs over-perform here compared to what we’d expect from those cards on average. The RX 6700 XT matches the RTX 3070, for instance – when we’d typically expect it to be slower – while the RX 6900 XT is 7% faster than the RTX 3090 at 1440p.
In terms of visual fidelity, I don’t think the RE Engine delivers a cutting edge experience like you’d get from Cyberpunk 2077 or Red Dead Redemption 2 when using Ultra settings, but it still looks good and I am particularly impressed with the detailed character models.
The only negative point for me is that the ray tracing is pretty underwhelming. As we demonstrate in the video above, it doesn’t really deliver much extra from a visual perspective, at least in my opinion. Overall though, Resident Evil Village looks good and runs well on pretty much any GPU, so it definitely gets a thumbs up from me.
KitGuru says: Capcom’s newest game built on the RE Engine delivers impressive performance and looks good while doing so.
João Silva
Gigabyte is getting into pre-built gaming PCs, starting with two new models – the Model X and the Model S. The Model X is a more traditional ATX system based on your choice of Intel Z590 or AMD X570 and an RTX 3080 GPU, while the Model S is a compact, 14-litre PC that packs high-end hardware despite its small size.
The Aorus Model X chassis offers good thermal performance and stylish aesthetics thanks to a half-vented, half-tempered glass front panel with RGB lighting and a half-vented top panel with RGB. Rated with acoustic performance below 40dB while gaming, the inside of the Model X was organised to allow less experienced users to mount an SSD or add another component to the system with ease. The chassis comes with an integrated GPU bracket and a 360mm AIO cooler. The side panel can either be transparent or metallic.
The Aorus Model S shares some similarities with other cases such as the NZXT H1 and the darkFlash DLH21. Featuring an AIO thermal design, the Model S has more space to fit the remaining components. The air intakes are concealed to keep the sleek aesthetics of the chassis, which features an RGB-lit Aorus logo on the front panel. During operation, the rated noise performance sits just below 36dB.
Whether you choose AMD or Intel for the CPU, some specifications are shared across both variants. For instance, the Model S comes with a 750W power supply for both Intel and AMD configurations. There are also some differences, with AMD-based PCs coming with slower memory options compared to an Intel-based PC.
The following table shows the specifications of the AMD-powered Aorus Model X and S gaming systems:
Model: Aorus Model X / Aorus Model S
Platform: X570 / B550
CPU: AMD Ryzen 9 5900X (both)
RAM: 32GB DDR4-3600 RGB / 32GB DDR4-3600
GPU: RTX 3080 (both)
PSU: 850W 80 Plus Gold / 750W 80 Plus Gold
Storage 1: M.2 2280 Gen4 1TB (both)
Storage 2: M.2 2280 NVMe 2TB (both)
The next table shows the specifications of the Intel-based Aorus Model X and S gaming PCs:
Model: Aorus Model X / Aorus Model S
Platform: Z590 (both)
CPU: Intel Core i9-11900K (both)
RAM: 16GB DDR4-4400 RGB / 32GB DDR4-4000
GPU: RTX 3080 (both)
PSU: 850W 80 Plus Gold / 750W 80 Plus Gold
Storage 1: M.2 2280 Gen4 1TB (both)
Storage 2: M.2 2280 NVMe 2TB (both)
The Intel version of the Model S comes with 32GB of DDR4-4000 memory and the Intel Model X with 16GB of DDR4-4400 memory. AMD versions of both PCs come with DDR4-3600 memory instead. It’s also worth noting that the AMD Model S comes with a B550 motherboard, while the AMD Model X features an X570 motherboard.
KitGuru says: What do you think of Gigabyte’s latest Aorus gaming PCs? Would you go for an Intel or AMD based system?
After about a month of preparation, following the initial mainnet launch, cryptocurrency Chia coin (XCH) has officially started trading — which means it’s possibly preparing to suck up all of the best SSDs like Ethereum (see how to mine Ethereum) has been gobbling up the best graphics cards. Early Chia calculators suggested an estimated starting price of $20 per XCH. That was way off, but with the initial fervor and hype subsiding, we’re ready to look at where things stand and where they might stabilize.
To recap, Chia is a novel approach to cryptocurrencies, ditching the Proof of Work hashing used by most coins (i.e., Bitcoin, Ethereum, Litecoin, Dogecoin, and others) and instead opting for a new Proof of Time and Space algorithm. Using storage capacity helps reduce the potential power footprint, obviously at the cost of storage. And let’s be clear: The amount of storage space (aka netspace) already committed to the Chia network is astonishing. It passed 1 EiB (exbibyte, or 2^60 bytes) of storage on April 28, and just a few days later it’s approaching the 2 EiB mark. Where will it stop? That’s the $21 billion question.
All of that space goes to storing plots of Chia, which are basically massive 101.4GiB Bingo cards. Each online plot has an equal chance, based on the total netspace, of ‘winning’ the block solution. This occurs at a rate of approximately 32 blocks per 10 minutes, with 2 XCH as the reward per block. Right now, assuming every Chia plot was stored on a 10TB HDD (which obviously isn’t accurate, but roll with it for a moment), that would require about 200,000 HDDs worth of Chia farms.
Assuming 5W per HDD, since they’re just sitting idle for the most part, that’s potentially 1 MW of power use. That might sound like a lot, and it is — about 8.8 GWh per year — but it pales in comparison to the amount of power going into Bitcoin and Ethereum. Ethereum, as an example, currently uses an estimated 41.3 TWh per year of power because it relies primarily on the best mining GPUs, while Bitcoin uses 109.7 TWh per year. That’s around 4,700 and 12,500 times more power than Chia at present, respectively. Of course, Ethereum and Bitcoin are also far more valuable than Chia at current exchange rates, and Chia has a long way to go to prove itself a viable cryptocoin.
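The power comparison above is simple enough to sanity-check. Here’s a quick sketch in Python using the article’s round numbers (200,000 drives at 5W each, plus the quoted Ethereum and Bitcoin consumption estimates); the inputs are approximations, so treat the ratios as ballpark figures:

```python
# Back-of-the-envelope check on Chia's power footprint versus proof-of-work
# coins, using this article's figures (rough estimates, not measurements).

HDD_COUNT = 200_000     # roughly the netspace if every plot sat on a 10TB HDD
WATTS_PER_HDD = 5       # drives sit mostly idle while farming
HOURS_PER_YEAR = 8_760

chia_mw = HDD_COUNT * WATTS_PER_HDD / 1e6        # total draw in megawatts
chia_gwh_year = chia_mw * HOURS_PER_YEAR / 1e3   # annual energy in GWh

ETH_TWH_YEAR = 41.3    # estimated Ethereum annual power use
BTC_TWH_YEAR = 109.7   # estimated Bitcoin annual power use

print(f"Chia: {chia_mw:.1f} MW, {chia_gwh_year:.2f} GWh/yr")
print(f"Ethereum uses ~{ETH_TWH_YEAR * 1000 / chia_gwh_year:,.0f}x more")
print(f"Bitcoin uses ~{BTC_TWH_YEAR * 1000 / chia_gwh_year:,.0f}x more")
```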
Back to the launch, though. Only a few cryptocurrency exchanges have picked up XCH trading so far, and none of them are what we would call major exchanges. Considering how many things have gone wrong in the past (like the Turkish exchange where the founder appears to have walked off with $2 billion in Bitcoins), discretion is definitely the best approach. Initially, according to Coinmarketcap, Gate.io accounted for around 65% of transactions, MXC.com was around 34.5%, and Bibox made up the remaining 0.5%. Since then, MXC and Gate.io have swapped places, with MXC now sitting at 64% of all transactions.
By way of reference, Gate.io only accounts for around 0.21% of all Bitcoin transactions, and MXC doesn’t even show up on Coinmarketcap’s list of the top 500 BTC exchange pairs. So, we’re talking about small-time trading right now, on riskier platforms, with a total trading volume of around $27 million in the first day. That might sound like a lot, but it’s only a fraction of Bitcoin’s $60 billion or so in daily trade volume.
Chia started at an initial trading price of nearly $1,600 per XCH, rose in early trading to peak at around $1,800, and has been on a steady downward slope since then. At present, the price seems to have mostly flattened out (at least temporarily) at around $700. It could certainly end up going a lot lower, however, so we wouldn’t recommend betting the farm on Chia, but even at $100 per XCH a lot of miners/crypto-farmers are likely to jump on the bandwagon.
As with many cryptocoins, Chia is searching for equilibrium right now. 10TB of storage dedicated to Chia plots would be enough for a farm of 100 plots and should in theory account for 0.0005% of the netspace. That would mean about 0.046 XCH per day of potential farming, except you’re flying solo (proper Chia pools don’t exist yet), so it would take on average 43 days to farm a block — and that’s assuming netspace doesn’t continue to increase, which it will. But if you could bring in a steady stream of 0.04 XCH per day, even if we lowball things with a value of $100, that’s $4-$5 per day, from a 10TB HDD that only costs about $250. Scale that up to ten drives and you’d be looking at $45 per day, albeit with returns trending downward over time.
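The expected-return figures above can be reproduced with a few lines of Python. The netspace, plot size, and block cadence are the approximations quoted in this article, and netspace keeps growing, so the output is a rough snapshot rather than a forecast:

```python
# Rough expected-return math for a solo 10TB Chia farm at ~2 EiB of netspace.
# All inputs are this article's approximations.

PLOT_SIZE_GIB = 101.4
BLOCKS_PER_DAY = 32 * 6 * 24   # ~32 blocks per 10 minutes
XCH_PER_BLOCK = 2

plots = 100                     # ~10TB worth of 101.4GiB plots
netspace_gib = 2 * 2**30        # 2 EiB expressed in GiB (1 EiB = 2^30 GiB)

share = plots * PLOT_SIZE_GIB / netspace_gib    # fraction of total netspace
xch_per_day = BLOCKS_PER_DAY * XCH_PER_BLOCK * share
days_per_block = 1 / (BLOCKS_PER_DAY * share)   # mean wait to win one block

print(f"Netspace share: {share:.7%}")
print(f"Expected: {xch_per_day:.3f} XCH/day, one block every ~{days_per_block:.0f} days")
print(f"At $100/XCH that's ~${xch_per_day * 100:.2f}/day")
```

The mean wait works out to roughly 45 days with these inputs, in the same ballpark as the figure above; the exact number shifts with whatever netspace value you plug in.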
GPU miners have paid a lot more than that for similar returns, and the power and complexity of running lots of GPUs (or ASICs) ends up being far higher than running a Chia farm. In fact, the recommended approach to Chia farming is to get the plots set up using a high-end PC, and then connect all the storage to a Raspberry Pi afterwards for low-power farming. You could run around 50 10TB HDDs for the same amount of power as a single RTX 3080 mining Ethereum.
It’s important to note that it takes a decent amount of time to get a Chia farm up and running. If you have a server with a 64-core EPYC processor, 256GB of RAM, and at least 16TB of fast SSD storage, you could potentially create up to 64 plots at a time, at a rate of around six (give or take) hours per group of plots. That’s enough to create 256 plots per day, filling over 2.5 10TB HDDs with data. For a more typical PC with an 8-core CPU (e.g., Ryzen 7 5800X or Core i9-11900K), 32GB of RAM, and an enterprise SSD with at least 2.4TB of storage, doing eight concurrent plots should be feasible. The higher clocks on consumer CPUs probably mean you could do a group of plots in four hours, which works out to 48 plots per day, occupying about half of a 10TB HDD. That’s still a relatively fast ramp to a bunch of drives running a Chia farm, though.
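The plotting-throughput arithmetic for those two scenarios can be sketched as follows; the concurrency and batch-time figures are the rough estimates above, so the daily totals are approximate:

```python
# How fast plots accumulate under the two plotting scenarios described above
# (concurrency and hours-per-batch are rough estimates, not measurements).

PLOT_GIB = 101.4
GB_PER_GIB = 2**30 / 1e9   # ~1.074: plots are quoted in GiB, HDDs in decimal TB

def plots_per_day(concurrent: int, hours_per_batch: float) -> int:
    """Completed plots per day when batches run back to back."""
    return int(24 / hours_per_batch) * concurrent

for name, concurrent, hours in [("64-core EPYC server", 64, 6),
                                ("8-core desktop", 8, 4)]:
    n = plots_per_day(concurrent, hours)
    tb_written = n * PLOT_GIB * GB_PER_GIB / 1000   # decimal TB of plots/day
    print(f"{name}: {n} plots/day, ~{tb_written:.1f} TB of plots daily")
```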
In either case, the potential returns even with a price of $100 per XCH amount to hundreds of dollars per month. Obviously, that’s way too high of a return rate, so things will continue to change. Keep in mind that where a GPU can cost $15-$20 in power per month (depending on the price of electricity), a hard drive running 24/7 will only cost $0.35. So what’s a reasonable rate of return for filling up a hard drive or SSD and letting it sit, farming Chia? If we target $20 per month for a $250 10TB HDD, then either Chia’s netspace needs to balloon to around 60EiB, or the price needs to drop to around $16 per XCH — or more likely some combination of more netspace and lower prices.
In the meantime, don’t be surprised if storage prices shoot up. It was already starting to happen, but as with the GPU and other component shortages, it might be set to get a lot worse.
TeamGroup has announced the T-Create Expert PCIe 3.0 SSD, oriented towards both content creators and Chia farmers. The SSD carries the industry’s first 12-year limited warranty.
Available in 1TB and 2TB flavors, the T-Create Expert PCIe SSD boasts endurance ratings of 6,000 TBW and 12,000 TBW, respectively. Its performance, however, is limited to PCIe 3.0 x4 speeds. TeamGroup didn’t divulge which SSD controller and NAND are used inside the drive, though.
Regardless of capacity, the T-Create Expert PCIe SSD offers sequential read and write speeds of up to 3,400 MBps and 3,000 MBps, respectively. The drive’s random performance is rated at 180,000 IOPS for reads and 140,000 IOPS for writes. The T-Create Expert’s Chia farming performance is so far unproven; for reference, a single Chia plot can take up to 12 hours to complete, depending on the drive.
The T-Create Expert PCIe SSD’s greatest asset is obviously its durability, because performance-wise there are far faster drives on the market. Typically, a Chia plot requires between 1.6TB and 1.8TB of writes. In theory, the 1TB and 2TB models can create up to 3,333 and 6,666 plots, respectively, before hitting their write limits.
The last time we checked, each Chia plot was selling for $3.50. In theory, then, the 1TB drive could generate up to $11,665.50 in revenue and the 2TB up to $23,331. TeamGroup didn’t reveal pricing for the T-Create Expert PCIe SSD, though, so we can’t factor in the cost yet.
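The plot-count and revenue ceilings fall straight out of the endurance ratings. A minimal sketch, assuming the worst-case 1.8TB of writes per plot quoted above:

```python
# Plot-count and revenue ceilings implied by the T-Create Expert's TBW ratings.
# Assumes the worst-case ~1.8TB of writes per Chia plot quoted in the article.

WRITES_PER_PLOT_TB = 1.8
PLOT_PRICE_USD = 3.50   # the article's last price check

for capacity, tbw in [("1TB", 6_000), ("2TB", 12_000)]:
    max_plots = int(tbw / WRITES_PER_PLOT_TB)   # plots before the write limit
    print(f"{capacity}: up to {max_plots:,} plots, "
          f"~${max_plots * PLOT_PRICE_USD:,.2f} gross (before drive cost)")
```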
The FuzeDrive P200 is a QLC-based hybrid SSD that defies the norm through clever tiering technology that delivers higher endurance, but the excessive pricing isn’t for everyone.
For
+ Large static and dynamic SLC caches
+ Competitive performance
+ Software package
+ 5-year warranty
+ High endurance ratings
Against
– High cost
– Capacity trade-off for SLC cache
– Low sustained write speed
– Initial software configuration
– Lacks AES 256-bit encryption
Features and Specifications
The Enmotus FuzeDrive P200 SSD takes an unconventional approach to increasing SSD performance and extending lifespan, leveraging the power of AI to deliver up to 3.4 GBps and class-leading endurance. According to the company, artificial intelligence isn’t just about robots and predicting future business trends — it can also enhance your SSD and tune it to your usage patterns, thus unlocking more performance and endurance.
Enmotus builds the FuzeDrive P200 using commodity hardware but says the drive delivers more than six times the endurance of most QLC-based SSDs through its sophisticated AI-boosted software and tiering techniques. In fact, the 1.6TB drive is guaranteed to absorb an amazing 3.6 petabytes of write data throughout its warranty. The company’s FuzionX software also allows you to expand your storage volume up to 32TB by adding one more SSD or HDD. All of this costs about the same as a new Samsung 980 Pro with its faster PCIe interface, though, ultimately making this drive attractive only to a niche audience.
Innovative AI Storage
Traditional SSDs, like Sabrent’s Rocket Q, come with QLC flash that operates in a dynamic SLC mode. While this provides fast performance and high capacity, it has drawbacks that primarily manifest as low endurance.
However, QLC flash can operate either in the full 16-level, low-endurance QLC mode or in a high-endurance SLC mode, and the latter is what Enmotus’s FuzeDrive P200 SSD exploits. By operating Micron’s flash solely in high-endurance SLC mode, the flash’s endurance multiplies – its program-erase cycle rating increases from roughly 600-1,000 cycles to 30,000 cycles. The main reason is that in SLC mode, the flash can be programmed in just one pass, whereas QLC takes three or more passes to fine-tune the cell charge.
The 1.6TB FuzeDrive P200 comes with 2TB of raw flash, but not all of it is available to the user. This is somewhat similar to Intel’s Optane Memory H10 and soon-to-be-released H20, but instead of the complication of relying on two separate controllers and storage mediums, the P200 uses only one controller and one type of flash. The FuzeDrive leverages the advantages that both dynamic and high endurance SLC modes have to offer by splitting the device into two LBA zones. The first LBA range is the high endurance zone, and it sacrifices 512GB of the raw flash to provide 128GB of SLC goodness (4 bits QLC -> 1-bit SLC), but the user can’t access this area directly. The remaining QLC flash in the second LBA zone operates in dynamic SLC mode and is made available to the end user. The 900GB model comes with a smaller 24GB SLC cache.
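The capacity split works out neatly. A quick sketch of the arithmetic using the figures above (the exact reserved amounts are Enmotus’s design choices, so this only illustrates how the numbers fit together):

```python
# Where the FuzeDrive P200's 1.6TB usable figure comes from: 2TB of raw QLC,
# with 512GB of it converted to a 128GB SLC tier (4 QLC bits -> 1 SLC bit).
# Figures are from the article.

RAW_GB = 2_000
SLC_RESERVED_RAW_GB = 512
QLC_BITS_PER_CELL, SLC_BITS_PER_CELL = 4, 1

slc_tier_gb = SLC_RESERVED_RAW_GB * SLC_BITS_PER_CELL // QLC_BITS_PER_CELL
qlc_tier_gb = RAW_GB - SLC_RESERVED_RAW_GB
usable_gb = slc_tier_gb + qlc_tier_gb

print(f"SLC tier: {slc_tier_gb} GB, QLC tier: {qlc_tier_gb} GB")
print(f"Virtualized volume: ~{usable_gb} GB (marketed as 1.6TB)")
```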
The company’s intelligent AI NVMe driver virtualizes the zones into a single volume and relocates data to either portion after analyzing the I/O. In this tiering configuration, a large RAM-based table is set up in memory (roughly 100MB) to track I/O behavior across the whole storage device. Most active and write-intensive data is automatically directed to the SLC zone, and inactive data is moved to the QLC portion with minimal CPU overhead compared to caching techniques. Movements are done only in the background, and only one copy of the data exists. The NVMe driver manages the data placement, while the drive uses a special modified firmware to split it into two separate LBA zones.
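Enmotus hasn’t published its driver internals, so the following is purely an illustrative toy sketch of frequency-based tier placement (hot regions to the SLC zone, cold ones to QLC), not the company’s actual algorithm; the region granularity and counter table here stand in for the driver’s ~100MB in-RAM I/O table:

```python
# Toy sketch of access-frequency tiering: the hottest LBA regions are assigned
# to the fast/high-endurance SLC zone, everything else stays on QLC.
# Purely illustrative; not Enmotus's actual driver logic.

from collections import Counter

SLC_CAPACITY = 2           # how many regions fit in the SLC zone (tiny, for demo)
access_counts = Counter()  # per-region I/O counter, standing in for the RAM table

def record_io(region: int) -> None:
    access_counts[region] += 1

def placement() -> dict:
    """Assign the hottest regions to SLC; the rest stay on QLC."""
    hot = {r for r, _ in access_counts.most_common(SLC_CAPACITY)}
    return {r: ("SLC" if r in hot else "QLC") for r in access_counts}

for region in [0, 1, 1, 2, 2, 2, 3]:   # simulated I/O stream
    record_io(region)

print(placement())   # regions 1 and 2 land in SLC, 0 and 3 on QLC
```

A real tiering driver would migrate data in the background as the counters shift, keeping a single copy of each block, as described above.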
Specifications
Product: FuzeDrive P200 900GB / FuzeDrive P200 1.6TB
Pricing: $199.99 / $349.99
Form Factor: M.2 2280 (both)
Interface / Protocol: PCIe 3.0 x4 / NVMe 1.3 (both)
Controller: Phison PS5012-E12S (both)
DRAM: DDR3L (both)
Memory: Micron 96L QLC (both)
Sequential Read: 3,470 MBps (both)
Sequential Write: 2,000 MBps / 3,000 MBps
Random Read: 193,000 IOPS / 372,000 IOPS
Random Write: 394,000 IOPS / 402,000 IOPS
Endurance (TBW): 750 TB / 3,600 TB
Part Number: P200-900/24 / P200-1600/128
Warranty: 5 Years (both)
Enmotus’s FuzeDrive P200 comes in 900GB and 1.6TB capacities. Both fetch a pretty penny, priced at $200 and $350, respectively, roughly matching the price of the fastest Gen4 SSDs on the market. The FuzeDrive P200 comes with a Gen3 NVMe SSD controller, so Enmotus rates it for up to 3,470 / 3,000 MBps of sequential read/write throughput and up to 372,000 / 402,000 random read/write IOPS.
But, while Samsung’s 980 Pro may be faster, it only offers one-third the endurance of the P200. Enmotus rates the 900GB model to handle up to 750 TB of writes during its five-year warranty. The 1.6TB model is much more robust — it can handle up to 3.6 petabytes of writes within its warranty, meaning the P200 comes backed with the highest endurance rating we’ve seen for a QLC SSD of this capacity.
Software and Accessories
[Gallery: 6 images]
Enmotus provides Fuzion, a utility that monitors the SSD and enables other maintenance tasks, like updating firmware or secure erasing the SSD. The software is available from the Microsoft Store and will automatically install and update the driver for the device. The company also provides the Enmotus-branded Macrium Reflect Cloning Software to help migrate data to the new SSD, as well as the FuzionX software for more complex tiering capability.
When adding a third device into the mix, such as a high-capacity SATA SSD or HDD (NVMe support is under development), you can use the FuzionX software to integrate it into the P200’s virtual volume. The SLC portion of the P200 will retain the volume’s hot data and the QLC portion the warm data, while the HDD stores cold data.
A Closer Look
[Gallery: 3 images]
Enmotus’s FuzeDrive P200 SSD comes in an M.2 2280 form factor, and the 1.6TB model (with 2TB of raw flash) is double-sided solely to place a second DRAM IC on the back of the PCB. The company uses a copper heat spreader label to aid heat dissipation. The controller supports ASPM, ASPT, and the L1.2 sleep mode to reduce power when the drive isn’t busy.
[Gallery: 2 images]
As mentioned, Enmotus builds the FuzeDrive P200 with commodity hardware – Phison’s mainstream E12S PCIe 3.0 x4 NVMe 1.3-compliant SSD controller and Micron QLC flash – but the firmware is specifically designed to split the drive into two distinct zones: one high endurance, one low endurance. The controller has dual Arm Cortex R5 CPUs clocked at 666MHz and interfaces with two Nanya 4Gb DDR3L DRAM ICs at 1600 MHz for fast access to the FTL mapping tables.
There are four NAND packages on our 2TB sample, each containing four 1Tb Micron 96-layer QLC dies. For responsive random performance and solid performance in mixed workloads, the flash has a four-plane architecture and interfaces with the eight-channel controller at speeds of up to 667 MTps. To ensure reliable operation and maintain data integrity over time, the controller implements Phison’s third-generation LDPC ECC and RAID ECC, along with a DDR ECC engine and end-to-end data path protection.
Acer is a world-leading manufacturer of computer hardware, founded in 1976 in Taiwan. The company is best known for its laptops, desktop PCs, and monitors. It has now branched into the field of solid-state storage with OEM partner BIWIN Storage, which also helps HP produce its SSDs.
The FA100 is part of the Acer SSD lineup, which was announced last week. Designed as a cost-effective, entry-level M.2 NVMe SSD, the FA100 offers performance greater than traditional SATA drives and is much faster than any mechanical HDD too, of course. Acer has built the FA100 using the Innogrit IG5216 controller paired with 3D TLC NAND flash. A DRAM cache chip is not included, probably to save on cost.
The Acer FA100 is available in capacities of 128 GB, 256 GB, 512 GB, 1 TB, and 2 TB. Endurance for these models is set at 70 TBW, 150 TBW, 300 TBW, 600 TBW, and 1,200 TBW, respectively. Apart from the $125 1 TB FA100 in this review, pricing is not yet known. Acer includes a five-year warranty with the FA100.
Specifications: Acer FA100 1 TB
Brand: Acer
Model: FA100-1TB
Capacity: 1024 GB (953 GB usable), no additional overprovisioning
Although Thermaltake’s The Tower 100 isn’t the most practical case, it’s joyfully weird and doesn’t cost much.
For
+ Unique new case design
+ Showpiece from all angles
+ Easily accessible top IO
+ Reasonable thermal performance
+ Affordable
Against
– Cheap build quality
– Lacking cable management
– Impractical build process
– Limited cooling potential
Features and Specifications
Thermaltake’s The Tower 100 is a new ITX chassis that comes with a totally different design from what we’re used to. It places the motherboard along the back wall of the chassis, GPU directly into the PCIe slot, rear IO at the top under a cover, and a large ATX power supply in the basement. It’s bigger than most ITX cases, but it’s got a unique design that may appeal to those who want to show off their hardware, thanks to the glass on three sides.
But although it’s a small showcase, it does limit practicality somewhat by favoring form over function. Without further ado, let’s dig a bit deeper and find out if the case is good enough for a spot on our Best PC Cases list.
Specifications
Type: Mini-ITX
Motherboard Support: Mini-ITX
Dimensions (HxWxD): 18.2 x 10.5 x 10.5 inches (462.8 x 266 x 266 mm)
Max GPU Length: 13.0 inches (330 mm)
CPU Cooler Height: 7.5 inches (190 mm)
Max PSU Size: ATX, up to 7.1 inches (180 mm)
External Bays: ✗
Internal Bays: 2x 2.5-inch
Expansion Slots: 2x
Front I/O: 2x USB 3.0, USB-C, Headphone, Mic
Other: ✗
Front Fans: ✗
Rear Fans: 1x 120mm
Top Fans: 1x 120mm
Bottom Fans: ✗
Side Fans: ✗
RGB: No
Damping: No
Features
[Gallery: 3 images]
Touring around the outside of the chassis, two things that are immediately clear are the lavish amount of glass that’s included for an ITX case, and the ventilation. Glass doesn’t do many favors for cooling, but ventilation does, and from the looks of it, there’s plenty to be found here.
The material quality isn’t the most stunning, but given that this chassis carries an MSRP of just $109, it’s more than adequate and nothing to be upset about. Only the shroud around the top of the chassis is made from cheap plastic, though it is color-matched quite well to the rest of the case.
[Gallery: 2 images]
Front IO comprises two USB 3.0 ports, a USB Type-C port, dedicated microphone and headphone jacks, and of course power and reset switches. This is all very complete, and much appreciated at the case’s price.
Meanwhile, air filters are provided at every possible intake: the side and front vents, the top and rear exhausts, and the bottom PSU intake. Of course, that’s a good thing, and there’s a good reason for it: with no dedicated spots for fan-assisted air intake, every corner better have filtration or you’ll end up with significant dust buildup.
Opening Up the Tower 100
Opening up The Tower 100 is a bit of a tedious process, but let’s start with the teardown to reveal the case’s internals. First, you pop off the top cover by pressing down the back to click it out, revealing access to the top-mounted rear IO location. You’ll also spot an exhaust fan here, along with all the cabling for the front IO.
Then, you have to remove five screws to remove the plastic shroud. It then comes right off, and you can remove the glass panels.
That leaves the bottom vents, which are removed by unscrewing them from below. The thumbscrews here are quite tight, so you’ll need a screwdriver to get them off. Personally, I would have preferred the front and sides to be a single panel, with the top shroud attached with clips. As designed, it’s quite a bit of work to get the side panels off – a lot more than on most ATX cases.
The rear panel comes off by removing four thumbscrews – again, bring your screwdriver.
And with that, we have the chassis stripped down to its bare essentials.
The only remaining thing to mention about the internals of the chassis is the dual SSD brackets on the right side, where you can mount your 2.5-inch drives somewhat on display.
A Word on Hardware Compatibility
This chassis is primarily aimed at offering a lot of GPU space and compatibility. As such, fitting large GPUs up to 13 inches (330mm) is a breeze, but you won’t get a lot of CPU cooling potential. The biggest AIO that fits in here is a 120mm unit, which isn’t much. For gaming, this will be fine, but if you’re also running a very powerful Intel CPU and doing a lot of CPU-intensive tasks, you may want to look elsewhere.
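Those clearance limits are easy to sanity-check before buying parts. Below is a minimal sketch that compares component dimensions against the Tower 100’s published limits from the spec table above; the example measurements are hypothetical placeholders, not recommendations.

```python
# Quick compatibility check against The Tower 100's published clearances.
# The limits come from the spec table above; example parts are hypothetical.

CASE_LIMITS = {
    "max_gpu_length_mm": 330,     # 13.0 inches
    "max_cooler_height_mm": 190,  # 7.5 inches
    "max_psu_length_mm": 180,     # ATX, up to 7.1 inches
}

def fits(gpu_length_mm: float, cooler_height_mm: float, psu_length_mm: float) -> list[str]:
    """Return a list of clearance problems; an empty list means everything fits."""
    problems = []
    if gpu_length_mm > CASE_LIMITS["max_gpu_length_mm"]:
        problems.append(f"GPU too long: {gpu_length_mm} mm > {CASE_LIMITS['max_gpu_length_mm']} mm")
    if cooler_height_mm > CASE_LIMITS["max_cooler_height_mm"]:
        problems.append(f"Cooler too tall: {cooler_height_mm} mm > {CASE_LIMITS['max_cooler_height_mm']} mm")
    if psu_length_mm > CASE_LIMITS["max_psu_length_mm"]:
        problems.append(f"PSU too long: {psu_length_mm} mm > {CASE_LIMITS['max_psu_length_mm']} mm")
    return problems

# Example: a 320 mm GPU, a 160 mm air cooler, and a 160 mm ATX PSU all fit.
print(fits(320, 160, 160))  # -> []
```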
Minisforum has introduced its new ultra-compact form-factor (UCFF) desktop PC that combines miniature dimensions with decent performance, rich connectivity, and upgradeability. The TL50 system packs Intel’s 11th Generation Core ‘Tiger Lake’ processor with built-in Xe Graphics and features two 2.5GbE connectors, a Thunderbolt 4 port, and three display outputs.
The PC is based on Intel’s quad-core Core i5-1135G7 processor, paired with 12GB of LPDDR4-3200/3733 memory and a 512GB M.2-2280 SSD with a PCIe interface. The CPU is cooled by a heatsink and fan, so the 28W chip should be able to spend a fair amount of time in Turbo mode.
While Intel’s Tiger Lake platform enables PC makers to build very feature-rich computers with a very small footprint, only a few UCFF desktops featuring these processors take full advantage of their capabilities. Minisforum’s TL50, which measures 5.9 × 5.9 × 2.2 inches, is a prime example.
Normally, miniature desktops face constraints when it comes to graphics performance and storage capacity, but the Minisforum TL50 can be equipped with two 2.5-inch HDDs or SSDs as well as an external graphics (eGFX) enclosure via its Thunderbolt 4 port.
The TL50’s connectivity department looks quite solid, including a Wi-Fi 6 + Bluetooth module, two 2.5GbE ports, three display outputs (DisplayPort 1.4, HDMI 2.0, and Thunderbolt 4), six USB Type-A connectors (four USB 3.0, two USB 2.0), audio input and output, and one USB Type-C port for power.
The Minisforum TL50 is currently available for pre-order through the Japanese crowdfunding site Makuake starting at $651, reports Liliputing. The company plans to ship systems by the end of July, though pricing will naturally rise once the crowdfunding campaign ends.
G-Technology’s ArmorLock Encrypted NVMe SSD is ready for almost any condition or abuse and comes with secure, always-on 256-bit AES-XTS hardware encryption.
For
+ Competitive 10 Gbps performance
+ AES-XTS 256-bit hardware encryption
+ Rugged design
+ 5-year warranty
Against
– Single 2TB capacity
– Bulky size
– Costly
Features and Specifications
By leveraging your phone’s biometrics, such as Touch ID or Face ID, G-Technology’s ArmorLock Encrypted NVMe SSD makes passwords a thing of the past, removing the most common inconvenience in data security — entering a password. The ArmorLock Encrypted NVMe SSD is a secure portable storage solution with fast, consistent performance of up to 1 GBps of sequential read/write throughput that keeps your data safe with always-on 256-bit AES-XTS hardware encryption. Plus, it carries robust abuse ratings that ensure it will maintain reliability in the toughest conditions, perfect for those adventurous types.
Data security is becoming more important for a large swath of users, from creators in the media and entertainment industry to professionals in finance, government, healthcare, IT enterprise, and legal fields. Password protection backed by AES 256-bit encryption is the norm for those who need to ensure the data they have remains locked down and secure. Ranging from a simple password manager launching within the host OS to alphanumeric keypads with PIN protection and even fingerprint scanners, we have seen quite a few ways of unlocking password-protected storage devices over the years. Unlocking your secure storage with only a phone app seems convenient; let’s put it to the test.
Specifications
Product: ArmorLock Encrypted NVMe SSD 2TB
Pricing: $499.99
Capacity (User / Raw):
Interface / Protocol: USB-C / USB 3.2 Gen 2
Included: USB Type-C to Type-C cable & USB Type-C to Type-A cable
Sequential Read: 1,000 MBps
Sequential Write: 1,000 MBps
Interface Controller: ASMedia ASM2362
NAND Controller: WD Architecture
DRAM: DDR4
Storage Media: WD 96L TLC
Power: Bus-powered
Endurance: IP67 water-dust resistant; 3-meter drop protection; 1,000 lbs. crush resistant
Security: AES-XTS 256-bit hardware encryption
Dimensions (L x W x H): 134 x 82 x 19 mm
Weight: 200 g
Part Number: 0G10484-1
Warranty: 5 years
G-Technology’s ArmorLock Encrypted NVMe SSD is available in just one 2TB model, priced at $499.99. The SSD delivers up to 1,000 MBps of sequential throughput, and unlike many consumer-grade portable SSDs, its write performance won’t degrade significantly below the rated speed under sustained workloads. Of course, that assumes it is connected to a compliant USB 3.2 Gen 2 port. The company backs it with a long five-year warranty for peace of mind, too.
Software & Accessories
G-Technology includes two twelve-inch USB cables with the ArmorLock Encrypted NVMe SSD: one Type-C to Type-C and one Type-C to Type-A.
ArmorLock App
You configure and manage the device through the company’s ArmorLock app, available on both the App Store and Google Play Store. Not only does the app enable firmware updates, formatting, and even securely erasing the device, but it can also track the SSD’s last known location and simplifies multi-user and multi-drive management. You cannot unlock the ArmorLock drive by connecting it directly to a PC — you have to use the app. As such, at installation the app creates a recovery key that you store separately. This key lets you set up the app on another phone if you lose your phone or uninstall the application, so you can still unlock your storage device.
A Closer Look
G-Technology’s ArmorLock Encrypted NVMe SSD features a fairly rugged design and an activity indicator light. It carries an IP67 dust and water resistance rating and can handle a three-meter drop, and the company states that it boasts 1000-pound crush resistance. Its thick finned aluminum core aids with heat dissipation, but it comes at the expense of size.
While the device is pocketable, it is very large at 134 x 82 x 19 mm, and the plastic casing gives it a clunky and toy-like feel in the hand. We were even able to twist the casing, which ironically helped during our disassembly process. It is also fairly heavy, weighing roughly 200 grams, which is two to three times heavier than many portable SSDs.
G-Technology’s ArmorLock uses a Bluetooth Low Energy module from Raytac, based on Nordic’s nRF52840 SoC, to handle communication between the drive and the app. When plugged in, the ArmorLock Encrypted NVMe SSD’s LED indicator shows it is locked until you unlock the device with the app.
The ArmorLock uses 256-bit AES-XTS hardware encryption, which provides stronger data protection by using two AES keys instead of just one, along with NIST P-256 elliptic curve-based key management designed to resist side-channel attacks, ensuring data stored on the device remains secure.
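The two-key structure of XTS mode can be illustrated with the `cryptography` Python library, where AES-256-XTS takes a single 64-byte key (two concatenated 256-bit keys) plus a 16-byte tweak identifying the data unit — on an SSD, typically the sector number. This is only a sketch of the mode itself, not of ArmorLock’s firmware:

```python
import os
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

# AES-256-XTS uses two 256-bit keys: one encrypts the data, the other
# drives the per-sector "tweak" so identical plaintexts encrypt
# differently in different sectors.
key = os.urandom(64)                          # 2 x 32-byte keys, concatenated
sector_number = 42                            # hypothetical data unit
tweak = sector_number.to_bytes(16, "little")  # 16-byte data-unit tweak

plaintext = b"sixteen-byte-sector-sized-data!!"  # XTS needs at least one AES block

enc = Cipher(algorithms.AES(key), modes.XTS(tweak)).encryptor()
ciphertext = enc.update(plaintext) + enc.finalize()

dec = Cipher(algorithms.AES(key), modes.XTS(tweak)).decryptor()
assert dec.update(ciphertext) + dec.finalize() == plaintext
```

Note that XTS is length-preserving: the ciphertext is exactly as long as the plaintext, which is why it is the standard choice for sector-level disk encryption.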
An ASMedia ASM2362 USB 3.2 Gen 2 (10 Gbps) to PCIe 3.0 x2 bridge chip manages host-to-SSD communication. G-Technology outfitted the ArmorLock with a WD SN730, a PCIe 3.0 x4 NVMe 1.3-compliant SSD. It is similar to the SN750 but positioned as a client (OEM) solution, and it uses BiCS4 96-layer TLC flash. It features a multi-core, DRAM-based architecture and offers plenty of speed to saturate the bridge chip’s capability.
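The rated 1,000 MBps figure lines up with back-of-the-envelope link math: USB 3.2 Gen 2’s 10 Gbps line rate uses 128b/132b encoding, leaving roughly 1.2 GBps of raw payload bandwidth before protocol overhead, while the PCIe 3.0 x2 back end offers somewhat more. A quick sketch of the arithmetic:

```python
# Back-of-the-envelope bandwidth math for the ASM2362 bridge.

# USB 3.2 Gen 2: 10 Gbps line rate, 128b/132b encoding.
usb_line_rate_gbps = 10
usb_payload_mbps = usb_line_rate_gbps * 1000 * (128 / 132) / 8  # megabytes/s
print(f"USB 3.2 Gen 2 raw payload: {usb_payload_mbps:.0f} MBps")  # ~1212 MBps

# PCIe 3.0: 8 GT/s per lane, 128b/130b encoding, two lanes on the bridge.
pcie_payload_mbps = 2 * 8 * 1000 * (128 / 130) / 8
print(f"PCIe 3.0 x2 raw payload: {pcie_payload_mbps:.0f} MBps")  # ~1969 MBps

# After USB/UASP protocol overhead, ~1,000 MBps of real-world throughput
# is about what a 10 Gbps bridge delivers, so the USB side -- not the
# PCIe 3.0 x2 link or the x4 SSD behind it -- is the bottleneck.
```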
MORE: Best SSDs
MORE: How We Test HDDs And SSDs
MORE: All SSD Content