ASRock has cooked up a new motherboard for cryptocurrency miners (via momomo_us). The H510 Pro BTC+, which arrives with the LGA1200 socket and H510 chipset, is ready to power your mining operations with the latest Comet Lake or Rocket Lake processors.
The H510 PRO BTC+ measures 50.1 x 22.4cm (19.7 x 8.8 inches), so the motherboard doesn’t adhere to an official form factor. That shouldn’t matter anyway, since the H510 PRO BTC+ is more than likely going onto a rack rather than inside a conventional computer case.
The motherboard’s headline feature is its six full-length PCIe 3.0 expansion slots. However, only the primary slot runs at x16 speeds, while the remaining slots are electrically limited to x1. The motherboard allows you to connect up to six graphics cards to mine cryptocurrency, and an additional USB mining port bumps the number up to seven.
The steel expansion slots on the H510 PRO BTC+ make sure that your multiple graphics cards sit neat and tight on the motherboard. ASRock equipped the H510 PRO BTC+ with not one but three 24-pin power connectors and four Molex power connectors, so the motherboard gets all the juice it needs to feed each and every graphics card.
If we leave the expansion slots aside, the H510 PRO BTC+ is a rather austere motherboard. It features a very modest four-phase power delivery subsystem, although it does boast 50A power chokes. It only has one DDR4 memory slot, limiting you to 32GB of total memory and speeds up to DDR4-3200 on Rocket Lake or DDR4-2933 on Comet Lake. However, there is support for ECC memory modules if that’s your thing.
You only have two options for storage. The SATA III port will accept any ordinary hard drive or SSD, while the M.2 slot houses SATA-based drives up to 110mm in length. There’s no audio chip onboard the H510 PRO BTC+, so you’ll have to rely on the HDMI 1.4 port for audio output.
The H510 PRO BTC+ provides a single Gigabit Ethernet port, which is based on the Intel I219V controller. The rear panel also holds a combo PS/2 port, one HDMI 1.4 port, two USB 2.0 ports and two USB 3.2 Gen 1 ports. One USB 2.0 header is readily available to deliver two more USB 2.0 ports.
Newegg has the H510 PRO BTC+ up for pre-order at $279.99. The H510 PRO BTC+ officially launches on July 18, and purchase is limited to two motherboards per customer.
Team Group is a well-known Taiwanese hardware manufacturer with a long history of catering to the needs of enthusiasts and gamers from all over the globe. Their lineup includes DRAM memory and solid-state drives, and they also offer various memory cards and USB thumb drives.
Today, we are reviewing the Team Group T-Force Treasure Touch portable SSD, which includes an adjustable RGB element that can be controlled via a “touch” interface—as the product name suggests. A colored RGB lighting strip runs along one edge of the drive and lights up in various colors and combinations that you can control. Under the hood, we found a fully fledged SATA SSD using a Silicon Motion SM2258H controller, paired with Samsung 64-layer TLC flash and a DRAM cache chip from Hynix. In terms of connectivity, the T-Force Treasure Touch uses a USB-C connector supporting USB 3.2 Gen 2 (formerly known as USB 3.1 Gen 2), which offers speeds up to 10 Gbps.
We review the Team Group T-Force Treasure Touch in the 1 TB variant, which retails for $150. No other capacity is available, and the warranty is set to three years.
In the latest installment of the MSI Insider show, MSI has revealed the brand’s Z590 Pro 12VO motherboard that employs Intel’s 10-pin ATX12VO power connector. Besides the motherboard’s feature set, the vendor also shared the benefits of the ATX12VO power connector.
Despite Intel promoting the ATX12VO power connector as far back as last year, the standard hasn’t really caught on. A handful of motherboards on the market utilize the ATX12VO specification, but it’s far from mainstream. As its name implies, ATX12VO uses only the 12V rail. Therefore, motherboards have to include buck converters to step the voltage down to 5V and 3.3V for hardware that still relies on one of those rails.
In addition to improving power efficiency, the ATX12VO power connector is also smaller since it only comes with 10 pins. This is beneficial in compact systems since the footprint is smaller. However, the ATX12VO power connector has yet to prove its worth on ATX motherboards.
Take MSI’s Z590 Pro 12VO, for example. While the motherboard doesn’t have that chunky 24-pin power connector, it has gained a 6-pin PCIe power connector and up to three additional 4-pin power connectors. Evidently, the ATX12VO standard does little for cable clutter in a full-sized desktop system, but again, its advantages reside in power saving.
MSI Z590 Pro 12VO Power Consumption
The Z590 Pro WiFi is the mainstream counterpart of the Z590 Pro 12VO, so naturally, the MSI representatives used the former for comparison. They removed the wireless module from the Z590 Pro WiFi so that both motherboards were on a level playing field. The hosts employed the same Core i9-11900K (Rocket Lake) processor, memory and SSD for both tests. There were a lot of fluctuations in the measurements and the tests were short, so take the results with a grain of salt. For easy comprehension, we’ve rounded off the values in the table below.
| Measurement | Z590 Pro WiFi | Z590 Pro 12VO | Power Reduction |
| --- | --- | --- | --- |
| System Idle Consumption | 42W | 38W | 10% |
| Average CPU Package Power | 17W | 14W | 18% |
| System Idle Consumption (C10) | N/A | 24W | N/A |
| Average CPU Package Power (C10) | N/A | 8W | N/A |
The Z590 Pro 12VO drew 10% less system idle power consumption than the Z590 Pro WiFi. There was also an 18% reduction in average processor package power.
The MSI representative went into the Z590 Pro 12VO’s BIOS and changed the “Package C State Limit” option from Auto to C10. If you’re not familiar with C-states, they are low-power modes that a processor can enter when it’s idling. C10 is the deepest state, wherein the chip effectively turns off.
With C10 enabled, the Z590 Pro 12VO dropped its system idle power consumption from 38W to 24W, a 37% decrease. The average processor package power, on the other hand, decreased from 14W to 8W, representing a 43% power saving.
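Those percentages follow directly from the wattages in the table. Here's a quick back-of-the-envelope check — a minimal sketch using the rounded figures from above, with a helper function of our own invention:

```python
# Sanity-check MSI's reported power reductions using the rounded
# wattages from the table above.
def reduction(before_w: float, after_w: float) -> float:
    """Percentage drop going from before_w to after_w."""
    return (before_w - after_w) / before_w * 100

print(f"Idle, 12VO vs. WiFi:    {reduction(42, 38):.0f}%")  # ~10%
print(f"Package, 12VO vs. WiFi: {reduction(17, 14):.0f}%")  # ~18%
print(f"Idle, C10 vs. Auto:     {reduction(38, 24):.0f}%")  # ~37%
print(f"Package, C10 vs. Auto:  {reduction(14, 8):.0f}%")   # ~43%
```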
OEMs are held to stricter environmental standards, which is why you’ll most likely find the ATX12VO power connector inside a pre-built system. DIY users, on the other hand, don’t have to abide by those regulations.
The ATX12VO standard only shines in idle or low-load scenarios, which raises the question of how many of us leave our systems idling for prolonged periods of time. Only time will tell if the ATX12VO ever becomes a widely accepted standard. With Intel allegedly giving the specification a hard push with its next-generation Alder Lake-S processors, the 10-pin power connector may become more common on upcoming LGA1700 motherboards.
As one of the world leaders in digital technology, Samsung pretty much makes any type of electronic device you can think of. Their products are used by millions of people around the world.
Being a leader in DRAM and flash memory production, it comes as no surprise that they are also a huge player in the SSD business. Their EVO and PRO Series SSDs are highly popular among upgraders, system builders, and enthusiasts.
The Samsung 980 non-Pro was announced at the end of March 2021 and made waves because it is a DRAM-less SSD, a design choice usually reserved for value drives that don’t chase maximum performance, yet Samsung picked the “980” name, which echoes its flagship “980 Pro.” Under the hood, the Samsung 980 in today’s review uses a relatively new controller called “Pablo,” or S4LR033—a 4-channel PCIe Gen 3 controller design we’ve seen on some external Samsung SSDs before. The flash chips are 128-layer 3D TLC, the same as on the Samsung 980 Pro. As mentioned before, a DRAM chip is not present, which is a cost-optimization measure but has the drawback of reduced random write performance.
The Samsung 980 comes in capacities of 250 GB ($55), 500 GB ($60), and 1 TB ($140). Endurance for these models is set to 150 TBW, 300 TBW, and 600 TBW, respectively. Samsung includes a five-year warranty with the 980 non-Pro SSD.
More than six months after its initial debut, the PlayStation 5 remains as elusive as ever, and Sony expects console shortages to stretch into 2022. However, at 5:15PM ET / 2:15PM PT today, both the PS5 and PS5 Digital Edition will be available at Sony’s store. You can head to that site right now, and the page will automatically toss you into the queue to (hopefully) secure a PS5 from the batch being released into the wild today.
Sony says that you don’t need to refresh the page, but make sure that you stick around once the queue begins because it’ll likely ask you at some point to verify that you’re still present, or else you might be booted from the queue. If it’s your first time, welcome to the needless drama involved with buying a PS5.
So, what’s the difference between the two models?
The standard PS5 ($499.99) and the Digital Edition ($399.99) are nearly identical, aside from the obvious. The standard console features a UHD Blu-ray disc drive, whereas the Digital Edition does not, meaning you’ll need to download or stream anything you want to watch or play. The Digital Edition also features a slimmer, lighter build because of this; however, keep in mind that the size difference is slight. Both consoles sport an 825GB SSD.
If you do manage to purchase a PS5, there are a few games and additional accessories we suggest you pick up, including Sony’s Pulse 3D Headset, the forthcoming midnight black DualSense controller, and the best PS5 exclusive released thus far: Returnal.
In addition to the accessories above, we also recommend buying an annual membership to PlayStation Plus. Doing so gives you access to online multiplayer, a selection of free monthly titles, and the PlayStation Plus Collection, which allows you to play a host of PlayStation 4 classics on your PS5 at no additional cost.
One of Western Digital’s Black drives, the SN850, significantly underperforms in write speeds on X570 chipset motherboards when the SSD is installed in a chipset-connected M.2 slot, according to a report from ComputerBase. The company told the site it’s looking into the matter.
This strange situation began when multiple people complained about performance results on tech forums, but it wasn’t until ComputerBase tested the SSD for themselves that the issue really became apparent.
Modern motherboards are equipped with a generous number of M.2 slots so consumers can use multiple SSDs. Due to the limited PCIe lanes on processors, not every M.2 slot communicates directly with the chip; some are connected to the chipset instead. In the case of AMD’s X570 motherboards, it’s public knowledge that there is a performance disparity between chipset- and processor-based M.2 slots, due to the higher latency of the former.
The difference in performance between an M.2 slot connected to the Ryzen processor and one connected to the X570 chipset is typically less than 10%, and PCIe 3.0 SSDs are unaffected. The WD Black SN850, however, appears to suffer a major performance hit when housed in an M.2 slot that isn’t linked to the Ryzen processor.
For some reason, this handicap only seems to affect this particular SSD, and only on the chipset lanes. Install the SSD in the CPU-connected M.2 slot and there is no performance penalty.
| CrystalDiskMark 8.0.1 (writes, MBps) | WD Black SN850 1TB via CPU | WD Black SN850 1TB via X570 | Performance Loss |
| --- | --- | --- | --- |
| SEQ1M Q8T1 | 5,254.8 | 3,247.8 | 38.2% |
| SEQ1M Q1T1 | 5,255 | 2,972 | 43.4% |
| RND4K Q32T1 | 652.8 | 660.4 | 1.2% |
| RND4K Q1T1 | 250 | 217.8 | 12.9% |
When connected to the chipset M.2 slot, the SN850’s write performance takes a massive hit. ComputerBase measured sequential write speeds of 5,254.8 MBps with the SSD in the CPU-connected M.2 slot, but in the chipset-connected slot that figure dropped to 3,247.8 MBps, a 38% loss, with the Q1T1 sequential write falling by 43%.
Apparently, sequential workloads were the most affected. The delta between random workloads, on the other hand, was less than 13%.
| CrystalDiskMark 8.0.1 (reads, MBps) | WD Black SN850 1TB via CPU | WD Black SN850 1TB via X570 | Performance Loss |
| --- | --- | --- | --- |
| SEQ1M Q8T1 | 7,067.5 | 6,304.2 | 10% |
| SEQ1M Q1T1 | 4,375.6 | 4,008.5 | 8.4% |
| RND4K Q32T1 | 705.8 | 712.8 | 1% |
| RND4K Q1T1 | 83.2 | 81.1 | 2.5% |
The nature of the M.2 slot didn’t have a significant impact on the WD Black SN850’s read performance. There was still a performance loss, but it was only around 10%, which is within the expected margin. It would seem that only the SN850’s write performance suffered a major drop.
When ComputerBase tested other SSDs, performance only dropped by around 10% when using the chipset lanes. A minor drop like this is normal, since data has to make an extra hop through the chipset, which adds latency.
But a roughly 40% drop in sequential write speeds is not normal at all for the SN850. Western Digital is actively looking into the matter right now, so we should know more about this situation sooner rather than later. We suspect the chipset is downgrading the SN850 to PCIe 3.0 operation, as roughly 3,200 MBps is a very common ceiling among Gen 3.0 SSDs.
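That suspicion is easy to sanity-check with the standard PCIe link math. The sketch below is our own illustration (not anything from ComputerBase or Western Digital); real drives lose a bit more to protocol overhead, which is why Gen 3 SSDs top out in the low-to-mid 3,000 MBps range:

```python
# Theoretical PCIe link bandwidth: per-lane rate x lanes x encoding
# efficiency (128b/130b for Gen 3/4), converted from gigabits to GB/s.
def pcie_gbps(gt_per_s: float, lanes: int = 4, encoding: float = 128 / 130) -> float:
    return gt_per_s * lanes * encoding / 8

print(f"PCIe 3.0 x4: ~{pcie_gbps(8.0):.2f} GB/s")   # ~3.94 GB/s theoretical
print(f"PCIe 4.0 x4: ~{pcie_gbps(16.0):.2f} GB/s")  # ~7.88 GB/s theoretical
# A chipset slot stuck at Gen 3 would cap the SN850 right around the
# 3,247 MBps that ComputerBase measured, once protocol overhead is added.
```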
The Inland Performance Plus offers up very fast Gen 4 performance at a lower price than its competition, making it a compelling value for those on the hunt for a new high-performance M.2 NVMe SSD.
For
+ Appealing aesthetics
+ Competitive performance
+ 5-year warranty
+ Keeps cool under most workloads
+ Heatsink is easily removed
Against
– Lacks AES 256-bit encryption
– Lacks supporting software
Features and Specifications
Inland’s Performance Plus is a high-performance PCIe 4.0 x4 M.2 NVMe SSD that rivals the best SSDs you can buy, but at a cheaper price point. Plus, it comes with a huge heat sink to keep this SSD cool under intensive workloads. You might not recognize the Inland brand, but it’s been a staple at Micro Center for years, and is available via Amazon as well.
Inland’s Performance Plus is one of a few of the company’s recent speedy SSDs we have slated for review. Many (if not all) of Inland’s SSDs look to be powered by Phison-branded SSD controllers, which gives us an idea of what to expect when it comes to performance and reliability. While the brand isn’t as large as, say, Samsung or Crucial, with the help of Phison, the company manages to remain surprisingly competitive in the storage arena against many much larger rivals.
Hardware-wise, Inland’s Performance Plus is similar in design to the Gigabyte Aorus Gen4 7000s, Corsair MP600 Pro, and Sabrent Rocket 4 Plus. It leverages the same E18 NVMe SSD controller and Micron’s 96L TLC flash as these alternatives, along with a sleek heatsink, but it undercuts them in price in most cases. That makes the Inland Performance Plus a solid value for those on the hunt for a fast Gen4 SSD.
Specifications
| Product | 1TB | 2TB |
| --- | --- | --- |
| Pricing | $189.99 | $399.99 |
| Capacity (User / Raw) | 1000GB / 1024GB | 2000GB / 2048GB |
| Form Factor | M.2 2280 | M.2 2280 |
| Interface / Protocol | PCIe 4.0 x4 / NVMe 1.4 | PCIe 4.0 x4 / NVMe 1.4 |
| Controller | Phison PS5018-E18 | Phison PS5018-E18 |
| DRAM | DDR4 | DDR4 |
| Memory | Micron 96L TLC | Micron 96L TLC |
| Sequential Read | 7,000 MBps | 7,000 MBps |
| Sequential Write | 5,500 MBps | 6,850 MBps |
| Random Read | 350,000 IOPS | 650,000 IOPS |
| Random Write | 700,000 IOPS | 700,000 IOPS |
| Security | N/A | N/A |
| Endurance (TBW) | 700 TB | 1,400 TB |
| Part Number | 1TB NVME PERF | 2TB NVME PERF |
| Warranty | 5-Years | 5-Years |
Inland offers the Performance Plus in 1TB and 2TB capacities, priced at $190 and $400, respectively. In terms of warranty coverage, Inland backs the Performance Plus with a five-year warranty or up to 700TB of writes per 1TB in capacity, whichever comes first.
Each capacity can dish out up to 7 GBps in read performance, but the two differ in write potential. The 1TB model can write at up to 5.5 GBps, while the roomier 2TB model can sustain writes at up to 6.85 GBps thanks to having double the number of NAND dies. Additionally, random read performance scales much higher on the 2TB model: the 1TB Performance Plus is rated for up to 350,000/700,000 random read/write IOPS, while the 2TB model can manage up to 650,000/700,000 random read/write IOPS.
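To put the endurance ratings in perspective, here's a minimal sketch (the function name is our own) converting the TBW figures into drive writes per day over the five-year warranty:

```python
# Convert TBW endurance ratings into drive writes per day (DWPD)
# over the five-year warranty period.
def dwpd(tbw_tb: float, capacity_tb: float, years: int = 5) -> float:
    return tbw_tb / (years * 365 * capacity_tb)

for capacity_tb, tbw_tb in ((1, 700), (2, 1400)):
    print(f"{capacity_tb}TB model: {dwpd(tbw_tb, capacity_tb):.2f} DWPD")
# Both capacities work out to ~0.38 DWPD, typical for consumer TLC drives.
```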
A Closer Look
Inland includes a well-designed heatsink, similar to that of the Corsair MP600 Pro, but with a few more cuts to add surface area for taming the heat under sustained workloads. However, measuring 14.5 x 23 x 70mm, Inland’s Performance Plus is very thick and can interfere with GPU placement, depending on the M.2 slot you attempt to install it in. If it gets in the way or you just want to use your motherboard’s heat sink, it is easy to remove, though.
At the heart of the Performance Plus is a Phison PS5018-E18 PCIe 4.0 x4 NVMe 1.4 SSD controller. Along with features such as S.M.A.R.T. data reporting, secure erase capability, and TRIM, it also features ASPM and APST support for low power consumption at idle.
Unlike Phison’s previous-generation E16, the E18 is built from the ground up to offer greater performance for PCIe 4.0 drives. It leverages a tri-core primary Arm Cortex-R5 design, along with a dual-core co-processor, which results in very fast sustained write speeds. Furthermore, there are two 8Gb SK hynix DDR4 DRAM ICs on our 2TB sample to accelerate access to the logical-to-physical mapping tables, ensuring responsive reads.
As for the bulk storage, we find eight packages of Micron’s 96L TLC, with 32 dies in total on our 2TB sample, each 512Gb in density. This flash is not quite as fast as Micron’s recently tested 176L TLC, but operating at 1,200 MTps over the controller’s eight NAND channels, it’s fast enough to keep up with the best in many cases.
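The capacity math checks out, too. A quick sketch of our own arithmetic, using the die count and densities quoted above:

```python
# Raw capacity of the 2TB sample: 32 NAND dies at 512 Gbit each,
# plus the two 8 Gbit DDR4 DRAM packages for the mapping tables.
nand_gb = 32 * 512 / 8   # gigabits -> gigabytes
dram_gb = 2 * 8 / 8
print(f"Raw NAND: {nand_gb:.0f} GB")  # 2048 GB, matching the spec table
print(f"DRAM: {dram_gb:.0f} GB")      # the usual ~1 GB per 1 TB of flash
```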
The HP Elite Dragonfly Max has a bright display and long battery life, but its performance could be stronger, and it has a very high price, even for a business-class laptop.
For
+ 5G option
+ Bright Display
+ Long Battery Life
Against
– Middling Performance
– Expensive even for a business-class computer
The original HP Elite Dragonfly challenged the Lenovo ThinkPad line with its style and excellent keyboard. Now, there’s a variant, the HP Elite Dragonfly Max ($2,199 to start, $2,789 as configured).
Despite the Max title implying that this device would be bigger, it’s actually the same size as the original, which is one of the best ultrabooks. This version adds a bright Sure View Reflect screen and 5G networking. But if neither of those appeals to you — the Sure View Reflect screen in particular suffers from some really harsh viewing angles that undercut its positives — you might be better off looking at the original Dragonfly or other options.
The HP Elite Dragonfly Max is a slick, thin convertible laptop with a glittery matte black shell that feels durable but loves to collect fingerprints. There’s a symmetrical, reflective HP logo on the lid and a smaller logo below the screen, plus EliteBook and Bang & Olufsen branding on the keyboard deck.
What’s most noticeable about this laptop is the size, although it’s not dramatically larger or smaller than most other ultraportables. At 11.98 x 7.78 x 0.63 inches, it’s a little wider than the Dell XPS 13 2-in-1 (11.6 x 8.2 x 0.6 inches) and the Razer Book 13 (11.6 x 7.8 x 0.6 inches), but not much thicker. The Lenovo ThinkPad X1 Nano (11.6 x 7.8 x 0.55 inches), however, is noticeably thinner than the HP Elite Dragonfly Max.
The Elite Dragonfly Max is on the lighter end when it comes to weight, however. Its 2.49 pound weight is only beaten by the ThinkPad X1 Nano’s 2 pounds. Meanwhile, the Dell XPS 13 2-in-1 and Razer Book 13 are 2.9 and 3.1 pounds, respectively.
Ports on the Elite Dragonfly Max are varied but poorly distributed. While the left side has the NanoSim card reader (if you have a model with cellular networking capabilities, as we did) and a single USB Type-A port, the convertible’s right side has two Thunderbolt 4 connections, an HDMI 2.1 connection and a single 3.5mm combination headphone/microphone jack. This uneven port distribution can make charging your laptop a pain if your desk setup makes its left side more accessible.
Productivity Performance of the HP Elite Dragonfly Max
The HP Elite Dragonfly Max is HP’s latest attempt to compete with Lenovo’s ThinkPad line, specifically the ThinkPad X1 Nano. That means it aims for plenty of productivity power, and it comes equipped with the slightly more powerful Intel Core i7-1185G7 to accomplish this. But the ThinkPad X1 Nano (with the Intel Core i7-1160G7) and the Dell XPS 13 2-in-1 and Razer Book 13 (both with Intel’s Core i7-1165G7) still offered strong performance and won out in some tests.
In Geekbench 5, a synthetic benchmark for testing general performance, the Elite Dragonfly Max achieved a single core score of 1,512 and a multi-core score of 5,195. That puts it slightly ahead of the ThinkPad X1 Nano’s 1,473 single core score but about on par with its 5,155 multi-core score. But the XPS 13 2-in-1 and the Razer Book 13 beat it on both fronts, and by a much wider margin when it comes to multi-core performance. The former earned scores of 1,539/5,571, and the latter hit scores of 1,556 and 5,495.
The Elite Dragonfly Max did have a slightly faster SSD than its competitors, transferring 25GB of files at a rate of 558.9 MBps. The Razer Book 13 was the next fastest, hitting 479 MBps, while the ThinkPad X1 Nano came in towards the bottom of the pack with a 424.81 MBps speed. The XPS 13 2-in-1 was the slowest computer here, transferring the files at a rate of 405.55 MBps.
Our Handbrake video transcoding test, which tracks how long it takes a machine to transcode a video down from 4K to FHD, saw the Elite Dragonfly Max once again land on the weaker side. It took 19:44 to finish transcoding, while the ThinkPad X1 Nano took 16:55. The XPS 13 2-in-1 was faster at 15:52, while the Razer Book 13 was the quickest at 14:46.
We also ran the HP Elite Dragonfly Max through Cinebench R23 for 20 consecutive runs to see how well it operates during an extended work session. Scores started out at 4,172 before dropping to the high 3,000s for most runs, and achieved an average of 3,925. There were a few peaks and valleys during tests, which might have been related to short bursts of throttling we noticed throughout the 20 runs. Most of the throttling happened during the beginning of the tests, but there were instances of it throughout. The CPU ran at an average 2,405.82 MHz clock speed during this test, and sat at an average temperature of 69.16 degrees Celsius (156.49 degrees Fahrenheit).
Networking Performance of the HP Elite Dragonfly Max
Our configuration of the HP Elite Dragonfly Max came with a Nano Sim card slot for 5G networking, plus a prepaid card from AT&T. When I tested the laptop in downtown Brooklyn, I found that it was only slightly slower than my home Verizon Fios connection.
I was able to watch videos, download apps and stream music with no interruptions. The biggest difference I noticed was the time it took to load pages, which would sometimes take about a second longer than on Wi-Fi.
Still, your experience might differ based on where you live and your choice of carrier.
Display on the HP Elite Dragonfly Max
The HP Elite Dragonfly Max is, no matter how you configure it, a pricey computer. And for that extra cost, you do get a new, almost absurdly bright HP Sure View Reflect display, which also packs novel privacy and anti-blue light technology. While we were impressed with a measured 707 nits of average brightness, we were let down by extremely strict viewing angles. This screen tended to wash out for me when I moved more than 45 degrees away from it, perhaps because of the privacy features.
But when I was sitting directly in front of the screen, I had a great experience even in my brightly lit office. I tested the screen by watching the latest trailer for Cruella on it, and colors were vivid while blacks were deep. Glare also wasn’t an issue, although the screen had some minor reflectivity to it.
When I looked at the screen in a darker environment, reflectivity became less of a problem, but viewing angles still remained tight.
HP Sure View Reflect is one of HP’s privacy-oriented displays, with a built-in app (you can also turn it on with the F2 button) that turns the image into a blank copper rectangle when you look at it from more than 45 degrees away. This worked well for me when I turned it on, but given that the image is already so washed out at those angles, it seems like an unnecessary addition, especially because it also made my screen uncomfortably dim even when looking at it from straight on. I also wonder if building the screen to accommodate this technology reduces viewing angles even when the privacy feature isn’t turned on.
Still, there’s no denying that the screen is pleasant under optimal conditions. Our colorimeter showed it covered 81.7% of the DCI-P3 spectrum, which is much higher than the ThinkPad X1 Nano’s 71.6% and the XPS 13 2-in-1’s 70%. Only the Razer Book 13 came close, with 80.7%.
And, of course, 707 nits is immensely bright. The ThinkPad X1 Nano is much dimmer at the still very bright 430 nits. At 426 and 488 nits, respectively, the Razer Book 13 and the Dell XPS 13 2-in-1 are in a similar boat. However, there is such a thing as diminishing returns, and we’re not sure that the extra brightness is worth it — we still had great viewing experiences on these competitors, some of which boast better viewing angles.
What might be worth the extra cost is HP’s Eye Ease technology. This always-on, hardware-level anti-blue light filter supposedly shifts harmful blue light to more comfortable places on the spectrum without affecting the look of the image, because it targets only a very specific band of blue light rather than tinting the whole image yellow like most solutions. After a whole day of working on the Elite Dragonfly Max, I did notice a lack of eye strain; however, I’m not sure if it was a placebo effect. I tend not to feel too much strain from my regular monitor, either, and I feel like I’d need to judge this feature over the course of a few weeks to fairly assess it.
Keyboard, Touchpad and Stylus on the HP Elite Dragonfly Max
The HP Elite Dragonfly Max has a chiclet-style keyboard that feels stiff and hard when pressing down keys, but I still managed to type quickly on it.
On 10fastfingers.com, I regularly hit 78-79 words per minute, which is towards the upper end of my usual score range. However, I also made a number of typos during my tests, and keypresses didn’t exactly feel cushiony. Aside from the typical notches on the F and J keys, the keycaps don’t have any distinct shaping to help you find your fingers’ position by touch alone. This left typing feeling a bit like a chore, even if I technically typed speedily.
The large, 4.3 x 2.6 inch precision touchpad is, by contrast, a more pleasant experience. It feels smooth to the touch, and scrolling happens just as smoothly, although there’s enough friction to easily make precise adjustments. Multi-touch gestures like scrolling with two fingers or switching apps with three fingers were also a breeze to pull off.
There’s also a small, separate fingerprint reader to the right of the touchpad, which is a nice plus given that much of this computer’s competition integrates fingerprint readers into the touchpad instead, creating dead zones.
Audio on the HP Elite Dragonfly Max
The HP Elite Dragonfly Max comes with four speakers by Bang & Olufsen (two top-firing and two bottom-firing) that have impressive bass. I listened to “Butter” by BTS on them, and I didn’t feel like I lost any information from the beat-heavy song. Audio was also clear with no tinniness, even on high vocals, and I could easily hear the song across my two-bedroom apartment at max volume.
At around 50% volume, I had about as optimal of a listening experience as I would expect to get from a device this size.
The HP Elite Dragonfly Max also comes with an audio control program called, well, HP Audio Control. Unfortunately, I didn’t hear much of a difference between its music, movie and voice presets.
Upgradeability of the HP Elite Dragonfly Max
The HP Elite Dragonfly Max is surprisingly easy to open for an ultraportable. It’s got five Torx T5 screws on the bottom, and the case easily lifts off after removing them. (The hardest part may be finding a Torx screwdriver.) Once you’re inside the laptop, you’ll have immediate access to both the Wi-Fi and 5G chips, plus you’ll see a silver shield above the battery with a pull tab on it. If you pull on that tab, you’ll have direct access to the laptop’s SSD.
Battery Life of the HP Elite Dragonfly Max
The HP Elite Dragonfly Max has an edge on battery life over its competition. In our battery benchmark, which continually browses the web, runs OpenGL tests and streams video over Wi-Fi at 150 nits, the HP Elite Dragonfly Max held on for 13 hours and 9 minutes.
That’s a bit more than an hour longer than its longest-lasting competitor, the ThinkPad X1 Nano, which had a 12-hour battery life on the same test. The Razer Book 13 lasted for 11 hours and 44 minutes, while the Dell XPS 13 2-in-1 was the quickest to die at 10 hours and 52 minutes.
Heat on the HP Elite Dragonfly Max
The HP Elite Dragonfly Max runs on the cool side for an ultraportable laptop, plus it has special software to keep it extra cool when it’s on your lap.
After 15 minutes of streaming video, the laptop’s touchpad measured 77.5 degrees Fahrenheit, while the center of its keyboard (between the G and H keys) was about 11 degrees hotter at 88.9 degrees Fahrenheit. The laptop’s underside mostly sat around 90.1 degrees Fahrenheit, though it reached 102.7 degrees Fahrenheit near its vents.
The HP Elite Dragonfly Max also has HP Context Aware software, which uses machine learning to detect when the laptop is on your lap so it can lower the performance mode. HP claims this can reduce the temperature by up to 9 degrees Fahrenheit, although you can turn the feature off if you’re using a lap desk and would prefer to prioritize performance. For my part, I noticed that the Dragonfly was still warm on my lap, but it did adjust its performance mode on and off as advertised. Unfortunately, I don’t have a temperature reading camera at home to test lap temperatures.
HP Elite Dragonfly Max Webcam
The HP Elite Dragonfly Max comes with a 5MP webcam that captures photos at 1440p, which is a higher resolution than you’ll find on even most desktop webcams. Plus, it’s also got a physical camera shutter.
That said, artifacts are still present in photos taken with this laptop’s camera, although lighting and color are accurate. The quality should be more than enough for most casual use cases, but my face is more pixelated than I’d like when I view this camera’s photos at full screen.
Pixelation becomes more noticeable in low-light environments, but color and lighting remains strong.
This camera’s performance in saturated lighting conditions is unique, but maybe flawed. I’ve never seen a webcam take such a detailed photo through a window pane before (usually, they’ll just depict windows as sheets of white), but my face is bathed in so much shadow that I’m not sure the camera counts as usable under these conditions.
The HP Elite Dragonfly Max also has two front-facing mics and two world-facing mics, which lets it use AI noise cancellation to help keep background noise out of calls. I found that the AI noise cancellation works well, although the microphone quality itself is questionable. My recordings sounded echo-y and especially muffled, and part of me wonders if the AI noise cancellation contributed to this.
Software and Warranty on the HP Elite Dragonfly Max
This laptop does not skimp on the pre-installed software, with over 16 HP-branded programs alone coming pre-loaded on it. And that’s not even everything. There’s also a program that tries to get you to install free trials for different Adobe Creative Cloud programs, plus typical Windows pre-installs like Microsoft Solitaire Collection and Maps.
At least the HP apps are generally useful. HP Wolf Security, for instance, is a free firewall not unlike Windows Defender. HP QuickDrop lets you easily transfer files across devices, including mobile phones. There’s even HP Easy Clean, a novel app that shuts down all of your laptop’s input for a few minutes so you can sanitize it without accidentally pressing any buttons (there is a 2-button keyboard shortcut to unlock your PC early if you need to, though).
But there’s no reason all of these utilities have to be their own separate programs. It’s easy to see them as clutter that way. If I were HP, I’d consider rounding up most of these functions into one central hub app, similar to Lenovo’s Vantage program.
The HP Elite Dragonfly Max also comes with a three-year limited warranty.
HP Elite Dragonfly Max Configurations
The HP Elite Dragonfly Max has two pre-built Wi-Fi only configurations, one pre-built Wi-Fi and 5G configuration and one fully customizable option. Our review configuration was that Wi-Fi and 5G pre-built option, which came with an Intel Core i7-1185G7 CPU, 16GB of RAM, a 512GB SSD and a 13.3 inch FHD display. It costs $2,789.
The Wi-Fi only pre-built models are $2,199 and $2,399, respectively, although the only difference between them seems to be whether the laptop uses an i7-1165G7 chip or an i7-1185G7 chip. Otherwise, you’ll get 16GB of RAM, a 512GB SSD and a 13.3 inch FHD display.
The configurable option is exclusive to HP’s website, and starts at $2,409 for the Windows version (the website says it technically costs $3,347, but there’s a permanent $1,000 discount applied to it). You can shave $236 off the price if you want to go for FreeDOS, which might be useful if you intend to install Linux on the device.
More realistically, you’ll be configuring your PC to add on to it. Here, you can bump the CPU up to an i7-1185G7 processor and the RAM up to 32GB for a combined $489, and the SSD up to 2TB for $865. There are also in-between options: bumping the SSD to just 1TB will cost you an extra $235, and there are 16GB and 32GB RAM bundles available for both the cheaper i7-1165G7 CPU and the more costly i7-1185G7 CPU.
You can also choose to go Wi-Fi only in a custom build, or go for either Intel XMM LTE ($155) or Qualcomm SnapDragon 5G ($440) networking. Plus, there’s add-ons like an optional Wacom pen, which costs $74.
HP’s website says custom builds won’t ship until October, although HP assured us that this is incorrect, and is in the process of sending us more information.
Bottom Line
The HP Elite Dragonfly Max is an expensive convertible with a great look and a bright screen that purports to have an anti-blue light feature, but it doesn’t have a worthwhile power boost compared to cheaper options and doesn’t exactly make up for it with its keyboard or its display’s other specs.
I acknowledge that our configuration has an extra cost tied to it thanks to the 5G, which was admittedly only slightly slower than my Wi-Fi when I tested it in downtown Brooklyn. But even without the 5G, this computer costs more than $2,000. Compare that to the ThinkPad X1 Nano, another business class convertible which either beat it or performed on par with it in all of our productivity tests and only costs around $1,600 from certain e-tailers, and it’s hard to justify getting the Elite Dragonfly Max.
Granted, the HP Elite Dragonfly Max has slightly longer battery life and a much brighter screen than the ThinkPad X1 Nano. But viewing angles on this display are excessively strict, so it still comes with caveats. Plus, you lose out on that great ThinkPad keyboard and the ThinkPad X1 Nano’s 16:10 aspect ratio.
If you go for a non business-class computer like the XPS 13 2-in-1 9310, you can get even more power for even less.
If you’re a business-oriented buyer and you really want 5G or bright displays or niche security software like HP Sure View, then this laptop might be for you. Otherwise, you can get more raw power for less elsewhere, plus maybe some better viewing angles while you’re at it.
Razer announced its first AMD-based gaming laptop, the Razer Blade 14, during its E3 keynote. Until now, Razer had been the last major laptop manufacturer that had stuck exclusively with Intel.
Razer is calling the new Blade “the most powerful 14-inch gaming laptop.” And with an AMD Ryzen 9 5900HX processor and GPU options ranging from an Nvidia GeForce RTX 3060 up to an RTX 3080 with 8GB of VRAM and a 100W TGP, it could be a strong contender for our best gaming laptops list. But admittedly, 14 inches isn’t a very popular size for gaming laptops, which are often 15 inches or larger.
| Razer Blade 14 | $1,799 | $2,199 | $2,799 |
| --- | --- | --- | --- |
| CPU | AMD Ryzen 9 5900HX | AMD Ryzen 9 5900HX | AMD Ryzen 9 5900HX |
| GPU | Nvidia GeForce RTX 3060 | Nvidia GeForce RTX 3070 | Nvidia GeForce RTX 3080 (8GB) |
| Display | 1920 x 1080, 144 Hz, AMD FreeSync Premium | 2560 x 1440, 165 Hz, AMD FreeSync Premium | 2560 x 1440, 165 Hz, AMD FreeSync Premium |
| Memory | 16GB DDR4-3200 (soldered) | 16GB DDR4-3200 (soldered) | 16GB DDR4-3200 (soldered) |
| Storage | 1TB PCIe SSD | 1TB PCIe SSD | 1TB PCIe SSD |
| Battery | 61.6 WHr | 61.6 WHr | 61.6 WHr |
| Dimensions | 12.59 x 8.66 x 0.66 inches / 319.7 x 220 x 16.8 mm | 12.59 x 8.66 x 0.66 inches / 319.7 x 220 x 16.8 mm | 12.59 x 8.66 x 0.66 inches / 319.7 x 220 x 16.8 mm |
The company is claiming that, at 16.8 mm (0.66 inches) thin, it is the “thinnest 14-inch gaming laptop.” Like Razer’s other notebooks, the Blade 14 is milled from CNC aluminum with an anodized finish.
To cool those components, Razer is using vapor chamber cooling and what it calls “touchpoint thermal engineering” to keep commonly-touched surfaces, like the WASD keys, from getting too hot.
There are two display options: a 1920 x 1080 screen with a 144 Hz refresh rate, or a 2560 x 1440 panel with a 165 Hz refresh rate. Both use AMD FreeSync Premium to eliminate tearing.
For $1,799, you get an RTX 3060 and the FHD display. $2,199 nets you an RTX 3070 with the QHD screen, and for $2,799, Razer offers the RTX 3080 with the QHD panel. In every version, you get the same Ryzen 9 5900HX, 16GB of soldered RAM and a 1TB PCIe SSD.
Ports include two USB Type-C 3.2 Gen 2 ports, HDMI 2.1, a single USB 3.2 Gen 2 Type-A port and a 3.5 mm headphone jack. Other features include Wi-Fi 6E support, Bluetooth 5.2, an IR camera to log in with Windows Hello and, of course, per-key RGB lighting. The design also includes top-firing speakers tuned by THX, which Razer owns.
As of right now, the Blade 14 will be the only laptop in Razer’s lineup with an AMD processor. The Blade 15, which still exclusively uses Intel chips, remains the flagship notebook. It’s unclear if Razer intends to add the choice of either chip at any point in the future.
Razer is also using E3 to get into the laptop charger market. It announced the Razer USB-C 130W GaN Charger (GaN is short for gallium nitride) with two USB-C ports at 100W and two USB-A ports at 12W. It weighs just 349 grams (0.77 pounds) and measures 3.2 x 7.7 x 6.2 cm. It will compete with the best USB-C laptop chargers.
The device can charge four devices at a time, including a laptop, though it wouldn’t be enough to power the Blade 14 while gaming. The charger also comes with adapters for global travel. It’s $179.99 and available for pre-order from Razer.com, Razer stores and other retailers. It’s scheduled to ship within 30 days.
Both Intel and Toshiba have become increasingly confident in their projections for the debut of PLC flash, which packs in five bits per cell to reduce SSD pricing, but Western Digital recently downplayed the feasibility of PLC SSDs before 2025.
WD says this type of memory will only become viable sometime in the second half of this decade when SSD controllers become more advanced. The claim contradicts other 3D NAND suppliers that believe 3D PLC SSDs could rival hard drives in the next few years.
Each new type of flash brings reduced SSD pricing, but as we’ve seen with QLC NAND, that can lead to big reductions in endurance and performance. That takes some of the shine off of a future transition to PLC (Penta Level Cell) flash that packs in five bits per cell to reduce pricing but results in even lower endurance and performance.
“I expect that transition [from QLC to PLC] will be slower,” said Siva Sivaram, Western Digital’s technology and strategy chief, at Bank of America Merrill Lynch 2021 Global Technology Conference (via SeekingAlpha). “So maybe in the second half of this decade we are going to see some segments starting to get 5 bits per cell.”
TLC flash is the most widely used variant today, and while there are 3D QLC NAND chips available, they aren’t as widely used. Western Digital expects this to change only with its BiCS6 NAND memory and new controllers/firmware.
“We think that QLC across the broad segment will happen in the next [BiCS 6 generation, when] the majority of bits will switch over to QLC in the marketplace,” said Sivaram. “[…]In the next two years plus we are going to see the rapid acceleration of QLC adoption.”
Modern SSD controllers powered by Arm’s Cortex-R8 cores can handle advanced error correction (4KB LDPC) algorithms while ensuring decent performance, but 3D PLC flash will require even more complex error correction, and hence more compute horsepower from the controller. The controller will also have to support more redundant capacity and robust wear-leveling.
“The incremental gain is not quite as much when we are going from 4 to 5 bits on the same cell, so you are getting [25%],” said Sivaram. “To get that gain you are sacrificing a lot, you need additional redundancy, additional ECC, so the net gain supposed to the performance loss may not be quite as desirable.”
Arm introduced its 64-bit Cortex-R82 core for next-generation SSD controllers in September 2020. Arm says the design is 1.74x ~ 2.25x faster than the Cortex-R8 in real-world applications and 21% and 23% faster than the Cortex-A55 in SPECint2006 and SPECfp2006, respectively. The Cortex-R82 is designed to run in clusters with up to eight cores, so controller makers could build rather formidable processors based on the new core, which will be quite handy for PLC SSDs.
There is a catch, though. The first controllers with the Cortex-R82 (probably due sometime in 2023 or 2024) will likely be aimed primarily at high-end drives with in-storage compute capabilities, and not on high-density SSDs featuring cheap 3D PLC flash. As a result, 3D PLC flash is unlikely to become mainstream any time soon.
There are certainly plenty of challenges involved with moving to PLC flash. For example, 3D PLC NAND can store five bits per cell (5 bpc), a 25% increase over quad-level cell (QLC) flash, and a 66% increase over the triple-level cell (TLC) flash memory used today.
To do so, NAND cells have to store 32 distinct voltage levels, and SSD controllers have to read them properly and record them fast. In contrast, TLC uses eight voltage levels, and QLC uses 16. In addition to the complexity of PLC 3D NAND cells, challenges like cell-to-cell interference and temperature variation make it harder to read data.
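The relationship between bits per cell and voltage levels is simple but unforgiving: levels grow exponentially while the density payoff shrinks. A quick illustrative sketch of that math:

```python
# Voltage levels grow as 2^bits, while the capacity gain per added
# bit keeps shrinking -- the core trade-off behind PLC flash.
for name, bits in (("TLC", 3), ("QLC", 4), ("PLC", 5)):
    print(f"{name}: {bits} bits/cell -> {2 ** bits} voltage levels")

print(f"QLC -> PLC density gain: {5 / 4 - 1:.0%}")  # 25%
print(f"TLC -> PLC density gain: {5 / 3 - 1:.0%}")  # ~66-67%
```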
To offer decent performance and endurance characteristics, 3D TLC-based SSDs use 120 bit/1KB or even 340 bit/2KB LDPC ECC algorithms that are already quite complex. In addition, manufacturers also implement static and dynamic wear-leveling, RAID ECC, and overprovisioning to further maximize endurance.
With 3D QLC-powered SSDs, we’ll need support for 2KB and 4KB LDPC codewords, more complex wear-leveling, and more overprovisioned capacity. Furthermore, memory makers also have to change the design of their cells (e.g., using slightly different materials) to reliably store 16 voltage levels.
All of this means that we’ll see PLC SSDs later rather than sooner, largely due to needed advances that aren’t directly associated with manufacturing the flash itself.
Hynix has been one of the “big three” memory manufacturers for decades. Together with Samsung and Micron, it dominates the memory market, producing a substantial percentage of the world’s DRAM and NAND. The company, originally founded as “Hyundai Electronics Industrial Co” in 1983, was sold several years ago to SK Group—a large Korean conglomerate—hence the name “SK Hynix”. Just last year, Hynix agreed to purchase Intel’s NAND business for $9 billion.
The SK Hynix Gold P31 SSD was announced in October last year and has been receiving attention from enthusiasts ever since. Today we finally bring you our review of the Hynix Gold P31, which is built entirely from Hynix components—an ability only Samsung had in the past. The controller is an in-house Hynix design called ACNT038, or “Cepheus”. The flash chips are modern 128-layer 3D TLC, and an LPDDR4-3733 DRAM chip provides 1 GB of memory for the SSD’s mapping tables.
The Hynix Gold P31 comes in capacities of 512 GB ($75) and 1 TB ($135). Endurance for these models is set to 500 TBW and 750 TBW respectively. Hynix includes a five-year warranty with the Gold P31.
Not to be confused with Korben’s taxi from The Fifth Element, Seagate has launched the new FireCuda 520 Cyberpunk 2077 Limited Edition SSD. Seagate only produced 2,077 of these drives, so they’ll ultimately turn into collectibles for Cyberpunk 2077 fanatics.
The FireCuda 520 Cyberpunk 2077 LE is still very much a FireCuda 520 SSD, but with the addition of a custom, neon-yellow heatsink inspired by the CD Projekt RED game. Also on the heatsink is a Cyberpunk logo with customizable RGB illumination, which requires a 5V addressable RGB header for control.
Although it adheres to the standard M.2 2280 form factor, the FireCuda 520 Cyberpunk 2077 LE is aimed at desktops due to its bulky heatsink. Seagate claims the heatsink reduces the drive’s temperature by up to 22 degrees Celsius; in fact, the manufacturer doesn’t recommend removing it because doing so can damage the device. As a result, the FireCuda 520 Cyberpunk 2077 LE can get in the way of your other hardware, especially on motherboards whose M.2 slots sit very close to the PCIe expansion slots.
Seagate uses the same recipe for the FireCuda 520 Cyberpunk 2077 LE as for its vanilla FireCuda 520 offerings. The Cyberpunk 2077 version looks to compete with the best SSDs, pairing a Phison PS5016-E16 controller with Toshiba 96L TLC (triple-level cell) NAND. The drive has a 1,800 TBW endurance rating, and Seagate backs it with a limited 5-year warranty.
Being a PCIe 4.0 x4 drive, the FireCuda 520 Cyberpunk 2077 LE offers read and write speeds up to 5,000 MBps and 4,400 MBps, respectively. The SSD’s random performance is rated for 760,000 IOPS reads and 700,000 IOPS writes.
The FireCuda 520 Cyberpunk 2077 LE (ZP1000GM30012) is only available in a 1TB capacity. Seagate didn’t reveal pricing for the drive, but the regular FireCuda 520 1TB retails for $189.99, so we expect the Cyberpunk 2077 variant to carry a small premium for the extra eye candy.
The Nvidia GeForce RTX 3070 Ti continues the Ampere architecture rollout, which powers the GPUs behind many of the best graphics cards. Last week Nvidia launched the GeForce RTX 3080 Ti, a card that we felt raised the price too much relative to the next step down. The RTX 3070 Ti should do better, both by virtue of costing only $599 (in theory) and because there’s up to a 33% performance gap between the existing GeForce RTX 3070 and GeForce RTX 3080 for it to fill. That’s a $100 increase in price relative to the existing 3070, but both the 3070 and 3080 will continue to be sold, in “limited hash rate” versions, for the time being. We’ll be adding the RTX 3070 Ti to our GPU benchmarks hierarchy shortly, if you want to see how all the GPUs rank in terms of performance.
The basic idea behind the RTX 3070 Ti is simple enough. Nvidia takes the GA104 GPU that powers the RTX 3070 and RTX 3060 Ti, only this time it’s the full 48 SM variant of the chip, and pairs it with GDDR6X. While Nvidia could have tried doing this last year, both the RTX 3080 and RTX 3090 were already struggling to get enough GDDR6X memory, and delaying by nine months allowed Nvidia to build up enough inventory of both the GPU and memory for this launch. Nvidia has also implemented its Ethereum hashrate limiter, basically cutting mining performance in half on crypto coins that use the Ethash / Dagger-Hashimoto algorithm.
Will it be enough to avoid having the cards immediately sell out at launch? Let me think about that, no. Not a chance. In fact, miners are probably still trying to buy the limited RTX 3080 Ti, 3080, 3070, 3060 Ti, and 3060 cards. Maybe they hope the limiter will be cracked or accidentally unlocked again. Maybe they made too much money off of the jump in crypto prices during the past six months. Or maybe they’re just optimistic about where crypto is going in the future. The good news, depending on your perspective, is that mining profitability has dropped significantly during the past month, which means cards like the RTX 3090 are now making under $7 per day after power costs, and the RTX 3080 has dropped down to just over $5 per day.
GeForce RTX 3070 Ti: Not Great for Mining but Still Profitable
Even if the RTX 3070 Ti didn’t have a limited hashrate, it would only net about $4.25 a day. With the limiter in place, Ravencoin (KAWPOW) and Conflux (Octopus) are the most profitable crypto coins right now, and both of those hashing algorithms still appear to run at full speed. Profitability should be a bit higher with tuning, but right now, we’d estimate making only $3.50 or so per day. That’s still enough for the cards to ‘break even’ in about six months, but again, profitability has dropped and may continue to drop.
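That six-month figure is simple arithmetic on the $599 MSRP and the ~$3.50/day estimate. A back-of-the-envelope sketch, ignoring power-cost swings, taxes and resale value:

```python
# Break-even time for an RTX 3070 Ti bought at MSRP and mining at the
# ~$3.50/day estimate above. Profitability moves constantly, so treat
# this as illustrative only.
msrp_usd, profit_per_day = 599, 3.50
days = msrp_usd / profit_per_day
print(f"Break-even: ~{days:.0f} days (~{days / 30.4:.1f} months)")  # ~171 days
```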
The gamers among us will certainly hope so, but even without crypto coin mining, demand for GPUs continues to greatly exceed supply. By launching the RTX 3070 Ti, with its binned GA104 chips and GDDR6X memory, Nvidia continues to steadily increase the number of GPUs it’s selling. Nvidia is also producing more Turing GPUs right now, mostly for the CMP line of miner cards, and at some point, supply should catch up. Will that happen before the next-gen GPUs arrive? Probably, but only because the next-gen GPUs are likely to be pushed back thanks to the same shortages facing current-gen chips.
Okay, enough of the background information. Let’s take a look at the specifications for the RTX 3070 Ti, along with related Nvidia GPUs like the 3080, 3070, and the previous-gen RTX 2070 Super:
GPU Specifications
| Graphics Card | RTX 3080 | RTX 3070 Ti | RTX 3070 | RTX 2070 Super |
| --- | --- | --- | --- | --- |
| Architecture | GA102 | GA104 | GA104 | TU104 |
| Process Technology | Samsung 8N | Samsung 8N | Samsung 8N | TSMC 12FFN |
| Transistors (Billion) | 28.3 | 17.4 | 17.4 | 13.6 |
| Die size (mm^2) | 628.4 | 392.5 | 392.5 | 545 |
| SMs / CUs | 68 | 48 | 46 | 40 |
| GPU Cores | 8704 | 6144 | 5888 | 2560 |
| Tensor Cores | 272 | 192 | 184 | 320 |
| RT Cores | 68 | 48 | 46 | 40 |
| Base Clock (MHz) | 1440 | 1575 | 1500 | 1605 |
| Boost Clock (MHz) | 1710 | 1765 | 1725 | 1770 |
| VRAM Speed (Gbps) | 19 | 19 | 14 | 14 |
| VRAM (GB) | 10 | 8 | 8 | 8 |
| VRAM Bus Width | 320 | 256 | 256 | 256 |
| ROPs | 96 | 96 | 96 | 64 |
| TMUs | 272 | 192 | 184 | 160 |
| TFLOPS FP32 (Boost) | 29.8 | 21.7 | 20.3 | 9.1 |
| TFLOPS FP16 (Tensor) | 119 (238) | 87 (174) | 81 (163) | 72 |
| RT TFLOPS | 58.1 | 42.4 | 39.7 | 27.3 |
| Bandwidth (GBps) | 760 | 608 | 448 | 448 |
| TDP (watts) | 320 | 290 | 220 | 215 |
| Launch Date | Sep 2020 | Jun 2021 | Oct 2020 | Jul 2019 |
| Launch Price | $699 | $599 | $499 | $499 |
The GeForce RTX 3070 Ti provides just a bit more theoretical compute than the 3070, thanks to the addition of two more SMs. It also has slightly higher clocks, giving it 7% more TFLOPS — though it still trails the 3080 by 27%. More important by far is that the 3070 Ti goes from 14Gbps GDDR6 and 448 GB/s of bandwidth to 19Gbps GDDR6X and 608 GB/s of bandwidth, a 36% improvement. In general, we expect performance to land between the 3080 and 3070, but closer to the 3070.
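For reference, the FP32 TFLOPS figures in the table come from a simple formula: shader cores x two FLOPS per clock (one fused multiply-add) x boost clock. A quick sketch reproducing the table's numbers:

```python
# FP32 throughput = cores x 2 FLOPS/clock (FMA) x boost clock.
def fp32_tflops(cores: int, boost_mhz: int) -> float:
    return cores * 2 * boost_mhz * 1e6 / 1e12

print(f"RTX 3080:    {fp32_tflops(8704, 1710):.1f} TFLOPS")  # 29.8
print(f"RTX 3070 Ti: {fp32_tflops(6144, 1765):.1f} TFLOPS")  # 21.7
print(f"RTX 3070:    {fp32_tflops(5888, 1725):.1f} TFLOPS")  # 20.3
```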
Besides performance specs, it’s also important to look at power. It’s a bit shocking to see that the 3070 Ti has a 70W higher TDP than the 3070, and we’d assume nearly all of that goes into the GDDR6X memory. Some of it also allows for slightly higher clocks, but generally, that’s a significant increase in TDP just for a change in VRAM.
There’s still the question of whether 8GB of memory is enough. These days, we’d say it’s sufficient for any game you want to play, but there are definitely instances where you’ll run into memory capacity issues. Not surprisingly, many of those come in games promoted by AMD; it’s almost as if AMD has convinced developers to target 12GB or 16GB of VRAM at maximum quality settings. But a few judicious tweaks to settings (like dropping texture quality a notch) will generally suffice.
The difficulty is that there’s no way to add just a little more memory. The 256-bit interface means Nvidia can do 8GB or 16GB — nothing in between, as the sketch below shows. And with the 3080 and 3080 Ti offering 10GB and 12GB, respectively, there was basically no chance Nvidia would equip a lesser GPU with more GDDR6X memory. (Yeah, I know, but the RTX 3060 12GB remains a bit of an anomaly in that department.)
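The 8GB-or-16GB constraint is just bus math: each GDDR6X package has a 32-bit interface, so a 256-bit bus takes exactly eight chips, and the chips come in 8Gb or 16Gb densities. A minimal sketch of that arithmetic:

```python
# Memory capacity options on a 256-bit bus: eight 32-bit GDDR6X
# packages, each available in 8Gb or 16Gb densities.
bus_width_bits, chip_interface_bits = 256, 32
chips = bus_width_bits // chip_interface_bits  # 8 packages
for density_gbit in (8, 16):
    total_gb = chips * density_gbit // 8       # gigabits -> gigabytes
    print(f"{chips} x {density_gbit}Gb = {total_gb} GB")
# 8 GB or 16 GB -- nothing in between without a different bus width.
```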
GeForce RTX 3070 Ti Design: A Blend of the 3070 and 3080
Unlike the RTX 3080 Ti, Nvidia actually made some changes to the RTX 3070 Ti’s design. Basically, the 3070 Ti has a flow-through cooling fan at the ‘back’ of the card, similar to the 3080 and 3090 Founders Edition cards. In comparison, the 3070 just used two fans on the same side of the card. This also required some tweaks to the PCB layout, so the 3070 Ti doesn’t use the exact same boards as the 3070 and 3060 Ti. It’s not clear exactly how much the design tweak helps with cooling, but considering the 290W vs. 220W TDP, presumably Nvidia did plenty of testing before settling on the final product.
Overall, whether the change significantly improves the cooling or not, we think it does improve the look of the card. The RTX 3070 and 3060 Ti Founders Editions looked a bit bland, as they lacked even a large logo indicating the product name. The 3080 and above (FE models) include RGB lighting, though, which the 3070 Ti and below lack. Third party cards can, of course, do whatever they want with the GPU, and we assume many of them will provide beefier cooling and RGB lighting, along with factory overclocks.
One question we had going into this review was how well the card would cool the GDDR6X memory. The various Founders Edition cards with GDDR6X memory can all hit 110 degrees Celsius on the memory with various crypto mining algorithms, at which point the fans kick into high gear and the GPU throttles. Gaming tends to be less demanding, but we still saw 102C-104C on the 3080 Ti. The 3070 Ti doesn’t have that problem. Even with mining algorithms, the memory peaked at 100C, and temperatures in games were generally 8C–12C cooler. That’s the benefit of only having to cool 8GB of GDDR6X instead of 10GB, 12GB, or 24GB.
GeForce RTX 3070 Ti: Standard Gaming Performance
TOM’S HARDWARE GPU TEST PC
Our test setup remains unchanged from previous reviews, and like the 3080 Ti, we’ll be doing additional testing with ray tracing and DLSS — using the same tests as our AMD vs. Nvidia: Ray Tracing Showdown. We’re using the test equipment shown above, which consists of a Core i9-9900K, 32GB DDR4-3600 memory, 2TB M.2 SSD, and the various GPUs being tested — all of which are reference models here, except for the RTX 3060 (an EVGA model running reference clocks).
That gives us two sets of results. First is the traditional rendering performance, using thirteen games at 1080p, 1440p, and 4K with ultra/maximum quality settings. Then we have ten more games with RT (and sometimes DLSS, where applicable). We’ll start with 1440p, the sweet spot for a card like this — CPU bottlenecks are almost completely eliminated at 4K but more prevalent at 1080p. If you want to check 1080p/1440p/4K medium performance, we’ll have those results in our best graphics cards and GPU benchmarks articles — though only for nine of the games.
The RTX 3070 Ti does best as a 1440p gaming solution, which remains the sweet spot in terms of image quality and performance requirements. Overall performance ended up 9% faster than the RTX 3070 and 13% slower than the RTX 3080, so the added memory bandwidth only goes so far toward removing bottlenecks. However, a few games benefit more, like Assassin’s Creed Valhalla, Dirt 5, Horizon Zero Dawn, Shadow of the Tomb Raider, and Strange Brigade — all of which show double-digit percentage improvements relative to the 3070.
Some of the games are also clearly hitting other bottlenecks, like the GPU cores. Borderlands 3, The Division 2, Far Cry 5, FFXIV, Metro Exodus, and Red Dead Redemption 2 all show performance gains closer to the theoretical 7% difference in compute that we get from core counts and clock speeds. Meanwhile, Watch Dogs Legion ends up showing the smallest change in performance, improving just 3% compared to the RTX 3070.
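That theoretical 7% falls straight out of the published specs: 6,144 CUDA cores at a 1,770 MHz boost clock for the 3070 Ti versus 5,888 cores at 1,725 MHz for the 3070. A quick check (cores times clock, times two FP32 operations per FMA):

```python
# Theoretical FP32 throughput scales with cores x clock (x2 ops per FMA).
def tflops(cuda_cores: int, boost_mhz: int) -> float:
    return cuda_cores * boost_mhz * 2 / 1e6

rtx_3070_ti = tflops(6144, 1770)  # ~21.7 TFLOPS
rtx_3070 = tflops(5888, 1725)     # ~20.3 TFLOPS
print(f"{rtx_3070_ti / rtx_3070 - 1:.1%}")  # ~7.1%
```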
The RTX 3070 Ti makes for a decent showing here, but we’re still looking at an MSRP increase of 20% for a slightly less than 10% increase in performance. Compared to AMD’s RX 6000 cards, the 3070 Ti easily beats the RX 6700 XT, but it comes in 6% behind the RX 6800 — which, of course, means it trails the RX 6800 XT as well.
In practice, AMD’s GPUs tend to sell at higher prices, even when you see them in places like the Newegg Shuffle. At the same time, RTX 30-series hardware on eBay remains extremely expensive, with the 3070 selling for around $1,300, compared to around $1,400 for the RX 6800. Considering the RTX 3070 Ti is faster than the RTX 3070, it remains to be seen where street pricing lands. Of course, the reduced hashrates for Ethereum mining on the 3070 Ti may also play a role.
Next up is 1080p testing. Lowering the resolution tends to make games more CPU limited, and that’s exactly what we see. The 3070 Ti was 7% faster than the 3070 this time and 11% slower than the 3080. It was also 7% faster than the 6700 XT and 6% slower than the 6800. While you can still easily play games at 1080p on the RTX 3070 Ti, the same is true of most of the other GPUs on our charts.
We won’t belabor the point, other than to note that our current test suite is slightly more tilted in favor of AMD GPUs (six AMD-promoted games compared to four Nvidia-promoted games, with three ‘agnostic’ games). We’ll make up for that when we hit the ray tracing benchmarks in a moment.
Not surprisingly, while 4K ultra gaming gave the RTX 3070 Ti its biggest lead over the RTX 3070 (11%), it also produced its biggest deficit (17%) against the 3080. 4K also narrowed the gap between the 3070 Ti and the RX 6800, as AMD’s Infinity Cache starts to hit its limits at that resolution.
Technically, the RTX 3070 Ti can still play all of the test games at 4K, just not always at more than 60 fps. Nearly half of the games we tested came in below that mark, with Valhalla and Watch Dogs Legion being the two lowest scores — and they’re still in the mid-40s. The RTX 3070 was already basically tied with the previous generation RTX 2080 Ti, which means the RTX 3070 Ti is now clearly faster than the previous-gen halo card, at half the price.
GeForce RTX 3070 Ti: Ray Tracing and DLSS Gaming Performance
So far, we’ve focused on gaming performance using traditional rasterization graphics. We’ve also excluded using Nvidia’s DLSS technology in order to provide an apples-to-apples comparison. Now we’ll focus on ray tracing performance, with DLSS 2.0 enabled where applicable. We’re only using DLSS in Quality mode (2x upscaling) in the six games where it is supported. We’ll have to wait for AMD’s FSR to see if it can provide a reasonable alternative to DLSS 2.0 in the coming months, though Nvidia clearly has a lengthy head start. Note that these are the same tests we used in our recent AMD vs. Nvidia Ray Tracing Battle.
Nvidia’s RTX 3070 Ti does far better — at least against the AMD competition — in ray tracing games. It’s not a complete sweep, as the RX 6800 still leads in Godfall, but the 3070 Ti ties or wins in every other game. In fact, the 3070 Ti basically ties the RX 6800 XT in our ray tracing test suite, and that’s before we enable DLSS 2.0.
Even 1080p DXR generally ends up being GPU limited, so the rankings don’t change much from above. DLSS doesn’t help quite as much at 1080p, but otherwise, the 3070 Ti ends up right around 25% faster than the RX 6800 — the same as at 1440p. We’ve mentioned before that Fortnite is probably the best ‘neutral’ look at advanced ray tracing techniques, and the 3070 Ti is about 5–7% faster there. Turn on DLSS Quality and it’s basically double the framerate of the RX 6800.
GeForce RTX 3070 Ti: Power, Clocks, and Temperatures
We’ve got our Powenetics equipment working again, so we’ve added the 3080 Ti to these charts. Unfortunately, there was another slight snafu: We couldn’t get proper fan speeds this round. It’s always one thing or another, I guess. Anyway, we use Metro Exodus running at 1440p ultra (without RT or DLSS) and FurMark running at 1600×900 in stress test mode for our power testing. Each test runs for about 10 minutes, and we log the result to generate the charts. For the bar charts, we only average data where the GPU load is above 90% (to avoid skewing things in Metro when the benchmark restarts).
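In code terms, that bar-chart averaging amounts to filtering the log down to high-load samples before taking the mean, so the idle gaps between benchmark iterations don’t drag the number down. A minimal sketch, with hypothetical log rows rather than the actual Powenetics output format:

```python
# Average power only where GPU load exceeds 90%, so lulls between
# benchmark iterations don't skew the result. Rows are hypothetical
# (timestamp_s, gpu_load_pct, power_w) samples, not real log data.
samples = [
    (0.0, 98.5, 284.2),
    (0.1, 99.0, 286.7),
    (0.2, 12.3, 95.4),   # benchmark restarting -- excluded
    (0.3, 97.8, 283.1),
]

LOAD_THRESHOLD = 90.0
active = [power for _, load, power in samples if load > LOAD_THRESHOLD]
print(f"{sum(active) / len(active):.1f} W over {len(active)} high-load samples")
```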
Nvidia gives the RTX 3070 Ti a 290W TDP, and it mostly makes use of that power. It averaged about 282W for our Metro testing, but that’s partly due to the lull in GPU activity between benchmark iterations. FurMark showed 291W of power use, right in line with expectations.
Core clocks were interesting, as the GeForce RTX 3070 Ti actually ended up with slightly lower clocks than the RTX 3070 in FurMark and Metro. On the other hand, both cards easily exceeded the official boost clocks by about 100 MHz. Custom third-party cards will likely hit higher clocks and performance, though also higher power consumption.
While we don’t have fan data (or noise data — sorry, I’m still trying to get unpacked from the move), the RTX 3070 Ti did end up hitting the highest temperatures of any of the GPUs in both Metro and FurMark. As we’ve noted before, however, none of the cards are running “too hot,” and we’re more concerned with memory temperatures. The 3070 Ti thankfully didn’t get above 100C on GDDR6X junction temperatures during testing, and even that value occurred while testing crypto coin mining.
GeForce RTX 3070 Ti: Good but With Diminishing Returns
We have to wonder what things would have been like for the RTX 3070 Ti without the double whammy of the Covid pandemic and the cryptocurrency boom. If you look at the RTX 20-series, Nvidia started at higher prices ($599 for the RTX 2070 FE) and then dropped things $100 with the ‘Super’ updates a year later. Ampere has gone the opposite route: Initial prices were excellent, at least on paper, and every one of the cards sold out immediately. That’s still happening today, and the result is a price increase — along with improved performance — for the 3070 Ti and 3080 Ti.
Thankfully, the jump in pricing on the 3070 Ti relative to the 3070 isn’t too steep; $100 more for the switch to GDDR6X is almost palatable. The 3070 still offers about 90% of the 3070 Ti’s performance for 80% of the price, making it an arguably better buy, but the real problem is the RTX 3080: it’s about 12-20% faster across our 13-game test suite and only costs $100 more (a 17% price increase).
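Framed as performance per dollar at MSRP, using the 1440p deltas quoted above (9% and 13%), the picture looks like this; note that these are list prices, and street pricing would change the result entirely:

```python
# Performance per dollar at MSRP, normalized to the RTX 3070 Ti.
# Relative performance uses the 1440p deltas quoted in this review;
# street prices would paint a very different picture.
cards = {
    "RTX 3070":    (499, 1 / 1.09),  # 3070 Ti is 9% faster
    "RTX 3070 Ti": (599, 1.00),
    "RTX 3080":    (699, 1 / 0.87),  # 3070 Ti is 13% slower
}

baseline_msrp, baseline_perf = cards["RTX 3070 Ti"]
for name, (msrp, perf) in cards.items():
    value = (perf / msrp) / (baseline_perf / baseline_msrp)
    print(f"{name}: {value:.2f}x the 3070 Ti's perf per dollar")
```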
Well, in theory, anyway. Nobody is really selling the RTX 3080 for $700, and nobody has since it launched. The 3080 often costs over $1,000 even in the lottery-style Newegg Shuffle, and the typical price is still above $2,000 on eBay, making it one of the worst cards to buy there in terms of markup. In comparison, the RTX 3070 Ti might only end up costing twice its MSRP on eBay, but that’s still $1,200, and it could very well end up costing more than that.
We’ll have to see what happens in the coming months. Hopefully, the arrival of two more desktop graphics cards in the form of the RTX 3080 Ti and RTX 3070 Ti will alleviate the shortages a bit. The hashrate limiter can’t hurt either, at least if you’re only interested in gaming performance, and the drop in mining profitability might help. But we’re far from being out of the shortage woods.
If you can actually find the RTX 3070 Ti for close to its $600 MSRP, and you’re in the market for a new graphics card, it’s a good option. Finding it will be the difficult part. This is bound to be a repeat of every AMD and Nvidia GPU launch of the past year. If you haven’t managed to procure a new card yet, you can try again (and again, and again…). But for those who already have a reasonable graphics card, there’s nothing really new to see here: slightly better performance and higher power consumption at a higher price. Let’s hope supply and prices improve by the time fall blows in.
Sometimes small chips cause major problems. According to a report from Igor’sLab, Intel recently had to allow its partners to use previously uncertified USB Type-C and Power Delivery controllers from Texas Instruments with its latest Tiger Lake platforms, as well as previously uncertified discrete Thunderbolt 4 JHL8440/JHL8540 ‘Maple Ridge’ controllers.
Every personal computer nowadays uses multiple power management ICs (PMICs), and if a PC maker cannot get enough PMICs of a certain type, it cannot ship complete systems.
A handful of companies make USB Type-C and PD controllers, with the main suppliers being Texas Instruments and Cypress. Intel usually demands that its partners use very specific USB Type-C and PD controllers with its TB3 and TB4 controllers to ensure compatibility and a consistent user experience, but chip shortages have reportedly forced the company to reconsider those requirements.
Typically, Thunderbolt 3 (TB3), Thunderbolt 4 (TB4) and USB 4 implementations include two or three key chips: a controller, a retimer or a redriver (always for TB4, sometimes for other interfaces), and a USB Type-C and Power Delivery (PD) controller that detects cable orientation, assigns USB PD, and arranges alternate mode settings for internal and external multiplexers.
For Tiger Lake-based systems with Thunderbolt 4 ports, Intel wants its partners to use Texas Instruments’ TPS65994AD USB Type-C and Power Delivery controller. However, because the chips aren’t currently available, Intel will temporarily certify Thunderbolt 4 implementations that use TPS65993AC and TPS65994AC controllers.
These controllers are not formally USB 4 compliant, but they are USB 4 compatible. As such, Intel wants its OEM partners to communicate the benefits of Thunderbolt 4 and USB 4 ‘compatibility,’ or “exclude mention of USB4,” according to documents reviewed by Igor’sLab.
It is unclear when Texas Instruments will resolve the supply issue with its controllers. Given that Intel is taking a rather unusual action, we are probably talking about weeks, if not months. In any case, this might be an unpleasant but solvable problem.
Apparently, Intel has another problem on its hands. The company has been unable to produce enterprise SSDs due to a shortage of power management ICs, reports TrendForce. Enterprise-grade SSDs have always been the top priority of Intel’s storage business, so if the company cannot supply enterprise drives, it likely cannot get enough PMICs for its SSD business in general. We’re following up with the company to see if the shortage extends to its consumer SSD lineup.
Intel hasn’t commented on reports about the shortages of USB Type-C and PD controllers and SSD PMICs, but it has admitted in the past that the supply of power controllers affects its business.
“We are supply-constrained,” said Pat Gelsinger, CEO of Intel, at an investor conference recently (via SeekingAlpha). “We have substrate constraints; also, our customers are supply-constrained. We are now wrestling through the issues that they say boy, hey, I don’t have enough power controllers, right, to have a mix, a matched set.”
We’ve reached out to Intel about the reports and will update as necessary.
(Pocket-lint) – The Ratchet & Clank series has been a PlayStation stalwart for almost two decades. But we’ve not had an original outing since Into the Nexus in 2013 – and that was on the PlayStation 3.
Yes, developer Insomniac Games remade the first game for PS4 in the shape of 2016’s Ratchet & Clank, but it has rather focused its attention on Sunset Overdrive and the superb Marvel’s Spider-Man games instead.
That’s why we’re thrilled to see the return of everyone’s favourite Lombax and his robot chum. And, thanks to the leap to PlayStation 5 proper, they have never looked – or arguably played – better.
Next-gen necessity
Insomniac cut its next-gen teeth on Marvel’s Spider-Man: Miles Morales, plus a dolled-up remaster of its predecessor, but Ratchet & Clank: Rift Apart is its first PS5 exclusive. That has given the studio free rein over a set of tools and talents that only the latest in Sony’s kitbag can provide.
The entire premise of the game is only possible thanks to clever compression techniques and superfast SSD loading speeds. The graphics drip with ray-tracing and other wizardry from every pore. And the tricks afforded by the PS5’s DualSense controller are exploited to the max. In many respects, this is the first truly next-gen game on any console and has us salivating for what’s possible in the future.
We’re getting ahead of ourselves though. Technical bells and whistles aside, this is a Ratchet & Clank game through and through, so we’ll start there.
Like most others in the series, this is essentially a shooter-meets-platformer with a keen sense of humour and stunning, Pixar-like visuals. It is split across a fair few planets and regions, each with their own puzzles, secrets, bosses and, in the case of some, open-world landscapes.
Favourite elements return, such as crazy, often hilarious weapons, hover boots, rail riding, the weapons shop (now in the shape of Mrs Zurkon, an enemy in the 2016 remake), and plenty more besides – but there are some key differences too. Not least the fact that you play as two Lombaxes this time around.
That’s because, after Dr Nefarious gets his hands on the oft-featured Dimensionator and accidentally opens up huge dimensional rifts, our eponymous heroes are split up. Clank meets Rivet, a female Lombax who is new to the series, while Ratchet eventually hooks up with an alternative robot pal named Kit.
This allows for missions to be split between them all – and provides variety in both gameplay and dialogue. You will often swap characters when choosing which mission to undertake from the navigation screen and likely not return until it is complete – certainly for the first part, anyway.
There are also other-dimensional versions of many recognisable friends and foes, to add extra weight and humour to the story. Certainly, as fans of the series, we loved the references and a few Easter Eggs. However, if you’re new to it, you’ll still get plenty from it – you might even end up seeking older outings elsewhere, such as on PlayStation Now.
Dimensionally speaking
Levels in the game will often require a lot of blasting, but they are reasonably varied. Some are based on massive open areas that can be explored, a la the R&C remake, while others are tighter and largely on rails. One thing that ties them all together is the ability to jump through dimension portals to reach different areas in a zone.
For example, a small rift might appear on an otherwise hard-to-reach platform, so you just focus on it, tap a button, and you’re instantly zipped over to that location. It certainly helps you get around a map during a battle, zipping through portals to keep ahead of enemies.
Traversing different dimensions is also used cleverly, with one level in particular requiring you to hop between an existing, thriving world and a destroyed version of it in another dimension. By jumping between the two, you can get past barriers in one, or solve an otherwise impossible puzzle.
It is here where the PS5 exclusivity becomes obvious. Travelling through rifts or swapping between dimensions is instant – you certainly don’t notice any loading time, even when everything in the landscape has completely changed or you are on a totally new part of the map. Insomniac has previously said that this needs both the SSD and Sony’s clever loading shenanigans to work, and it’s easy to see why.
The DualSense controller is also a necessity for gunplay, as the game uses both haptic feedback and the adaptive triggers as effectively as Returnal (a very different game, but an amazing one – as we said in our review).
Not only do you feel every shot – with the gamepad’s speaker also utilised for some elements – but you also get different shot options on the right trigger. Press it down halfway and you get one weapon mode; pull it harder and the other activates. It takes a little getting used to, but it’s intuitive and immersive once you do.
So, so pretty
The last, obvious reason why this is a PS5-only game lies not in its gameplay but in its look. This is quite simply the most gorgeous next-gen game yet. Easily the best use of high dynamic range (HDR) that we’ve seen.
As with Miles Morales, the developer has provided three graphics modes: Performance, Performance RT, and Fidelity.
The prettiest – Fidelity – runs at 30 frames per second (30fps) but in 4K resolution with HDR, and features ray-tracing, enhanced lighting, additional VFX, and increased scene density. This is the way we preferred to play, even with the lower frame rate. It looks incredible: the different worlds are bursting with detail and creativity, and for us the enhancements are worth the trade-off.
Performance RT keeps some of the options, such as ray-tracing, but drops the resolution and some of the effects in favour of 60fps. Performance mode, meanwhile, offers 60fps at a higher resolution than Performance RT but ditches the ray-tracing entirely.
Whichever you opt for, the game is still a stunner. We played it on a 65-inch OLED telly, where the colours popped vividly off the screen, but we’d expect it to look great whatever your TV or display tech. There are also so many instances of neon lighting in the game – not least cascading from your weapons – that it would even make a great reference test for a new HDR TV.
Sound is superb, too, especially the excellent voice acting. And the use of Sony’s 3D Audio tech is great if you have compatible headphones. The spacing in open-world segments is especially good.
Indeed, our only minor quibbles with the game are that there’s a fair amount of repetition in the bog-standard enemy types and that, as with previous outings, it’s a little short.
Still, there are sub-quests on most of the worlds, and there is a decent enough challenge here, with some bosses that will take you multiple tries to defeat. Besides, once in a while it’s refreshing to have a game that doesn’t take over your life for a month.
Verdict
Ratchet & Clank: Rift Apart is an excellent return for the franchise. It is steeped in invention and it wrings every ounce out of the PlayStation 5’s capabilities.
We’ll no doubt see more complex, even better-looking games over this generation, but considering we’re still relatively early on, this is highly impressive stuff.
Also, don’t be fooled into thinking that, because it looks like a cartoon, this is a kids’ game. Like previous R&C adventures, there’s plenty to enjoy here for young and old, with ample challenge too.
Throwing new playable characters into the mix is also inspired, because it breaks up both the narrative and the gameplay a touch. There’s a fair amount of repetition, which is par for the course, but apart from that, this is top-level stuff.
Let’s just hope it doesn’t take Insomniac eight years and a new console generation to deliver another slice. But then when it’s this good, it’s worth the wait.
Writing by Rik Henderson.