Nvidia announced its Cryptocurrency Mining Processor (CMP) just three weeks ago. Today, VideoCardz shared what appear to be renders of the first custom CMP 30HX, which originates from Gigabyte’s camp.
Although Nvidia hasn’t admitted it, there is enough evidence to suggest that most of its CMP graphics cards are based on the Turing architecture. The chipmaker advertises the 30HX with an Ethereum hash rate of up to 26 MH/s, and the graphics card comes with 6GB of memory and a 125W TDP, and requires only a single 8-pin PCIe power connector. The 30HX’s performance and partial specifications basically describe Nvidia’s GeForce GTX 1660 Super, and the latest Gigabyte CMP 30HX helps confirm the early suspicions.
It’s not hard to see that the Gigabyte CMP 30HX is a close copy of the brand’s GeForce GTX 1660 Super OC 6G. The only apparent difference is that Gigabyte removed the display outputs. The graphics card uses the exact same WindForce 2X cooler with the same pair of 90mm semi-passive cooling fans as its gaming counterpart. It also retains the single 8-pin PCIe power connector.
The CMP 30HX is in all likelihood based on the TU116 silicon. The jury is still out on whether the 30HX will come with the same 1,408 CUDA cores as the GeForce GTX 1660 Super. Mining Ethereum requires a certain level of compute performance after which memory bandwidth becomes the main factor.
According to VideoCardz’s report, Nvidia’s AIBs have already started mass-producing the CMP 30HX and 40HX. The 30HX and 40HX are scheduled to debut in the first quarter of this year, so these cryptomining devices are right around the corner. However, Nvidia left something very important out of its CMP announcement: the pricing.
With the existing graphics card shortage, pricing is crazy right now. For reference, the GeForce GTX 1660 Super launched two years ago for $229. Presently, custom GeForce GTX 1660 Super models start at $599 and scale up to $899. For the CMP 30HX to succeed, Nvidia will have to price the graphics card very attractively. If not, cryptocurrency miners will just stick to the GeForce GTX 1660 Super, since it offers the same level of Ethereum mining performance as the CMP 30HX. The GeForce RTX 3060 was also a very convincing graphics card for mining Ethereum before Nvidia’s nerf.
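One way to frame that pricing question is cost per unit of hash rate. The sketch below uses only the hash rate and street prices quoted above; the CMP 30HX price in it is a made-up placeholder, since Nvidia hasn’t announced one.

```python
# Rough cost-per-hashrate comparison using the figures quoted in this article.

def dollars_per_mhs(price_usd: float, hashrate_mhs: float) -> float:
    """How much each MH/s of Ethereum hash rate costs at a given card price."""
    return price_usd / hashrate_mhs

HASHRATE_MHS = 26  # advertised CMP 30HX rate, roughly matching the GTX 1660 Super

for gtx_price in (599, 899):  # current street prices for custom GTX 1660 Super cards
    print(f"GTX 1660 Super at ${gtx_price}: ${dollars_per_mhs(gtx_price, HASHRATE_MHS):.2f} per MH/s")

cmp_price = 400  # hypothetical placeholder, not an announced price
print(f"CMP 30HX at ${cmp_price} (hypothetical): ${dollars_per_mhs(cmp_price, HASHRATE_MHS):.2f} per MH/s")
```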
Google has detailed the efficiency improvements it made with Chrome 89, the latest version of its browser released earlier this month. Depending on whether you’re using the browser on Windows, macOS, or Android, Google says the browser should use fewer resources, launch more quickly, and feel more responsive. There’s no mention of any improvements specifically for users on iOS.
The exact benefits vary by OS. Across platforms, Google says Chrome is able to reclaim as much as 100MiB (or over 20 percent on some sites) by using foreground tab memory more efficiently, and on macOS it’s saving up to 8 percent of its memory usage based on how it handles background tabs (something which Chrome already does on other platforms). Google says these improvements on macOS have benefited the browser’s Energy Impact score by as much as 65 percent, “keeping your Mac cooler and those fans quiet.”
On Windows and Android, the browser is also using a more advanced memory allocator across more areas to further reduce memory usage, and increase browser responsiveness. On Windows, Google says it’s seeing “significant memory” savings of up to 22 percent in the “browser process,” 8 percent in the renderer, 3 percent in the GPU, and that overall browser responsiveness is improved by up to 9 percent.
There are also a host of improvements specific to Android, which Google says result in 5 percent less memory usage, fewer crashes, 7.5 percent faster startup, and 2 percent faster page loads, with startup improving by as much as 13 percent in some scenarios. High-end Android devices running on Android 10 and newer with at least 8GB of RAM should also load pages 8.5 percent faster, and be 28 percent smoother to use.
Google has made similar promises about previous Chrome releases. For example, it said Chrome 87, released at the end of last year, was “the largest gain in Chrome performance in years.” Under-the-hood performance improvements were said to benefit everything from CPU usage and power efficiency to startup times.
Although I assembled it myself, and its software all comes from an open-source DIY project, in many ways my MiSTer is the most versatile computer I own. It’s a shapeshifting wonderbox that can change its own logic to make itself run like countless other machines as accurately as possible. From old arcade boards to early PCs to vintage consoles, MiSTer developers are devoted to helping it turn into an ever-expanding range of hardware.
If you’ve ever wanted to use computer software or hardware that is no longer available for sale, you’ve probably run into emulation before. It’s a huge field that often involves a ton of people working on a technically challenging feat: how to write software that lets one computer run code that was written for another. But there’s only so much traditional emulators can do. There are always inherent compromises and complexities involved in getting your current hardware to run software it was never designed to handle. Emulated operating systems or video games often encounter slowdown, latency, and bugs you’d never have seen on the original devices. So what if there was a way to alter the hardware itself?
Well, that’s MiSTer. It’s an open-source project built upon field-programmable gate array (FPGA) technology, which means it makes use of hardware that can be reconfigured after the fact. While traditional CPUs are fixed from the point of manufacture, FPGAs can be reprogrammed to work as if they came right off the conveyor belt with the actual silicon you want to use.
What this means is, you’re not tricking a processor into believing it’s something else, you’re setting it up to run that way from the start. A MiSTer system can theoretically run software from the NES to the Neo Geo, to the Apple II or Acorn Archimedes, and deliver responsive, near-as-dammit accurate performance next to what you’d get from the actual devices.
Of course, it’s not as easy as that makes it sound. In order to program an FPGA to act like a computer from three decades ago, you have to intimately understand the original hardware. And that’s what makes MiSTer one of the technically coolest DIY projects going today, building on the knowledge of developers around the globe.
FPGAs aren’t new technology. Two early companies in the field (sorry) were Altera, now owned by Intel, and Xilinx, now part of AMD. The two have competed since the 1980s for market share in programmable logic devices, largely serving enterprise customers. One of the biggest advantages of FPGAs on an industrial scale is that companies can iterate their software design on hardware before they need to manufacture the final silicon. FPGAs are widely used to develop embedded systems, for example, because the software and the hardware can be designed near-concurrently.
You might be familiar with FPGAs if you’ve come across Analogue’s boutique console clones, like the Mega Sg and the Super Nt. Those use FPGAs programmed in a certain way to replicate a single, specific piece of hardware, so you can use your original physical cartridges with them and get an experience that’s very close to the actual consoles.
The MiSTer project is built around more accessible FPGA hardware than you’d find in commercial or enterprise applications. The core of the system is an FPGA board called the DE10-Nano, produced by another Intel-owned company called Terasic that’s based out of Taiwan. It was originally intended for students as a way to teach themselves how to work with FPGAs.
The DE10-Nano looks somewhat similar to a Raspberry Pi — it’s a tiny motherboard that ships without a case and is designed to be expanded. The hardware includes an Altera Cyclone V with two ARM Cortex-A9 CPU cores, 1GB of DDR3 SDRAM, an HDMI out, a microSD card slot, a USB-A port, and Ethernet connectivity. It runs a Linux-based OS out of the box and sells for about $135, or $99 to students.
MiSTer is inspired by MiST, an earlier project that made use of an Altera FPGA board to recreate the Atari ST. But the DE10-Nano is cheaper, more powerful, and expandable, which is why project leader Alexey Melnikov used it as the basis for MiSTer when development started a few years back. Melnikov also designed MiSTer-specific daughterboards that enhance the DE10-Nano’s capability and make a finished machine a lot more versatile; the designs are open-source, so anyone is free to manufacture and sell them.
You can run MiSTer on a single DE10-Nano, but it’s not recommended, because the board alone will only support a few of the cores available. (A “core” is a re-creation of a specific console or computer designed to run on the MiSTer platform.) The one upgrade that should be considered essential is a 128MB stick of SDRAM, which gives MiSTer enough memory at the right speed to run anything released for the platform to date.
Beyond that, you’ll probably want a case, assuming you’d rather not run open circuitry exposed to the elements. There are various case designs available, many of which are intended for use with other MiSTer-specific add-ons that vertically attach to the DE10-Nano. An I/O board isn’t necessary for most cores, for example, but it adds a VGA port along with digital and analog audio out, which is useful for various setups. (A lot of MiSTer users prefer to hook up their systems to CRT TVs to make the most of the authentic output and low latency.) You can add a heatsink or a fan, which can be a good idea if you want to run the system for extended periods of time. And there’s a USB hub board that adds seven USB-A ports.
For my setup, I ordered the DE10-Nano, a 128MB SDRAM stick, a VGA I/O board with a fan, a USB hub board, and a case designed for that precise selection of hardware. These largely came from different sources and took varying amounts of time to show up; you can order the DE10-Nano from countless computer retailers, but other MiSTer accessories involve diving into a cottage industry of redesigns and resellers. Half of my parts arrived in a battered box from Portugal filled with shredded paper and loosely attached bubble wrap.
MiSTer accessories are based on Melnikov’s original designs, but since the project is open-source, many sellers customize their own versions. My case, for example, includes a patch cable that hooks directly into the IO board to control its lighting, while some others require you to route the LEDs yourself. The USB board, meanwhile, came with a bridge to the DE10-Nano that seemed to be a different height from most others, which meant I had to improvise a little with screw placements. Nothing I ordered came with instructions, so it did take some time to figure out what should go where, but everything worked fine in the end. The only other thing I had to do was go buy a small hex screwdriver for the final screws in the case.
That’s part of the fun with MiSTer. There’s a base specification that everything works around, but you’re still ultimately assembling your own FPGA computer, and you can adjust the build as much or as little as you want.
Once your hardware is set, you need to install the MiSTer software. There are a few ways to do this, and you’ll want to dig around forums and GitHub for a while so you know what you’re doing, but the method I went with was simple in the end — essentially, you format your microSD card with an installer package, put it into the DE10-Nano, plug in an Ethernet cable and a USB keyboard, power on the system, and it’ll download all of the available cores. Your SD card will then be set up to boot the MiSTer OS directly, and you can run another script to make sure everything’s updated with the most recent versions.
The MiSTer OS is very simple, with a default background that looks like pixelated TV static and a basic menu in a monospaced font that lets you select from lists of console and computer cores. The first thing I did was load some old Game Boy Advance ROMs I dumped well over a decade ago, because for some reason Nintendo doesn’t want to sell them for the Switch. (Please sell them for the Switch, Nintendo.) The performance felt about as authentic as I could’ve expected, except for the fact that I was looking at a 4K TV instead of a tiny screen.
My main reason for getting into MiSTer is to have a hardware-based way to access the parts of computer history that I missed, or to revisit forgotten platforms that I was around for. I knew that computer systems like the Apple II and the Amiga were big gaps in my knowledge, so it’s great to have a little box that can run like either of them on command. I’ve also been getting into the MSX platform, which was popular in Japan in the ’80s. My next rainy-day project is to work on an install of RISC OS, the Acorn operating system that was on the first computers I ever used at school in the UK. (You can actually still buy licensed ROM copies of various versions of the OS, which was a neat surprise.)
MiSTer development is a vibrant scene. Melnikov has a Patreon that’s updated several times a week with improvements he’s made to various cores, but there are lots of other people contributing to the project on a daily or weekly basis. A colleague introduced me to the work of Jose Tejada, for example, who’s based in Spain and has made a ton of progress on replicating old Capcom arcade machine boards. There’s another project aiming to get the original PlayStation running, marking the biggest step yet into 3D hardware on MiSTer.
FPGAs are often talked about as if they’re a silver bullet for perfect emulation, but that’s really not the case — at least, not without a lot of effort. Anything that runs perfectly on MiSTer, or as close to perfectly as is otherwise imperceptible, is the result of a ton of work by talented programmers who have spent time figuring out the original hardware and applying the knowledge to their cores. Just read this post from the FPGA PSX Project about what it took to get Ridge Racer running on MiSTer, as well as the assessment of how far they have to go. The cores can vary in quality, accuracy, and state of completion, but a lot of them are still under active development and huge strides have been made in the past couple of years.
Analogue lead hardware engineer Kevin Horton spoke to The Verge in 2019 about the work that went into re-creating the Sega Genesis for the Mega Sg console. The process took him nine months, including two-and-a-half months figuring out the CPU at the heart of the console. “I didn’t know Genesis very well, and knew literally nothing about the 68000 CPU at all!” he said. “This was my first foray into both things and probably slowed the process down since I had to learn it all as I went.”
Ultimately, Horton confirmed the accuracy of his work by directly connecting a 68000 to an FPGA and comparing their performance on a test that ran for a week straight. It demonstrates the lengths that FPGA enthusiasts go to in pursuit of the most accurate results possible, but what makes MiSTer special is that this is largely the work of hobbyists. No one’s paying anyone a salary to make incremental tweaks to the performance of the arcade version of Bionic Commando, but that’s where Tejada has directed his passion.
MiSTer is an important project because it speaks to the concept of preservation in a way that all too often goes underserved by the technology industry. The project makes the argument that the way we run software is as big a part of our experience as its content. Yes, you can port or emulate or re-release software to run on modern hardware, but there’s always going to be a compromise in the underlying code that moves the pixels in front of your eyes.
Of course, that might sound like a pretty niche concern for anyone who’s satisfied with, say, the emulated software you can run in a browser at Archive.org. I’m often one of those people myself — emulation can be great, and it’s hard to beat the convenience. But the MiSTer project is an incredible effort all the same. I’ll never have a shred of the technical knowledge possessed by MiSTer developers, but I’m grateful for their effort. Once you build your own system, it’s hard not to feel invested in the work that goes into it; MiSTer is a never-ending pursuit of perfection, and there’s something beautiful about that.
In a recent filing with the EEC (Eurasian Economic Commission), ASUS has revealed several new model numbers relating to the rumored RX 6700. In all of these model numbers, the amount of VRAM listed for the 6700 was 12GB, indicating that the RX 6700 could come with 12GB of VRAM like the RX 6700 XT, the RTX 3060 (desktop), and the rumored RX 6600 XT.
Before you get your hopes up, EEC filings are very unreliable. We’ve seen EEC listings turn out to be false on a number of occasions. So take this data as a rumor and nothing more.
This news comes just a month after we saw EEC filings from two AIB partners showcasing 6GB configurations for the RX 6700. What we’re most likely seeing here are two VRAM configurations for the RX 6700, which wouldn’t be surprising for a new mid-range part from AMD.
AMD has been selling its budget and mid-range cards with multiple memory configurations for years at this point (the RX 480 and RX 580 come to mind) to help keep its SKUs more relevant to more buyers. Having both a smaller and a larger VRAM capacity on a specific SKU can change its price quite drastically, especially when we’re talking about lower-end cards.
What we don’t know is how large of a price gap will exist between the 6GB model and the 12GB model, assuming two VRAM configurations come out. But at a time when graphics cards are scarcely in stock, it’s doubtful pricing will matter much. These cards will fly off store shelves as fast as retailers can re-supply them.
AMD says it plans to have much more stock of its recently announced RX 6700 XT than of any other RX 6000-series graphics card to date. So hopefully, this will also be the case with the RX 6700. If anything, having multiple memory configurations for the 6700 should help AMD build more graphics cards, despite the current shortage of GDDR6 memory and other materials.
You know the graphics card market is in a bad place when vendors resort to rereleasing five-year-old graphics cards. Kuroutoshikou, a Japanese vendor, has announced that its GeForce GTX 1050 Ti (GF-GTX1050Ti-E4GB/SF/P2) will hit the domestic market in mid-March.
In reality, the GF-GTX1050Ti-E4GB/SF/P2 is a rebranded version of Palit’s GeForce GTX 1050 Ti StormX. Based on the GP107 (Pascal) silicon, the graphics card is equipped with 768 CUDA cores with a 1,392 MHz boost clock and 4GB of 7 Gbps GDDR5 memory. The GeForce GTX 1050 Ti is rated for 75W so it doesn’t require any external PCIe power connectors, making it a good plug-n-play option for entry-level gamers, even though it is no longer among the best graphics cards.
The GeForce GTX 1050 Ti’s revival isn’t a coincidence, though. It was Nvidia itself that decided to replenish its partners with Pascal GPUs in the middle of the ongoing graphics card crisis. Nvidia’s actions also paved the way for other vendors to get rid of their old Pascal stock, including Palit, which might launch new specialized GeForce GTX 1060 models for cryptocurrency mining.
We’ve already started seeing more GeForce GTX 1050 Ti availability here in the U.S. Sadly, the pricing leaves much to be desired. While Kuroutoshikou’s GeForce GTX 1050 Ti will arrive in Japan with a price tag of ¥20,727 (~$190.97), custom models in the U.S. market currently retail between $330 and $600. That’s pretty insane since the GeForce GTX 1050 Ti has five years under its belt now and originally launched at $139.
With how ridiculous pricing is right now and the graphics card shortage, picking up a pre-built PC, especially one of the best gaming PCs, suddenly doesn’t sound like a bad idea anymore.
After almost a decade of total market dominance, Intel has spent the past few years on the defensive. AMD’s Ryzen processors continue to show improvement year over year, with the most recent Ryzen 5000 series taking the crown of best gaming processor: Intel’s last bastion of superiority.
Now, with a booming hardware market, Intel is preparing to make up some of that lost ground with the new 11th Gen Intel Core Processors. Intel is claiming these new 11th Gen CPUs offer double-digit IPC improvements despite remaining on a 14 nm process. The top-end 8-core Intel Core i9-11900K may not be able to compete against its AMD rival, the Ryzen 9 5900X, in heavily multi-threaded scenarios, but the higher clock speeds and alleged IPC improvements could be enough to take back the gaming crown. Along with the new CPUs, there is a new chipset to match: the Intel Z590. Last year’s Z490 chipset motherboards are also compatible with the new 11th Gen Intel Core Processors, but Z590 introduces some key advantages.
First, Z590 offers native PCIe 4.0 support from the CPU, which means the PCIe and M.2 slots powered off the CPU will offer PCIe 4.0 connectivity when an 11th Gen CPU is installed. The PCIe and M.2 slots controlled by the Z590 chipset are still PCIe 3.0. While many high-end Z490 motherboards advertised this capability, it was not a standard feature for the platform. In addition to PCIe 4.0 support, Z590 offers USB 3.2 Gen 2×2 from the chipset. The USB 3.2 Gen 2×2 standard offers speeds of up to 20 Gb/s. Finally, Z590 boasts native support for 3200 MHz DDR4 memory. With these upgrades, Intel’s Z-series platform has feature parity with AMD’s B550. On paper, Intel is catching up to AMD, but only testing will tell if these new Z590 motherboards are up to the challenge.
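For a rough sense of what those interface figures mean in practice, here is a small sketch of the theoretical bandwidth ceilings involved. It accounts only for line encoding (128b/130b for PCIe 3.0/4.0, 128b/132b for USB 3.2 Gen 2×2), not packet or protocol overhead, so real-world throughput is lower.

```python
# Back-of-the-envelope bandwidth ceilings for the interfaces discussed above.

def pcie_gb_s(transfer_rate_gt_s: float, lanes: int) -> float:
    """One-direction PCIe bandwidth in GB/s, accounting for 128b/130b encoding."""
    return transfer_rate_gt_s * lanes * (128 / 130) / 8

pcie3_x16 = pcie_gb_s(8.0, 16)          # ~15.8 GB/s
pcie4_x16 = pcie_gb_s(16.0, 16)         # ~31.5 GB/s
usb_gen2x2 = 20 * (128 / 132) / 8       # ~2.4 GB/s (20 Gb/s link, 128b/132b encoding)

print(f"PCIe 3.0 x16: {pcie3_x16:.1f} GB/s")
print(f"PCIe 4.0 x16: {pcie4_x16:.1f} GB/s")
print(f"USB 3.2 Gen 2x2: {usb_gen2x2:.1f} GB/s")
```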
The ASRock Z590 Steel Legend WiFi 6E aims to be a durable, dependable platform for the mainstream market. The ASRock Z590 Steel Legend WiFi 6E features a respectable 14-phase VRM that takes advantage of 50 A power stages from Vishay. Additionally, ASRock has included a 2.5 Gb/s LAN controller from Realtek as well as the latest WiFi 6 connectivity. The ASRock Z590 Steel Legend WiFi 6E has all the mainstream features most users need packaged in at a reasonable price. All that is left is to see how the ASRock Z590 Steel Legend WiFi 6E stacks up against the competition!
2x Antenna Ports
1x PS/2 Mouse/Keyboard Port
1x HDMI Port
1x DisplayPort 1.4
1x Optical SPDIF Out Port
1x USB 3.2 Gen2 Type-A Port
1x USB 3.2 Gen2 Type-C Port
2x USB 3.2 Gen1 Ports
2x USB 2.0 Ports
1x RJ-45 LAN Port
5x HD Audio Jacks
Audio:
1x Realtek ALC897 Codec
Fan Headers:
7x 4-pin
Form Factor:
ATX Form Factor: 12.0 x 9.6 in.; 30.5 x 24.4 cm
Exclusive Features:
ASRock Super Alloy
XXL Aluminium Alloy Heatsink
Premium Power Choke
50A Dr.MOS
Nichicon 12K Black Caps
I/O Armor
Shaped PCB Design
Matte Black PCB
High Density Glass Fabric PCB
2oz copper PCB
2.5G LAN
Intel® 802.11ax Wi-Fi 6E
ASRock Steel Slot
ASRock Full Coverage M.2 Heatsink
ASRock Hyper M.2 (PCIe Gen4x4)
ASRock Ultra USB Power
ASRock Full Spike Protection
ASRock Live Update & APP Shop
Testing for this review was conducted using a 10th Gen Intel Core i9-10900K. Stay tuned for an 11th Gen update when the new processors launch!
TechPowerUp is one of the most highly cited graphics card review sources on the web, and we strive to keep our testing methods, game selection, and, most importantly, test bench up to date. Today, I am pleased to announce our newest March 2021 VGA test system, which marks several firsts for TechPowerUp. This is our first graphics card test bed powered by an AMD CPU. We are using the Ryzen 7 5800X 8-core processor based on the “Zen 3” architecture. The new test setup fully supports the PCI-Express 4.0 x16 bus interface to maximize performance of the latest generation of graphics cards from both NVIDIA and AMD. The platform also enables Resizable BAR, a PCI-SIG feature that allows the processor to see the whole video memory as a single addressable block, which could potentially improve performance.
A new test system means completely re-testing every single graphics card used in our performance graphs. It allows us to kick out some of the older graphics cards and game tests to make room for newer cards and games. It also lets us refresh our OS and testing tools, update games to their latest versions, and explore new game settings, such as real-time raytracing and newer APIs.
A VGA rebench is a monumental task for TechPowerUp. This time, I’m testing 26 graphics cards in 22 games at 3 resolutions, or 66 game tests per card, which works out to 1,716 benchmark runs in total. In addition, we have doubled our raytracing testing from two to four titles. We also made some changes to our power consumption testing, which is now more detailed and more in-depth than ever.
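For the curious, the totals above are simple multiplication; a quick sanity check using only the numbers from the paragraph above:

```python
# Sanity check of the benchmark totals quoted above.
cards = 26
games = 22
resolutions = 3

tests_per_card = games * resolutions   # 66 game tests per card
total_runs = cards * tests_per_card    # 1,716 benchmark runs in total

print(tests_per_card, total_runs)      # -> 66 1716
```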
In this article, I’ll share some thoughts on what was changed and why, while giving you a first look at the performance numbers obtained on the new test system.
Hardware
Below are the hardware specifications of the new March 2021 VGA test system.
Windows 10 Professional 64-bit Version 20H2 (October 2020 Update)
Drivers:
AMD: 21.2.3 Beta NVIDIA: 461.72 WHQL
The AMD Ryzen 7 5800X has emerged as the fastest processor we can recommend to gamers for play at any resolution. We could have gone with the 12-core Ryzen 9 5900X or even maxed out this platform with the 16-core 5950X, but neither would be faster at gaming, and both would be significantly more expensive. AMD certainly wants to sell you the more expensive (overpriced?) CPU, but the Ryzen 7 5800X is actually the fastest option because of its single CCD architecture. Our goal with GPU test systems over the past decade has consistently been to use the fastest mainstream-desktop processor. Over the years, this meant a $300-something Core i7 K-series LGA115x chip making room for the $500 i9-9900K. The 5900X doesn’t sell for anywhere close to this mark, and we’d rather not use an overpriced processor just because we can. You’ll also notice that we skipped upgrading to the 10-core “Comet Lake” Core i9-10900K processor from the older i9-9900K because we saw only negligible gaming performance gains, especially considering the large overclock on the i9-9900K. The additional two cores do squat for nearly all gaming situations, which is the second reason besides pricing that had us decide against the Ryzen 9 5900X.
We continue using our trusted Thermaltake TOUGHRAM 16 GB dual-channel memory kit that has served us well for many years. 32 GB isn’t anywhere close to necessary for gaming, and I didn’t want to hint otherwise, especially to less experienced readers checking out the test system. We’re running the most desirable memory configuration for Zen 3 to reduce latencies inside the processor: Infinity Fabric at 2000 MHz, memory clocked at DDR4-4000, in 1:1 sync with the Infinity Fabric clock. Timings are at a standard CL19 configuration that’s easily found on affordable memory modules—spending extra for super-tight timings is usually overkill and not worth it for the small added performance.
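As a small illustration of that 1:1 relationship (a sketch, not part of our test procedure): DDR4 transfers data twice per clock, so a DDR4-4000 kit runs a 2000 MHz real memory clock, which is what the Infinity Fabric clock needs to match.

```python
# Minimal sketch of the 1:1 Infinity Fabric / memory clock relationship on Zen 3.
# DDR4 is double data rate, so the real memory clock is half the DDR rating.

def runs_one_to_one(ddr_rating: int, fclk_mhz: int) -> bool:
    mclk_mhz = ddr_rating // 2          # e.g. DDR4-4000 -> 2000 MHz memory clock
    return mclk_mhz == fclk_mhz

print(runs_one_to_one(4000, 2000))   # True  -> the configuration used on this test bench
print(runs_one_to_one(3600, 1800))   # True  -> a common, cheaper sweet spot
print(runs_one_to_one(4000, 1800))   # False -> desynchronized clocks add latency
```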
The MSI B550-A PRO was an easy choice for a motherboard. We wanted a cost-effective motherboard for the Ryzen 7 5800X and don’t care at all about RGB or other bling. The board can handle the CPU and memory settings we wanted for this test bed, and the VRM barely gets warm. It also doesn’t come with any PCIe gymnastics—a simple PCI-Express 4.0 x16 slot wired to the CPU without any lane switches along the way. The slot is metal-reinforced and looks like it can take quite some abuse over time. Even though I admittedly swap cards hundreds of times each year, probably even 1000+ times, it has never been an issue—the insertion force just gets a bit softer, which I actually find nice.
Software and Games
Windows 10 was updated to 20H2
The AMD graphics driver used for all testing is now 21.2.3 Beta
All NVIDIA cards use 461.72 WHQL
All existing games have been updated to their latest available version
The following titles were removed:
Anno 1800: old, not that popular, CPU limited
Assassin’s Creed Odyssey: old, DX11, replaced by Assassin’s Creed Valhalla
Hitman 2: old, replaced by Hitman 3
Project Cars 3: not very popular, DX11
Star Wars: Jedi Fallen Order: horrible EA Denuvo makes hardware changes a major pain, DX11 only, Unreal Engine 4, of which we have several other titles
Strange Brigade: old, not popular at all
The following titles were added:
Assassin’s Creed Valhalla
Cyberpunk 2077
Hitman 3
Star Wars Squadrons
Watch Dogs: Legion
I considered Horizon Zero Dawn, but rejected it because it uses the same game engine as Death Stranding. World of Warcraft and Call of Duty won’t be tested because of their always-online nature, which enforces game patches that mess with performance—at any time. Godfall is a bad game, an Epic exclusive, and a commercial flop.
The full list of games now consists of Assassin’s Creed Valhalla, Battlefield V, Borderlands 3, Civilization VI, Control, Cyberpunk 2077, Death Stranding, Detroit Become Human, Devil May Cry 5, Divinity Original Sin 2, DOOM Eternal, F1 2020, Far Cry 5, Gears 5, Hitman 3, Metro Exodus, Red Dead Redemption 2, Sekiro, Shadow of the Tomb Raider, Star Wars Squadrons, The Witcher 3, and Watch Dogs: Legion.
Raytracing
We previously tested raytracing using Metro Exodus and Control. For this round of retesting, I added Cyberpunk 2077 and Watch Dogs: Legion. While Cyberpunk 2077 does not support raytracing on AMD, I still felt it’s one of the most important titles to test raytracing with.
While Godfall and DIRT 5 support raytracing, too, neither has had sufficient commercial success to warrant inclusion in the test suite.
Power Consumption Testing
The power consumption testing changes have been live for a couple of reviews already, but I still wanted to detail them a bit more in this article.
After our first Big Navi reviews I realized that something was odd about the power consumption testing method I’ve been using for years without issue. It seemed the Radeon RX 6800 XT was just SO much more energy efficient than NVIDIA’s RTX 3080. It definitely is more efficient because of the 7 nm process and AMD’s monumental improvements in the architecture, but the lead just didn’t look right. After further investigation, I realized that the RX 6800 XT was getting CPU bottlenecked in Metro: Last Light at even the higher resolutions, whereas the NVIDIA card ran without a bottleneck. This of course meant NVIDIA’s card consumed more power in this test because it could run faster.
The problem here is that I used the power consumption numbers from Metro for the “Performance per Watt” results under the assumption that the test loaded the card to the max. The underlying reason for the discrepancy is AMD’s higher DirectX 11 overhead, which only manifested itself enough to make a difference once AMD actually had cards able to compete in the high-end segment.
While our previous physical measurement setup was better than what most other reviewers use, I always wanted something with a higher sampling rate, better data recording, and a more flexible analysis pipeline. Previously, we recorded at 12 samples per second, but could only store minimum, maximum, and average. Starting and stopping the measurement process was a manual operation, too.
The new data acquisition system also uses professional lab equipment and collects data at 40 samples per second, which is four times faster than even NVIDIA’s PCAT. Every single data point is recorded digitally and stashed away for analysis. Just like before, all our graphics card power measurement is “card only”, not the “whole system” or “GPU chip only” (the number displayed in the AMD Radeon Settings control panel).
Having all data recorded means we can finally chart power consumption over time, which makes for a nice overview. Below is an example data set for the RTX 3080.
The “Performance per Watt” chart has been simplified to “Energy Efficiency” and is now based on the actual power and FPS achieved during our “Gaming” power consumption testing run (Cyberpunk 2077 at 1440p, see below).
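For clarity, here is a minimal sketch of how such an efficiency figure can be derived—average FPS divided by average board power from the same run. The numbers below are placeholders, not measured results from our charts.

```python
# Toy example of an FPS-per-watt efficiency figure, using made-up numbers.

def energy_efficiency(avg_fps: float, avg_board_power_w: float) -> float:
    """Frames rendered per watt of board power."""
    return avg_fps / avg_board_power_w

print(f"Card A: {energy_efficiency(75.0, 320.0):.3f} FPS/W")  # hypothetical card A
print(f"Card B: {energy_efficiency(70.0, 250.0):.3f} FPS/W")  # hypothetical card B
```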
The individual power tests have also been refined:
“Idle” testing is now measured at 1440p, whereas it used 1080p previously. This follows the increasing adoption of high-resolution monitors.
“Multi-monitor” is now 2560×1440 over DP + 1920×1080 over HDMI—to test how well power management works with mixed resolutions over mixed outputs.
“Video Playback” records power usage of a 4K30 FPS video that’s encoded with H.264 AVC at 64 Mbps bitrate—similar enough to most streaming services. I considered using something like madVR to further improve video quality, but rejected it because I felt it to be too niche.
“Gaming” power consumption is now using Cyberpunk 2077 at 1440p with Ultra settings—this definitely won’t be CPU bottlenecked. Raytracing is off, and we made sure to heat up the card properly before taking data. This is very important for all GPU benchmarking—in the first seconds, you will get unrealistic boost rates, and the lower temperature has the silicon operating at higher efficiency, which screws with the power consumption numbers.
“Maximum” uses Furmark at 1080p, which pushes all cards into its power limiter—another important data point.
Somewhat as a bonus (I really wasn’t sure how useful it would be), I added another run of Cyberpunk at 1080p, capped to 60 FPS, to simulate a “V-Sync” usage scenario. Running at V-Sync not only removes tearing, but also reduces the power consumption of the graphics card, which is perfect for slower single-player titles where you don’t need the highest FPS and would rather conserve some energy and have less heat dumped into your room. Just to clarify, we’re technically running a 60 FPS soft cap so that weaker cards that can’t hit 60 FPS (GTX 1650S and GTX 1660) won’t get locked to 30 or 20 FPS as true V-Sync would force, but instead run as fast as they can.
Last but not least, a “Spikes” measurement was added, which reports the highest 20 ms spike recorded in this whole test sequence. This spike usually appears at the start of Furmark, before the card’s power limiting circuitry can react to the new conditions. On RX 6900 XT, I measured well above 600 W, which can trigger the protections of certain power supplies, resulting in the machine suddenly turning off. This happened to me several times with a different PSU than the Seasonic, so it’s not a theoretical test.
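To illustrate the kind of post-processing this enables—without claiming this is our exact analysis pipeline—here is a toy sketch that reduces a recorded card-only power trace to an average and a highest-spike figure. The sample values are invented.

```python
# Toy reduction of a recorded power trace (card-only watts, sampled at 40 Hz).
# Invented numbers; a real trace contains thousands of points per test run.

samples_w = [212.4, 218.9, 305.1, 641.7, 233.0, 228.6]

average_w = sum(samples_w) / len(samples_w)
spike_w = max(samples_w)                  # highest short-duration reading

print(f"average: {average_w:.1f} W, highest spike: {spike_w:.1f} W")
# Spikes well above 600 W are what can trip the protections of some PSUs.
```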
Radeon VII Fail
Since we’re running with Resizable BAR enabled, we also have to boot with UEFI instead of CSM. When it was time to retest the Radeon VII, I got no POST, and it seemed the card was dead. Since there’s plenty of drama around Radeon VII cards suddenly dying, I already started looking for a replacement, but wanted to give it another chance in another machine, which had it working perfectly fine. WTF?
After some googling, I found our article detailing the lack of UEFI support on the Radeon VII. So that was the problem: the card simply didn’t have the BIOS update AMD released after our article. Well, FML, the page with the BIOS update no longer exists on AMD’s website.
Really? Someone on their web team made the decision to just delete the pages that contain an important fix to get the product working, a product that’s not even two years old? (It launched Feb 7, 2019; the page was removed no later than Nov 8, 2020.)
Luckily, I found the updated BIOS in our VGA BIOS collection, and the card is working perfectly now.
Performance results are on the next page. If you have more questions, please do let us know in the comments section of this article.
The Galaxy A52 5G will likely be announced next week at Samsung’s second Unpacked event of the year, but thanks to an early unboxing video spotted by GSMArena, we won’t have to wait that long to see it in action.
The upcoming midrange device has been subject to plenty of leaks over the past few weeks, and the video from YouTuber Moboaesthetics seems to confirm many of them. The device in the video matches up with previously leaked renders and appears to be fully functional.
Specs featured in the lengthy video include a 64-megapixel main camera with OIS, a 120Hz display, IP67 dust and water resistance, an under-screen fingerprint sensor, and a 4,500mAh battery. Unlike the S21 series, there’s a microSD card slot for memory expansion.
The A52 5G and also-rumored A72 5G are both expected to be accompanied by cheaper non-5G variants with other slightly downgraded specs, like a 90Hz screen rather than 120Hz. All told, they’re shaping up to look like highly competitive midrange phones.
Samsung has not confirmed exactly what it’s announcing at next week’s event, but given the appearance of this video and the recent leaks, it seems certain that the A52 5G will be making its debut soon.
Kingston Digital is the flash memory business unit of Kingston Technology Company, Inc., and has been the source of several retail products we have covered in the past, including internal NVMe SSDs and encrypted USB drives. What we do not always recognize is that a lot of flash memory sales come in the form of eMMC and memory cards, with the latter having become invaluable for content creators, as well as portable storage to carry around or use in mobile devices. Today, we take a look at a brand-new device from Kingston Digital that aims to streamline the workflow of content creators, and it is quite aptly named. Thanks again to the company for sending a review sample to TechPowerUp!
The Kingston Workflow Station is a hub that is part of a new family of products from the company. It includes a base station with four receptacles that can be occupied by different reader hubs, including USB (Type-A and Type-C), full-size SD cards, and microSD cards. The station comes with the USB reader hub—the others are optional extras. Kingston sent along the whole package, so we will take a look at everything, but begin with the specifications for these products in the tables below.
Specifications
Kingston Workflow Station Dock and USB miniHub
Interface:
Dock: USB 3.2 Gen 2; USB miniHub: USB 3.2 Gen 1
Connector:
USB Type-C for both
Supported USB Inputs:
USB miniHub: USB Type-A, USB Type-C
Dimensions:
Dock: 160.27 x 70.27 x 55.77 mm; USB miniHub: 62.87 x 16.87 x 50 mm
Weight:
Dock: 292 g; USB miniHub: 30 g
Operating Temperature:
0–60 °C
Storage Temperature:
-25–85 °C
Compatible OS:
Windows 10, 8.1, 8, Mac OS (v.10.10.x +)
Warranty:
Two years with free technical support
Kingston Workflow SD Reader
Interface:
USB 3.2 Gen 1
Connector:
USB Type-C
Supported Cards:
Supports UHS-II SD cards, backwards compatible with UHS-I SD cards
Dimensions:
62.87 x 16.87 x 50 mm
Weight:
31 g
Operating Temperature:
0–60 °C
Storage Temperature:
-25–85 °C
Compatible OS:
Windows 10, 8.1, 8, Mac OS (v.10.10.x +)
Warranty:
Two years with free technical support
Kingston Workflow microSD Reader
Interface:
USB 3.2 Gen 1
Connector:
USB Type-C
Supported Cards:
Supports UHS-II microSD cards, backwards compatible with UHS-I microSD cards
Not satisfied with the temperatures of the memory on his GeForce RTX 3090 Founders Edition, a determined YouTuber (CryptoAtHome) replaced the factory thermal pads with aftermarket ones. The results are impressive: he managed to improve the temperatures by up to 25 degrees Celsius — even while mining Ethereum.
Even though the GeForce RTX 3080 and GeForce RTX 3090 are two of the best graphics cards, their memory chips are notorious for running a bit hot if you stress the GPU long enough. Evidently, heat has been a problem from the beginning. An early investigation into the GeForce RTX 3080 already showed the memory hitting dangerous temperatures that surpassed 100C. In our own tests, the memory inside the GeForce RTX 3080 and RTX 3090 peaked at temperatures of 94 degrees Celsius and 104 degrees Celsius, respectively.
Micron rates its GDDR6X chips for operational temperatures up to 95C. Running the memory out of spec for prolonged periods is a recipe for disaster. Cryptocurrency mining takes an even bigger toll on the graphics card and was probably the primary motivation for the YouTuber to swap the thermal pads and improve the card’s thermals.
Before surgery, the YouTuber’s GeForce RTX 3090 Founders Edition was pushing a hash rate of up to 82 MH/s mining Ethereum. That performance is a bit underwhelming, since the GeForce RTX 3090 can easily reach 100 MH/s, and aftermarket models with better GDDR6X cooling can push hash rates up to 125 MH/s. Even though the YouTuber dropped the memory speed to 18 Gbps and cranked the fan speed up to 88%, his GeForce RTX 3090 Founders Edition’s memory was still hitting 110 degrees Celsius.
The YouTuber replaced the factory thermal pads with Thermalright’s Odyssey Thermal Pad 85x45x1.5mm. Admittedly, the thermal pads aren’t the best aftermarket offering that money can buy, as their thermal conductivity rating is only 12.8 W/mk. However, they appear to have done wonders for the memory chips inside the GeForce RTX 3090 Founders Edition.
After replacing the thermal pads, the YouTuber was able to restore the memory speed to 10,577 MHz (21.15 Gbps) and lower the fan speed to 70% while pumping out 100 MH/s. The graphics card’s memory hovered in the 84C–86C range during an entire day of cryptocurrency mining.
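To put those memory clocks into perspective, a rough bandwidth estimate is easy to sketch. The RTX 3090 uses a 384-bit memory bus; multiplying that by the per-pin data rate gives the theoretical throughput (a back-of-the-envelope figure that ignores real-world efficiency).

```python
# Rough GDDR6X bandwidth implied by the clocks mentioned above (RTX 3090, 384-bit bus).

BUS_WIDTH_BITS = 384

def bandwidth_gb_s(data_rate_gbps: float, bus_width_bits: int = BUS_WIDTH_BITS) -> float:
    """Theoretical memory bandwidth in GB/s for a given per-pin data rate."""
    return data_rate_gbps * bus_width_bits / 8

print(f"{bandwidth_gb_s(21.15):.0f} GB/s at 21.15 Gbps")  # ~1015 GB/s, restored speed
print(f"{bandwidth_gb_s(18.00):.0f} GB/s at 18 Gbps")     # ~864 GB/s, the downclocked run
```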
The Thermalright Odyssey Thermal Pad 85x45x1.5mm retails for $14.99 apiece on Amazon. Although the YouTuber bought four of them, he only needed three to replace all the thermal pads on the GeForce RTX 3090 Founders Edition. It’s a pretty good investment no matter which way you look at it. For roughly $45 worth of pads, one could shave as much as 25C off the memory’s operating temperatures.
The memory’s thermals shouldn’t be as big of a concern if you’re not into cryptocurrency mining. We can’t generalize, but we expect the majority of custom GeForce RTX 3080 and RTX 3090 graphics cards on the market to come with better memory cooling solutions than Nvidia’s wacky Founders Edition design. If you suspect that your GeForce RTX 3080 or RTX 3090 is suffering from thermal throttling, the latest version of HWiNFO64 now shows the temperature of the GDDR6X memory chips.
If you aren’t looking for the best of the best like Samsung’s 980 Pro but still want solid performance for large files or graphics-heavy games at a more affordable price point, Samsung’s 980 is worth your consideration.
For
Competitive performance
Attractive design
AES 256-bit hardware encryption
Software package
980 Pro-like endurance and 5-year warranty
Against
Slow write speeds after the SLC cache fills
Features and Specifications
Samsung’s SSDs are widely regarded as among the most reliable and best-performing on the market, and today the company hopes to extend that reputation with the introduction of the 980 NVMe SSD. Samsung’s 980 is designed for everyday PC users and gamers, although with performance ratings of up to six times that of a standard SATA SSD, it could appeal to lower-budget content creators, too.
Samsung’s 980 also stands out with much more affordable pricing than the 980 Pro and 970 Evo Plus, a benefit born of its DRAMless design, which the company claims makes it the highest-performing DRAMless SSD on the market. Powered by the company’s V6 V-NAND and an efficient DRAMless controller that debuted in the Portable SSD T7 Touch, this mix of hardware promises fast PCIe Gen3 performance and respectable endurance ratings.
Specifications
Product | 980 250GB | 980 500GB | 980 1TB
Pricing | $49.99 | $69.99 | $129.99
Capacity (User / Raw) | 250GB / 256GB | 500GB / 512GB | 1000GB / 1024GB
Form Factor | M.2 2280 | M.2 2280 | M.2 2280
Interface / Protocol | PCIe 3.0 x4 / NVMe 1.4 | PCIe 3.0 x4 / NVMe 1.4 | PCIe 3.0 x4 / NVMe 1.4
Controller | Samsung Pablo | Samsung Pablo | Samsung Pablo
DRAM | DRAMless / HMB | DRAMless / HMB | DRAMless / HMB
Memory | Samsung 128L V-NAND TLC | Samsung 128L V-NAND TLC | Samsung 128L V-NAND TLC
Sequential Read | 2,900 MBps | 3,100 MBps | 3,500 MBps
Sequential Write | 1,300 MBps | 2,600 MBps | 3,000 MBps
Random Read (QD1) | 17,000 IOPS | 17,000 IOPS | 17,000 IOPS
Random Write (QD1) | 53,000 IOPS | 54,000 IOPS | 54,000 IOPS
Random Read | 230,000 IOPS | 400,000 IOPS | 500,000 IOPS
Random Write | 320,000 IOPS | 470,000 IOPS | 480,000 IOPS
Security | AES 256-bit encryption | AES 256-bit encryption | AES 256-bit encryption
Endurance (TBW) | 150 TB | 300 TB | 600 TB
Part Number | MZ-V8V250BW | MZ-V8V500BW | MZ-V8V1T0BW
Warranty | 5 Years | 5 Years | 5 Years
Samsung targets the 980 at lower price points with capacities that include 250GB, 500GB, and 1TB models. With prices of $50, $70, and $130, the 980 hits the market with affordable price points at each capacity. That stands in contrast to the rest of the company’s SSD families, many of which span from 2TB to 8TB and cost hundreds of dollars.
Samsung’s 980 comes with TurboWrite 2.0, meaning it features a massive SLC buffer that’s larger than the cache on the 970 Evo and 970 Evo Plus. This speedy buffer absorbs data at a faster rate before write speeds degrade as the workload spills into the native TLC flash. However, the 980’s TurboWrite 2.0 implementation is a bit different than the 980 Pro’s; instead of a hybrid arrangement with both static and dynamic SLC caches, the 980 comes with just a dynamic SLC cache. This enables more cache capacity for the 500GB and 1TB models than the 980 Pro offers, but due to the 980’s lower-end SSD controller, the SSD isn’t quite as fast once the cache is full.
Capacity | 970 Evo Plus – Intelligent TurboWrite 1.0 (Default / Intelligent / Total) | 980 – Intelligent TurboWrite 2.0 (Intelligent / Total)
250GB | 4GB / 9GB / 13GB | 45GB
500GB | 4GB / 18GB / 22GB | 122GB
1TB | 6GB / 36GB / 42GB | 160GB
Interfacing with the host over a PCIe 3.0 x4 link, the 980 NVMe SSD can deliver up to 3.5/3 GBps of sequential read/write throughput and even sustain up to 500,000/480,000 random read/write IOPS at its highest capacity. But, while peak figures are eye-catchers, the real key to application performance lies in the QD1 random performance ratings. Samsung rates the drive at up to 17,000/54,000 read/write IOPS at QD1, which promises responsive performance in everyday desktop PC workloads.
Like the 980 Pro and 970 Evo Plus, the new 980 comes factory over-provisioned by roughly 9% and is backed by a five-year warranty or the respective TBW rating, whichever comes first; the rating scales with capacity (150 TB per 250GB). The 980 also boasts the same AES 256-bit Full Disk Encryption feature set that’s compliant with the TCG/Opal v2.0 and Encrypted Drive (IEEE1667) standards.
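Those ratings translate into a couple of handy derived figures: drive writes per day (DWPD) over the five-year warranty, and the average latency implied by the QD1 random read rating. A small sketch using only the numbers from the table above:

```python
# Derived figures from the 980's endurance and QD1 ratings quoted above.

def dwpd(tbw_tb: float, capacity_gb: float, warranty_years: float = 5.0) -> float:
    """Drive writes per day sustainable over the warranty period."""
    return (tbw_tb * 1000) / (capacity_gb * warranty_years * 365)

def qd1_read_latency_us(iops: int) -> float:
    """Average latency implied by a queue-depth-1 IOPS rating."""
    return 1_000_000 / iops

print(f"250GB model: {dwpd(150, 250):.2f} DWPD")    # ~0.33 drive writes per day
print(f"1TB model:   {dwpd(600, 1000):.2f} DWPD")   # ~0.33 drive writes per day
print(f"QD1 random read: ~{qd1_read_latency_us(17_000):.0f} us average")  # ~59 us
```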
Software and Accessories
Samsung’s Magician application is one of the best pieces of storage management software available, and it’s getting better with its next iteration. Magician 6.3 comes with the same capabilities as prior versions but also debuts Full Power Mode support. Like the Game Mode in WD’s SSD Dashboard, this feature allows the 980 to operate at peak performance by disabling the lower power states, thus reducing the latency associated with transitioning between power states. Unfortunately, Magician 6.3 isn’t available for today’s review, but Samsung says it will be available within the next few weeks. The company also provides additional software to quickly clone your old data to your new Samsung SSD.
A Closer Look
Samsung’s 980 comes in an M.2 2280 single-sided form factor with a black PCB. To keep the SSD cool, Samsung incorporated a few thermal solutions into the design. The heat spreader label on the back of the drive improves thermal dissipation to help combat excess heat, and the Dynamic Thermal Guard technology invokes thermal throttling if needed. The 980 also supports Active State Power Management (ASPM), Autonomous Power State Transition (APST), and the L1.2 ultra-low power mode to regulate overall power consumption. The controller also comes with a nickel coating that the company claims reduces its operating temperatures by five degrees Celsius.
Speaking of which, Samsung’s Pablo, an NVMe 1.4 compliant SSD controller, powers the 980. The company wasn’t too forthcoming with deeper details on the controller. We believe it to be a multi-core Arm architecture that may be manufactured using its 14nm process, especially since the 980 Pro was so proudly touted for being manufactured on the company’s 8nm process node.
The controller features half the NAND channels of the controllers that power the company’s 970 Evo Plus and 980 Pro, which, along with the lack of a DRAM controller, helps save on cost due to less complex logic. Naturally, that comes at the expense of performance. However, to mitigate these performance bottlenecks, the DRAMless architecture uses host memory buffer (HMB) tech that leverages the host system’s DRAM instead of an onboard DRAM chip to host the FTL mapping table.
This technology allows the controller to leverage a small portion of the host system’s DRAM memory by using the Direct Memory Access functionality that’s baked into the PCI-Express interface. The company programmed the SSD to use 64MB of system memory for the 980’s needs, which is similar to other HMB drives on the market.
Each capacity of Samsung’s 980 NVMe SSD comes with a single NAND package containing up to sixteen 512Gb dies. Samsung’s V6 V-NAND TLC has a 2-plane architecture. While this is only half the number of planes compared to competing types of flash, Samsung says it has engineered the silicon to still provide speedy programming and read times. For further detailed reading, we covered this flash more extensively in our review of the Samsung 980 Pro.
RHA’s latest premium true wireless buds are well-built and comfortable but ultimately play it too safe sonically
For
Pleasantly full-bodied
Well-built, comfortable design
Decent noise-cancelling
Against
Lacks punch and rhythmic talent
Treble not refined
Charging case is fiddly
RHA is one of many headphone manufacturers offering a focused true wireless earbuds proposition that consists of one premium pair with active noise-cancelling and a more affordable pair without it.
The company’s naming choices leave visitors to its website in no doubt as to which is which in its arsenal. The RHA TrueControl ANC we have on test here sit above the RHA TrueConnect 2, justifying their flagship status with not only noise-cancellation but also Bluetooth 5.0 with aptX connectivity, dedicated app support and an IPX4-rated level of water and sweat resistance that means they should survive water splashes.
Build
The TrueControl ANC’s battery life of 20 hours – five hours from the buds, plus 15 hours from the charging case – isn’t superior to that of its sibling, though. That isn’t perhaps wholly surprising considering noise cancellation is rather battery-draining, but it is still somewhat disappointing in light of the competition. Noise-cancelling rivals, such as the Apple AirPods Pro, Sony WF-1000XM3 and Sennheiser Momentum True Wireless 2, all claim 24 hours or more.
We’re pleased to see fast charging support (14 minutes provides an hour of playback) as well as broad wireless charging compatibility on the menu. As is typical, a wireless charger isn’t provided, but out of the box the earbuds’ charging case can be replenished via the supplied USB-C cable.
The charging case reminds us of our time with the original RHA TrueConnect earbuds, which featured a similar case that we called “neat, but somewhat fiddly”. The aluminium case twists open to reveal the earbuds securely embedded into deep magnetic divots, but the slot only opens by a couple of finger-widths, and so it isn’t always easy to pluck them out.
That said, there is a swathe of rival designs on the market that vary vastly in quality, and this makes us appreciate the rare premium quality of the TrueControl ANC case’s solid build. It feels made to last and hardy enough to survive a tumble out of a hand, bag or pocket.
The earbuds have a matching air of quality about them too. They look and feel nicely finished, and we’ve no complaints with the responsive circular touchpad, which in the dedicated RHA Connect app can be set to skip tracks, adjust volume and cycle through noise-cancelling modes (on, off or ‘ambient’) with swipe forward/backward or tapping motions.
The app is also where you can adjust EQ, see battery levels and activate wear detection features – ‘auto-pause’ pauses music when an earbud is removed from your ear, while ‘auto-play’ resumes play when it is reinserted. Both work as promised during our test.
Comfort
The buds join the likes of the Sony WF-1000XM3 as some of the bulkiest earbud designs out there, but that isn’t a reason to avoid them – in fact, they’re one of the most comfortable and secure-fitting we’ve come across.
They’re easy to lock in place without much force or twisting, in part thanks to a notch that easily nestles into your ears. And multiple sizes of silicone and memory foam tips ensure there’s something for everyone.
When in place, the TrueControl ANC make your ears feel a little full – you won’t end up forgetting they are in – but despite the size of the earbud housings they feel relatively lightweight. Not even a mild attempt at headbanging during Judas Priest’s Hell Patrol manages to dislodge them.
Sound
The RHA’s sonic character plays into the hands of such a track: it’s big and full, warm and smooth, with an abundant low-end and rich mids that are able to get stuck into the meaty electric riffs and double-kick drumming. There’s a fair amount of detail in the mix, too.
Switch the noise-cancelling on and it doesn’t affect the sonics as much as we’ve heard with some other earbuds – all in all, it’s pretty satisfying. Their best efforts to reduce background TV noise and everyday road traffic are laudable, although as is to be expected from this kind of design they won’t cloak you in isolation to the extent that heavy traffic or engine noise is completely muted. You’re still likely to be disturbed when playing at low volumes or listening to mellow instrumental tracks, too.
The TrueControl ANC’s ‘ambient’ mode works adequately, amplifying your surroundings so you can conveniently hear conversations or announcements without having to remove the buds from your ears.
Our main issue with these RHAs is their inability to deliver the more mature aspects of sound as well as the best-in-class competition can. Compared with the slightly more affordable Sony WF-1000XM3, the TrueControl ANC lack the dynamic punch and rhythmic prowess to truly engage you in anything particularly musical. Dynamically, they’re fairly restrained, and the fact their rich balance doesn’t hugely favour treble doesn’t help them sound any less subdued either.
What treble there is lacks refinement, too, and this is highlighted when we play Soul Push’s Good Man. Whereas the grooves underpinning the track sound upbeat, crisp and open through the Sonys, the RHA’s rendition isn’t as spirited and musically cohesive and is less interesting to listen to.
Verdict
The RHA TrueControl ANC offer a comfortable listen – one that can be easily endured for hours without it grating. However, it’s not all that compelling, especially at lower volumes where they all too easily settle for offering background listening.
Despite their neat, comfortable earbud design and decent noise-cancelling, they need to offer more in the sound department at this premium price to merit a place on people’s shortlists.
SCORES
Sound 3
Comfort 5
Build 4
MORE:
Read our guide to the best true wireless earbuds and best AirPods alternatives
Samsung has announced its newest SSD, a follow-up to the 970 Evo called the 980. It’s an NVMe M.2 PCIe 3.0 drive, and an affordable one, too: it costs up to $129.99 for the 1TB version and as little as $49.99 for the 250GB model.
There’s a reason for the low price — it’s Samsung’s first-ever DRAM-less NVMe SSD, a cost-cutting measure that many other storage manufacturers have already dabbled with to varying degrees of success. The 980 lacks fast dynamic random access memory typically used for mapping the contents of an SSD, which would help it quickly and efficiently serve up your data.
Yet despite removing the feature, Samsung is touting some impressive performance compared to other DRAM-less options because this drive takes advantage of the Host Memory Buffer feature in the NVMe specification. In Samsung’s case, it’s tapping up to 64MB of your system’s DRAM via PCIe to pick up the slack on behalf of the SSD. The result isn’t as fast as an SSD that has its own DRAM, but the Host Memory Buffer feature helps it perform much better than a model that lacks it entirely — while you reap some cost savings. Samsung says that this SSD can achieve speeds up to six times that of a SATA-based SSD.
Also helping deliver those fast speeds is Samsung’s Intelligent TurboWrite 2.0 feature, which multiplies the maximum allocated buffer region within the 980 to as much as 160GB, up from just 42GB in the 970 Evo. This feature simulates fast single-layer cell (SLC) performance in the 980, despite the fact it uses 3-bit multilayer cell (MLC) memory, and it’s aimed at delivering sustained performance while transferring large files.
Samsung claims the 1TB version of the 980 can provide up to 3,500MB/s sequential read and 3,000MB/s write speeds, which is roughly on par with its fast (and more expensive) 970 Evo Plus SSD, besting the 970 Evo’s top sequential write speed. It’s a far cry from Samsung’s 980 Pro, though, which boasts sequential read and write speeds of up to 7,000MB/s and 5,000MB/s, respectively, when connected to a PCIe 4.0-ready motherboard.
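To put those figures in perspective, here’s some rough back-of-the-envelope math using only the numbers quoted above. The SATA ceiling of around 550MB/s is my own assumption for a typical SATA 6Gb/s SSD, and the real sustained write speed once the TurboWrite buffer fills isn’t stated here, so treat this as a sketch rather than a benchmark.

```python
# Rough arithmetic based on the figures quoted above: how long the 1TB 980
# could hold its peak sequential write before the TurboWrite buffer (up to
# 160GB) is exhausted, and how its peak read compares with a SATA SSD.
TURBOWRITE_BUFFER_GB = 160     # maximum dynamic buffer, per Samsung
SEQ_WRITE_MBPS = 3000          # 1TB model, peak sequential write
SEQ_READ_MBPS = 3500           # 1TB model, peak sequential read
SATA_PRACTICAL_MBPS = 550      # typical SATA 6Gb/s ceiling (assumption)

seconds_at_full_speed = TURBOWRITE_BUFFER_GB * 1000 / SEQ_WRITE_MBPS
print(f"Peak writes last roughly {seconds_at_full_speed:.0f}s before the buffer fills")
print(f"Peak reads are about {SEQ_READ_MBPS / SATA_PRACTICAL_MBPS:.1f}x a SATA SSD")
```

That works out to just under a minute of flat-out writing before the cache runs dry, and a read advantage of roughly 6.4x over SATA, which lines up with Samsung’s “up to six times” claim.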
As usual, there’s a steep fall-off in performance for lesser capacities: the low-end 250GB model claims up to 2,900MB/s sequential read and 1,300MB/s sequential write speeds, for instance. One of the other big highlights here across the lineup is that, even without DRAM, Samsung claims the random read and write input and output speeds during intensive tasks are similar to the 970 Evo and not far off from the 970 Evo Plus.
So, even while omitting a component that helps an SSD go quickly, Samsung’s 980 still seems very fast. In case you’re curious, the test systems that produced these benchmarks ran either an Intel Core i7-6700K or an AMD Ryzen 7 3700X, paired with 8GB of 2,133MHz DDR4 RAM.
At 26.5 grams and the size of my thumb, Insta360’s latest action camera, the Go 2, looks like an oversized Tic Tac with an eyeball. It’s the second generation in the Go lineup, which is Insta360’s only non-360-degree camera line. Where the first-generation Go left a lot to be desired, particularly in the image quality department, the $299 Go 2 comes with a new charging case, larger sensor, and improved image quality, making a strong case for a mobile-first action camera.
The most noticeable changes to this tiny camera come in the hardware department. The Go 2’s camera component has a new removable lens cover and less slippery matte plastic housing. The case plays a more active role in the use of the camera, becoming a tripod, remote, external battery, and charger all in one. It is slightly larger than the AirPods Pro case and has a 1/4-inch thread for support mounting and a USB-C port for charging. The standalone camera can run for 30 minutes on a single charge or 150 minutes while in the case.
While the case is not waterproof, the Go 2’s camera is IPX8 water resistant for use up to 13 feet underwater. In the box, Insta360 also includes three camera mounts: a pivot stand, a hat brim clip, and a pendant for wearing around your neck. All of these mounts utilize a magnet to keep the Go 2 attached to them.
The use of the case as more than just a place to store the camera is one of my favorite innovations in the Go 2. All of the mounts, remotes, and other accessories you end up having to buy for an action camera really add up, so it’s great to see essential features, such as a tripod, built into the camera’s own hardware.
More important than hardware, though, is image quality. With many smartphone cameras producing sharp, stabilized 4K 60fps video and punchy, crisp photos, it’s absolutely necessary for dedicated cameras outside of our phones to up the game. The POV ultrawide look of the Go 2’s video and the unique mounting abilities allow me to create video different enough that I could see myself carrying the Go 2 around in addition to my phone. I simply cannot produce a point-of-view angle, like that of the Go 2, with my smartphone’s camera.
On a phone, the camera’s 9-megapixel photos are crisp, full of contrast, and highly saturated without looking unrealistic. Put that image on a desktop, and it begins to look a bit grainy, where the sensor’s lower megapixel count begins to show, but the image is certainly usable.
Although the Go 2’s video resolution maxes out at 1440p and 50fps, the 120-degree field of view and saturated color science create an image far more distinctive than what you get from a phone’s camera. When viewed on a small screen, the video is sharp and smooth with bright colors and lots of contrast. I was impressed with just how true to life the footage looked in perfect lighting conditions, but when I brought it over to the large screen on my laptop, the footage did look a bit noisier. I also wish video taken at night had less grain and noise. The amount of smoothing applied to low-light images doesn’t help either. Insta360 is not alone here: this is a problem even in more expensive, more robust action cameras such as the GoPro Hero 9. It is absolutely a problem I would like to see these companies spend more time fixing. (I’ve been using a pre-production unit in my testing, but Insta360 did not indicate to me that it was going to change anything when it comes to features or performance on the final version.)
There are four preset field-of-view options in the Go 2’s Pro Video mode that range from narrow to ultrawide. Despite the options, I typically just used the ultrawide view and reframed in the Go 2’s mobile app. The camera also utilizes a built-in 6-axis gyroscope along with Insta360’s FlowState stabilization algorithm for horizontal leveling, no matter the camera’s orientation, which produces extremely stable video without a crop to your image. I continue to be impressed with the stability GoPro, DJI, and Insta360 have been able to achieve in their action cameras, and the Go 2 is no exception.
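FlowState itself is proprietary, but the general idea of gyro-driven horizon leveling is simple enough to sketch: read the roll angle the gyroscope reports for each frame and apply the opposite rotation before output. The snippet below is a generic illustration of that principle using OpenCV, not Insta360’s actual pipeline; the function name and the frame/roll inputs are made up for the example, and a real implementation would capture wider than it outputs so the rotated edges stay hidden.

```python
# Generic illustration of gyro-based horizon leveling (not Insta360's
# FlowState pipeline): rotate each frame by the opposite of the roll
# angle the gyroscope reports so the horizon ends up level.
import cv2
import numpy as np

def level_frame(frame: np.ndarray, roll_degrees: float) -> np.ndarray:
    h, w = frame.shape[:2]
    center = (w / 2, h / 2)
    # Sign convention depends on how the gyro reports roll; here we
    # simply apply the negated angle to undo the camera's tilt.
    matrix = cv2.getRotationMatrix2D(center, -roll_degrees, 1.0)
    return cv2.warpAffine(frame, matrix, (w, h))

# Hypothetical usage: 'frames' and 'roll_angles' would come from the
# video file and the camera's gyro metadata, which this sketch assumes
# you have already extracted.
# leveled = [level_frame(f, r) for f, r in zip(frames, roll_angles)]
```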
Video Samples from the Insta360 Go 2.
Operating the Go 2 is a unique experience that takes a bit of getting used to. Like the first generation, there are no visible buttons on the camera component. To operate the camera, outside of its case, you push down on the Insta360 logo located under the lens, which then creates a vibration to signal it has been pressed. From powered off, a single press will start recording basic video, a double press takes a photo, and a one-second press will power the camera on to a ready state. If the camera is already on, a single press will start and stop FlowState stabilized video, a double press will begin a Hyperlapse timelapse video, and a two-second press will put the camera to sleep.
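Restated as data, the control scheme is a small lookup from power state and gesture to action. The table below is purely a way of summarizing the mapping just described; it isn’t from Insta360’s firmware or SDK, and the state and gesture names are my own labels.

```python
# The Go 2's button gestures, restated as a lookup table
# (illustrative only; not from Insta360's firmware or SDK).
CONTROLS = {
    ("off", "single_press"): "start basic video recording",
    ("off", "double_press"): "take a photo",
    ("off", "hold_1s"): "power on to ready state",
    ("on", "single_press"): "start/stop FlowState-stabilized video",
    ("on", "double_press"): "start a Hyperlapse timelapse",
    ("on", "hold_2s"): "go to sleep",
}

def action(state: str, gesture: str) -> str:
    return CONTROLS.get((state, gesture), "no action")

print(action("on", "double_press"))  # -> start a Hyperlapse timelapse
```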
Like using touch controls on wireless earbuds, or any tech without a screen, there is a learning curve. It took me about three sessions to know what the LED light indications and different vibrations meant. I felt a lot more comfortable using the Go 2 in its case where its small black-and-white screen displayed which mode the camera was in, what resolution it was filming at, and how much storage was left on the 32GB of internal memory.
Insta360’s mobile app can also be used to control the camera via Wi-Fi and display a live view from the camera. The app also has capable editing software that allows for reframing, trimming, and exporting of clips. A Flashcut feature uses AI editing tools to trim and stitch clips from the Go 2 into flashy edits with punchy music and over-the-top transitions, such as screen wipes and quick zooms. It’s very fun to play with but a bit loud for my taste. For someone not familiar with video editing, this could be very useful though.
The Go 2 is available starting today for $299. For a company deeply focused on mobile-first editing for posting to social accounts, the Insta360 Go 2 makes perfect sense: a small, portable camera whose footage will likely never make it into desktop editing software or onto a screen larger than a phone’s. To my knowledge, there isn’t a smaller camera on the market that can shoot a 120-degree field of view with this level of stabilization or image quality, and it looks great on the device you’re most likely to view it on: your phone.
And for the mobile-first vlogger or avid social media user, that image quality is more than enough, the camera is small enough to mount anywhere, and its lack of confusing controls is perfect for the person who wouldn’t exactly know what to do with lots of options anyway. But for me, I’m most excited to see the bump in image quality. The better image processing and a larger sensor have allowed this camera to take a huge leap forward, even if, on paper, the difference is only going from 1080p to 1440p. This is starting to feel like a camera I might feel comfortable trusting with my more daring moments in a size that won’t feel too big to carry around.
SK Hynix has started volume production of the industry’s highest-capacity LPDDR5 memory package. The new device features an 18GB capacity and is designed primarily for high-end smartphones, as well as laptops powered by system-on-chips that support LPDDR5.
SK Hynix’s 18GB LPDDR5 package integrates multiple memory devices and supports a data transfer rate of 6,400 Mbps, the highest speed bin in the LPDDR5 specification. The new package offers a 2GB (12.5%) capacity increase over the previous 16GB modules.
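For context, 6,400 Mbps is a per-pin figure; total bandwidth depends on the interface width, which SK Hynix doesn’t specify here. Assuming a common 64-bit package interface (my assumption for illustration, not a stated spec), the arithmetic works out as follows:

```python
# Back-of-the-envelope numbers for the 18GB LPDDR5-6400 package.
# The 64-bit interface width is an assumption for illustration;
# the data rate and capacities come from the announcement.
DATA_RATE_MBPS_PER_PIN = 6400
ASSUMED_BUS_WIDTH_BITS = 64

peak_bandwidth_gbps = DATA_RATE_MBPS_PER_PIN * ASSUMED_BUS_WIDTH_BITS / 8 / 1000
capacity_increase = (18 - 16) / 16 * 100

print(f"Peak bandwidth: ~{peak_bandwidth_gbps:.1f} GB/s (with a 64-bit interface)")
print(f"Capacity increase over 16GB: {capacity_increase:.1f}%")
```

That puts the package at roughly 51.2 GB/s of peak theoretical bandwidth under the assumed interface width, alongside the 12.5% capacity bump noted above.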
SK Hynix does not disclose which fabrication process is used for the memory devices inside the 18GB LPDDR5 package, but it is highly likely to be one of the company’s latest nodes.
One of the first devices to use SK Hynix’s 18GB LPDDR5-6400 package is the Asus ROG Phone 5 gaming smartphone. Eventually, the same package could be integrated into laptops or tablets with 18GB or even 36GB of memory.