Riot is bringing its tactical shooter Valorant to mobile devices. There aren’t a lot of details yet — such as when it will launch, on what hardware, or how it will differ from the main game — but Riot says the new version will simply be called Valorant Mobile.
The developer claims that the PC version of the game, which launched last year, currently averages 14 million monthly players. Valorant is also coming off of its biggest competitive tournament to date, with an event in Iceland, and Riot says that more than 1 million concurrent viewers tuned in to the finals on May 31st.
The news shouldn’t be too surprising. Earlier this year, Riot made a similar move with League of Legends, launching a mobile-focused spinoff called Wild Rift. Meanwhile, some of the most popular shooters in the world have moved to smartphones as well; PUBG Mobile and Call of Duty Mobile are both huge hits, and a smartphone iteration of Apex Legends is also on the way.
It also sounds like Riot is looking to build out Valorant in other ways. Without getting into details, aside from the mobile version, the developer says that it is “preparing to expand the franchise in order to bring Valorant to more players around the world.” Again, this would be following the League of Legends playbook, which has already expanded into everything from comic books to digital card games to an upcoming animated series on Netflix.
The Spectrix D50 Xtreme DDR4-5000 is one of those luxury memory kits that you don’t necessarily need inside your system. However, you’d purchase it in a heartbeat if you had the funds.
For
+ Good performance
+ Gorgeous aesthetics
Against
– Costs an arm and a leg
– XMP requires 1.6V
When a product has the word “Xtreme” in its name, you can tell that it’s not tailored towards the average consumer. Adata’s XPG Spectrix D50 Xtreme memory is that kind of product. A simple glance at the memory’s specifications is more than enough to tell you that Adata isn’t marketing the Spectrix D50 Xtreme towards average joes. Unlike the vanilla Spectrix D50, the Xtreme version only comes in DDR4-4800 and DDR4-5000 flavors with a limited 16GB (2x8GB) capacity. The memory will likely not be on many radars unless you’re a very hardcore enthusiast.
Adata borrowed the design from Spectrix D50 and took it to another level for the Spectrix D50 Xtreme. The heat spreader retains the elegant look with geometric lines. The difference is that the Xtreme variant features a polished, mirror-like heat spreader. The reflective finish looks stunning, but it’s also a fingerprint and dust magnet, which is why Adata includes a microfiber cloth to tidy up.
The memory module measures 43.9mm (1.73 inches) tall, so compatibility with big CPU air coolers is good. The Spectrix D50 Xtreme still has an RGB diffuser on the top of the memory module. Adata provides its own XPG RGB Sync application to control the lighting, or, if you prefer, you can use your motherboard’s software. The Spectrix D50 Xtreme’s RGB illumination is compatible with the ecosystems from Asus, Gigabyte, MSI and ASRock.
Each Spectrix D50 Xtreme memory module has an 8GB capacity and sticks to a conventional single-rank design. It features a black, eight-layer PCB and Hynix H5AN8G8NDJR-VKC (D-die) integrated circuits (ICs).
The default data rate and timings for the Spectrix D50 Xtreme are DDR4-2666 and 19-19-19-43, respectively. Adata equipped the memory with two XMP profiles with identical 19-28-28-46 timings. The primary profile corresponds to DDR4-5000, while the secondary profile sets the memory to DDR4-4800. Both data rates require a 1.6V DRAM voltage to function properly. For more on timings and frequency considerations, see our PC Memory 101 feature, as well as our How to Shop for RAM story.
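As a quick refresher on how to weigh those loose timings against the high data rate (this is just standard DDR arithmetic, not anything from Adata's spec sheet), you can convert CAS latency and data rate into first-word latency in nanoseconds:

```python
# First-word latency in ns: CAS cycles divided by the real clock, which is
# half the DDR data rate. Equivalent formula: ns = 2000 * CL / data_rate.
def first_word_latency_ns(cl: int, data_rate_mts: int) -> float:
    return 2000 * cl / data_rate_mts

kits = [
    ("Spectrix D50 Xtreme XMP (DDR4-5000 CL19)", 19, 5000),
    ("Spectrix D50 Xtreme default (DDR4-2666 CL19)", 19, 2666),
    ("T-Force Xtreem ARGB (DDR4-3600 CL14)", 14, 3600),
]

for name, cl, rate in kits:
    print(f"{name}: {first_word_latency_ns(cl, rate):.2f} ns")

# DDR4-5000 CL19 works out to ~7.6 ns, roughly on par with a tight
# DDR4-3600 CL14 kit (~7.8 ns) despite the much looser timings.
```

In other words, the loose CL19 timing is largely offset by the sheer clock speed, at least in terms of first-word latency.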
Comparison Hardware
| Memory Kit | Part Number | Capacity | Data Rate | Primary Timings | Voltage (V) | Warranty |
|---|---|---|---|---|---|---|
| Crucial Ballistix Max | BLM2K8G51C19U4B | 2 x 8GB | DDR4-5100 (XMP) | 19-26-26-48 (2T) | 1.50 | Lifetime |
| Adata XPG Spectrix D50 Xtreme | AX4U500038G19M-DGM50X | 2 x 8GB | DDR4-5000 (XMP) | 19-28-28-46 (2T) | 1.60 | Lifetime |
| Thermaltake ToughRAM RGB | R009D408GX2-4600C19A | 2 x 8GB | DDR4-4600 (XMP) | 19-26-26-45 (2T) | 1.50 | Lifetime |
| Predator Apollo RGB | BL.9BWWR.255 | 2 x 8GB | DDR4-4500 (XMP) | 19-19-19-39 (2T) | 1.45 | Lifetime |
| Patriot Viper 4 Blackout | PVB416G440C8K | 2 x 8GB | DDR4-4400 (XMP) | 18-26-26-46 (2T) | 1.45 | Lifetime |
| TeamGroup T-Force Dark Z FPS | TDZFD416G4000HC16CDC01 | 2 x 8GB | DDR4-4000 (XMP) | 16-18-18-38 (2T) | 1.45 | Lifetime |
| TeamGroup T-Force Xtreem ARGB | TF10D416G3600HC14CDC01 | 2 x 8GB | DDR4-3600 (XMP) | 14-15-15-35 (2T) | 1.45 | Lifetime |
Our Intel platform simply can’t handle the Spectrix D50 Xtreme DDR4-5000 memory kit. Neither our Core i7-10700K nor our Core i9-10900K sample has an IMC (integrated memory controller) strong enough for a memory kit of this speed.
The Ryzen 9 5900X, on the other hand, had no problems with the memory. The AMD test system leverages a Gigabyte B550 Aorus Master with the F13j firmware and an MSI GeForce RTX 2080 Ti Gaming Trio to run our RAM benchmarks.
Unfortunately, we ran into a small problem that prevented us from testing the Spectrix D50 Xtreme at its advertised frequency. One limitation of B550 motherboards is the inability to set memory timings above 27, while the Spectrix D50 Xtreme requires 19-28-28-46 to run at DDR4-5000 properly. Despite brute-forcing the DRAM voltage, we simply couldn’t get the kit stable at DDR4-5000 with 19-27-27-46 timings. The highest stable data rate with those timings was DDR4-4866, which is what we used for testing.
AMD Performance
There’s always a performance penalty when you break the 1:1 ratio between the Infinity Fabric clock (FCLK) and the memory clock on Ryzen processors. Even so, the Spectrix D50 Xtreme came within a hair of surpassing the Xtreem ARGB kit, which runs at DDR4-3600, basically the sweet spot for Ryzen.
It’s important to bear in mind that the Spectrix D50 Xtreme was running at DDR4-4866 rather than its rated DDR4-5000. As small as that 134 MT/s deficit may seem, closing it should put Adata’s offering really close to Crucial’s Ballistix Max DDR4-5100, which is the highest-specced memory kit that has passed through our labs so far.
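For readers unfamiliar with the 1:1 ratio penalty mentioned above, here's the back-of-the-envelope math. The 1,900 MHz FCLK ceiling used below is an assumption based on typical Ryzen 5000 samples, not a measured figure for our chip:

```python
# Ryzen performs best when the Infinity Fabric clock (FCLK) matches the
# memory clock (half the DDR data rate). Past the chip's FCLK ceiling,
# the memory controller drops to a 2:1 divider and latency goes up.
FCLK_CEILING_MHZ = 1900  # assumed typical ceiling; varies from sample to sample

for data_rate in (3600, 4866, 5000):
    mem_clk = data_rate / 2
    mode = "1:1 (coupled)" if mem_clk <= FCLK_CEILING_MHZ else "2:1 (decoupled)"
    print(f"DDR4-{data_rate}: memory clock {mem_clk:.0f} MHz -> {mode}")

# DDR4-3600 keeps the fabric coupled at 1800 MHz, while DDR4-4866/5000 push
# the memory clock far beyond the ceiling, forcing the decoupled 2:1 mode.
```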
Overclocking and Latency Tuning
Due to the motherboard limitation, we couldn’t pursue overclocking on the Spectrix D50 Xtreme. However, in our experience, high-speed memory kits typically don’t have much gas left in the tank. Furthermore, the Spectrix D50 Xtreme already requires 1.6V to hit DDR4-5000, so it’s unlikely that we would have gotten anywhere without pushing insane amounts of voltage into the memory.
Lowest Stable Timings
| Memory Kit | DDR4-4400 (1.45V) | DDR4-4500 (1.50V) | DDR4-4600 (1.55V) | DDR4-4666 (1.56V) | DDR4-4866 (1.60V) | DDR4-5100 (1.60V) |
|---|---|---|---|---|---|---|
| Crucial Ballistix Max DDR4-5100 C19 | N/A | N/A | N/A | N/A | N/A | 17-25-25-48 (2T) |
| Adata XPG Spectrix D50 Xtreme DDR4-5000 C19 | N/A | N/A | N/A | N/A | 19-27-27-46 (2T) | N/A |
| Thermaltake ToughRAM RGB DDR4-4600 C19 | N/A | N/A | 18-24-24-44 (2T) | 20-26-26-45 (2T) | N/A | N/A |
| Patriot Viper 4 Blackout DDR4-4400 C18 | 17-25-25-45 (2T) | 21-26-26-46 (2T) | N/A | N/A | N/A | N/A |
At DDR4-4866, the Spectrix D50 Xtreme operated comfortably with 19-27-27-46 timings. However, it wouldn’t go lower regardless of the voltage we cranked into it. We’ll revisit the overclocking portion of the review once we source a more capable processor and motherboard for the job.
Bottom Line
The Spectrix D50 Xtreme DDR4-5000 C19 won’t offer you the best bang for your buck by any means. However, the memory will make your system look good and give you some bragging rights along the way. Just make sure you have a processor and motherboard that can tame the memory before pulling the trigger on a memory kit of this caliber.
With that said, the Spectrix D50 Xtreme DDR4-5000 C19 doesn’t come cheap. The memory retails for $849.99 on Amazon. It’s not as if there are tons of DDR4-5000 memory kits out there, but the Spectrix D50 Xtreme is actually the cheapest of the lot. More budget-conscious consumers, however, should probably stick to a DDR4-3600 or even DDR4-3800 memory kit with the lowest timings possible. The Spectrix D50 Xtreme is more luxury than necessity.
The EVGA GeForce RTX 3080 Ti FTW3 Ultra is the company’s premium offering based on NVIDIA’s swanky new RTX 3080 Ti graphics card, which NVIDIA hopes will restore its leadership in the high-end gaming graphics segment now being contested by the Radeon RX 6900 XT. Along with its sibling, the RTX 3070 Ti, the new graphics cards are a response to AMD’s return to competitiveness in the high-end graphics segment. The RTX 3080 Ti has the same mission as the RTX 3080—to offer maxed-out gaming at 4K Ultra HD resolution with raytracing, making it NVIDIA’s new flagship gaming product. The RTX 3090 is still positioned higher, but with its 24 GB of memory, it is branded as a TITAN-like halo product, capable of certain professional-visualization applications when paired with NVIDIA’s Studio drivers.
The GeForce RTX 3080 Ti features a lot more CUDA cores than the RTX 3080—10,240 vs. 8,704, and maxes out the 384-bit memory interface of the GA102 silicon, much like the RTX 3090. The memory amount, however, is 12 GB, running at a 19 Gbps data rate. The RTX 3080 Ti is based on the Ampere graphics architecture, which debuts the 2nd generation of NVIDIA’s path-breaking RTX real-time raytracing technology. It combines new 3rd-generation Tensor cores that leverage sparsity to accelerate AI inference performance by an order of magnitude over the previous generation; new 2nd-generation RT cores that support even more hardware-accelerated raytracing effects; and the new, faster Ampere CUDA cores.
The EVGA RTX 3080 Ti FTW3 Ultra features the same top-tier iCX3 cooling solution as the top RTX 3090 FTW3, with smart cooling that relies on several onboard thermal sensors beyond what the GPU and memory provide, a meaty heatsink ventilated by a trio of fans, and plenty of RGB LED lighting to add life to your high-end gaming PC build. The PCB has several air guides that let airflow from the fans pass through, improving ventilation. EVGA is pricing the RTX 3080 Ti FTW3 Ultra at $1,340, a $140 premium over the $1,200 baseline price of the RTX 3080 Ti.
We recently noticed that Alienware’s just-announced X15 and X17 thin and vaguely light gaming laptops are conspicuously missing a port — and it’s not because they’re thin-and-light, it turns out. Alienware has just confirmed to The Verge that it has discontinued the Alienware Graphics Amplifier external GPU, and so these laptops won’t need that proprietary port anymore. The company isn’t saying whether it’ll offer a future eGPU, but pointed us to off-the-shelf Thunderbolt ones instead.
The Alienware Graphics Amp was first introduced in 2014 for $299 and designed to be a companion to the company’s midrange Alienware 13, giving it the vast majority of the power of a desktop graphics card plus four extra full-size USB ports when docked. I liked the combo well enough. But over the years, Alienware added the port to practically every laptop (and some of its more compact desktops, like the Alienware X51 mini-tower and Alienware Alpha R2 console-sized PC) it released, including the company’s flagship Area-51m, which was designed to have built-in upgrades of its own.
With an included 460W power supply devoted entirely to the GPU, and a price that dipped to $199 and occasionally $150, the Amp managed to stay competitive for quite a while in the fairly niche market of eGPUs, which generally use manufacturer-agnostic Thunderbolt 3 ports instead of proprietary cables (and can often charge your laptop as well).
It’s not clear when Alienware discontinued the Amp. The Wayback Machine shows it was still live as of November 2020, and Dell last updated its support page in April 2021 — without adding compatibility for the latest wave of Nvidia and AMD graphics cards.
The new Alienware M15 R5 and M15 R6 also omit the Graphics Amplifier port. It’ll be interesting to see if this is the end for Alienware’s dreams of upgradable laptops; certainly the Amp lasted a lot longer than the idea of offering new chips for the giant Area-51m laptop.
ASRock announced a new motherboard lineup called the Riptide series, and with it come two brand-new boards for AMD Ryzen CPUs: the X570S PG Riptide and the B550 PG Riptide. These motherboards are an offshoot of the Phantom Gaming series, designed for gamers who also use their systems for everyday tasks.
The most striking feature of both Riptide motherboards is the addition of a GPU anti-sag bracket built right into the motherboard itself. The bracket is installed right next to the chipset and SATA ports, and will prevent your graphics card from sagging in the front, where there’s the least amount of support.
The bracket is a nice feature to have given how large graphics cards are getting these days. Many of the latest triple-slot graphics cards weigh around 1.5kg or more, including some of ASRock’s own models. And because the bracket sits behind the graphics card, it stays tucked out of sight, giving PC builders a very clean look.
Speaking of aesthetics, both boards are very stealthy with a silver and black appearance. The only hint of color is the bright orange and purple ASRock logo on the chipset, which can easily be hidden by a large enough graphics card.
Other features include a 10-phase power delivery system, DDR4 memory support at speeds up to 4933 MHz, and a special feature ASRock is calling 'Lightning Gaming Ports.' These ports are designed to give gamers the lowest latency possible for their keyboard and mouse.
We don’t know exactly what kind of magic ASRock is doing to improve latency on these specific USB ports, but we believe they are wired directly to the CPU rather than routed through the chipset, as most USB ports are. Skipping the chipset hop allows for the lowest latency possible.
Another interesting note: this is ASRock’s first-ever X570S motherboard, and it’s coming to the Riptide series, though we expect ASRock’s other lineups to get the X570S treatment soon. The biggest feature coming with X570S is the ability to run the chipset without a fan, which is great for reliability and acoustics and something we’re excited to see return on AMD’s flagship chipset.
With Nvidia announcing the all-new RTX 3080 Ti and RTX 3070 Ti at Computex this year, AIB partners have wasted no time in announcing custom variants of the two GPUs. There are seven AIB partners so far that have listed custom variants of the RTX 3080 Ti and RTX 3070 Ti, with more to come.
The RTX 3080 Ti is Nvidia’s new gaming flagship for the Ampere generation, featuring 10,240 CUDA cores, 12GB of GDDR6X, a 1,365 MHz base clock, and a 1,665 MHz boost clock. It’s just a hair slower than the RTX 3090, with the biggest tradeoff between the two SKUs being the VRAM capacity, which is shaved down from 24GB to 12GB.
The RTX 3070 Ti is Nvidia’s new mid-range SKU that will slot in between the RTX 3070 and RTX 3080. The 3070 Ti features 6,144 CUDA cores, 8GB of GDDR6X at 19 Gbps, a base clock of 1,575 MHz, and a boost frequency of 1,770 MHz. Expect performance to lean more towards the RTX 3070 than the more powerful 3080, as the 3070 Ti uses the GA104 core, though the roughly 35% boost in memory bandwidth should help.
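To show where that roughly 35% figure comes from, here's the standard peak-bandwidth math. The RTX 3070's 14 Gbps GDDR6 and the shared 256-bit bus are the commonly cited specs for these cards, not numbers restated in this announcement:

```python
# Peak memory bandwidth (GB/s) = per-pin data rate (Gbps) * bus width (bits) / 8
def bandwidth_gb_s(data_rate_gbps: float, bus_width_bits: int) -> float:
    return data_rate_gbps * bus_width_bits / 8

rtx_3070 = bandwidth_gb_s(14, 256)     # GDDR6
rtx_3070_ti = bandwidth_gb_s(19, 256)  # GDDR6X

print(f"RTX 3070:    {rtx_3070:.0f} GB/s")     # 448 GB/s
print(f"RTX 3070 Ti: {rtx_3070_ti:.0f} GB/s")  # 608 GB/s
print(f"Uplift: {(rtx_3070_ti / rtx_3070 - 1) * 100:.0f}%")  # ~36%
```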
Asus
Asus is bringing out three custom models for the RTX 3080 Ti as well as two custom SKUs for the lower-end RTX 3070 Ti. At the top end will be the ROG Strix LC RTX 3080 Ti, featuring a 240mm AIO cooler to keep temperatures as low as possible. The card is also decked out in a brushed-metal finish with the Strix design language, as well as a fully lit RGB shroud and fans.
For air cooling, Asus is dishing out the ROG Strix treatment to the RTX 3080 Ti and RTX 3070 Ti. For the RTX 3080 Ti ROG Strix, the cooler looks identical to the RTX 3090 variant, with a large triple-slot design, and triple 8-pin power connectors. Styling hasn’t changed either, with a fully lit RGB light bar on the side, and brushed aluminum finish all around the card.
Asus’ lowest-end offering, for now, will be the TUF series, which you will see on both the RTX 3080 Ti and RTX 3070 Ti. Similar to the ROG models, the RTX 3080 Ti TUF is identical in looks to the RTX 3090 TUF. So we wouldn’t be surprised if Asus simply installed the RTX 3090 cooler onto the RTX 3080 Ti cards since both the 3090 and 3080 Ti share the exact same GPU core.
Unfortunately, we don’t have pictures of the custom Asus RTX 3070 Ti SKUs at this moment. However, we expect the cards will use a beefed-up cooler from the RTX 3070 class of cards, given the RTX 3070 Ti uses the GA104 core instead of GA102. We also don’t know what frequencies these cards will run at, but rest assured these custom RTX 3080 Tis and RTX 3070 Tis will clock higher than the reference specification.
Gigabyte
Gigabyte’s offerings are very minimal for now, with the company currently offering the RTX 3080 Ti and RTX 3070 Ti in the Gaming OC SKU. The Gaming series in Gigabyte’s lineup represents the more budget-friendly level of SKUs rather than its top-end Aorus branded cards.
The RTX 3080 Ti Gaming OC design is identical to that of the RTX 3090 Gaming OC, with no changes to the shroud or cooler (what we can see of the cooler) at all. The card features a matte black finish with silver accents to add some extra styling to the shroud. The 3080 Ti Gaming OC features a factory overclock of 1710MHz.
Surprisingly, the RTX 3070 Ti Gaming OC appears to have either a brand-new cooler or an altered variant of the RTX 3070 Gaming OC’s. The heatsink has a different design, with two heatsinks joined together by copper heat pipes, rather than the three separate heatsinks, connected by two sets of copper heat pipes, found on the vanilla RTX 3070 variant.
The RTX 3070 Ti Gaming OC also features a large copper base plate that covers the GPU and all the GDDR6X memory modules. This is a big upgrade compared to the RTX 3070 Gaming OC, which only has four copper heat pipes making direct contact with the GPU, paired with a metal base plate covering the memory modules.
Aesthetically, the card has also been noticeably altered. The Gigabyte logo that sat at the rear of all Gaming OC cards is now near the front, and the “GEFORCE RTX” logo gets its own silver badge on the top of the card. The silver accents on the shroud have also moved to the top front and bottom rear of the card; on other Gaming OC cards, this arrangement was reversed. The RTX 3070 Ti also features a factory overclock of 1830MHz.
EVGA
So far, EVGA has the most custom SKUs announced for the RTX 3080 Ti and RTX 3070 Ti, with 8 custom models confirmed.
The RTX 3080 Ti alone will come in six flavors, starting with the FTW3, FTW3 Hybrid and FTW3 Hydro Copper. The FTW3 models represent EVGA’s flagships, so expect robust power delivery and excellent performance from these models.
The remaining three consist of the XC3, XC3 Hybrid and XC3 Hydro Copper. These are EVGA’s budget and mid-range offerings, which should offer the best overall price to performance.
The RTX 3070 Ti will only come in two flavors for now, the FTW3 and XC3. Unfortunately, we don’t have specs or detailed pictures of any of EVGA’s SKUs at this time.
MSI
Similar to EVGA, MSI is announcing a ton of SKUs for both RTX 3080 Ti and RTX 3070 Ti. The models will consist of the Suprim, Gaming Trio, and Ventus variants. Each variant also gets a vanilla and factory overclocked model.
Overall, the RTX 3080 Ti Suprim, Gaming Trio, and Ventus are identical to the RTX 3090 models, with only very minor changes to the aesthetics of the cards. The Suprim will be the top-tier model, the Gaming Trio represents the mid-tier, and the Ventus is your ‘budget-friendly’ RTX 3080 Ti.
The RTX 3070 Ti will also receive Suprim, Gaming Trio, and Ventus variants, but unfortunately, product pages for those cards are not available at this time.
The same goes for clock speed specifications on all of MSI’s RTX 3080 Ti and RTX 3070 Ti SKUs, so we’ll have to wait until those become available.
Zotac
Zotac will feature five different SKUs for the RTX 3080 Ti and RTX 3070 Ti combined, consisting of the Trinity and Holo series. The RTX 3080 Tis are mostly identical to the RTX 3090s, especially when it comes to the Trinity, where Zotac appears to have put the RTX 3090 cooler directly onto the RTX 3080 Ti.
For the RTX 3080 Ti Holo, there are a few things to note. The RTX 3080 Ti only has a single Holo SKU, while the RTX 3090 had two, the Core Holo and Extreme Holo. The RTX 3080 Ti Holo seems to be its own SKU, with a slightly different aesthetic than any of the RTX 3090 Holos. It features an elegant-looking RGB light bar on the card’s side that runs from the top to almost the bottom of the card, with a grey color theme for the whole shroud.
The RTX 3080 Ti Trinity will receive a 1665MHz Boost clock (reference spec), the Trinity OC variant features a 1695MHz boost clock, and the Holo features the highest clock at 1710MHz.
The RTX 3070 Ti will also come in Trinity and Holo flavors, but with the same triple-fan cooling configuration as the RTX 3080 Zotac Trinity and Holo. This is very different from the vanilla RTX 3070, which maxes out at a twin-fan design.
We are not sure if the RTX 3070 Ti uses the RTX 3080 coolers from the Trinity and Holo series, but aesthetically they look nearly identical, making us believe this is probably true.
The RTX 3070 Ti Holo will come with a 1830MHz boost clock, and the Trinity will have an 1870MHz boost.
Colorful
Colorful has the fewest cards out of all the AIB partners so far, with only three SKUs announced, and only one of those being for the RTX 3080 Ti.
The only RTX 3080 Ti SKU Colorful has announced is the Vulkan OC-V, featuring a triple-fan heatsink and a black and metal finish, giving the card a very stealthy, Batman-like appearance. The card will feature a base clock of 1365MHz along with a 1710MHz boost clock.
The first RTX 3070 Ti SKU announced is the 3070 Ti Advanced OC-V, a big, chunky card measuring beyond two slots in thickness and sporting a rather unique color design consisting of a silver shroud accented by purple and black, along with a red ring-lit RGB fan in the middle. The card will come with a 1575MHz base clock and a 1830MHz boost clock.
Finally, the last SKU announced is the RTX 3070 Ti NB 8G-V, which appears to be the company’s budget-friendly 3070 Ti. The card features a dual-slot cooler with a very boxy appearance. The shroud is covered in a matte black finish, accented by both glossy black and matte red finishes. The card will come with a 1575MHz base clock and a 1770MHz boost clock.
PNY
Last but not least is PNY with four new SKUs planned for the RTX 3080 Ti and RTX 3070 Ti for now. The RTX 3070 Ti and RTX 3080 Ti will both come in Revel and Uprising editions. What we have pictured are the RTX 3080 Ti Revel Epic X, 3080 Ti Uprising Epic X and the RTX 3070 Ti Revel Epic X.
The RTX 3080 Ti Revel Epic X carries a two-tone shroud design, with a matte black finish on the shroud itself as well as a uniquely designed metal fan protector with a silver finish. Between the fans lie rings of RGB lighting. The same apparently goes for the RTX 3070 Ti as well, but the 3070 Ti is slightly smaller.
The RTX 3080 Ti Uprising Epic X features a grey finish with RGB accents near the middle of the fans. From what we can tell from the pictures, the card is absolutely gigantic, with a very wide heatsink and considerable length: the heatsink stretches a good 4 inches beyond the main PCB, and the PCB isn’t compact to begin with. This card is going to be a challenge to fit in some PC cases.
Remember when Elon Musk claimed you’d be able to play The Witcher 3 and Cyberpunk 2077 on a 10 teraflop gaming rig he’s stuffing into the new Tesla Model S and X? AMD is officially providing the guts — during its Computex 2021 keynote, the chipmaker just revealed that the new Tesla infotainment system consists of an AMD Ryzen processor paired with an AMD RDNA 2 GPU.
“So we actually have an AMD Ryzen APU powering the infotainment system in both cars as well as a discrete RDNA2-based GPU that kicks in when running AAA games, providing up to 10 teraflops of compute power…. we look forward to giving gamers a great platform for AAA gaming,” says AMD CEO Lisa Su.
And if you combine that information with another piece of news AMD revealed today, plus an earlier leak in January, we may now have a passing idea of how powerful that “10 teraflop” infotainment system could theoretically be: likely a little less than Sony’s PS5.
You see, leaker Patrick Schur dug up a Tesla block diagram in January that singled out an AMD Navi 23 GPU specifically for Tesla’s new vehicles, and today AMD announced the new Radeon 6800M, 6700M and 6600M laptop graphics chips — the weakest of which just so happens to use Navi 23, AnandTech reports.
As we learned today, that Radeon 6600M chip comes with 28 CUs and 1,792 shader units — compared to the 36 CUs and an estimated 2,304 shader units worth of RDNA 2 GPU in Sony’s PlayStation 5, which also claims to be a 10-teraflop gaming rig. While it’s not quite apples-to-apples, it’s largely the same technology beneath, and a smaller number of cores on the same GPU architecture suggests we should expect slightly less performance from a Tesla compared to Sony’s console. (The higher-end Radeon 6700M / Navi 22 has the same number of CUs as the PS5, for what it’s worth.)
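For the curious, the teraflop figures being thrown around here come from simple shader math: FP32 TFLOPS = shader units × 2 operations per clock × clock speed. The PS5 clock below is Sony's published maximum; the Navi 23 clock is a hypothetical value chosen only to show what it would take to reach 10 teraflops, since Tesla and AMD haven't stated one:

```python
# FP32 throughput in TFLOPS: shaders * 2 FLOPs per clock (FMA) * clock (GHz) / 1000
def tflops(shaders: int, clock_ghz: float) -> float:
    return shaders * 2 * clock_ghz / 1000

ps5 = tflops(2304, 2.23)           # Sony's published max GPU clock
navi23_guess = tflops(1792, 2.8)   # hypothetical clock, for illustration only

print(f"PS5 (36 CUs, 2304 shaders): ~{ps5:.1f} TFLOPS")                         # ~10.3
print(f"Navi 23 (28 CUs, 1792 shaders) at 2.8 GHz: ~{navi23_guess:.1f} TFLOPS")  # ~10.0
```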
Performance depends on the software platform, though, as we’ve seen with the 10-teraflop PS5 and the 12-teraflop Xbox Series X — and a recent job posting by Tesla suggests game developers may actually be building for Linux if they want to target the new Tesla in-car gaming rigs.
Linux isn’t necessarily a benefit when it comes to gaming performance, though. Google’s Stadia cloud gaming also boasted 10 teraflops of performance from its AMD GPUs, but ports of games from Bungie and Square Enix didn’t look nearly as good as they did on weaker Xbox and PC hardware at the service’s launch.
The most important question is probably still the one I asked back in January, though: Who is going to sit in their $80,000 sports car and play a triple-A video game?
Tesla CEO Elon Musk tweeted on Saturday that the Model S Plaid, which includes the new AMD system, will start deliveries on June 10th.
AMD CEO Lisa Su revealed two key new processors during the company’s Computex 2021 keynote. The $359 Ryzen 7 5700G and $259 Ryzen 5 5600G APU, both of which come to market August 5, 2021, will plug two glaring gaps in the company’s Ryzen 5000 product stack that currently leads our list of Best CPUs.
The new Cezanne chips mark the first new APUs for desktop PCs that you’ll be able to buy at retail since AMD launched the Zen+ “Picasso” models back in 2019. AMD did bring a refresh of those chips to market as the oft-maligned Ryzen Pro “Renoir” series, but in a disappointment to enthusiasts, those chips were destined for professional users and thus not available at retail.
In fact, AMD actually brought the very chips it’s announcing today to OEM systems a few months ago, meaning we already know most of the details about the silicon. The Cezanne APUs, which come with Zen 3 execution cores paired with the Radeon Vega graphics engine, feel like they’re a bit late to retail. The company’s first salvo of Ryzen 5000 processors delivered a stunning blow to Intel as it took the unequivocal lead in desktop PCs, but AMD’s pivot to premium pricing left it exposed with two massive gaps in its product stack. Unfortunately for AMD, Intel’s Rocket Lake blasted in a few months ago and plugged those gaps.
Now AMD’s retort comes in the form of retail availability for a few of the Cezanne chips, though it’s noteworthy that the company is still holding back several of its lower-end models from the retail market. Given the ongoing graphics card shortages, these newly revamped APUs are a welcome sight for the gaming market and serve as AMD’s “non-X” chips that traditionally offer more attractive price points at a given core count. That is, if AMD can keep them in stock, of course. Let’s take a closer look.
AMD Ryzen 5000 ‘Cezanne’ G-Series Specifications
The Ryzen 5000G lineup spans from four to eight cores, but AMD is only bringing the eight-core 16-thread Ryzen 7 5700G and six-core 12-thread Ryzen 5 5600G to retail, while the Ryzen 3 5300G remains relegated to the OEM-only market (at least for now). AMD also isn’t bringing the 35W GE-Series models to retail, either, as it continues to focus on premium chips during the ongoing global semiconductor shortage.
AMD Ryzen 5000 G-Series 65W Cezanne APUs
| CPU | Price | Cores/Threads | Base / Boost Freq. | Graphics Cores | Graphics Frequency | TDP | Cache |
|---|---|---|---|---|---|---|---|
| Ryzen 7 5800X | $449 | 8 / 16 | 3.8 / 4.7 GHz | N/a | N/a | 105W | 32MB (1×32) |
| Core i7-11700K (KF) | $374 – $349 | 8 / 16 | 3.6 / 5.0 GHz | UHD Graphics 750 Xe 32EU | - | 125W | 16MB |
| Ryzen 7 5700G | $359 | 8 / 16 | 3.8 / 4.6 GHz | RX Vega 8 | 2100 MHz | 65W | 20MB |
| Ryzen 5 5600X | $299 | 6 / 12 | 3.7 / 4.6 GHz | N/a | N/a | 65W | 32MB (1×32) |
| Core i5-11600K (KF) | $262 (K) – $237 (KF) | 6 / 12 | 3.9 / 4.9 GHz | UHD Graphics 750 Xe 32EU | - | 125W | 12MB |
| Ryzen 5 5600G | $259 | 6 / 12 | 3.9 / 4.4 GHz | RX Vega 7 | 1900 MHz | 65W | 19MB |
| Ryzen 5 3600 | $200 | 6 / 12 | - | - | - | - | - |
| Core i5-11400 (F) | $182 – $157 | 6 / 12 | 2.6 / 4.2 GHz | UHD Graphics 750 Xe 24EU | - | 65W | 12MB |
| Ryzen 3 5300G | N/a | 4 / 8 | 4.0 / 4.2 GHz | RX Vega 6 | 1700 MHz | 65W | 10MB |
The 65W eight-core 16-thread Ryzen 7 5700G comes with a 3.8 GHz base, 4.6 GHz boost, and eight Radeon Vega CUs that operate at 2.0 GHz.
The Ryzen 7 5700G addresses the ~$350 price point to plug the sizeable gap between the $449 Ryzen 7 5800X and the $299 Ryzen 5 5600X. That big gap left Intel’s Core i7-11700K with plenty of room to operate, but AMD says the new 5700G will plug it with CPU performance that slots in perfectly between the other Ryzen 5000 parts, not to mention the strengths borne of the integrated Vega graphics engine.
The 65W six-core 12-thread Ryzen 5 5600G comes with a 3.9 GHz base, 4.4 GHz boost, and seven Radeon Vega CUs that operate at 1.9 GHz.
The 5600G slots in at $259 to plug the gap between the $299 Ryzen 5 5600X and, well, the remainder of AMD’s sub-$299 product stack. AMD’s Ryzen 5 3600 is the only real relevant contender in this price range, and it launched two years ago with the Zen 2 architecture. The 3600 isn’t competitive with Intel’s Rocket Lake Core i5-11600K or -11400, leaving Intel plenty of room to roam uncontested in the budget market (as you can see in our Core i5-11400 review).
Based on suggested pricing, the 5600G contends with the Core i5-11600K and doesn’t do much to address the current value budget champ, the Intel Core i5-11400. That’s largely because AMD has decided not to include the 65W Ryzen 3 5300G, which it ships into the OEM market, in this round of chip releases. It also has yet to release the GE-series chips listed in the table below. AMD hasn’t indicated when the Ryzen 3 or GE-Series Cezanne chips will come to market.
AMD Ryzen 5000 GE-Series 35W Cezanne APUs
| CPU | Cores/Threads | Base / Boost Freq. (Up to) | Graphics Cores | Graphics Frequency | TDP | Cache |
|---|---|---|---|---|---|---|
| Ryzen 7 5700GE | 8 / 16 | 3.2 / 4.6 GHz | RX Vega 8 | 2000 MHz | 35W | 20MB |
| Ryzen 5 5600GE | 6 / 12 | 3.4 / 4.4 GHz | RX Vega 7 | 1900 MHz | 35W | 19MB |
| Ryzen 3 5300GE | 4 / 8 | 3.6 / 4.2 GHz | RX Vega 6 | 1700 MHz | 35W | 10MB |
Of course, integrated graphics are the big attraction for APUs. AMD continues to pair its APUs with the Vega graphics architecture, just as it did with the 4000-series APUs. AMD reworked the architecture for its last go-round — the revamped RX Vega graphics delivered up to ~60% more performance per compute unit (CU) than its predecessors, which equated to more graphics performance from fewer CUs. We aren’t sure if AMD has made a similar adjustment this time around, but we’re sure to learn more as we get closer to launch.
As with all Ryzen 5000 processors, Cezanne fully supports overclocking, including memory, graphics and CPU cores. AMD says the auto-overclocking Precision Boost Overdrive (PBO) and adaptive offset features are supported as well. The Cezanne chips drop into the same motherboards as the current-gen Ryzen 5000 processors, so X570, B550, X470 and B450 are all supported. As with the other Ryzen models, memory support weighs in at DDR4-3200, though that does vary by DIMM population rules.
The new APUs hail from the Ryzen 5000 Mobile family (deep dive here), so they have physically identical silicon that has been transitioned from the FP6 BGA-mounted arrangement found in laptops to the AM4 socket on desktop PC motherboards. AMD then simply tunes the silicon for the more forgiving power limits and thermal conditions of the desktop, meaning that it can uncork the power settings and be more aggressive with boosting activity while being less aggressive with power-sharing/shifting between the CPU and GPU units.
The Zen 3 architecture grants higher L3 cache capacities than we’ve seen with AMD’s past APUs. For instance, the eight-core 16-thread Ryzen 7 5700G now has 20MB of combined L2 and L3 cache (16MB of it L3), compared to its eight-core predecessor’s 12MB. These are natural byproducts of the Zen 3 architecture and should benefit general iGPU performance, too.
However, in contrast to the existing Ryzen 5000 chips for the PC, the APUs come as a single monolithic die. That results in less cache than we see with the chips without integrated graphics, like the Ryzen 5 5600X. The six-core 5600X comes with 32MB of L3 cache, which is significantly more than the 16MB of L3 cache found on the eight-core Ryzen 7 5700G. We’ll be sure to poke and prod at the cache when the silicon lands in our labs.
Additionally, the 5000G chips have the same I/O controller on the SoC as the mobile parts, so the chips are limited to 24 lanes of PCIe 3.0, as opposed to the 24 lanes of PCIe 4.0 found on the other Ryzen 5000 parts. This is the tradeoff of bringing the mobile architecture to the desktop PC, with AMD’s initial decision to stick with PCIe 3.0 for its mobile parts largely being driven by battery life concerns.
AMD Ryzen 7 5700G Gaming and Productivity Benchmarks
AMD shared a surprisingly slim selection of its own benchmarks comparing the Ryzen 7 5700G with Intel’s Core i7-11700. AMD’s test notes are also lacking. As with all vendor-provided benchmarks, you should view these with the requisite amount of skepticism.
As expected, AMD’s benchmarks show notable performance advantages across the board, especially when gaming on the 5700G’s Radeon Vega 8 graphics compared to the -11700’s UHD Graphics 750 with the Xe architecture. AMD’s last batch of 5000G comparative benchmarks, which pitted Cezanne against Comet Lake chips, was much more expansive; the Rocket Lake comparisons are far more limited. We’ll suss all that out in the review.
Ryzen 5000G Pro Series Desktop Processors
AMD also released its Ryzen 5000G Pro series today. As you can see in the slides above, aside from a few extra professional features, they’re identical to the client chips.
Thoughts
Overall the Cezanne desktop APUs look promising, and AMD’s pricing goes a long way to addressing the notable price gaps that come from its lack of value “non-X” chips with the Ryzen 5000 generation, an exclusion that has received plenty of criticism from the enthusiast community.
AMD’s timing for desktop APUs could be a bit better — Intel’s value Rocket Lake chips have been on the market for several months already. On the other hand, the continuing chip shortage coupled with cryptomining has destroyed any chance of scoring a reasonably priced GPU, at least for now, which means a chip with competitive 1080p gaming performance will be a hit with enthusiasts looking to wait out the current GPU crisis.
That said, we’re still seeing a complete lack of AMD’s cheap chips on the market, so the company’s decision to keep the Ryzen 3 and 35W GE-Series models off the retail market is disappointing. It makes good business sense given the state of the market (AMD sells every single high-end chip it punches out), but we’d like to see some improvement on the lower end of the market.
The Ryzen 5000G chips come to market on August 5, 2021. As you can imagine, we’ll have the full story when reviews arrive near that same time.
AMD CEO Dr. Lisa Su is set to deliver the Computex 2021 “AMD Accelerating – The High-Performance Computing Ecosystem” keynote tonight, May 31, 2021.
Due to the global pandemic, Computex is an all-virtual event, but the activities, such as the AMD keynote, are scheduled to occur on Taipei time. As such, you can watch AMD’s keynote live here at 7pm PT, 10pm ET in the video embedded below. We’ll also have all of our normal coverage after the event.
AMD hasn’t given us any solid clues about what we can expect to see tonight. However, we do know that recent rumors have suggested a new lineup of Radeon RX 6000 mobile GPUs, and perhaps we could hear about the company’s obviously pending Threadripper lineup. We could also see new CPU and GPU roadmaps, so anything is possible.
Here’s the press release announcing the keynote:
TAIPEI, Taiwan–(BUSINESS WIRE)–TAITRA (Taiwan External Trade and Development Council) announced today that Dr. Lisa Su, President and CEO of AMD, is invited back to deliver a keynote address at COMPUTEX 2021. This digital keynote will be on Tuesday, June 1, at 10:00 AM Taipei time, with the keynote theme “AMD Accelerating – The High-Performance Computing Ecosystem.”
COMPUTEX displays will be digital this year, with keynotes and forums running on hybrid. “It has been a year unlike others. Technology has gotten us through some of the most challenging times,” said James Huang, TAITRA Chairman. “We will continue to transform our exhibition models and practices to meet the evolving needs of our exhibitors, visitors, and media, without losing the most essential element of a trade show – connection.”
Dr. Lisa Su is proud to join COMPUTEX once again in 2021. “The past year has shown us the important role high-performance computing plays in our daily lives – from the way we work to the way we learn and play. At this year’s COMPUTEX, AMD will share how we accelerate innovation with our ecosystem partners to deliver a leadership product portfolio,” said Dr. Lisa Su.
At the COMPUTEX | AMD CEO Keynote, Dr. Lisa Su will share the AMD vision for the future of computing, including details of the growing adoption of the AMD high-performance computing and graphics solutions, built for PC enthusiasts and gamers.
AMD is a leading player in creating world-class high-performance computing solutions, under the leadership of Dr. Lisa Su. Their technology sparks and creates ideas that transform our lives. “AMD is a star that continues to accelerate in the tech industry, and we are very excited that Dr. Lisa Su is joining COMPUTEX 2021. We can expect and look forward to exciting news that Dr. Su is bringing to COMPUTEX,” said James Huang.
And with that, pull up a seat for the show, and stay tuned for our coverage afterward.
EA just released its chaotic and fun new multiplayer dodgeball game, Knockout City, on May 21st, and for the launch, it put together a special promotion: the full game would be free to play until May 30th, after which you could pay $19.99 on the platform of your choice if you wanted to keep playing. (It’s also included with an EA Play or Xbox Game Pass Ultimate subscription.)
But on Sunday, EA and developer Velan Studios announced that even if you missed that window to try Knockout City without paying for it, you can still check it out without dropping a dime: the game is now free to try up until you level up your “Street Rank” (Knockout City’s take on a Fortnite-like battle pass) to 25.
Block Party is over, but new players to Knockout City can still start brawlin’ for free! If your friends are just joining us, they’ll be able to play for free and level up to Street Rank 25 before purchasing the game. That’s also enough game time to teach them to pass the ball. pic.twitter.com/aWjPjKS0ES
— Knockout City (@knockoutcity) May 30, 2021
That level cap should give you a few hours of playtime to try Knockout City for yourself and decide if you want to pay full price for it. And I recommend you give the game a whirl — I was impressed with just how well it captures the feeling of actually playing dodgeball.
Knockout City is available on PC, PS4, PS5, Xbox One, Xbox Series X / S, and Nintendo Switch, and it offers crossplay multiplayer, meaning you can play with your friends no matter what platform they’re on.
In a world where the vast majority of all-in-one and small form-factor PCs rely on proprietary motherboards, the Thin Mini-ITX form factor is not particularly widespread, which makes it difficult for PC shops and DIY enthusiasts to build AIO and SFF computers. However, Thin Mini-ITX motherboards are not going the way of the dodo, and ASRock’s recently announced AM4 X300TM-ITX is a good example of continued interest in the platform.
The ASRock X300TM-ITX platform combines compatibility with AMD’s Ryzen APUs (up to Zen 2-based Ryzen 4000-series) with an expansive feature set, including a USB 3.1 Gen 1 Type-C connector, a COM port, and an LVDS header, all of which are rather exotic for what are typically inexpensive Thin Mini-ITX motherboards.
Furthermore, the COM port and LVDS header make this platform useful for commercial systems that actually need these types of connectivity. ASRock doesn’t officially position the motherboard for business or commercial PCs, but it does support AMD Ryzen Pro APUs, so you can certainly use it to build a PC with Pro-class features.
As the name suggests, ASRock’s X300TM-ITX motherboard is based on the rather dated AMD X300 chipset that was originally designed for entry-level systems aimed at overclockers, but it still supports the vast majority of AMD’s APUs with an (up to) 65W TDP (except the upcoming Ryzen 5000-series processors). The board also supports up to 64GB of DDR4-3200 memory across two SO-DIMM modules, an M.2-2280 slot for SSDs with a PCIe 3.0 or SATA interface, and one SATA connector.
ASRock aims the X300TM-ITX motherboard at thin entry-level systems that don’t typically use discrete graphics cards, so it doesn’t have a PCIe x16 slot for an add-in card. Instead, the platform uses AMD’s integrated Radeon Vega GPUs. Meanwhile, the LVDS header supports resolutions of up to 1920×1080 at 60Hz, whereas the HDMI 2.1 connector supports HDCP 2.3. There is no word about DisplayPort support over the USB Type-C connector, and you should be aware that HDMI-to-DisplayPort adapters may not work with all displays.
ASRock’s X300TM-ITX has an M.2-2230 slot for a Wi-Fi card along with a GbE port. It also has USB Type-A connectors as well as a 3.5-mm audio input and output.
The platform is already mentioned on the manufacturer’s website, so it should be available for purchase soon. Unfortunately, ASRock didn’t touch on pricing in its press release.
The Razer Kraken V3 X will keep you satisfied with an excellent microphone and solid, rich audio reproduction.
For
+ Lightweight
+ Solid audio reproduction and thump
+ Great microphone
+ Succulently soft ear cups
Against
– All-plastic design
Designed to compete with the best gaming headsets without breaking the bank, Razer’s Kraken V3 X combines a comfortable ear cup design with strong audio output, an excellent microphone and software that greatly enhances the experience. This $69 set of USB cans is thumpy thanks to Razer’s patented TriForce 40mm drivers, while offering a dash of RGB style in the ear cups.
Razer Kraken V3 X Specs
Driver Type: 40mm neodymium magnet
Impedance: 32 Ohms
Frequency Response: 12 Hz – 28 kHz
Microphone Type: Cardioid HyperClear unidirectional
Connectivity: USB Type-A (PC)
Weight: 0.6 pounds (285g)
Cord Length: 6 feet (USB Type-A cable)
Lighting: RGB on ear cups
Software: Razer Synapse and 7.1 Surround Sound
Design and Comfort of Razer Kraken V3 X
Though it’s made from lightweight plastic, the Razer Kraken V3 X feels very sturdy. The unit’s Hybrid-Fabric memory foam ear cups are succulently soft and the headband is highly adjustable, fitting comfortably on my obnoxiously large head. When I plugged it in, the three-headed snake logo on each ear cup illuminated in RGB.
On the left earcup, you will find the flexible Razer HyperClear cardioid microphone, which is quite bendy, along with a volume knob and a mute button. The Razer Kraken V3 X is fine to wear for long periods of time, as it does not tend to get very hot or warm with long usage, unlike many other over-the-ear gaming headphones I have previously reviewed.
Audio Performance of Razer Kraken V3 X
The headset uses a pair of 40mm TriForce drivers designed by Razer, and they pump out thunderous, distortion-free bass and sweet sound throughout the audio spectrum. From sweet, warm, throaty lows to angelic highs, the rich sound of the Razer Kraken V3 X surprised me.
First, I went to YouTube to listen to Busta Rhymes’ “Put Your Hands Where My Eyes Could See,” because the thick, bold bassline would be an excellent test of the Kraken V3 X’s capabilities. The unit came through with flying colors as it pushed out clear, loud, thunderous bass that Thor Odinson would be proud of.
My favorite moment came while listening to Earth, Wind & Fire’s “September.” At the beginning of the song, the Razer Kraken V3 X reproduced the softer tones of the finger snaps and guitar melody sweetly. When the horn section takes over with its powerful rhythm, the Krakens proved they were audio titans.
The Razer Kraken V3 X also has plenty of gaming prowess. While playing Borderlands 2, the 7.1 spatial surround sound helped me hear some creeps off to my right, and I was able to turn around swiftly with my sniper rifle and blow a villain’s head off before he could roast me with a flamethrower. The sound of explosions was exquisite when I shot out a barrel filled with chemicals, taking out three enemies.
After I was done with Borderlands 2, I decided to knock some heads, so I launched Batman: Arkham Knight, and again the spatial sound software helped me as I heard footsteps to my left and bataranged a would-be attacker. I thoroughly enjoyed hearing the bone-crunching punches, and then my favorite sound, the thruster on the Batmobile firing, was bombastically reproduced as it launched me off a bridge and onto a rooftop.
To test the movie viewing experience, I watched Avengers Infinity War via Disney Plus. The audio captured the thunderous bass and every nuance so well that it sounded like it did when I watched this film in an IMAX theater.
During the scene where Starlord is feeling insecure about Thor’s presence and starts deepening his voice, I picked up the subtle difference in tone from the moment Chris Pratt starts his impression. Every fight scene and explosion sounded realistic. When Iron Man battles Thanos, roots his armor’s feet, and double-punches Thanos into the debris, I could literally hear individual rocks fling off and land elsewhere.
Microphone on Razer Kraken V3 X
The Razer Kraken V3 X comes with Razer’s HyperClear cardioid microphone, which has a rated frequency response of 100Hz-10kHz and a sensitivity of -42dB. It’s very flexible and bendy and really does a nice job when recording audio.
I took part in an afternoon Google Meet, and everyone said that my voice came through loud and clear, with my natural deep timbre nicely picked up by the microphone. And when I made an appearance on my friend’s baseball podcast, he commented that the mic had excellent pickup and recorded very nicely.
Features and Software of Razer Kraken V3 X
The Razer Kraken V3 X is a solid performer on its own, but I highly recommend you download Razer’s Synapse software, which will allow you to configure the RGB lighting effects, create lighting profiles, and adjust the volume.
The real winner here is Razer’s 7.1 Surround Sound download; it is the game changer that takes the sound quality up many notches. The stock audio performance, as previously mentioned, is solid. However, the truly thunderous, high-quality audio that makes these cans worth their weight comes when the unit is paired with the software. They go from sounding like $69 headphones to sounding like a pair of $200 headphones.
Bottom Line
For $69.99 you get an excellent-sounding pair of headphones, especially if you remember to download Razer’s 7.1 surround sound software. Yes, they’re plastic, but they’re very stylish, with the RGB lighting adding a little panache and flair. The Kraken V3 X is also super lightweight, and the hybrid fabric and memory foam ear cups will cradle your ears in soft comfort.
With the excellent microphone performance, you will be able to bark orders out to your friends during games or even host a podcast with crystal clear audio. If you don’t mind spending a bit more money and want a headset with a 3.5mm jack, you should consider the HyperX Cloud Alpha, but if you want a high-quality, affordable USB gaming headset, the Razer Kraken V3 X is a great choice.
THX’s debut product is nicely made and well-featured, but it lets itself down in the sound department
For
Neat, versatile design
MQA support
Adds power, clarity and cleanliness
Against
Sonically basic
Outclassed by cheaper rivals
Next time you’re in a cinema, take a moment to appreciate THX. After all, the US firm will be in some way to thank for the audio presentation you’re experiencing.
THX was born out of George Lucas’s disappointment at the quality of audio systems in theatres showing his Star Wars movies. Members of his Lucasfilm team, including sound engineer Tomlinson Holman, were tasked with developing a certification program for audio standards, and the first film to meet those specifications was the 1983 release of Star Wars Episode VI: Return Of The Jedi.
Almost 20 years after becoming a separate company, THX is celebrating another milestone, with its first crack at the consumer electronics market in the THX Onyx, a DAC/headphone amplifier. The company’s Achromatic Audio Amplifier (THX AAA) technology sits at the heart of the THX Onyx, a compact, portable device designed to enhance the sound between your source device and wired headphones.
Features
The THX Onyx is one of the most discreet portable DACs we’ve seen. It has a thin metal body, longer and narrower than the average USB stick, at the end of a short, thick USB-C cable.
THX Onyx tech specs
3.5mm output: Yes
USB-C output: Yes
USB-A adapter: Yes
With that connection, and the USB-A adapter supplied in the box, the Onyx works with any Windows 10 PC, Mac or Android device via either of those output sockets. iPhones and iPads require the slim Apple Lightning to USB Camera Adapter (not supplied), although it’s worth noting that, in this case, your headphones’ in-line remote functionality won’t work.
Neither method requires specific drivers or installation – simply stick it into your chosen device, select it as your device’s sound output (if necessary) and plug your wired headphones into the 3.5mm socket at the other end of the DAC.
THX says the Onyx produces a power output comparable to that of entry-level desktop headphone amps, or five times more than similar USB DACs. The claim is that its feed-back and feed-forward error-correction method keeps distortion and noise levels up to 40dB lower than conventional power amps.
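Decibel claims are easy to gloss over, so here's the plain conversion behind that figure. This is just standard dB math, not a measurement of the Onyx itself:

```python
# A difference in dB converts to an amplitude ratio of 10^(dB/20)
# and a power ratio of 10^(dB/10).
db = 40
amplitude_ratio = 10 ** (db / 20)
power_ratio = 10 ** (db / 10)

print(f"{db} dB lower: 1/{amplitude_ratio:.0f} of the amplitude")  # prints 1/100
print(f"{db} dB lower: 1/{power_ratio:.0f} of the power")          # prints 1/10000
```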
This amplification design works alongside an ESS ES9281PRO DAC chip, which can handle files up to 32-bit/384kHz PCM and DSD128, as well as a Master Quality Authenticated (MQA) renderer for fully decoding and playing back MQA files and (MQA-encoded) Tidal Masters in their native quality – handy for Tidal HiFi subscribers who can access hi-res tracks in the Tidal catalogue.
Build
The Onyx’s metal casework doesn’t just house the amplifier, DAC chip and MQA renderer: it also has LED lights that indicate the file type and size being played. Blue denotes 44.1kHz or 48kHz PCM files and yellow signifies sample rates above that, while red and pink shine respectively when DSD and MQA signals are played. It’s a neat function, providing reassurance for those with hi-res music collections, and adding some visual interest to the design.
Apart from the LED lights and THX logo, the Onyx is as smart and discreet as the category demands, with both the casing and rubber cable feeling sturdy. THX has magnetised part of the casing and cable so that they can clasp together. It can be a balancing act when connected to the bottom of a phone, but a helpful method of cable management on a laptop or computer.
Sound
Whatever way you arrange the Onyx, it delivers sound much more powerfully than your source device – it’s cleaner and clearer, too. We use a range of earbuds and over-ear headphones, from reference models to more price-appropriate pairs, and various source devices, including Android phones and Apple MacBooks. Compared with the sound coming straight from the devices’ outputs, the THX amplifies the music, making it much bigger, more direct and more involving to listen to. A noisy and compressed sound, this is not.
There’s a hefty dose of clarity and a degree of polish to the presentation that wasn’t there before, as the THX certainly improves on the typically paltry output of such portable or desktop devices. However, we have concerns about its inability to enhance the source’s sound in every aspect – and to do so as well as other similarly priced portable DACs can.
The five-star Audioquest DragonFly Red (£169, $200, AU$280) – the class-leading portable DAC at this price – provides a much wider window into a song, bringing musical details and instrumental textures to the surface that the THX overlooks.
The THX is second best when it comes to communicating the dynamics and timing, and therefore the rhythm and musicality of a track. Even the five-star Astell & Kern AK USB-C Dual DAC Cable and Audioquest DragonFly Black v1.5, both around half the Onyx’s price, fare better on these fronts.
We play Destroyer’s Savage Night At The Opera and, while the Onyx delivers Dan Bejar’s vocals with clarity and solidity, the DragonFly Red gets under his deadpan delivery more convincingly, while revealing more insight into, and tighter interplay between, the starry haze of instrumentation. It’s a more mature presentation that makes the Onyx sound rather crude. And it’s this lack of transparency that makes its laudable efforts to support hi-res formats and MQA seem pretty futile.
Verdict
The THX Onyx has a logical design to serve a logical purpose, but the sonic execution lets down what is an otherwise well-considered product. It clears the first hurdle in amplifying device sound and bringing more clarity and cleanliness, but fails the all-important second by not delivering the level of detail or rhythmic quality required at this price. Suffice to say, you can do better.
Intel kicked off Computex 2021 by adding two new flagship 11th-Gen Tiger Lake U-series chips to its stable, including a new Core i7 model that’s the first laptop chip for the thin-and-light segment that boasts a 5.0 GHz boost speed. As you would expect, Intel also provided plenty of benchmarks to show off its latest silicon.
Intel also teased its upcoming Beast Canyon NUCs that are the first to accept full-size graphics cards, making them more akin to a small form factor PC than a NUC. These new machines will come with Tiger Lake processors. Additionally, the company shared a few details around its 5G Solution 5000, its new 5G silicon for Always Connected PCs that it developed in partnership with MediaTek and Fibocom. Let’s jump right in.
Intel 11th-Gen Tiger Lake U-Series Core i7-1195G7 and i5-1155G7
Intel’s two new U-series Tiger Lake chips, the Core i7-1195G7 and Core i5-1155G7, slot in as the new flagships for the Core i7 and Core i5 families. These two processors are UP3 models, meaning they operate in the 12-28W TDP range. These two new chips come with all the standard features of the Tiger Lake family, like the 10nm SuperFin process, Willow Cove cores, the Iris Xe graphics engine, and support for LPDDR4x-4266, PCIe 4.0, Thunderbolt 4 and Wi-Fi 6/6E.
Intel expects the full breadth of its Tiger Lake portfolio to span 250 designs by the holidays from the usual suspects, like Lenovo, MSI, Acer and ASUS, with 60 of those designs using the new 1195G7 and 1155G7 chips.
Intel Tiger Lake UP3 Processors
Processor | Cores / Threads | Graphics (EUs) | Operating Range (W) | Base Clock (GHz) | Single-Core Turbo (GHz) | Max All-Core Turbo (GHz) | Cache (MB) | Graphics Max Freq (GHz) | Memory
Core i7-1195G7 | 4C / 8T | 96 | 12 - 28 | 2.9 | 5.0 | 4.6 | 12 | 1.40 | DDR4-3200, LPDDR4x-4266
Core i7-1185G7 | 4C / 8T | 96 | 12 - 28 | 3.0 | 4.8 | 4.3 | 12 | 1.35 | DDR4-3200, LPDDR4x-4266
Core i7-1165G7 | 4C / 8T | 96 | 12 - 28 | 2.8 | 4.7 | 4.1 | 12 | 1.30 | DDR4-3200, LPDDR4x-4266
Core i5-1155G7 | 4C / 8T | 80 | 12 - 28 | 2.5 | 4.5 | 4.3 | 8 | 1.35 | DDR4-3200, LPDDR4x-4266
Core i5-1145G7 | 4C / 8T | 80 | 12 - 28 | 2.6 | 4.4 | 4.0 | 8 | 1.30 | DDR4-3200, LPDDR4x-4266
Core i5-1135G7 | 4C / 8T | 80 | 12 - 28 | 2.4 | 4.2 | 3.8 | 8 | 1.30 | DDR4-3200, LPDDR4x-4266
Core i3-1125G4* | 4C / 8T | 48 | 12 - 28 | 2.0 | 3.7 | 3.3 | 8 | 1.25 | DDR4-3200, LPDDR4x-3733
The four-core eight-thread Core i7-1195G7 brings the Tiger Lake UP3 chips up to a 5.0 GHz single-core boost, which Intel says is a first for the thin-and-light segment. Intel has also increased the maximum all-core boost rate up to 4.6 GHz, a 300 MHz improvement.
Intel points to additional tuning for the 10nm SuperFin process and tweaked platform design as driving the higher boost clock rates. Notably, the 1195G7’s base frequency declines by 100 MHz to 2.9 GHz, likely to keep the chip within the 12 to 28W threshold. As with the other G7 models, the chip comes with the Iris Xe graphics engine with 96 EUs, but those units operate at 1.4 GHz, a slight boost over the 1165G7’s 1.35 GHz.
The 1195G7’s 5.0 GHz boost clock rate also comes courtesy of Intel’s Turbo Boost Max Technology 3.0. This boosting tech works in tandem with the operating system scheduler to target the fastest core on the chip (‘favored core’) with single-threaded workloads, thus allowing most single-threaded work to operate 200 MHz faster than we see with the 1185G7. Notably, the new 1195G7 is the only Tiger Lake UP3 model to support this technology.
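That 200 MHz delta lines up with the table above: the 1185G7 tops out at 4.8 GHz on a single core, while the favored core on the 1195G7 reaches 5.0 GHz.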
Surprisingly, Intel says the 1195G7 will ship in higher volumes than the lower-spec’d Core i7-1185G7. That runs counter to our normal expectations that faster processors fall higher on the binning distribution curve — faster chips are typically harder to produce and thus ship in lower volumes. The 1195G7’s obviously more forgiving binning could be the result of a combination of the lower base frequency, which loosens binning requirements, and the addition of Turbo Boost Max 3.0, which only requires a single physical core to hit the rated boost speed. Typically all cores are required to hit the boost clock speed, which makes binning more challenging.
The four-core eight-thread Core i5-1155G7 sees more modest improvements over its predecessor, with boost clocks jumping an additional 100 MHz to 4.5 GHz, and all-core clock rates improving by 300 MHz to 4.3 GHz. We also see the same 100 MHz decline in base clocks that we see with the 1195G7. This chip comes with the Iris Xe graphics engine with 80 EUs that operate at 1.35 GHz.
Intel’s Tiger Lake Core i7-1195G7 Gaming Benchmarks
Intel shared its own gaming benchmarks for the Core i7-1195G7, but as with all vendor-provided benchmarks, you should view them with skepticism. Intel didn’t share benchmarks for the new Core i5 model.
Intel put its Core i7-1195G7 up against the AMD Ryzen 7 5800U, but the chart lists an important caveat here — Intel’s system operates between 28 and 35W during these benchmarks, while AMD’s system runs at 15 to 25W. Intel conducted these tests on the integrated graphics for both chips, so we’re looking at Iris Xe with 96 EUs versus AMD’s Vega architecture with eight CUs.
Naturally, Intel's higher power consumption leads to higher performance, giving the company the lead across a broad selection of triple-A 1080p games. However, that extra performance comes at the cost of higher power draw and thus more heat. Intel also tested its chip on its Reference Validation Platform with unknown cooling capabilities (we assume they are virtually unlimited), while the Ryzen 7 5800U was tested in an HP ProBook 455.
Intel also provided benchmarks with DirectX 12 Ultimate’s new Sampler Feedback feature. This new DX12 feature reduces memory usage while boosting performance, but it requires GPU hardware-based support in tandem with specific game engine optimizations. That means this new feature will not be widely available in leading triple-A titles for quite some time.
Intel was keen to point out that its Xe graphics architecture supports the feature, whereas AMD's Vega graphics engine does not. UL has a new 3DMark Sampler Feedback benchmark under development, and Intel used the test release candidate to show that Iris Xe graphics offers up to 2.34X the performance of AMD's Vega graphics with the feature enabled.
Intel’s Tiger Lake Core i7-1195G7 Application Benchmarks
Here we can see Intel's application benchmarks, but the same rules apply — we'll need to run these workloads in our own test suite before we're ready to declare any victors. Again, you'll notice that Intel's system operates at a much higher 28 to 35W power range on a validation platform, while AMD's system sips 15 to 25W in the HP ProBook 455 G8.
As we’ve noticed lately, Intel now restricts its application benchmarks to features that it alone supports at the hardware level. That includes AVX-512 based benchmarks that leverage the company’s DL Boost suite that has extremely limited software support.
Intel's benchmarks paint convincing wins across the board. However, be aware that the AI-accelerated workloads on the right side of the chart aren't indicative of what you'll see with the majority of productivity software, at least not yet. Unless you use these specific applications, for these specific tasks, very frequently, these benchmarks aren't representative of the overall performance deltas you can expect in most software.
In contrast, the Intel QSV benchmarks do have some value. Intel's Quick Sync Video is broadly supported, and the Iris Xe graphics engine supports hardware-accelerated 10-bit video encoding, a feature that, as Intel rightly points out, isn't available on Nvidia's MX-series GPUs either.
Intel's support for hardware-accelerated 10-bit encoding does yield impressive results, at least in its own benchmarks, showing a drastic ~8X reduction in transcode time for a Handbrake 4K 10-bit HEVC to 1080p HEVC conversion. Again, bear in mind that this is with the Intel chip running at a much higher power level. Intel also shared a chart highlighting its broad support for various encoding/decoding options that AMD doesn't support.
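To put that claimed ratio into concrete, purely illustrative terms: an 8X speedup would turn a transcode that takes, say, 40 minutes without hardware acceleration into roughly a five-minute job (40 ÷ 8 = 5). Those are our example numbers rather than Intel's, and they only hold at the power levels and settings Intel tested.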
Intel Beast Canyon NUC
Intel briefly showed off its upcoming Beast Canyon NUC that will sport 65W H-Series Tiger Lake processors and be the first NUC to support full-length graphics cards (up to 12 inches long).
The eight-liter Beast Canyon certainly looks more like a small form factor system than what we'd expect from a traditional NUC and, as you would expect, it comes bearing the Intel skull logo. Intel's Chief Performance Strategist Ryan Shrout divulged that the system will come with an internal power supply. Given the size of the unit, that likely means power restrictions for the GPU. We also know the system uses standard air cooling.
Intel is certainly finding plenty of new uses for its Tiger Lake silicon. The company recently listed new 10nm Tiger Lake chips for desktop PCs, including a 65W Core i9-11900KB and Core i7-11700KB, and told us that these chips would debut in small form factor enthusiast systems. Given that Intel specifically lists the H-series processors for Beast Canyon, it doesn’t appear these chips will come in the latest NUC. We’ll learn more about Beast Canyon as it works its way to release later this year.
Intel sold its modem business to Apple back in 2019, leaving a gap in its Always Connected PC (ACPC) initiative. In the interim, Intel has worked with MediaTek to design and certify new 5G modems with carriers around the world, with the M.2 modules ultimately produced by Fibocom. The resulting Intel 5G Solution 5000 is a 5G M.2 device that delivers up to five times the speed of the company's Gigabit LTE solutions, and it's compatible with both Tiger Lake and Alder Lake platforms.
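Assuming that "Gigabit LTE" baseline is a nominal 1 Gbps link, a 5X multiplier works out to roughly 5 Gbps of peak theoretical throughput; real-world speeds will, of course, depend on carrier, band and signal conditions.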
Intel claims that it leads the ACPC space with three out of four ACPCs shipping with LTE (more than five million units thus far). Intel’s 5G Solution 5000 is designed to extend that to the 5G arena with six designs from three OEMs (Acer, ASUS and HP) coming to market in 2021. The company says it will ramp to more than 30 designs next year.
Intel says that while it will not be the first to come to market with a 5G PC solution, it will be the first to deliver them in volume, but we’ll have to see how that plays out in the face of continued supply disruptions due to the pandemic.
YouTube channel Moore’s Law Is Dead has published what it claims to be one of the first images of Intel’s upcoming enthusiast-grade DG2-series graphics card based on the Xe-HPG architecture (possibly codenamed ‘Niagara Falls’). The board does look like a graphics card, but it doesn’t have any Intel logotypes (they might have been removed to protect the source) or any other clear indication this is a DG2 GPU, so we should view any gleaned information with some skepticism.
Intel’s upcoming DG2 lineup is projected to include at least two graphics cards with either 384 or 512 execution units (EUs) and up to 16GB of memory that communicates over a 256-bit interface. The YouTube channel has published an image of Intel’s alleged DG2 graphics card and shared some additional information about Intel’s possible plans. The report says that while Intel might formally introduce its DG2-series graphics cards in Q4 2021, the cards won’t be widely available until Q1 2022.
Performance-wise, the top-of-the-range DG2 is projected to be slightly slower than Nvidia’s GeForce RTX 3080. Still, Intel is reportedly pricing the product ‘aggressively’ and is looking at a ‘sweet spot’ in the $349 to $499 range to grab market share.
The picture of the board also gives us a few points to chew over. First, the board has DisplayPort and HDMI interfaces and houses memory chips, so this is definitely a graphics card. The memory chips are installed in a pattern previously attributed to Intel’s upcoming high-end graphics cards with Xe-HPG GPUs, so this may indeed be Intel’s DG2.
Secondly, Intel's high-end Xe-HPG GPU has a rather sophisticated multi-phase (10+) voltage regulating module (VRM), consisting of two blocks on either side of the GPU with a power management controller located near the display outputs. A VRM of that scale hints at the complexity and size of the graphics processor. In any case, this is an early sample and not a commercial product, and since it's a development board, some elements might be installed on the PCB merely for testing purposes.
Another thing that catches the eye is the pair of eight-pin auxiliary PCIe power connectors, which can deliver up to 300W to the GPU and its memory. Additionally, the card can draw another 75W from the motherboard slot, so we're looking at a power-hungry graphics card. The power connectors face the front of the PC, which increases the effective length of the card, whereas modern AMD and Nvidia graphics cards place their power connectors on the top edge near the back. Previously leaked pictures of an alleged Intel DG2 card showed a board with power connectors on top, but since we don't know how old either sample is, it's impossible to draw any conclusions here.
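For reference, the PCIe specification rates each eight-pin auxiliary connector at 150W and the x16 slot at 75W, so the board pictured has a theoretical power budget of 2 x 150W + 75W = 375W. That figure is an upper bound on what the connectors can supply, not necessarily the card's actual power draw.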
Finally, just like the latest AMD Radeon and Nvidia GeForce graphics cards, the alleged Intel DG2 desktop board seems to be slightly taller than the bracket, which is logical as its developers needed to accommodate the sophisticated power supply circuitry somewhere. It still isn’t as tall as Nvidia’s GA102-based reference designs, though.
Keeping in mind that Intel’s higher-end Xe-HPG graphics cards seem to be quite a bit out on the horizon, even accurate information about their current state should be considered preliminary – hardware gets more mature, and plans tend to change during the design process.