Intel’s graphics chief Raja Koduri published the first ‘official’ image of Intel’s upcoming gaming graphics processor based on the Xe-HPG architecture. Koduri also confirmed that at least one of the company’s DG2 GPUs will feature 512 EUs.
“Xe-HPG (DG2) real candy – very productive time at the Folsom lab couple of weeks ago,” Raja Koduri wrote in a Twitter post. “[…] Lots of game and driver optimization work ahead for Lisa Pierce’s software team. They are all very excited and a little scared.”
Intel’s Xe-HPG architecture employs energy-efficient processing blocks from the Xe-LP architecture, high frequency optimizations developed for Xe-HP/Xe-HPC GPUs for data centers and supercomputers, high-bandwidth internal interconnections, hardware-accelerated ray-tracing support, and a GDDR6-powered memory subsystem. The DG2 family GPUs are set to be manufactured at TSMC and are projected to hit the market in late 2021 or early 2022.
At present, Intel’s highest-performance discrete GPU is based on the Xe-LP architecture and features 96 EUs. Therefore, an Xe-HPG-powered graphics processor with 512 EUs will offer significantly higher performance than Intel’s existing standalone GPU. Rumor has it that the performance of Intel’s graphics processor with 512 EUs will be close to that of Nvidia’s GeForce RTX 3080.
The Spectrix D50 Xtreme DDR4-5000 is one of those luxury memory kits that you don’t necessarily need inside your system. However, you’d purchase it in a heartbeat if you had the funds.
For
+ Good performance
+ Gorgeous aesthetics
Against
– Costs an arm and a leg
– XMP requires 1.6V
When a product has the word “Xtreme” in its name, you can tell that it’s not tailored towards the average consumer. Adata’s XPG Spectrix D50 Xtreme memory is that kind of product. A simple glance at the memory’s specifications is more than enough to tell you that Adata isn’t marketing the Spectrix D50 Xtreme towards average joes. Unlike the vanilla Spectrix D50, the Xtreme version only comes in DDR4-4800 and DDR4-5000 flavors with a limited 16GB (2x8GB) capacity. The memory will likely not be on many radars unless you’re a very hardcore enthusiast.
Adata borrowed the design from Spectrix D50 and took it to another level for the Spectrix D50 Xtreme. The heat spreader retains the elegant look with geometric lines. The difference is that the Xtreme variant features a polished, mirror-like heat spreader. The reflective finish looks stunning, but it’s also a fingerprint and dust magnet, which is why Adata includes a microfiber cloth to tidy up.
The memory module measures 43.9mm (1.73 inches) tall, so compatibility with big CPU air coolers is good. The Spectrix D50 Xtreme still has that RGB diffuser on the top of the memory module. Adata provides its own XPG RGB Sync application to control the lighting or, if you prefer, you can use your motherboard’s software. The Spectrix D50 Xtreme’s RGB illumination is compatible with the ecosystems from Asus, Gigabyte, MSI and ASRock.
Each Spectrix D50 Xtreme memory module has an 8GB capacity and sticks to a conventional single-rank design. It features a black, eight-layer PCB and Hynix H5AN8G8NDJR-VKC (D-die) integrated circuits (ICs).
The default data rate and timings for the Spectrix D50 Xtreme are DDR4-2666 and 19-19-19-43, respectively. Adata equipped the memory with two XMP profiles with identical 19-28-28-46 timings. The primary profile corresponds to DDR4-5000, while the secondary profile sets the memory to DDR4-4800. Both data rates require a 1.6V DRAM voltage to function properly. For more on timings and frequency considerations, see our PC Memory 101 feature, as well as our How to Shop for RAM story.
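Those CAS numbers only tell half the story: what matters is latency in absolute time, which also depends on the data rate. As a rough sketch (not from the review itself), true CAS latency in nanoseconds can be computed like this:

```python
def cas_latency_ns(cl: int, data_rate_mts: int) -> float:
    """True CAS latency in nanoseconds.

    DDR transfers twice per I/O clock, so a DDR4-5000 kit runs a
    2500 MHz clock; one cycle lasts 1000 / clock_mhz nanoseconds.
    """
    clock_mhz = data_rate_mts / 2
    return cl * 1000 / clock_mhz

# Spectrix D50 Xtreme XMP profile: DDR4-5000 at CL19
print(cas_latency_ns(19, 5000))   # 7.6 ns
# A tight DDR4-3600 CL14 kit lands in the same ballpark:
print(cas_latency_ns(14, 3600))   # ~7.78 ns
```

This is why a high-clocked CL19 kit isn't automatically "slower" than a lower-clocked kit with tighter timings: the absolute latencies end up close, while the faster kit keeps its bandwidth advantage.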
Comparison Hardware

Memory Kit | Part Number | Capacity | Data Rate | Primary Timings | Voltage | Warranty
Crucial Ballistix Max | BLM2K8G51C19U4B | 2 x 8GB | DDR4-5100 (XMP) | 19-26-26-48 (2T) | 1.50 | Lifetime
Adata XPG Spectrix D50 Xtreme | AX4U500038G19M-DGM50X | 2 x 8GB | DDR4-5000 (XMP) | 19-28-28-46 (2T) | 1.60 | Lifetime
Thermaltake ToughRAM RGB | R009D408GX2-4600C19A | 2 x 8GB | DDR4-4600 (XMP) | 19-26-26-45 (2T) | 1.50 | Lifetime
Predator Apollo RGB | BL.9BWWR.255 | 2 x 8GB | DDR4-4500 (XMP) | 19-19-19-39 (2T) | 1.45 | Lifetime
Patriot Viper 4 Blackout | PVB416G440C8K | 2 x 8GB | DDR4-4400 (XMP) | 18-26-26-46 (2T) | 1.45 | Lifetime
TeamGroup T-Force Dark Z FPS | TDZFD416G4000HC16CDC01 | 2 x 8GB | DDR4-4000 (XMP) | 16-18-18-38 (2T) | 1.45 | Lifetime
TeamGroup T-Force Xtreem ARGB | TF10D416G3600HC14CDC01 | 2 x 8GB | DDR4-3600 (XMP) | 14-15-15-35 (2T) | 1.45 | Lifetime
Our Intel platform simply can’t handle the Spectrix D50 Xtreme DDR4-5000 memory kit. Neither our Core i7-10700K nor Core i9-10900K sample has an IMC (integrated memory controller) strong enough for a memory kit of this caliber.
The Ryzen 9 5900X, on the other hand, had no problems with the memory. The AMD test system leverages a Gigabyte B550 Aorus Master with the F13j firmware and an MSI GeForce RTX 2080 Ti Gaming Trio to run our RAM benchmarks.
Unfortunately, we ran into a small problem that prevented us from testing the Spectrix D50 Xtreme at its advertised frequency. One of the limitations with B550 motherboards is the inability to set memory timings above 27. The Spectrix D50 Xtreme requires 19-28-28-46 to run at DDR4-5000 properly. Despite brute-forcing the DRAM voltage, we simply couldn’t get the Spectrix D50 Xtreme to run at 19-27-27-46. The only stable data rate with the aforementioned timings was DDR4-4866, which is what we used for testing.
AMD Performance
There’s always a performance penalty when you break the 1:1 ratio between the Infinity Fabric clock (FCLK) and the memory clock on Ryzen processors. The Spectrix D50 Xtreme fell just a hair short of surpassing the Xtreem ARGB memory kit, and DDR4-3600 is basically the sweet spot for Ryzen.
It’s important to bear in mind that the Spectrix D50 Xtreme was running at DDR4-4866. Despite that 134 MT/s deficit, Adata’s offering lands really close to Crucial’s Ballistix Max DDR4-5100, which is the highest-specced memory kit that has passed through our labs so far.
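For readers unfamiliar with the coupling mentioned above: Ryzen keeps FCLK and the memory clock at 1:1 only up to the chip’s FCLK ceiling (around 1900 MHz on typical Zen 3 samples — an assumed figure that varies from CPU to CPU). A minimal sketch of that check:

```python
def fclk_coupled(data_rate_mts: int, fclk_ceiling_mhz: int = 1900) -> bool:
    """True if a Ryzen 5000 CPU can keep FCLK and MEMCLK at 1:1.

    MEMCLK is half the DDR data rate; the 1900 MHz ceiling is an
    assumed typical limit and varies from sample to sample.
    """
    memclk_mhz = data_rate_mts / 2
    return memclk_mhz <= fclk_ceiling_mhz

print(fclk_coupled(3600))   # True  -> 1:1, the Ryzen sweet spot
print(fclk_coupled(4866))   # False -> falls back to 2:1, adding latency
```

Under these assumptions a DDR4-4866 kit forces the 2:1 divider, which is exactly where the latency penalty in our results comes from.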
Overclocking and Latency Tuning
Due to the motherboard limitation, we couldn’t pursue overclocking on the Spectrix D50 Xtreme. However, in our experience, high-speed memory kits typically don’t have much gas left in the tank. Furthermore, the Spectrix D50 Xtreme already requires 1.6V to hit DDR4-5000, so it’s unlikely that we would have gotten anywhere without pushing insane amounts of voltage into the memory.
Lowest Stable Timings

Memory Kit | DDR4-4400 (1.45V) | DDR4-4500 (1.50V) | DDR4-4600 (1.55V) | DDR4-4666 (1.56V) | DDR4-4866 (1.60V) | DDR4-5100 (1.60V)
Crucial Ballistix Max DDR4-5100 C19 | N/A | N/A | N/A | N/A | N/A | 17-25-25-48 (2T)
Adata XPG Spectrix D50 Xtreme DDR4-5000 CL19 | N/A | N/A | N/A | N/A | 19-27-27-46 (2T) | N/A
Thermaltake ToughRAM RGB DDR4-4600 C19 | N/A | N/A | 18-24-24-44 (2T) | 20-26-26-45 (2T) | N/A | N/A
Patriot Viper 4 Blackout DDR4-4400 C18 | 17-25-25-45 (2T) | 21-26-26-46 (2T) | N/A | N/A | N/A | N/A
At DDR4-4866, the Spectrix D50 Xtreme operated comfortably with 19-27-27-46 timings. However, it wouldn’t go lower regardless of how much voltage we cranked into it. We’ll revisit the overclocking portion of the review once we source a more capable processor and motherboard for the job.
Bottom Line
The Spectrix D50 Xtreme DDR4-5000 C19 won’t offer you the best bang for your buck by any means. However, the memory will make your system look good and give you some bragging rights along the way. Just make sure you have a processor and motherboard that can tame the memory before pulling the trigger on a memory kit of this caliber.
With that said, the Spectrix D50 Xtreme DDR4-5000 C19 doesn’t come cheap. The memory retails for $849.99 on Amazon. It’s not like there are tons of DDR4-5000 memory kits out there, but the Spectrix D50 Xtreme is actually the cheapest of the lot. More budget-conscious consumers, however, should probably stick to a DDR4-3600 or even DDR4-3800 memory kit with the lowest timings possible. The Spectrix D50 Xtreme is more luxury than necessity.
NVIDIA today refreshed the top-end of the GeForce RTX 30-series “Ampere” family of graphics cards with the new GeForce RTX 3080 Ti, which we’re testing for you today. The RTX 3080 Ti is being considered the next flagship gaming product, picking up the mantle from the RTX 3080. While the RTX 3090 is positioned higher in the stack, NVIDIA has been treating it as a TITAN-like halo product for not just gaming, but also quasi-professional use cases. The RTX 3080 Ti has the same mandate as the RTX 3080—to offer leadership gaming performance with real-time raytracing at 4K UHD resolution.
NVIDIA’s announcement of the GeForce RTX 3080 Ti and RTX 3070 Ti was likely triggered by AMD’s unexpected success in taking a stab at the high-end market after many years with its Radeon RX 6800 series and RX 6900 XT “Big Navi” GPUs, which are able to compete with the RTX 3080, RTX 3070, and even pose a good alternative to the RTX 3090. NVIDIA possibly found itself staring at a large gap between the RTX 3080 and RTX 3090 that needed to be filled. We hence have the RTX 3080 Ti.
The GeForce RTX 3080 Ti is based on the same 8 nm GA102 silicon as the RTX 3080, but with more CUDA cores, while maxing out the 384-bit wide GDDR6X memory bus. It only has slightly fewer CUDA cores than the RTX 3090, the memory size is 12 GB as opposed to 24 GB, and the memory clock is slightly lower. NVIDIA has given the RTX 3080 Ti a grand 10,240 CUDA cores spread across 80 streaming multiprocessors, 320 3rd Gen Tensor cores that accelerate AI and DLSS, and 80 2nd Gen RT cores. It also has all 112 ROPs enabled, as well as 320 TMUs. The 12 GB of memory maxes out the 384-bit memory bus, but the memory clock runs at 19 Gbps (compared to 19.5 Gbps on the RTX 3090). Memory bandwidth hence is 912.4 GB/s.
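As a back-of-the-envelope check on that figure, peak GDDR6X bandwidth is just bus width times the per-pin data rate, divided by eight for bytes; at a nominal 19 Gbps this works out to 912 GB/s (the commonly quoted 912.4 GB/s reflects an effective rate fractionally above nominal). A hedged sketch:

```python
def gddr_bandwidth_gbs(bus_width_bits: int, data_rate_gbps: float) -> float:
    """Peak memory bandwidth in GB/s: every bus pin moves
    data_rate_gbps gigabits per second; divide by 8 for bytes."""
    return bus_width_bits * data_rate_gbps / 8

print(gddr_bandwidth_gbs(384, 19.0))   # RTX 3080 Ti: 912.0 GB/s
print(gddr_bandwidth_gbs(384, 19.5))   # RTX 3090:    936.0 GB/s
```

The same formula explains why maxing out the 384-bit bus matters so much more than the small memory-clock gap to the RTX 3090.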
The NVIDIA GeForce RTX 3080 Ti Founders Edition looks similar in design to the RTX 3080 Founders Edition. NVIDIA is pricing the card at $1,200, or about $200 higher than the Radeon RX 6900 XT. The AMD flagship is really the main target of this NVIDIA launch, as it has spelled trouble for the RTX 3080. As rumors of the RTX 3080 Ti picked up pace, AMD worked with its board partners to release an enthusiast-class RX 6900 XT refresh based on the new “XTXH” silicon that can sustain 10% higher clock speeds. In this review, we compare the RTX 3080 Ti with all the SKUs in its vicinity to show you whether it’s worth stretching your budget to $1,200, or whether you could save some money by choosing this card over the RTX 3090.
MSI GeForce RTX 3080 Ti Suprim X is the company’s new flagship gaming graphics card, and part of NVIDIA’s refresh of the RTX 30-series “Ampere” family to bolster its position in the high-end segment. The Suprim X is MSI’s exercise in leveling up to the NVIDIA Founders Edition in terms of original design and build quality. The most premium materials and design combine with the company’s most advanced graphics card cooling solution and an overclocking-optimized PCB to offer the highest tier of factory overclocks.
NVIDIA announced the GeForce RTX 3080 Ti and RTX 3070 Ti at its Computex 2021 event to answer two very specific challenges to its product stack—the Radeon RX 6900 XT outclassing the RTX 3080, and the RX 6800 performing well against the RTX 3070. The RTX 3080 Ti is designed to fill a performance gap between the RTX 3080 and the halo-segment RTX 3090.
The RTX 3080 Ti is based on the same 8 nm GA102 silicon as the RTX 3080, but features a lot more CUDA cores and, more importantly, maxes out the 384-bit wide GDDR6X memory bus of the GA102. NVIDIA is giving the card 12 GB of memory, and not 24 GB like on the RTX 3090, as it considers the RTX 3090 a halo product that even targets certain professional use-cases. The RTX 3080 Ti is also endowed with 320 TMUs, 320 Tensor cores, 80 RT cores, and 112 ROPs. The memory operates at the same 19 Gbps data-rate as the RTX 3080, but the increased bus-width results in a memory bandwidth of 912 GB/s.
The MSI RTX 3080 Ti Suprim X supercharges the RTX 3080 Ti with its highest clock speeds—1830 MHz vs. 1665 MHz reference. It features the most elaborate version of the company’s TriFrozr 2S cooling solution, with a metal alloy shroud, a dense aluminium fin-stack heatsink, three TorX fans, a power-delivery design similar to the company’s RTX 3090 Suprim X, and a metal back-plate. In this review, we take the card for a spin across our test-suite to tell you if shelling out RTX 3090 kind of money for a top custom RTX 3080 Ti is worth it. MSI hasn’t provided any pricing info yet; we expect the card will end up at around $2,100, $100 higher than our estimate for the NVIDIA baseline price.
The ASUS ROG Strix LC GeForce RTX 3080 Ti is the company’s flagship custom-design RTX 3080 Ti graphics card, characterized by its factory-fitted, all-in-one liquid cooling solution. The cooler combines an AIO liquid cold-plate that pulls heat from the GPU and memory, while a set of heatsinks and a lateral blower provide additional cooling. Interestingly, this cooler debuted with the Radeon RX 6800 XT Strix LC, which, along with the RX 6900 XT, is believed to have triggered product-stack updates among NVIDIA’s ranks to begin with.
The GeForce RTX 3080 Ti replaces the RTX 3080 as NVIDIA’s new flagship gaming product. The RTX 3090 is still positioned higher, but that SKU is more of a TITAN-like halo product, with its massive 24 GB memory favoring certain professional use-cases when paired with Studio drivers. The RTX 3080 Ti utilizes the same GA102 silicon, maxing out its 384-bit memory interface with 12 GB of GDDR6X. There are more CUDA cores on offer—10,240 vs. 8,704 on the RTX 3080—and a proportionate increase in Tensor cores, RT cores, and other components. The GeForce RTX 3080 Ti is based on the new Ampere graphics architecture, which debuts the 2nd generation of NVIDIA’s path-breaking RTX real-time raytracing technology, combining 3rd generation Tensor cores with 2nd generation RT cores and faster Ampere CUDA cores.
As mentioned earlier, the ASUS ROG Strix LC lugs a bulky all-in-one liquid + air hybrid cooling solution without coming across as ugly or tacked on. ASUS appears to have taken a keen interest in the industrial design of the card and radiator. The cooler also supports a major factory overclock of 1830 MHz, compared to 1665 MHz reference. This puts its performance above even the RTX 3090, while also costing more than the RTX 3080 Ti’s baseline price. In this review, we show you whether it’s worth picking this card over an RTX 3090, if one is available.
The EVGA GeForce RTX 3080 Ti FTW3 Ultra is the company’s premium offering based on NVIDIA’s swanky new RTX 3080 Ti graphics card, which the company hopes will restore its leadership in the high-end gaming graphics segment, a position the Radeon RX 6900 XT has thrown into dispute. Along with its sibling, the RTX 3070 Ti, the new graphics card is a response to AMD’s return to competitiveness in the high-end graphics segment. It has the same mission as the RTX 3080—to offer maxed-out gaming at 4K Ultra HD resolution, with raytracing, making it NVIDIA’s new flagship gaming product. The RTX 3090 is still positioned higher, but with its 24 GB memory, is branded as a TITAN-like halo product, capable of certain professional-visualization applications when paired with NVIDIA’s Studio drivers.
The GeForce RTX 3080 Ti features a lot more CUDA cores than the RTX 3080—10,240 vs. 8,704—and maxes out the 384-bit wide memory interface of the GA102 silicon, much like the RTX 3090. The memory amount, however, is 12 GB, running at a 19 Gbps data-rate. The RTX 3080 Ti is based on the Ampere graphics architecture, which debuts the 2nd generation of NVIDIA’s path-breaking RTX real-time raytracing technology. It combines new 3rd generation Tensor cores that leverage sparsity to accelerate AI inference performance by an order of magnitude over the previous gen; new 2nd generation RT cores which support even more hardware-accelerated raytracing effects; and the new, faster Ampere CUDA core.
The EVGA RTX 3080 Ti FTW3 Ultra features the same top-tier iCX3 cooling solution as the top RTX 3090 FTW3, with smart cooling that relies on several onboard thermal sensors beyond what the GPU and memory come with; a meaty heatsink ventilated by a trio of fans; and plenty of RGB LED lighting to add life to your high-end gaming PC build. The PCB has several air guides that let airflow from the fans pass through, improving ventilation. EVGA is pricing the RTX 3080 Ti FTW3 Ultra at $1,340, a sizable premium over the $1,200 baseline price of the RTX 3080 Ti.
The Zotac GeForce RTX 3080 Ti AMP HoloBlack is the company’s top graphics card based on the swanky new RTX 3080 Ti “Ampere” GPU by NVIDIA. Hot on the heels of its Computex 2021 announcement, we have with us NVIDIA’s new flagship gaming graphics card, a distinction it takes from the RTX 3080. The RTX 3090 is still around in NVIDIA’s product stack, but it’s positioned as a TITAN-like halo product, with its 24 GB video memory benefiting certain quasi-professional applications when paired with NVIDIA’s GeForce Studio drivers. The RTX 3080 Ti has the same mandate from NVIDIA as the RTX 3080—to offer leadership 4K UHD gaming performance with maxed out settings and raytracing.
Based on the same 8 nm “GA102” silicon as the RTX 3080, the new RTX 3080 Ti has 12 GB of memory, maxing out the 384-bit GDDR6X memory interface of the chip, while also packing more CUDA cores and other components—10,240 vs. 8,704—along with 320 TMUs, 320 Tensor cores, 80 RT cores, and 112 ROPs. The announcement of the RTX 3080 Ti and its sibling, the RTX 3070 Ti—which we’ll review soon—may have been triggered by AMD’s unexpected return to the high-end gaming graphics segment with its “Big Navi” Radeon RX 6000 series graphics cards, particularly the RX 6900 XT and the RX 6800.
The GeForce Ampere graphics architecture debuts the 2nd generation of NVIDIA RTX, bringing real-time raytracing to gamers. It combines 3rd generation Tensor cores that accelerate AI deep-learning neural nets that DLSS leverages; 2nd generation RT cores that introduce more hardware-accelerated raytracing effects, and the new Ampere CUDA core, that significantly increases performance over the previous generation “Turing.”
The Zotac RTX 3080 Ti AMP HoloBlack features the company’s highest factory-overclocked speeds for the RTX 3080 Ti, with up to 1710 MHz boost compared to 1665 MHz reference; a bold new cooling solution that relies on a large triple-fan heatsink; and aesthetic ARGB lighting elements that bring your gaming rig to life. Zotac hasn’t provided us with any pricing info yet; we’re assuming the card will end up $100 pricier than the base cards, like the Founders Edition.
Palit GeForce RTX 3080 Ti GamingPro is the company’s premium custom-design RTX 3080 Ti offering, letting gamers who know what to expect from this GPU simply install it and get gaming. Within Palit’s product stack, the GamingPro is positioned a notch below its coveted GameRock brand for enthusiasts. By itself, the RTX 3080 Ti is NVIDIA’s new flagship gaming graphics product, taking that distinction from the RTX 3080. The RTX 3090 is marketed as a halo product, with its large video memory even targeting certain professional use-cases. The RTX 3080 Ti has the same mandate as the RTX 3080—to offer leadership gaming performance at 4K UHD, with maxed out settings and raytracing.
The GeForce RTX 3080 Ti story likely begins with AMD’s unexpected return to the high-end graphics segment with its Radeon RX 6800 series and RX 6900 XT “Big Navi” graphics cards. The RX 6900 XT in particular has managed to outclass the RTX 3080 in several scenarios, and with its “XTXH” bin, even trades blows with the RTX 3090. It is exactly this performance gap between the two top Amperes, the RTX 3080 and RTX 3090, that NVIDIA developed the RTX 3080 Ti to fill.
The RTX 3080 Ti is based on the same 8 nm GA102 GPU as the other two top cards from NVIDIA’s lineup, but features many more CUDA cores than the RTX 3080, at 10,240 vs. 8,704; and more importantly, maxes out the 384-bit wide memory bus of this silicon. NVIDIA endowed this card with 12 GB of memory. Other key specs include 320 Tensor cores, 80 RT cores, 320 TMUs, and 112 ROPs. The memory ticks at the same 19 Gbps data-rate as the RTX 3080, but the wider memory bus means that the bandwidth is now up to 912 GB/s.
Palit adds value to the RTX 3080 Ti by pairing it with its TurboFan 3.0 triple-slot, triple-fan cooling solution that has plenty of RGB bling to satiate gamers. The cooler is longer than the PCB itself, so airflow from the third fan goes through the card and out holes punched into the metal backplate. The card runs at reference clock speeds of 1665 MHz, and is officially priced at NVIDIA’s $1,200 baseline price for the RTX 3080 Ti, more affordable than the other custom designs we’re testing today. In this review, we tell you if this card is all you need if you have your eyes on an RTX 3080 Ti.
We recently noticed that Alienware’s just-announced X15 and X17 thin and vaguely light gaming laptops are conspicuously missing a port — and it’s not because they’re thin-and-light, it turns out. Alienware has just confirmed to The Verge that it has discontinued the Alienware Graphics Amplifier external GPU, and so these laptops won’t need that proprietary port anymore. The company isn’t saying whether it’ll offer a future eGPU, but pointed us to off-the-shelf Thunderbolt ones instead.
The Alienware Graphics Amp was first introduced in 2014 for $299 and designed to be a companion to the company’s midrange Alienware 13, giving it the vast majority of the power of a desktop graphics card plus four extra full-size USB ports when docked. I liked the combo well enough. But over the years, Alienware added the port to practically every laptop (and some of its more compact desktops, like the Alienware X51 mini-tower and Alienware Alpha R2 console-sized PC) it released, including the company’s flagship Area-51m, which was designed to have built-in upgrades of its own.
With an included 460W power supply devoted entirely to the GPU, and a price that dipped to $199 and occasionally $150, the Amp managed to stay competitive for quite a while in the fairly niche market of eGPUs, which generally use manufacturer-agnostic Thunderbolt 3 ports instead of proprietary cables (and can often charge your laptop as well).
It’s not clear when Alienware discontinued the Amp. The Wayback Machine shows it was still live as of November 2020, and Dell last updated its support page in April 2021 — without adding compatibility for the latest wave of Nvidia and AMD graphics cards.
The new Alienware M15 R5 and M15 R6 also omit the Graphics Amplifier port. It’ll be interesting to see if this is the end for Alienware’s dreams of upgradable laptops; certainly the Amp lasted a lot longer than the idea of offering new chips for the giant Area-51m laptop.
Twitter has reopened verification applications after pausing them last week, the company announced on Tuesday. The pause, which went into effect on Friday, came just eight days after the formal relaunch of verification applications on May 20th.
When Twitter first reopened verification applications in May, it said that responses to applications would come back “within a few days,” but warned that they might take up to “a few weeks” depending on the volume of applications the company received. It’s not clear if you should expect similar timelines for a response following the pause and reopening.
Requests are open! Sorry about that pause –– now you can get back to your quest for a blue badge.
— Twitter Verified (@verified) June 1, 2021
Twitter didn’t immediately reply to a request for comment.
With Twitter’s new verification program, anyone is able to apply to get a blue check mark that denotes someone as a verified user. However, only six categories of accounts actually qualify for verification, according to Twitter’s revamped criteria: government; companies, brands, and organizations; news organizations and journalists; entertainment; sports and gaming; and activists, organizers, and other influential individuals.
ASRock announced a new motherboard lineup called the Riptide series, and with it come two brand-new boards for AMD Ryzen CPUs: the X570S PG Riptide and the B550 PG Riptide. These motherboards are an offshoot of the Phantom Gaming series, designed for gamers who also use their systems for everyday tasks.
The most striking feature of both Riptide motherboards is the addition of a GPU anti-sag bracket built right into the motherboard itself. The bracket is installed right next to the chipset and SATA ports, and will prevent your graphics card from sagging in the front, where there’s the least amount of support.
The bracket is a nice feature to have given how large graphics cards are getting these days. Many of the latest triple-slot graphics cards weigh around 1.5kg or more, including some of ASRock’s own models. Because the bracket sits behind the graphics card, tucked out of sight, it should also give PC builders a very clean look.
Speaking of aesthetics, both boards are very stealthy, with a silver and black appearance. The only hint of color is the bright orange and purple ASRock logo on the chipset, which can easily be hidden by a large enough graphics card.
Other features include a 10-phase power delivery system, up to DDR4-4933 memory support, and a special feature ASRock calls ‘Lightning Gaming Ports.’ These ports are designed to give gamers the lowest latency possible for their keyboard and mouse.
We don’t know what kind of magic ASRock is doing to improve latency on these specific USB ports, but we believe they are simply wired directly to the CPU. That would allow for the lowest latency possible, as most USB ports are routed through the chipset instead.
Another interesting note: this is ASRock’s first-ever X570S motherboard, and it’s coming to the Riptide series, though we expect ASRock’s other lineups to get the X570S treatment soon. The biggest feature of X570S is the ability to run the chipset without a fan. This is great for reliability and acoustics, and something we’re excited to see coming to AMD’s flagship chipset.
With Nvidia announcing the all-new RTX 3080 Ti and RTX 3070 Ti at Computex this year, AIB partners have wasted no time in announcing custom variants of the two GPUs. There are seven AIB partners so far that have listed custom variants of the RTX 3080 Ti and RTX 3070 Ti, with more to come.
The RTX 3080 Ti is Nvidia’s new gaming flagship for the Ampere generation, featuring 10,240 CUDA cores, 12GB of GDDR6X, a 1,365 MHz base clock, and a 1,665 MHz boost clock. It’s just a hair slower than the RTX 3090, with the biggest tradeoff (between the two SKUs) being the VRAM capacity, which is shaved down from 24GB to 12GB.
The RTX 3070 Ti is Nvidia’s new mid-range SKU that will slot in between the RTX 3070 and RTX 3080. The 3070 Ti features 6,144 CUDA cores, 8GB of GDDR6X at 19Gbps, a base clock of 1,440 MHz, and a boost frequency of 1,710MHz. Expect performance to lean closer to the RTX 3070 than to the more powerful RTX 3080, as the 3070 Ti uses the GA104 core, though the 35% boost in memory bandwidth should help.
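That 35% figure falls out of the per-pin data rates, since both cards share a 256-bit bus (14 Gbps GDDR6 on the RTX 3070 vs. 19 Gbps GDDR6X on the Ti). A quick sketch of the arithmetic:

```python
def bandwidth_gbs(bus_width_bits: int, data_rate_gbps: float) -> float:
    """Peak memory bandwidth in GB/s: bus pins times per-pin
    gigabits per second, divided by 8 to convert bits to bytes."""
    return bus_width_bits * data_rate_gbps / 8

rtx_3070    = bandwidth_gbs(256, 14.0)   # 448.0 GB/s (GDDR6)
rtx_3070_ti = bandwidth_gbs(256, 19.0)   # 608.0 GB/s (GDDR6X)
uplift_pct = (rtx_3070_ti / rtx_3070 - 1) * 100
print(round(uplift_pct, 1))   # 35.7
```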
Asus
Asus is bringing out three custom models of the RTX 3080 Ti, as well as two custom SKUs for the lower-end RTX 3070 Ti. At the top end is the ROG Strix LC RTX 3080 Ti, featuring a 240mm AIO cooler to keep temperatures as low as possible. The card is also decked out in a brushed metal finish with the Strix design language, as well as a fully lit RGB shroud and fans.
For air cooling, Asus is dishing out the ROG Strix treatment to the RTX 3080 Ti and RTX 3070 Ti. For the RTX 3080 Ti ROG Strix, the cooler looks identical to the RTX 3090 variant, with a large triple-slot design, and triple 8-pin power connectors. Styling hasn’t changed either, with a fully lit RGB light bar on the side, and brushed aluminum finish all around the card.
Asus’ lowest-end offering, for now, will be the TUF series, which you will see on both the RTX 3080 Ti and RTX 3070 Ti. Similar to the ROG models, the RTX 3080 Ti TUF is identical in looks to the RTX 3090 TUF. So we wouldn’t be surprised if Asus simply installed the RTX 3090 cooler onto the RTX 3080 Ti cards since both the 3090 and 3080 Ti share the exact same GPU core.
Unfortunately, we don’t have pictures of the custom Asus RTX 3070 Ti SKUs at this moment; however, we guess the cards will use a beefed-up cooler from the RTX 3070 class of cards, given that the RTX 3070 Ti uses the GA104 core instead of GA102. We also don’t know what frequencies these cards will run at, but rest assured these custom RTX 3080 Tis and RTX 3070 Tis will clock higher than the reference specification.
Gigabyte
Gigabyte’s offerings are minimal for now: the company currently offers the RTX 3080 Ti and RTX 3070 Ti only in the Gaming OC SKU. The Gaming series represents the more budget-friendly tier of Gigabyte’s lineup, below its top-end Aorus-branded cards.
The RTX 3080 Ti Gaming OC design is identical to that of the RTX 3090 Gaming OC, with no changes to the shroud or cooler (what we can see of the cooler) at all. The card features a matte black finish with silver accents to add some extra styling to the shroud. The 3080 Ti Gaming OC features a factory overclock of 1710MHz.
Surprisingly, the RTX 3070 Ti Gaming OC appears to have either a brand new cooler or an altered variant of the RTX 3070 Gaming OC’s. The heatsink has a different design, with two fin stacks joined by copper heat pipes, rather than the three separate fin stacks, connected by two sets of copper heat pipes, found on the vanilla RTX 3070 variant.
The RTX 3070 Ti Gaming OC also features a large copper base plate that covers the GPU and all the GDDR6X memory modules. This is a big upgrade compared to the RTX 3070 Gaming OC which only has four copper heat pipes making direct contact with the GPU, paired with a metal base plate covering the memory modules.
Aesthetically, the card has also been noticeably altered. The Gigabyte logo that sat at the rear of all Gaming OC cards is now near the front, and the “GEFORCE RTX” logo gets its own silver badge on the top of the card. The silver accents on the shroud have also been swapped, now sitting at the top front and bottom rear of the card; on other Gaming OC cards, this was reversed. The RTX 3070 Ti Gaming OC also features a factory overclock of 1830MHz.
EVGA
So far, EVGA has the most custom SKUs announced for the RTX 3080 Ti and RTX 3070 Ti, with eight custom models confirmed.
The RTX 3080 Ti alone will come in six flavors, starting with the FTW3, FTW3 Hybrid, and FTW3 Hydro Copper. The FTW3 models represent EVGA’s flagships, so expect robust power delivery and excellent performance from these models.
The remaining three consist of the XC3, XC3 Hybrid and XC3 Hydro Copper. These are EVGA’s budget and mid-range offerings, which should deliver the best overall price-to-performance ratio.
The RTX 3070 Ti will only come in two flavors for now, the FTW3 and XC3. Unfortunately, we don’t have specs or detailed pictures of any of EVGA’s SKUs at this time.
MSI
Similar to EVGA, MSI is announcing a ton of SKUs for both RTX 3080 Ti and RTX 3070 Ti. The models will consist of the Suprim, Gaming Trio, and Ventus variants. Each variant also gets a vanilla and factory overclocked model.
Overall, the RTX 3080 Ti Suprim, Gaming Trio, and Ventus are identical to the RTX 3090 models, with only very minor changes to the aesthetics of the card. The Suprim will be the top-tier model, the Gaming Trio represents the mid-tier, and the Ventus is your 'budget'-friendly RTX 3080 Ti.
The RTX 3070 Ti will also receive Suprim, Gaming Trio, and Ventus variants, but unfortunately, product pages for those cards are not available at this time.
The same goes for clock speed specifications on all of MSI's RTX 3080 Ti and RTX 3070 Ti SKUs, so we'll have to wait until those become available.
Zotac
Zotac will feature five different SKUs for the RTX 3080 Ti and RTX 3070 Ti combined, consisting of the Trinity and Holo series. The RTX 3080 Tis are nearly identical to the RTX 3090s, especially when it comes to the Trinity, where Zotac appears to have put the RTX 3090 cooler directly onto the RTX 3080 Ti.
For the RTX 3080 Ti Holo, there are a few things to note. The RTX 3080 Ti only has a single Holo SKU, while the RTX 3090 had two: the Core Holo and Extreme Holo. The RTX 3080 Ti Holo seems to be its own SKU, with a slightly different aesthetic than any of the RTX 3090 Holos. It features an elegant-looking RGB light bar on the card's side that runs from the top to almost the bottom of the card, with a grey color theme for the whole shroud.
The RTX 3080 Ti Trinity will receive a 1665MHz Boost clock (reference spec), the Trinity OC variant features a 1695MHz boost clock, and the Holo features the highest clock at 1710MHz.
The RTX 3070 Ti will also come in Trinity and Holo flavors, but with the same triple-fan cooling configuration as the Zotac RTX 3080 Trinity and Holo. This is very different from the vanilla RTX 3070, which maxes out at a twin-fan design.
We are not sure if the RTX 3070 Ti uses the RTX 3080 coolers from the Trinity and Holo series, but aesthetically they look nearly identical, making us believe this is probably true.
The RTX 3070 Ti Holo will come with a 1830MHz boost clock, and the Trinity will have an 1870MHz boost.
Colorful
Colorful has the fewest cards of all the AIB partners so far, with only three SKUs announced, and only one of those being for the RTX 3080 Ti.
The only RTX 3080 Ti SKU Colorful has announced is the Vulkan OC-V, featuring a triple-fan heatsink and a black and metal finish that gives the card a very stealthy, 'Batman'-like appearance. The card will feature a 1365MHz base clock along with a 1710MHz boost clock.
The first RTX 3070 Ti SKU announced is the 3070 Ti Advanced OC-V, a big, chunky card measuring beyond two slots in thickness and sporting a rather unique color design: a silver shroud accented by purple and black, with a red ring-lit RGB fan in the middle. The card will come with a 1575MHz base clock and a 1830MHz boost clock.
Finally, the last SKU announced is the RTX 3070 Ti NB 8G-V, which appears to be the company's budget-friendly 3070 Ti. The card features a dual-slot cooler with a very boxy appearance. The shroud is covered in a matte black finish, accented by both glossy black and matte red. The card will come with a 1575MHz base clock and a 1770MHz boost clock.
PNY
Last but not least is PNY with four new SKUs planned for the RTX 3080 Ti and RTX 3070 Ti for now. The RTX 3070 Ti and RTX 3080 Ti will both come in Revel and Uprising editions. What we have pictured are the RTX 3080 Ti Revel Epic X, 3080 Ti Uprising Epic X and the RTX 3070 Ti Revel Epic X.
The RTX 3080 Ti Revel Epic X carries a two-tone design: matte black covers the shroud itself, paired with a uniquely designed metal fan protector in a silver finish. Rings of RGB lighting sit between the fans. The same apparently goes for the RTX 3070 Ti as well, though the 3070 Ti is slightly smaller.
The RTX 3080 Ti Uprising Epic X features a grey finish with RGB accents near the middle of the fans. From what we can tell from the pictures, the card is absolutely gigantic, with a very wide heatsink and considerable length. For perspective, the heatsink stretches out from the main PCB a good 4 inches, and the PCB isn't compact at all. So this card is going to be a challenge for some PC cases.
G.Skill has announced a special edition of the brand’s Trident Z memory that currently ranks as one of the best RAM kits on the market. The new Trident Z Maverik arrives as one of the key components for MSI’s latest MPG Gaming Maverik bundle.
G.Skill hasn’t drastically changed the Trident Z Maverik’s design. The memory maintains G.Skill’s distinctive tri-fin aluminum heat spreader with eye-catching RGB lighting, but the memory vendor did slightly redo part of the heat spreader to blend in with MSI’s industrial-looking theme.
Available only as part of the MPG Gaming Maverik bundle, MSI combines two Trident Z Maverik 16GB memory modules into a dual-channel setup. The memory modules are binned for DDR4-3600 with 18-22-22-42 timings. They’re rated for 1.35V and leave some overhead for memory clocking or optimization.
MSI’s MPG Gaming Maverik bundle is practically an entire system, so it’s odd that the company refers to it as a bundle. At any rate, the bundle comprises Intel’s Core i7-11700K (Rocket Lake) processor, MSI’s own MPG Z590 Gaming Edge WiFi SP motherboard, MPG CoreLiquid K360 SP liquid cooler and MPG Velox 100 AirFlow SP case.
The MPG Gaming Maverik bundle will be available this month in limited quantities. MSI hasn’t revealed the pricing for the bundle.
Nvidia today announced at Computex 2021 that it’s partnered with Valve to bring its Deep Learning Super Sampling (DLSS) graphics tech to Linux via Steam Proton. Now people who game on Linux systems should be able to put their Nvidia graphics cards—including the new GeForce RTX 3080 Ti and RTX 3070 Ti—to even better use.
DLSS is Nvidia’s solution to the problem of improving a game’s performance without having to compromise too much on presentation. The first version of the technology debuted in September 2018; the second version was released in March 2020. Both versions were limited to RTX graphics cards used to play games on Windows.
That’s about to change. Nvidia said in a press release that it, Valve, and “the Linux gaming community are collaborating to bring NVIDIA DLSS to Proton – Linux gamers will be able to use the dedicated AI cores on GeForce RTX GPUs to boost frame rates for their favorite Windows Games running on the Linux operating system.”
Proton is Valve’s open-source tool “for use with the Steam client which allows games which are exclusive to Windows to run on the Linux operating system,” as it’s described on GitHub, with some assistance from the Wine utility that Linux users have relied on to run Windows programs since its debut in 1993.
Valve said that Proton is built into the Linux Steam Client Beta; the open-source project is meant to give “advanced users” more control over their experience. Presumably, the upcoming DLSS support will be part of the core Linux Steam Client Beta, but it could also be implemented as an optional feature, at least to start.
Nvidia didn’t offer many other details about its partnership with Valve or to whom it was referring when it said “the Linux gaming community.” But it did make it clear that Linux gamers won’t have to wait long for DLSS: “Support for Vulkan titles is coming this month,” the company said, “with DirectX support coming in the Fall.”
The continued expansion of DLSS arrived shortly after AMD announced that its FidelityFX Super Resolution technology, which promises similar features but will be available on many hardware platforms, will be available on June 22. At least now Nvidia can say that DLSS will be available on multiple operating systems, right?
Samsung could enable HDR10+ for gaming, according to a German blog post spotted by HDTVtest. The article claims Samsung executives are working with ‘various unnamed studios’ to set up a steady supply of HDR10+ titles.
The HDR10+ format was created by Samsung and is a competitor to Dolby Vision. Like Dolby Vision, HDR10+ is all about adding dynamic metadata to the HDR signal to deliver more detail. Unlike Dolby Vision, companies don’t need to pay a fee to license HDR10+.
The report doesn’t reveal whether Samsung is planning to bring the technology to games consoles or reserve it for mobile devices such as the HDR10+-supporting Samsung Galaxy S21.
However, it’s interesting to note that Dolby Vision is supposed to be exclusive for the Xbox Series X and S for the next two years. Could Samsung be working with Sony to bring HDR10+ gaming to the PS5? It’s certainly a possibility.
The Xbox Series X and Xbox Series S systems have supported Dolby Atmos since launch, with Dolby Vision support expected later this year. Microsoft recently announced a Dolby Vision HDR test program for Alpha Ring members ahead of ‘general availability’.
Only a handful of titles make use of Dolby Vision HDR (Gears 5, Halo: The Master Chief Collection and Borderlands 3 are the biggies) but last month Microsoft revealed plans for a major push into Dolby Vision gaming.
If the rumours are true, HDR10+ for gaming could bring better contrast and more vibrant colours to your favourite titles, although you’ll still need a compatible 4K TV.
MORE:
Our round-up of the best gaming TVs
Read our PS5 review
Read our Xbox Series S review