Fortnite is getting a visual boost on PC very soon. As part of the upcoming Chapter 2: Season 7, which will launch on June 8th, the PC version of the game is getting a new “epic” graphical setting.
Epic says it will include “new and enhanced effects plus improved post-processing features and shadow quality.” Among other changes, it sounds like many of the great visual enhancements that came to the PS5 and Xbox Series X versions of the game are coming to PC, including “more advanced explosion effects.” In other words, after you update, try to find a rocket launcher to play with.
As part of the update, Fortnite’s system requirements on PC are getting a tweak and are now split into three tiers: epic, recommended, and minimum.
AMD’s FidelityFX Super Resolution (FSR) technology works not only on the company’s Radeon graphics processors but also on Nvidia’s GeForce GPUs, so developers can support it across all the best graphics cards. For Nvidia, which has its own Deep Learning Super Sampling (DLSS) technology, ensuring compatibility with AMD’s FSR is not a priority, but Intel, which is quarters away from launching its gaming GPUs, is taking a close look at AMD’s FidelityFX Super Resolution.
According to an AMD patent, FidelityFX Super Resolution is an upscaling technology that generates the final image using multiple frames as a reference, processed with linear and nonlinear techniques. Unlike Nvidia’s DLSS, AMD’s FSR does not rely on deep learning, which has both advantages and disadvantages. The technology promises tangible performance improvements without quality degradation, so game developers should be interested in supporting it.
Since AMD’s FidelityFX Super Resolution is hardware-agnostic, it also makes sense for Intel to support it and optimize drivers for it. Indeed, Intel’s graphics chief Raja Koduri said in a Twitter post that Intel could support the technology developed by AMD.
“Definitely looking at it — the deep learning capabilities of Xe-HPG architecture do lend to approaches that achieve better quality and performance,” said Raja Koduri. “We will definitely try to align with open approaches to make ISVs job easier.”
At present, AMD’s FidelityFX Super Resolution is supported by Gearbox Software’s Godfall, a title that supports a number of other AMD-designed technologies too. But making AMD’s FSR an industry standard is not impossible. AMD’s FidelityFX technologies are available not only on PCs but also on the latest game consoles that use AMD’s GPUs. Over time, these technologies should therefore be adopted fairly widely, and AMD’s rivals Intel and Nvidia will have to optimize their drivers for games that use them.
As reported by Phoronix, AMD is focusing on expanding its SmartShift ecosystem to support operating systems beyond Windows 10. AMD has released two patches this week that continue adding support for SmartShift’s features to the Linux ecosystem. That’s excellent news for Linux buyers who want to use AMD’s shiny new SmartShift notebooks.
SmartShift was released last year by AMD (in just one laptop, the Dell G5 15 SE) as a way to further improve notebook performance and efficiency when using AMD CPUs and discrete GPUs together. The technology aims to turn the CPU and GPU into one cohesive system, allowing both chips to dynamically share power depending on the workload at hand.
At Computex this year, AMD showed off its second wave of SmartShift laptops (like the new ROG Strix G15 Advantage) based on the all-new RX 6000M GPUs and Ryzen 5000 mobile CPUs, plus new enhancements for the SmartShift technology. This aggressive push for SmartShift adoption shows that AMD is focused on bringing the technology out in full force. The push to expand adoption to Linux users seems to be part of that, even though those users make up only a small part of the notebook segment.
Just a few days ago, on May 30th, AMD released a Linux patch that enables SmartShift support when a discrete Radeon GPU is detected in a SmartShift-capable notebook.
Today, another patch was released, further adding support for SmartShift’s features. This patch exposes SmartShift’s power-share info to user space via sysfs, meaning Linux can now monitor SmartShift’s behavior and judge whether the system is working as intended.
Another patch adds control over SmartShift’s power-sharing parameters, meaning the OS, or potentially the user, can decide how much power goes to the CPU or the discrete GPU.
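For readers who want to poke at this themselves, here is a minimal sketch of what monitoring that power split from user space could look like. The sysfs path and attribute names below are assumptions made for illustration, not the confirmed interface; check the amdgpu driver documentation for the files your kernel actually exposes.

```python
#!/usr/bin/env python3
"""Poll SmartShift power-share values exposed through sysfs.

Illustrative only: the directory and attribute names are assumptions,
not the confirmed amdgpu interface.
"""
from pathlib import Path
import time

DEVICE = Path("/sys/class/drm/card0/device")                # hypothetical location
ATTRS = ["smartshift_apu_power", "smartshift_dgpu_power"]   # hypothetical names


def read_attr(name: str) -> str:
    """Return the raw value of a sysfs attribute, or 'n/a' if it's absent."""
    node = DEVICE / name
    return node.read_text().strip() if node.exists() else "n/a"


if __name__ == "__main__":
    # Sample the APU/dGPU power split once per second to watch how the
    # driver shifts the budget between the two chips under load.
    for _ in range(10):
        print({attr: read_attr(attr) for attr in ATTRS})
        time.sleep(1)
```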
With all this effort, it seems AMD is preparing to make SmartShift a mainstream technology, with not only Linux support, but also a wide variety of notebook support coming in the not-so-distant future. Some serious questions remain, though, like when we’ll see the tech in more than a handful of laptop models.
And for those AMD-based models to expand, the company will need to assure its partners that it can pump out a substantial and consistent amount of its current-gen CPUs and brand-new mobile GPUs. In the current climate of high demand for desktop graphics cards, chip shortages, and TSMC’s production pushed to its limits, the only thing certain seems to be uncertainty.
Last year’s Nvidia RTX 3080 was the first GPU to make 4K gaming finally feasible. It was a card that delivered impressive performance at 4K, especially for its retail price of $699 — far less than the 2080 Ti cost a generation earlier. That was before the reality of a global chip shortage drove the prices of modern GPUs well above $1,000. Now that the street prices of RTX 3080s have stayed above $2,000 for months, Nvidia is launching its RTX 3080 Ti flagship priced at $1,199.
It’s a card that aims to deliver near identical levels of performance to the $1,499 RTX 3090, but in a smaller package and with just 12GB of VRAM — half what’s found on the RTX 3090. Nvidia is effectively competing with itself here, and now offering three cards at the top end. That’s if you can even manage to buy any of them in the first place.
I’ve spent the past week testing the RTX 3080 Ti at both 4K and 1440p resolutions. 4K gaming might have arrived originally with the RTX 2080 Ti, but the RTX 3080 Ti refines it and offers more headroom in the latest games. Unfortunately, it does so with a $1,199 price tag that I think will be beyond most people’s budgets even before you factor in the inevitable street price markup it will see during the current GPU shortage.
Hardware
If you put the RTX 3080 Ti and the RTX 3080 side by side, it would be difficult to tell the difference between them. They look identical, with the same ports and fan setup. I’m actually surprised this card isn’t a three-slot like the RTX 3090, or just bigger generally. The RTX 3080 Ti has one fan on either side of the card, with a push-pull system in place. The bottom fan pulls cool air into the card, which then exhausts on the opposite side, closest to your CPU cooler and rear case fan. A traditional blower-style cooler also exhausts hot air out of the rear bracket at the back of the case.
This helped create a quieter card on the original RTX 3080, and I’m happy to report it’s the same with the RTX 3080 Ti. The RTX 3080 Ti runs at or close to its max fan RPM under heavy loads, but the hum of the fans isn’t too distracting. I personally own an RTX 3090, and while the fans rarely kick in at full speed, they’re certainly a lot more noticeable than the RTX 3080 Ti’s.
Nvidia has used the same RTX 3080 design for the 3080 Ti model.
That quiet performance might have a downside, though. During my week of testing with the RTX 3080 Ti, I noticed that the card seems to run rather hot. I recorded temperatures regularly around 80 degrees Celsius, compared to the 70 degrees Celsius temperatures on the larger RTX 3090. The fans also maxed out a lot during demanding 4K games on the RTX 3080 Ti in order to keep the card cool. I don’t have the necessary equipment to fully measure the heat output here, but when I went to swap the RTX 3080 Ti for another card after hours of testing, it was too hot to touch, and stayed hotter for longer than I’d noticed with either the RTX 3080 or RTX 3090. I’m not sure if this will result in problems in the long term, as we saw with the initial batch of 2080 Ti units having memory overheating issues, but most people will put this in a case and never touch it again. Still, I’m surprised at how long it stayed hot enough for me to not want to touch it.
As this is a Founders Edition card, Nvidia is using its latest 12-pin single power connector. There’s an ugly and awkward adapter in the box that lets you connect two eight-pin PCIe power connectors to it, but I’d highly recommend getting a single new cable from your PSU supplier to connect directly to this card. It’s less cabling, and a more elegant solution if you have a case window or you’re addicted to tidy cable management (hello, that’s me).
I love the look of the RTX 3080 Ti and the pennant-shaped board that Nvidia uses here. Just like the RTX 3080, there are no visible screws, and the regulatory notices are all on the output part of the card so there are no ugly stickers or FCC logos. It’s a really clean card, and I’m sorry to bring this up, but Nvidia has even fixed the way the number 8 is displayed. It was a minor mistake on the RTX 3080, but I’m glad the 8 has the correct proportions on the RTX 3080 Ti.
At the back of the card there’s a single HDMI 2.1 port and three DisplayPort 1.4a ports. Just like the RTX 3080, there are also LEDs that glow around the top part of the fan, and the GeForce RTX branding lights up, too. You can even customize the colors of the glowing part around the fan if you’re really into RGB lighting.
Just like the RTX 3080, this new RTX 3080 Ti needs a 750W power supply. The RTX 3080 Ti draws more power, too, at up to 350 watts under load compared to 320 watts on the RTX 3080. That’s the same amount of power draw as the larger RTX 3090, which is understandable given the performance improvements, but it’s worth being aware of how this might impact your energy bills (and the cost of the PC build needed to run it).
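To put that power draw into rough dollar terms, here is a quick back-of-the-envelope estimate. The four hours of daily gaming and the $0.13/kWh electricity rate are assumptions picked for illustration, not measurements from this review.

```python
# Rough yearly energy cost of a GPU's load power draw.
def yearly_cost(watts: float, hours_per_day: float, usd_per_kwh: float) -> float:
    kwh_per_year = watts / 1000 * hours_per_day * 365
    return kwh_per_year * usd_per_kwh

# Load power figures from this review; usage hours and rate are assumptions.
print(round(yearly_cost(320, 4, 0.13), 2))  # RTX 3080:    ~60.74 USD/year
print(round(yearly_cost(350, 4, 0.13), 2))  # RTX 3080 Ti: ~66.43 USD/year
```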
1440p testing
I’ve been testing the RTX 3080 Ti with Intel’s latest Core i9 processor. For 1440p tests, I’ve also paired the GPU with a 32-inch Samsung Odyssey G7 monitor. This monitor supports refresh rates up to 240Hz, as well as Nvidia’s G-Sync technology.
I compared the RTX 3080 Ti against both the RTX 3080 and RTX 3090 to really understand where it fits into Nvidia’s new lineup. I tested a variety of AAA titles, including Fortnite, Control, Death Stranding, Metro Exodus, Call of Duty: Warzone, Microsoft Flight Simulator, and many more. You can also find the same games tested at 4K resolution below.
All games were tested at max or ultra settings on the RTX 3080 Ti, and most exceeded an average of 100fps at 1440p. On paper, the RTX 3080 Ti is very close to an RTX 3090, and my testing showed that plays out in most games at 1440p. Games like Microsoft Flight Simulator, Assassin’s Creed: Valhalla, and Watch Dogs: Legion all have near-identical performance across the RTX 3080 Ti and RTX 3090 at 1440p.
Even Call of Duty: Warzone is the same without Nvidia’s Deep Learning Super Sampling (DLSS) technology enabled, and it’s only really games like Control and Death Stranding where there’s a noteworthy, but small, gap in performance.
However, the jump in performance from the RTX 3080 to the RTX 3080 Ti is noticeable across nearly every game, with the exception of Death Stranding and Fortnite, which both perform really well on the base RTX 3080.
RTX 3080 Ti (1440p)
| Benchmark | RTX 3080 Founders Edition | RTX 3080 Ti Founders Edition | RTX 3090 Founders Edition |
| --- | --- | --- | --- |
| Microsoft Flight Simulator | 46fps | 45fps | 45fps |
| Shadow of the Tomb Raider | 147fps | 156fps | 160fps |
| Shadow of the Tomb Raider (DLSS) | 154fps | 162fps | 167fps |
| CoD: Warzone | 124fps | 140fps | 140fps |
| CoD: Warzone (DLSS+RT) | 133fps | 144fps | 155fps |
| Fortnite | 160fps | 167fps | 188fps |
| Fortnite (DLSS) | 181fps | 173fps | 205fps |
| Gears 5 | 87fps | 98fps | 103fps |
| Death Stranding | 163fps | 164fps | 172fps |
| Death Stranding (DLSS quality) | 197fps | 165fps | 179fps |
| Control | 124fps | 134fps | 142fps |
| Control (DLSS quality + RT) | 126fps | 134fps | 144fps |
| Metro Exodus | 56fps | 64fps | 65fps |
| Metro Exodus (DLSS+RT) | 67fps | 75fps | 77fps |
| Assassin’s Creed: Valhalla | 73fps | 84fps | 85fps |
| Watch Dogs: Legion | 79fps | 86fps | 89fps |
| Watch Dogs: Legion (DLSS+RT) | 67fps | 72fps | 74fps |
| Watch Dogs: Legion (RT) | 49fps | 55fps | 56fps |
Assassin’s Creed: Valhalla performs 15 percent better on the RTX 3080 Ti than on the regular RTX 3080, and Metro Exodus also shows a 14 percent improvement. The performance increases range from around 4 percent all the way up to 15 percent, so the performance gap is very game dependent.
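Those percentages fall straight out of the frame rates in the table above; here’s a quick sketch of the arithmetic, using the 1440p averages quoted in this review.

```python
# Percentage uplift of the RTX 3080 Ti over the RTX 3080 at 1440p,
# using the averages from the table above.
def uplift(base_fps: float, new_fps: float) -> float:
    return (new_fps / base_fps - 1) * 100

results_1440p = {
    "Assassin's Creed: Valhalla": (73, 84),
    "Metro Exodus": (56, 64),
    "Fortnite": (160, 167),
}
for game, (rtx_3080, rtx_3080_ti) in results_1440p.items():
    print(f"{game}: +{uplift(rtx_3080, rtx_3080_ti):.0f}%")
# Prints roughly +15%, +14% and +4%, matching the spread described above.
```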
Even when using games with ray tracing, the RTX 3080 Ti still managed high frame rates when paired with DLSS. DLSS uses neural networks and AI supercomputers to analyze games and sharpen or clean up images at lower resolutions. In simple terms, it allows a game to render at a lower resolution and use Nvidia’s image reconstruction technique to upscale the image and make it look as good as native 4K.
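To make the render-resolution trade-off concrete, here’s a small sketch of roughly what the commonly cited DLSS presets mean in pixel terms for a 4K output. The scale factors are the widely quoted ones and can vary by title and DLSS version, so treat them as illustrative rather than exact.

```python
# Commonly cited per-axis render scales for DLSS presets (illustrative;
# actual values can differ per game and DLSS version).
SCALES = {"quality": 2 / 3, "balanced": 0.58, "performance": 0.5}

def internal_resolution(out_w: int, out_h: int, mode: str) -> tuple[int, int]:
    s = SCALES[mode]
    return round(out_w * s), round(out_h * s)

for mode in SCALES:
    print(mode, internal_resolution(3840, 2160, mode))
# quality ~ (2560, 1440), performance ~ (1920, 1080): the GPU shades far
# fewer pixels, and the reconstruction step upscales to the 4K output.
```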
Whenever I see the DLSS option in games, I immediately turn it on now to get as much performance as possible. It’s still very much required for ray tracing games, particularly as titles like Watch Dogs: Legion only manage to hit 55fps with ultra ray tracing enabled. If you enable DLSS, this jumps to 72fps and it’s difficult to notice a hit in image quality.
4K testing
For my 4K testing, I paired the RTX 3080 Ti with Acer’s 27-inch Nitro XV273K, a 4K monitor that offers up to 144Hz refresh rates and supports G-Sync. I wasn’t able to get any of the games I tested on both the RTX 3080 Ti and RTX 3090 to hit the frame rates necessary to really take advantage of this 144Hz panel, but some came close thanks to DLSS.
Metro Exodus manages a 14 percent improvement over the RTX 3080, and Microsoft Flight Simulator also sees a 13 percent jump. Elsewhere, other games see between a 4 and 9 percent improvement. These are solid gains for the RTX 3080 Ti, providing more headroom for 4K gaming over the original RTX 3080.
The RTX 3080 Ti comes close to matching the RTX 3090 performance at 4K in games like Watch Dogs: Legion, Assassin’s Creed: Valhalla, Gears 5, and Death Stranding. Neither the RTX 3080 Ti nor RTX 3090 is strong enough to handle Watch Dogs: Legion with ray tracing, though. Both cards manage around 30fps on average, and even DLSS only bumps this up to below 50fps averages.
RTX 3080 Ti (4K)
| Benchmark | RTX 3080 Founders Edition | RTX 3080 Ti Founders Edition | RTX 3090 Founders Edition |
| --- | --- | --- | --- |
| Microsoft Flight Simulator | 30fps | 34fps | 37fps |
| Shadow of the Tomb Raider | 84fps | 88fps | 92fps |
| Shadow of the Tomb Raider (DLSS) | 102fps | 107fps | 111fps |
| CoD: Warzone | 89fps | 95fps | 102fps |
| CoD: Warzone (DLSS+RT) | 119fps | 119fps | 129fps |
| Fortnite | 84fps | 92fps | 94fps |
| Fortnite (DLSS) | 124fps | 134fps | 141fps |
| Gears 5 | 64fps | 72fps | 73fps |
| Death Stranding | 98fps | 106fps | 109fps |
| Death Stranding (DLSS quality) | 131fps | 132fps | 138fps |
| Control | 65fps | 70fps | 72fps |
| Control (DLSS quality + RT) | 72fps | 78fps | 80fps |
| Metro Exodus | 34fps | 39fps | 39fps |
| Metro Exodus (DLSS+RT) | 50fps | 53fps | 55fps |
| Assassin’s Creed: Valhalla | 64fps | 70fps | 70fps |
| Watch Dogs: Legion | 52fps | 55fps | 57fps |
| Watch Dogs: Legion (DLSS+RT) | 40fps | 47fps | 49fps |
| Watch Dogs: Legion (RT) | 21fps | 29fps | 32fps |
Most games manage to comfortably rise above 60fps in 4K at ultra settings, with Microsoft Flight Simulator and Metro Exodus as the only exceptions. Not even the RTX 3090 could reliably push beyond 144fps at 4K without assistance from DLSS or a drop in visual settings. I think we’re going to be waiting on whatever Nvidia does next to really push 4K at these types of frame rates.
When you start to add ray tracing and ultra 4K settings, it’s clear that both the RTX 3080 Ti and RTX 3090 need to have DLSS enabled to play at reasonable frame rates across the most demanding ray-traced titles. Without DLSS, Watch Dogs: Legion manages an average of 29fps (at max settings), with dips below that making the game unplayable.
DLSS really is the key here across both 1440p and 4K. It was merely a promise when the 2080 Ti debuted nearly three years ago, but Nvidia has now managed to get DLSS into more than 50 popular games. Red Dead Redemption 2 and Rainbow Six Siege are getting DLSS support soon, too.
DLSS also sets Nvidia apart from AMD’s cards. While AMD’s RX 6800 XT is fairly competitive at basic rasterization at 1440p, it falls behind the RTX 3080 in the most demanding games at 4K — particularly when ray tracing is enabled. Even the $1,000 Radeon RX 6900 XT doesn’t fare much better at 4K. AMD’s answer to DLSS is coming later this month, but until it arrives we still don’t know exactly how it will compensate for ray tracing performance on AMD’s GPUs. AMD has also struggled to supply retailers with stock of its cards.
That’s left Nvidia in a position to launch the RTX 3080 Ti at a price point that really means it’s competing with itself, positioned between the RTX 3080 and RTX 3090. If the RTX 3090 wasn’t a thing, the RTX 3080 Ti would make a lot more sense.
Nvidia is also competing with the reality of the market right now, as demand has been outpacing supply for more than six months. Nvidia has introduced a hash rate limiter for Ethereum cryptocurrency mining on new versions of the RTX 3080, RTX 3070, and now this RTX 3080 Ti. It could help deter some scalpers, but we’ll need months of data on street prices to really understand if it’s driven pricing down to normal levels.
Demand for 30-series cards has skyrocketed as many rush to replace their aging GTX 1080 and GTX 1080 Ti cards. Coupled with Nvidia’s NVENC and professional tooling support, it’s also made the RTX 30-series a great option for creators looking to stream games, edit videos, or build games.
In a normal market, I would only recommend the RTX 3080 Ti if you’re really willing to spend an extra $500 to get some extra gains in 1440p and 4K performance. But it’s a big price premium when the RTX 3090 exists at this niche end of the market and offers more performance and double the VRAM if you’re really willing to pay this much for a graphics card.
At $999 or even $1,099, the RTX 3080 Ti would tempt me a lot more, but $1,199 feels a little too pricey. For most people, an RTX 3080 would make a lot more sense, if it were actually available at its standard retail price. Nvidia also has a $599 RTX 3070 Ti on the way next week, which could offer some performance gains to rival the RTX 3080.
Either way, the best GPU is the one you can buy right now, and let’s hope that Nvidia and AMD manage to make that a reality soon.
NVIDIA today refreshed the top end of the GeForce RTX 30-series “Ampere” family of graphics cards with the new GeForce RTX 3080 Ti, which we’re testing for you today. The RTX 3080 Ti is NVIDIA’s next flagship gaming product, picking up the mantle from the RTX 3080. While the RTX 3090 is positioned higher in the stack, NVIDIA has been treating it as a TITAN-like halo product for not just gaming, but also quasi-professional use cases. The RTX 3080 Ti has the same mandate as the RTX 3080—to offer leadership gaming performance with real-time raytracing at 4K UHD resolution.
NVIDIA’s announcement of the GeForce RTX 3080 Ti and RTX 3070 Ti was likely triggered by AMD’s unexpected success in taking a stab at the high-end market after many years with its Radeon RX 6800 series and RX 6900 XT “Big Navi” GPUs, which are able to compete with the RTX 3080, RTX 3070, and even pose a good alternative to the RTX 3090. NVIDIA possibly found itself staring at a large gap between the RTX 3080 and RTX 3090 that needed to be filled. We hence have the RTX 3080 Ti.
The GeForce RTX 3080 Ti is based on the same 8 nm GA102 silicon as the RTX 3080, but with more CUDA cores, while maxing out the 384-bit wide GDDR6X memory bus. It has only slightly fewer CUDA cores than the RTX 3090, the memory size is 12 GB as opposed to 24 GB, and the memory clock is slightly lower. NVIDIA has given the RTX 3080 Ti a grand total of 10,240 CUDA cores spread across 80 streaming multiprocessors, 320 3rd Gen Tensor cores that accelerate AI and DLSS, and 80 2nd Gen RT cores. It also has all 112 ROPs enabled, besides 320 TMUs. The 12 GB of memory maxes out the 384-bit memory bus, but the memory clock runs at 19 Gbps (compared to 19.5 Gbps on the RTX 3090). Memory bandwidth hence works out to 912 GB/s.
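That bandwidth figure follows directly from the bus width and per-pin data rate; a quick sketch of the arithmetic:

```python
# Memory bandwidth (GB/s) = per-pin data rate (Gbps) * bus width (bits) / 8
def bandwidth_gb_s(data_rate_gbps: float, bus_width_bits: int) -> float:
    return data_rate_gbps * bus_width_bits / 8

print(bandwidth_gb_s(19.0, 384))  # RTX 3080 Ti: 912.0 GB/s
print(bandwidth_gb_s(19.5, 384))  # RTX 3090:    936.0 GB/s
```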
The NVIDIA GeForce RTX 3080 Ti Founders Edition looks similar in design to the RTX 3080 Founders Edition. NVIDIA is pricing the card at $1,200, or about $200 higher than the Radeon RX 6900 XT. The AMD flagship is really the main target of this NVIDIA launch, as it has spelled trouble for the RTX 3080. As rumors of the RTX 3080 Ti picked up pace, AMD worked with its board partners to release an enthusiast-class RX 6900 XT refresh based on the new “XTXH” silicon that can sustain 10% higher clock speeds. In this review, we compare the RTX 3080 Ti with all the SKUs in its vicinity to show you if it’s worth stretching your budget to $1,200, or whether you could save some money by choosing this card over the RTX 3090.
MSI GeForce RTX 3080 Ti Suprim X is the company’s new flagship gaming graphics card and part of NVIDIA’s refresh of the RTX 30-series “Ampere” family to bolster its position in the high-end segment. The Suprim X is MSI’s exercise in leveling up to the NVIDIA Founders Edition in terms of original design and build quality. Premium materials and design combine with the company’s most advanced graphics card cooling solution and an overclocking-optimized PCB to offer the highest tier of factory overclocks.
NVIDIA announced the GeForce RTX 3080 Ti and RTX 3070 Ti at its Computex 2021 event to answer two very specific challenges to its product stack—the Radeon RX 6900 XT outclassing the RTX 3080, and the RX 6800 performing well against the RTX 3070. The RTX 3080 Ti is designed to fill a performance gap between the RTX 3080 and the halo-segment RTX 3090.
The RTX 3080 Ti is based on the same 8 nm GA102 silicon as the RTX 3080, but features a lot more CUDA cores and, more importantly, maxes out the 384-bit wide GDDR6X memory bus of the GA102. NVIDIA is giving the card 12 GB of memory, and not 24 GB like on the RTX 3090, which it considers a halo product that even targets certain professional use cases. The RTX 3080 Ti is also endowed with 320 TMUs, 320 Tensor cores, 80 RT cores, and 112 ROPs. The memory operates at the same 19 Gbps data rate as the RTX 3080, but the wider memory bus results in a bandwidth of 912 GB/s.
The MSI RTX 3080 Ti Suprim X supercharges the RTX 3080 Ti with the company’s highest clock speeds—1830 MHz vs. 1665 MHz reference. It features the most elaborate version of the company’s TriFrozr 2S cooling solution, with a metal alloy shroud, a dense aluminium fin-stack heatsink, three TorX fans, a power-delivery design similar to the company’s RTX 3090 Suprim X, and a metal back-plate. In this review, we take the card for a spin across our test suite to tell you if shelling out RTX 3090 kind of money for a top custom RTX 3080 Ti is worth it. MSI hasn’t provided any pricing info yet; we expect the card will end up at around $2,100 on the street, well above the NVIDIA baseline price.
The ASUS ROG Strix LC GeForce RTX 3080 Ti is the company’s flagship custom-design RTX 3080 Ti graphics card, characterized by its factory-fitted, all-in-one liquid cooling solution. The cooler combines an AIO liquid cold-plate that pulls heat from the GPU and memory with a set of heatsinks and a lateral blower that provide additional cooling. Interestingly, this cooler debuted with the Radeon RX 6800 XT Strix LC, which, along with the RX 6900 XT, is believed to have triggered product-stack updates among NVIDIA’s ranks to begin with.
The GeForce RTX 3080 Ti replaces the RTX 3080 as NVIDIA’s new flagship gaming product. The RTX 3090 is still positioned higher, but that SKU is more of a TITAN-like halo product, with its massive 24 GB memory favoring certain professional use cases when paired with Studio drivers. The RTX 3080 Ti utilizes the same GA102 silicon, maxing out its 384-bit memory interface with 12 GB of memory. There are more CUDA cores on offer—10,240 vs. 8,704 on the RTX 3080, and a proportionate increase in Tensor cores, RT cores, and other components. The GeForce RTX 3080 Ti is based on the new Ampere graphics architecture, which debuts the 2nd generation of NVIDIA’s path-breaking RTX real-time raytracing technology, combining 3rd generation Tensor cores with 2nd generation RT cores and faster Ampere CUDA cores.
As mentioned earlier, the ASUS ROG Strix LC lugs a bulky all-in-one liquid + air hybrid cooling solution without coming across as ugly or tacked on. ASUS appears to have taken a keen interest in the industrial design of both the card and the radiator. The cooler also supports a major factory overclock of 1830 MHz, compared to 1665 MHz reference. This puts its performance above even the RTX 3090, while also costing more than the RTX 3090’s starting price. In this review, we show you whether it’s worth simply picking this card over an RTX 3090 if one is available.
The EVGA GeForce RTX 3080 Ti FTW3 Ultra is the company’s premium offering based on NVIDIA’s swanky new RTX 3080 Ti GPU, with which NVIDIA hopes to restore its leadership in the high-end gaming graphics segment that has been disputed by the Radeon RX 6900 XT. Along with its sibling, the RTX 3070 Ti, the new graphics cards are a response to AMD’s return to competitiveness in the high-end graphics segment. It has the same mission as the RTX 3080—to offer maxed-out gaming at 4K Ultra HD resolution with raytracing, making it NVIDIA’s new flagship gaming product. The RTX 3090 is still positioned higher, but with its 24 GB memory, is branded as a TITAN-like halo product capable of certain professional-visualization applications when paired with NVIDIA’s Studio drivers.
The GeForce RTX 3080 Ti features a lot more CUDA cores than the RTX 3080—10,240 vs. 8,704, and maxes out the 384-bit wide memory interface of the GA102 silicon, much like the RTX 3090. The memory amount, however, is 12 GB, and it runs at a 19 Gbps data rate. The RTX 3080 Ti is based on the Ampere graphics architecture, which debuts the 2nd generation of NVIDIA’s path-breaking RTX real-time raytracing technology. It combines new 3rd generation Tensor cores that leverage the sparsity phenomenon to accelerate AI inference performance by an order of magnitude over the previous gen; new 2nd generation RT cores that support even more hardware-accelerated raytracing effects; and the new, faster Ampere CUDA core.
The EVGA RTX 3080 Ti FTW3 Ultra features the same top-tier iCX3 cooling solution as the top RTX 3090 FTW3, with smart cooling that relies on several onboard thermal sensors in addition to those of the GPU and memory, a meaty heatsink ventilated by a trio of fans, and plenty of RGB LED lighting to add life to your high-end gaming PC build. The PCB has several air guides that let airflow from the fans pass through, improving ventilation. EVGA is pricing the RTX 3080 Ti FTW3 Ultra at $1,340, a $140 premium over the $1,200 baseline price of the RTX 3080 Ti.
The Zotac GeForce RTX 3080 Ti AMP HoloBlack is the company’s top graphics card based on the swanky new RTX 3080 Ti “Ampere” GPU by NVIDIA. Hot on the heels of its Computex 2021 announcement, we have with us NVIDIA’s new flagship gaming graphics card, a distinction it takes from the RTX 3080. The RTX 3090 is still around in NVIDIA’s product stack, but it’s positioned as a TITAN-like halo product, with its 24 GB video memory benefiting certain quasi-professional applications when paired with NVIDIA’s GeForce Studio drivers. The RTX 3080 Ti has the same mandate from NVIDIA as the RTX 3080—to offer leadership 4K UHD gaming performance with maxed-out settings and raytracing.
Based on the same 8 nm “GA102” silicon as the RTX 3080, the new RTX 3080 Ti has 12 GB of memory, maxing out the 384-bit GDDR6X memory interface of the chip, while also packing more CUDA cores and other components—10,240 vs. 8,704 CUDA cores, 320 TMUs, 320 Tensor cores, 80 RT cores, and 112 ROPs. The announcement of the RTX 3080 Ti and its sibling, the RTX 3070 Ti—which we’ll review soon—may have been triggered by AMD’s unexpected return to the high-end gaming graphics segment with its “Big Navi” Radeon RX 6000 series graphics cards, particularly the RX 6900 XT and the RX 6800.
The GeForce Ampere graphics architecture debuts the 2nd generation of NVIDIA RTX, bringing real-time raytracing to gamers. It combines 3rd generation Tensor cores that accelerate AI deep-learning neural nets that DLSS leverages; 2nd generation RT cores that introduce more hardware-accelerated raytracing effects, and the new Ampere CUDA core, that significantly increases performance over the previous generation “Turing.”
The Zotac RTX 3080 Ti AMP HoloBlack features the company’s highest factory-overclocked speeds for the RTX 3080 Ti, with up to 1710 MHz boost compared to 1665 MHz reference; a bold new cooling solution design that relies on a large triple-fan heatsink; and aesthetic ARGB lighting elements that bring your gaming rig to life. Zotac hasn’t provided us with any pricing info yet; we’re assuming the card will end up $100 pricier than baseline cards like the Founders Edition.
Palit GeForce RTX 3080 Ti GamingPro is the company’s premium custom-design RTX 3080 Ti offering, letting gamers who know what to expect from this GPU simply install it and get gaming. Within Palit’s product stack, the GamingPro is positioned a notch below its coveted GameRock brand for enthusiasts. By itself, the RTX 3080 Ti is NVIDIA’s new flagship gaming graphics product, taking over that distinction from the RTX 3080. The RTX 3090 is marketed as a halo product, with its large video memory even targeting certain professional use cases. The RTX 3080 Ti has the same mandate as the RTX 3080—to offer leadership gaming performance at 4K UHD, with maxed-out settings and raytracing.
The GeForce RTX 3080 Ti story likely begins with AMD’s unexpected return to the high-end graphics segment with its Radeon RX 6800 series and RX 6900 XT “Big Navi” graphics cards. The RX 6900 XT in particular has managed to outclass the RTX 3080 in several scenarios, and with its “XTXH” bin, even trades blows with the RTX 3090. It is precisely to fill this performance gap between the two top Amperes, the RTX 3080 and RTX 3090, that NVIDIA developed the RTX 3080 Ti.
The RTX 3080 Ti is based on the same 8 nm GA102 GPU as the other two top cards from NVIDIA’s lineup, but features many more CUDA cores than the RTX 3080, at 10,240 vs. 8,704; and more importantly, maxes out the 384-bit wide memory bus of this silicon. NVIDIA endowed this card with 12 GB of memory. Other key specs include 320 Tensor cores, 80 RT cores, 320 TMUs, and 112 ROPs. The memory ticks at the same 19 Gbps data-rate as the RTX 3080, but the wider memory bus means that the bandwidth is now up to 912 GB/s.
Palit adds value to the RTX 3080 Ti by pairing it with its TurboFan 3.0 triple-slot, triple-fan cooling solution, which has plenty of RGB bling to satiate gamers. The cooler is longer than the PCB itself, so airflow from the third fan goes through the card and out holes punched into the metal backplate. The card runs at reference clock speeds of 1665 MHz and is officially priced at NVIDIA’s $1,200 baseline price for the RTX 3080 Ti, more affordable than the other custom designs we’re testing today. In this review, we tell you if this card is all you need if you have your eyes on an RTX 3080 Ti.
Remember when Elon Musk claimed you’d be able to play The Witcher 3 and Cyberpunk 2077 on a 10 teraflop gaming rig he’s stuffing into the new Tesla Model S and X? AMD is officially providing the guts — during its Computex 2021 keynote, the chipmaker just revealed that the new Tesla infotainment system consists of an AMD Ryzen processor paired with an AMD RDNA 2 GPU.
“So we actually have an AMD Ryzen APU powering the infotainment system in both cars as well as a discrete RDNA2-based GPU that kicks in when running AAA games, providing up to 10 teraflops of compute power…. we look forward to giving gamers a great platform for AAA gaming,” says AMD CEO Lisa Su.
And if you combine that information with another piece of news AMD revealed today, plus an earlier leak from January, we may now have a passing idea of how powerful that “10 teraflop” infotainment system could theoretically be: likely a little less than Sony’s PS5.
You see, leaker Patrick Schur dug up a Tesla block diagram in January that singled out an AMD Navi 23 GPU specifically for Tesla’s new vehicles, and today AMD announced the new Radeon 6800M, 6700M and 6600M laptop graphics chips — the weakest of which just so happens to use Navi 23, AnandTech reports.
As we learned today, that Radeon 6600M chip comes with 28 CUs and 1,792 shader units, compared to the 36 CUs and an estimated 2,304 shader units worth of RDNA 2 GPU in Sony’s PlayStation 5, which also claims to be a 10-teraflop gaming rig. While it’s not quite apples-to-apples, it’s largely the same technology underneath, and a smaller number of cores on the same GPU architecture suggests we should expect somewhat less performance from a Tesla compared to Sony’s console. (The higher-end Radeon 6700M / Navi 22 has the same number of CUs as the PS5, for what it’s worth.)
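As a rough sanity check on those teraflop figures, peak FP32 throughput for an RDNA 2 part scales with shader count and clock speed. The sketch below uses the RX 6600M’s published game clock as a stand-in for the unknown clock of Tesla’s chip, so treat the first result as an estimate rather than a spec.

```python
# Peak FP32 throughput: 2 ops per clock per shader * shaders * clock (GHz).
def tflops(shader_units: int, clock_ghz: float) -> float:
    return 2 * shader_units * clock_ghz / 1000

print(round(tflops(1792, 2.177), 1))  # Navi 23 / RX 6600M-class: ~7.8 TFLOPS
print(round(tflops(2304, 2.23), 1))   # PlayStation 5 GPU:        ~10.3 TFLOPS
```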
Performance depends on the software platform, though, as we’ve seen with the 10-teraflop PS5 and the 12-teraflop Xbox Series X — and a recent job posting by Tesla suggests game developers may actually be building for Linux if they want to target the new Tesla in-car gaming rigs.
Linux isn’t necessarily a benefit when it comes to gaming performance, though. Google’s Stadia cloud gaming also boasted 10 teraflops of performance from its AMD GPUs, but ports of games from Bungie and Square Enix didn’t look nearly as good as they did on weaker Xbox and PC hardware at the service’s launch.
The most important question is probably still the one I asked back in January, though: Who is going to sit in their $80,000 sports car and play a triple-A video game?
Tesla CEO Elon Musk tweeted on Saturday that the Model S Plaid, which includes the new AMD system, will start deliveries on June 10th.
AMD has announced that FidelityFX Super Resolution (FSR), its super sampling technique that should boost performance and image quality in supported games, will launch on June 22nd. The company gave a presentation at Computex Taipei today with more information on the feature, though it’s still not clear just how effective it’ll be.
Supersampling is a major point of differentiation between AMD’s GPUs and those from its competitor Nvidia. DLSS (Deep Learning Super Sampling), Nvidia’s version of the technique, uses neural networks to reconstruct images at higher quality from lower resolutions in real time, enabling games to run at smoother frame rates without compromising image quality. Nvidia launched DLSS back in 2018 with the RTX 20-series, and it has been increasing performance and support ever since. More than 50 games now work with DLSS, and Nvidia itself just announced today that Red Dead Redemption 2 and Rainbow Six Siege are getting the feature.
AMD first said it was working on super sampling last year when it announced the RX 6000-series GPUs. The company isn’t providing too many technical details on the feature just yet, but says it will be open source and that more than ten studios and engines will support it this year.
FSR will support four levels of scaling. In AMD’s own testing, running Godfall on a Radeon RX 6800 XT with epic graphics settings and ray tracing, the performance mode ran at 150fps — a huge increase over the native rendering result of 49fps. The balanced, quality, and ultra quality modes turned in results of 124fps, 99fps, and 78fps respectively.
Because FSR is open-source, it’ll also run on Nvidia GPUs, including 10-series models that don’t support DLSS. AMD is claiming a 41-percent performance increase in quality mode for Godfall on a GTX 1060, for example, boosting the frame rate from 27fps to 38fps.
Companies’ own benchmarks should never be taken at face value, of course, and the results aren’t all that meaningful without being able to see the effects on image quality with our own eyes. AMD has not shown off much evidence of how FSR actually works in practice — but we won’t have too much longer to find out, as it’ll be available in three weeks.
AMD CEO Lisa Su revealed two key new processors during the company’s Computex 2021 keynote. The $359 Ryzen 7 5700G and $259 Ryzen 5 5600G APUs, both of which come to market on August 5, 2021, will plug two glaring gaps in the company’s Ryzen 5000 product stack, which currently leads our list of Best CPUs.
The new Cezanne chips mark the first new APUs for desktop PCs that you’ll be able to buy at retail since AMD launched the Zen+ “Picasso” models back in 2019. AMD did bring a refresh of those chips to market as the oft-maligned Ryzen Pro “Renoir” series, but in a disappointment to enthusiasts, those chips were destined for professional users and thus not available at retail.
In fact, AMD actually brought the very chips it’s announcing today to OEM systems a few months ago, meaning we already know most of the details about the silicon. The Cezanne APUs, which come with Zen 3 execution cores paired with the Radeon Vega graphics engine, feel like they’re a bit late to retail. The company’s first salvo of Ryzen 5000 processors delivered a stunning blow to Intel as it took the unequivocal lead in desktop PCs, but AMD’s pivot to premium pricing left it exposed with two massive gaps in its product stack. Unfortunately for AMD, Intel’s Rocket Lake blasted in a few months ago and plugged those gaps.
Now AMD’s retort comes as retail availability of a few of the Cezanne chips, though it’s noteworthy the company is still holding back several of its lower-end models from the retail market. Given the ongoing graphics card shortages, these newly revamped APUs are a welcome sight for the gaming market and serve as AMD’s “non-X” chips that traditionally offer more attractive price points at a given core count. That is if AMD can keep them in stock, of course. Let’s take a closer look.
AMD Ryzen 5000 ‘Cezanne’ G-Series Specifications
The Ryzen 5000G lineup spans from four to eight cores, but AMD is only bringing the eight-core 16-thread Ryzen 7 5700G and six-core 12-thread Ryzen 5 5600G to retail, while the Ryzen 3 5300G remains relegated to the OEM-only market (at least for now). AMD also isn’t bringing the 35W GE-Series models to retail, either, as it continues to focus on premium chips during the ongoing global semiconductor shortage.
AMD Ryzen 5000 G-Series 65W Cezanne APUs
| CPU | Price | Cores/Threads | Base / Boost Freq. | Graphics Cores | Graphics Frequency | TDP | Cache |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Ryzen 7 5800X | $449 | 8 / 16 | 3.8 / 4.7 GHz | N/a | N/a | 105W | 32MB (1×32) |
| Core i7-11700K (KF) | $374 – $349 | 8 / 16 | 3.6 / 5.0 | UHD Graphics 750 Xe 32EU | – | 125W | 16MB |
| Ryzen 7 5700G | $359 | 8 / 16 | 3.8 / 4.6 | RX Vega 8 | 2100 MHz | 65W | 20 MB |
| Ryzen 5 5600X | $299 | 6 / 12 | 3.7 / 4.6 GHz | N/a | N/a | 65W | 32MB (1×32) |
| Core i5-11600K (KF) | $262 (K) – $237 (KF) | 6 / 12 | 3.9 / 4.9 | UHD Graphics 750 Xe 32EU | – | 125W | 12MB |
| Ryzen 5 5600G | $259 | 6 / 12 | 3.9 / 4.4 | RX Vega 7 | 1900 MHz | 65W | 19 MB |
| Ryzen 5 3600 | $200 | 6 / 12 | 3.6 / 4.2 | N/a | N/a | 65W | 32MB (1×32) |
| Core i5-11400 (F) | $182 – $157 | 6 / 12 | 2.6 / 4.2 | UHD Graphics 750 Xe 24EU | – | 65W | 12MB |
| Ryzen 3 5300G | N/a | 4 / 8 | 4.0 / 4.2 | RX Vega 6 | 1700 MHz | 65W | 10 MB |
The 65W eight-core 16-thread Ryzen 7 5700G comes with a 3.8 GHz base, 4.6 GHz boost, and eight Radeon Vega CUs that operate at 2.0 GHz.
The Ryzen 7 5700G addresses the ~$350 price point to plug the sizeable gap between the $449 Ryzen 7 5800X and the $299 Ryzen 5 5600X. That big gap left Intel’s Core i7-11700K with plenty of room to operate, but AMD says the new 5700G will plug it with CPU performance that slots in neatly between the other Ryzen 5000 parts, not to mention the strengths borne of the integrated Vega graphics engine.
The 65W six-core 12-thread Ryzen 5 5600G comes with a 3.9 GHz base, 4.4 GHz boost, and seven Radeon Vega CUs that operate at 1.9 GHz.
The 5600G slots in at $259 to plug the gap between the $299 Ryzen 5 5600X and, well, the remainder of AMD’s sub-$299 product stack. AMD’s Ryzen 5 3600 is the only real relevant contender in this price range, and it launched two years ago with the Zen 2 architecture. The 3600 isn’t competitive with Intel’s Rocket Lake Core i5-11600K or -11400, leaving Intel plenty of room to roam uncontested in the budget market (as you can see in our Core i5-11400 review).
Based on suggested pricing, the 5600G contends with the Core i5-11600K and doesn’t do much to address the current value budget champ, the Intel Core i5-11400. That’s largely because AMD has decided not to include the 65W Ryzen 3 5300G, which it ships into the OEM market, in this round of chip releases. It also has yet to release the GE-series chips listed in the table below. AMD hasn’t indicated when the Ryzen 3 or GE-Series Cezanne chips will come to market.
AMD Ryzen 5000 GE-Series 35W Cezanne APUs
| CPU | Cores/Threads | Base / Boost Freq. | Graphics Cores | Graphics Frequency | TDP | Cache |
| --- | --- | --- | --- | --- | --- | --- |
| Ryzen 7 5700GE | 8 / 16 | 3.2 / 4.6 | RX Vega 8 | 2000 MHz | 35W | 20 MB |
| Ryzen 5 5600GE | 6 / 12 | 3.4 / 4.4 | RX Vega 7 | 1900 MHz | 35W | 19 MB |
| Ryzen 3 5300GE | 4 / 8 | 3.6 / 4.2 | RX Vega 6 | 1700 MHz | 35W | 10 MB |
Of course, integrated graphics are the big attraction for APUs. AMD continues to pair its APUs with the Vega graphics architecture, just as it did with the 4000-series APUs. AMD reworked the architecture for its last go-round — the revamped RX Vega graphics delivered up to ~60 percent more performance per compute unit (CU) than its predecessors, which equated to more graphics performance from fewer CUs. We aren’t sure if AMD has made a similar adjustment this time around, but we’re sure to learn more as we get closer to launch.
As with all Ryzen 5000 processors, Cezanne fully supports overclocking, which includes memory, graphics and CPU cores. AMD also says that the auto-overclocking Precision Boost Overdrive (PBO) and adaptive offset features are also supported. The Cezanne chips drop into the same motherboards as the current-gen Ryzen 5000 processors, so X570, B550, X470 and B450 are all supported. As with the other Ryzen models, memory support weighs in at DDR4-3200, though that does vary by DIMM population rules.
The new APUs hail from the Ryzen 5000 Mobile family, so they use physically identical silicon that has been transitioned from the FP6 BGA-mounted arrangement found in laptops to the AM4 socket on desktop PC motherboards. AMD then tunes the silicon for the more forgiving power limits and thermal conditions of the desktop, meaning it can uncork the power settings and be more aggressive with boosting activity while being less aggressive with power-sharing/shifting between the CPU and GPU.
The Zen 3 architecture grants higher L3 cache capacities than we’ve seen with AMD’s past APUs. For instance, the eight-core 16-thread Ryzen 7 5700G now has 16MB of L3 cache (20MB of total cache) compared to its eight-core predecessor’s 12MB of total cache. These are natural byproducts of the Zen 3 architecture and should benefit general iGPU performance, too.
However, in contrast to the existing Ryzen 5000 chips for the desktop, the APUs come as a single monolithic die. That results in less cache than we see with the chips without integrated graphics, like the six-core Ryzen 5 5600X. The 5600X comes with 32MB of L3 cache, double the 16MB of L3 cache found on the eight-core Ryzen 7 5700G. We’ll be sure to poke and prod at the cache when the silicon lands in our labs.
Additionally, the 5000G chips have the same I/O controller on the SoC as the mobile parts, so they are limited to 24 lanes of PCIe 3.0, as opposed to the 24 lanes of PCIe 4.0 found on the other Ryzen 5000 parts. This is a tradeoff of bringing the mobile silicon to the desktop, with AMD’s initial decision to stick with PCIe 3.0 for its mobile parts largely being driven by battery life concerns.
AMD Ryzen 7 5700G Gaming and Productivity Benchmarks
AMD shared a surprisingly slim selection of its own benchmarks to compare the Ryzen 7 5700G with Intel’s Core i7-11700. AMD’s test notes are also lacking. As with all vendor-provided benchmarks, you should view these with the requisite amount of skepticism.
As expected, AMD’s benchmarks show notable performance advantages across the board, especially when gaming on the 5700G’s Radeon Vega 8 graphics compared to the -11700’s Xe-based UHD Graphics 750. AMD’s last batch of 5000G comparative benchmarks, which pitted Cezanne against Comet Lake chips, was much more expansive, but the Rocket Lake comparisons are far more limited. We’ll suss all that out in the review.
Ryzen 5000G Pro Series Desktop Processors
AMD also released its Ryzen 5000G Pro series today. According to AMD’s slides, aside from a few extra professional features, they’re identical to the client chips.
Thoughts
Overall the Cezanne desktop APUs look promising, and AMD’s pricing goes a long way to addressing the notable price gaps that come from its lack of value “non-X” chips with the Ryzen 5000 generation, an exclusion that has received plenty of criticism from the enthusiast community.
AMD’s timing for desktop APUs could be a bit better — Intel’s value Rocket Lake chips have been on the market for several months, and the continuing chip shortage coupled with cryptomining has destroyed any chance of scoring a reasonably priced GPU, at least for now. That means a chip with competitive 1080p gaming performance will be a hit with enthusiasts looking to wait out the current GPU crisis.
That said, we’re still seeing a complete lack of AMD’s cheap chips on the market, so the company’s decision to keep the Ryzen 3 and 35W GE-Series models off the retail market is disappointing. It makes good business sense given the state of the market (AMD sells every single high-end chip it punches out), but we’d like to see some improvement on the lower end of the market.
The Ryzen 5000G chips come to market on August 5, 2021. As you can imagine, we’ll have the full story when reviews arrive near that same time.
AMD introduced its new Radeon RX 6000M-series laptop graphics at Computex, during a keynote by AMD’s CEO, Dr. Lisa Su. The new mobile graphics lineup is made up of the top-end AMD Radeon RX 6800M, a mid-range RX 6700M and the entry level RX 6600M. For now at least, the GPUs are being paired in systems from laptop vendors with AMD’s Ryzen processors for what the company calls “AMD Advantage.”
These are the first laptop GPUs from AMD that use its RDNA 2 architecture, with Infinity Cache for higher memory bandwidth, low power consumption (AMD claims near 0 watts at idle) and high frequencies even when the system is running at low power. The company is claiming up to 1.5 times the performance of last-gen RDNA graphics and up to 43% lower power consumption.
| | AMD Radeon RX 6800M | AMD Radeon RX 6700M | AMD Radeon RX 6600M |
| --- | --- | --- | --- |
| Compute Units | 40 | 36 | 28 |
| Game Clock | 2,300 MHz | 2,300 MHz | 2,177 MHz |
| Memory | 12GB GDDR6 | 10GB GDDR6 | 8GB GDDR6 |
| Infinity Cache | 96MB | 80MB | 32MB |
| AMD Smart Access Memory | Yes | Yes | Yes |
| AMD SmartShift | Yes | Yes | Yes |
| Power Targets | 145W and above | Up to 135W | Up to 100W |
| Resolution Targets | 1440p | 1440p/1080p | 1080p |
The most powerful of the new bunch is the AMD Radeon RX 6800M, which will be available starting June 1 in the Asus ROG Strix G15 Advantage Edition. It has 40 compute units and ray accelerators, along with a 2,300 MHz game clock, 12GB of GDDR6 memory and a 96MB cache. It will also be compatible with AMD SmartShift and Smart Access Memory.
AMD compared the ROG Strix G15 with the RX 6800M and a Ryzen 9 5900HX to a 2019 MSI Raider GE63 with a 9th Gen Intel Core i7 processor and an RTX 2070, claiming up to 1.4 times more frames per second at 1440p max settings in Assassin’s Creed Valhalla and Cyberpunk 2077, 1.5 times the performance in Dirt 5 and 1.7x more frames while playing Resident Evil: Village.
In closer comparisons, to an RTX 3070 (8GB) and RTX 3080 (8GB), AMD claimed its flagship GPU was typically the top performer – within a frame or so – in several of those games, as well as Borderlands 3 and Call of Duty: Black Ops Cold War, though it’s unclear which settings and resolutions were used for these tests.
Unlike Nvidia, AMD isn’t aiming for 4K gaming. The most powerful of the cards, the RX 6800M, aims for a power target of 145W and above and is designed for 1440p.
The middle-tier AMD Radeon RX 6700M is designed for 1440p or 1080p gaming, depending on the title. It has 36 compute units with a 2,300 MHz game clock, 10GB of GDDR6 RAM and an 80MB Infinity Cache, as well as the same support for SmartShift and SAM. AMD says these will ship in laptops “soon.” It also said that the GPU will allow for 100 fps gaming at 1440p and high settings in “popular games,” though it didn’t specify which games it was referring to.
The RX 6600M sits at the bottom of the stack for gaming at 1080p. AMD compared it to an RTX 3060 (6GB) on 1080p max settings, and found that it led in Assassin’s Creed Valhalla, Borderlands 3 and Dirt 5. It was five frames behind in Call of Duty: Black Ops Cold War in AMD’s tests, and there was a one-frame difference playing Cyberpunk 2077. Like the RX 6800M, the 6600M will start shipping on June 1.
AMD Advantage Laptops
AMD is now referring to laptops with both AMD processors and graphics as offering the “AMD Advantage.” The company says these designs should offer great performance because of power sharing between the CPU and GPU.
AMD says its technologies can achieve up to 11% better performance in Borderlands 3, 10% in Wolfenstein: Youngblood, 7% in Cyberpunk 2077 and 6% in Godfall.
Additionally, the company says AMD Advantage laptops will only have “premium” displays — either IPS or OLED, but no VA or TN panels. They should hit or surpass 300 nits of brightness, hit 144 Hz or higher and use AMD FreeSync.
Each laptop should come with a PCIe NVMe Gen 3 SSD, keep the WASD keys below 40 degrees Celsius while gaming and allow for ten hours of video on battery. (AMD tested this with local video, not streaming.)
The first of these laptops is the Asus ROG Strix G15, with up to a Ryzen 9 5900HX and Radeon RX 6800M, a 15-inch display (either FHD at 300 Hz or WQHD at 165 Hz) with FreeSync Premium, liquid metal for cooling both the CPU and GPU along with a vapor chamber. It will launch in mid-June.
The HP Omen 16 will also come with a 165 Hz display, with up to a Ryzen 9 5900HX and AMD Radeon RX 6600M for 1080p gaming. It will launch in June on JD.com, then become available worldwide.
In June, we should see more releases from HP, Asus, MSI and Lenovo.
AMD has finally introduced FidelityFX Super Resolution (FSR), the company’s upscaling technology to rival Nvidia’s machine learning-powered DLSS. It was introduced during AMD chief executive Dr. Lisa Su’s virtual keynote address at Computex, which is being held online this year. The new feature will launch on June 22.
AMD promises that FSR will deliver up to 2.5 times higher performance while using the dedicated performance mode in “select titles.” At least ten game studios will integrate FSR into their games and engines this year. The first titles should show up this month, and the company also detailed FSR’s roots in open source: the feature is based on AMD’s GPUOpen suite.
FSR has four presets: ultra quality, quality, balanced and performance. The first two focus on higher quality by rendering at closer to native resolution, while the latter two push you to get as many frames as possible. FSR works on both desktops and laptops, as well as both integrated and discrete graphics.
In its own tests using Gearbox Software’s Godfall (AMD used the Radeon RX 6900 XT, RX 6800 XT and RX 6700 XT on the game’s epic preset at 4K with ray tracing on), the company claimed 49 frames per second at native rendering, but 78 fps using ultra quality FSR, 99 fps using quality, 124 fps on balanced and 150 fps on performance.
But FSR works on other hardware, including Nvidia’s graphics cards. AMD tested one of Nvidia’s older (but still very popular) mainstream GPUs, the GTX 1060, with Godfall at 1440p on the epic preset. It ran natively at 27 fps, but at 38 fps with quality mode on — a 41% boost. In fact, AMD says that FSR, which needs to be implemented by game developers to suit their titles, will work with over 100 CPUs and GPUs, including its own and competitors’.
We’ll be able to test FidelityFX Super Resolution when it launches, starting with Godfall on June 22, so keep an eye out for our thoughts. While the performance gains sound impressive, we’re also keen to check out image quality. We’ve been fairly impressed by Nvidia’s DLSS 2.0, but the original DLSS implementation was far less compelling. It seems as though AMD aims to provide similar upscaling but without all the fancy machine learning.
Su’s keynote included other graphics announcements, such as the launch of the Radeon RX 6800M, RX 6700M and RX 6600M mobile GPUs based on RDNA 2, as well as a handful of new APUs.