In the latest installment of the MSI Insider show, MSI revealed its Z590 Pro 12VO motherboard, which employs Intel's 10-pin ATX12VO power connector. Besides walking through the motherboard's feature set, the vendor also outlined the benefits of the ATX12VO connector.
Despite Intel promoting the ATX12VO power connector as far back as last year, the standard hasn't really caught on. A handful of motherboards on the market utilize the ATX12VO specification, but it's far from mainstream. As its name implies, ATX12VO uses only the 12V rail, so motherboards have to include buck converters to step the voltage down to 5V and 3.3V for hardware that still relies on those rails.
In addition to improving power efficiency, the ATX12VO power connector is also smaller since it only comes with 10 pins. This is beneficial in compact systems since the footprint is smaller. However, the ATX12VO power connector has yet to prove its worth on ATX motherboards.
Take MSI’s Z590 Pro 12VO, for example. While the motherboard doesn’t have that chunky 24-pin power connector, it has gained a 6-pin PCIe power connector and up to three additional 4-pin power connectors. Evidently, the ATX12VO standard does little for cable clutter in a full-sized desktop system, but again, its advantages reside in power saving.
MSI Z590 Pro 12VO Power Consumption
The Z590 Pro WiFi is the mainstream counterpart of the Z590 Pro 12VO, so naturally, the MSI representatives used the former for comparison. They removed the wireless module from the Z590 Pro WiFi so that both motherboards were on a level playing field, and they used the same Core i9-11900K (Rocket Lake) processor, memory and SSD for both tests. There were a lot of fluctuations in the measurements and the tests were short, so take the results with a grain of salt. For clarity, we've rounded the values in the table below.
|  | Z590 Pro WiFi | Z590 Pro 12VO | Power Reduction |
| System Idle Consumption | 42W | 38W | 10% |
| Average CPU Package Power | 17W | 14W | 18% |
| System Idle Consumption (C10) | N/A | 24W | N/A |
| Average CPU Package Power (C10) | N/A | 8W | N/A |
The Z590 Pro 12VO drew 10% less power at system idle than the Z590 Pro WiFi, and there was also an 18% reduction in average processor package power.
The MSI representative then went into the Z590 Pro 12VO's BIOS and changed the "Package C State Limit" option from Auto to C10. If you're not familiar with C-states, they are low-power states that a processor can enter when it's idling; C10 is the deepest one, wherein the chip effectively turns off.
With C10 enabled, the Z590 Pro 12VO dropped its system idle power consumption from 38W to 24W, a 37% decrease. The average processor package power, on the other hand, decreased from 14W to 8W, representing a 43% power saving.
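For anyone who wants to double-check the math, here's a minimal Python sketch of those percentage reductions, using the rounded wattages from the table above:

```python
# Rounded wattages reported on the show (see the table above).
idle_wifi, idle_12vo, idle_12vo_c10 = 42, 38, 24   # system idle power (W)
pkg_wifi, pkg_12vo, pkg_12vo_c10 = 17, 14, 8       # average CPU package power (W)

def reduction(before, after):
    """Percentage drop going from 'before' to 'after'."""
    return (before - after) / before * 100

print(f"Idle, 12VO vs. WiFi:    {reduction(idle_wifi, idle_12vo):.0f}%")       # ~10%
print(f"Package, 12VO vs. WiFi: {reduction(pkg_wifi, pkg_12vo):.0f}%")         # ~18%
print(f"Idle, C10 vs. Auto:     {reduction(idle_12vo, idle_12vo_c10):.0f}%")   # ~37%
print(f"Package, C10 vs. Auto:  {reduction(pkg_12vo, pkg_12vo_c10):.0f}%")     # ~43%
```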
OEMs are held to stricter environmental standards, which is why you'll most likely find the ATX12VO power connector inside a pre-built system. DIY users, on the other hand, don't have to abide by those regulations.
The ATX12VO standard only pays off in idle or low-load scenarios, which raises the question of how many of us leave our systems idling for prolonged periods of time. Only time will tell if the ATX12VO ever becomes a widely accepted standard. With Intel reportedly planning to give the specification a hard push alongside its next-generation Alder Lake-S processors, the 10-pin power connector may become more common on upcoming LGA1700 motherboards.
The Dark Z FPS DDR4-4000 C16 is a great alternative for Zen 3 CPU owners who want a kit that’s faster than the sweet spot but don’t want to break the piggy bank.
For
+ Quick out of the box
+ RGB-less design
+ Room for overclocking
Against
– Costs more than similarly-specced rivals
– No RGB (a letdown for some)
The Dark Z FPS DDR4-4000 memory kit comes to market to capitalize on the latest developments in the processor world. As in other areas, continuous improvement matters: if there were no generational uplift, we'd have no reason to purchase the next best thing. It's the job of memory makers to stay in step with those advancements.
Zen 3, for example, brought a lot of interesting features to the table. One of its improvements is the ability to run faster memory without suffering performance penalties. It's well known that AMD's Ryzen processors run best with the Infinity Fabric Clock (FCLK) and memory clock (MEMCLK) in sync, and since FCLK rarely ran stable much beyond 1,900 MHz on Zen 2, DDR4-3800 was the practical ceiling for the majority of Zen 2 owners.
However, microarchitectural improvements have bumped the limit up to DDR4-4000 on Zen 3, allowing memory makers to put out kits that unlock another level of performance for Ryzen users. That’s where the Dark Z FPS kit steps in.
The Dark Z FPS features the familiar wing-inspired design that TeamGroup is fond of. The aluminum heat spreader arrives in black with white lines that highlight the design. In fact, the Dark Z FPS is only available in the aforementioned color. The overall design is pretty clean, and TeamGroup’s logos are kept to a minimum.
The heat spreader’s extended wings give you the sensation that the memory is overly tall, but it’s not. Coming in at 43.5mm (1.71 inches), the Dark Z FPS is conveniently sized. The memory is devoid of RGB lighting, which is a rare sight nowadays. That might be a pro or con, depending on your taste.
The Dark Z FPS is a 16GB memory kit, so you’ll get two 8GB memory modules. Of course, these conform to a single-rank design. TeamGroup equipped the memory with an eight-layer PCB and the highest quality Samsung K4A8G085WB-BCPB (B-die) integrated circuits (ICs).
TeamGroup only offers the Dark Z FPS in the DDR4-4000 flavor. You’ll find the memory running at DDR4-2400 with 16-16-16-39 timings at stock operation. The primary timings for DDR4-4000 are 16-18-18-38. To run at DDR4-4000, the Dark Z FPS requires 1.45V. For more on timings and frequency considerations, see our PC Memory 101 feature, as well as our How to Shop for RAM story.
Comparison Hardware
| Memory Kit | Part Number | Capacity | Data Rate | Primary Timings | Voltage (V) | Warranty |
| Thermaltake ToughRAM XG RGB | R016D408GX2-4600C19A | 2 x 8GB | DDR4-4600 (XMP) | 19-26-26-45 (2T) | 1.50 | Lifetime |
| Thermaltake ToughRAM RGB | R009D408GX2-4600C19A | 2 x 8GB | DDR4-4600 (XMP) | 19-26-26-45 (2T) | 1.50 | Lifetime |
| Predator Apollo RGB | BL.9BWWR.255 | 2 x 8GB | DDR4-4500 (XMP) | 19-19-19-39 (2T) | 1.45 | Lifetime |
| GeIL Orion RGB AMD Edition | GAOSR416GB4400C18ADC | 2 x 8GB | DDR4-4400 (XMP) | 18-24-24-44 (2T) | 1.45 | Lifetime |
| Patriot Viper 4 Blackout | PVB416G440C8K | 2 x 8GB | DDR4-4400 (XMP) | 18-26-26-46 (2T) | 1.45 | Lifetime |
| TeamGroup T-Force Dark Z FPS | TDZFD416G4000HC16CDC01 | 2 x 8GB | DDR4-4000 (XMP) | 16-18-18-38 (2T) | 1.45 | Lifetime |
| Klevv Cras XR | KD48GU880-40B190Z | 2 x 8GB | DDR4-4000 (XMP) | 19-25-25-45 (2T) | 1.40 | Lifetime |
| Thermaltake ToughRAM XG RGB | R016D408GX2-4000C19A | 2 x 8GB | DDR4-4000 (XMP) | 19-26-26-45 (2T) | 1.45 | Lifetime |
| TeamGroup T-Force Xtreem ARGB | TF10D416G3600HC14CDC01 | 2 x 8GB | DDR4-3600 (XMP) | 14-15-15-35 (2T) | 1.45 | Lifetime |
Our Intel test system is based on an Intel Core i9-10900K and Asus ROG Maximus XII Apex running the 0901 firmware. Our AMD testbed, on the other hand, leverages the AMD Ryzen 9 5900X with the Asus ROG Crosshair VIII Dark Hero that’s on the 3501 firmware. We use the MSI GeForce RTX 2080 Ti Gaming Trio for the gaming portion of our RAM benchmarks.
Intel Performance
The T-Force Dark Z FPS put up a strong showing on the Intel platform. The memory kit ranked third overall, but excelled in various workloads, including the Corona ray tracing benchmark, LuxMark, and HandBrake conversion benchmarks.
AMD Performance
The T-Force Dark Z FPS jumped up to the second position on the AMD platform, trailing only the brand’s own T-Force Xtreem ARGB DDR4-3600 C14 memory kit. Nonetheless, the Dark Z FPS still put up a strong showing in numerous benchmarks.
The Dark Z FPS’ gaming performance was consistent on both Intel and AMD platforms, outperforming the competition.
Overclocking and Latency Tuning
We couldn’t get much overclocking headroom out of the Dark Z FPS without pumping lots of volts into the memory. Keeping the voltage increase moderate (0.05V), we pushed the memory to DDR4-4300 by adjusting the timings from the default 16-18-18-38 to 17-17-17-37.
Lowest Stable Timings
| Memory Kit | DDR4-4000 (1.45V) | DDR4-4000 (1.50V) | DDR4-4300 (1.50V) | DDR4-4400 (1.45V) |
| Klevv Cras XR DDR4-4000 C19 | 18-22-22-42 (2T) | N/A | N/A | 19-25-25-45 (2T) |
| TeamGroup T-Force Dark Z FPS DDR4-4000 C16 | N/A | 15-15-15-35 (2T) | 17-17-17-37 (2T) | N/A |
Knowing that the Dark Z FPS employs Samsung’s B-die ICs, we set out to see whether the memory’s timings could go lower. At 1.50V, the memory had no problem operating at 15-15-15-35 at DDR4-4000.
Bottom Line
When it comes to AMD’s desktop Ryzen processors, there’s no argument that DDR4-3600 offers the best performance for your money. Nonetheless, the Dark Z FPS DDR4-4000 C16 memory kit is a good place to start if you want to experiment with faster memory. As long as your Ryzen 5000 chip can run a 2,000 MHz FCLK, the Dark Z FPS DDR4-4000 C16 will offer performance that’s pretty close to a DDR4-3600 C14 memory kit. You can shrink or eliminate that small margin by tightening the Dark Z FPS’ timings to C15, but as always, your overclocking mileage will vary.
TeamGroup priced the Dark Z FPS DDR4-4000 C16 well compared to other competing kits. The Dark Z FPS kit retails for $169.99, and it’s significantly cheaper than some of the flashier DDR4-4000 options with sloppier timings.
The RGB-less Dark Z FPS design also means that you don’t have to pay the RGB tax. There’s only one rival that will really give the Dark Z FPS a hard time — G.Skill’s Ripjaws V DDR4-4000 C16 memory kit that is $30 cheaper. Pricing fluctuates, though, so make sure to check your options before you hit the check-out lane.
Noctua is known for making mighty quiet PC fans, and now it has a CPU cooler that doesn’t need a fan at all. Two years after announcing a passive heatsink potent enough to keep a Core i9-9900K CPU in check, the dead-silent Noctua NH-P1 (via VideoCardz) has finally gone on sale today for $110. It’s an absolute unit at 1.2 kilograms (2.6 pounds).
According to the company’s delightfully ASMR build video below (via PC Gamer), the final product’s six soldered heatpipes and thick fins are good enough to run a Core i9-11900K near its TDP of 125W, and even give you a slight overclock to 3.6GHz, though it’ll heavily depend on your case and other components that might also generate heat. The company has a whole set of setup guidelines, a CPU compatibility list, and even a list of recommended cases so you know what you’re getting into and start off on the right foot.
Assuming you’ve got those things in check, you shouldn’t have too much trouble fitting it to your motherboard: it appears to be compatible with all modern desktop CPU sockets and has “100% RAM clearance on LGA1200 and AM4,” with a note that you might want to avoid tall RAM modules if you’re using an LGA2066 motherboard.
The company also has a quiet (12.1dB) new 120mm fan, the NF-A12x25 LS-PWM, if you really want an extra burst of cooling on occasion. It comes to a dead stop at 0 percent PWM, so your fan controller can keep it off entirely and spin it up only when you need it.
The Nvidia GeForce RTX 3070 Ti continues the Ampere architecture rollout, which powers the GPUs behind many of the best graphics cards. Last week Nvidia launched the GeForce RTX 3080 Ti, a card that we felt increased the price too much relative to the next step down. The RTX 3070 Ti should do better, both by virtue of only costing $599 (in theory) and because there’s up to a 33% performance gap between the existing GeForce RTX 3070 and GeForce RTX 3080 for it to slot into. That’s a $100 increase in price relative to the existing 3070, but both the 3070 and 3080 will continue to be sold, in “limited hash rate” versions, for the time being. We’ll be adding the RTX 3070 Ti to our GPU benchmarks hierarchy shortly, if you want to see how all the GPUs rank in terms of performance.
The basic idea behind the RTX 3070 Ti is simple enough. Nvidia takes the GA104 GPU that powers the RTX 3070 and RTX 3060 Ti, only this time it’s the full 48 SM variant of the chip, and pairs it with GDDR6X. While Nvidia could have tried doing this last year, both the RTX 3080 and RTX 3090 were already struggling to get enough GDDR6X memory, and delaying by nine months allowed Nvidia to build up enough inventory of both the GPU and memory for this launch. Nvidia has also implemented its Ethereum hashrate limiter, basically cutting mining performance in half on crypto coins that use the Ethash / Dagger-Hashimoto algorithm.
Will it be enough to avoid having the cards immediately sell out at launch? Let me think about that, no. Not a chance. In fact, miners are probably still trying to buy the limited RTX 3080 Ti, 3080, 3070, 3060 Ti, and 3060 cards. Maybe they hope the limiter will be cracked or accidentally unlocked again. Maybe they made too much money off of the jump in crypto prices during the past six months. Or maybe they’re just optimistic about where crypto is going in the future. The good news, depending on your perspective, is that mining profitability has dropped significantly during the past month, which means cards like the RTX 3090 are now making under $7 per day after power costs, and the RTX 3080 has dropped down to just over $5 per day.
GeForce RTX 3070 Ti: Not Great for Mining but Still Profitable
Even if the RTX 3070 Ti didn’t have a limited hashrate, it would only net about $4.25 a day. With the limiter in place, Ravencoin (KAWPOW) and Conflux (Octopus) are the most profitable crypto coins right now, and both of those hashing algorithms still appear to run at full speed. Profitability should be a bit higher with tuning, but right now, we’d estimate making only $3.50 or so per day. That’s still enough for the cards to ‘break even’ in about six months, but again, profitability has dropped and may continue to drop.
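As a rough sanity check on that break-even figure, here's a minimal sketch that simply divides the card's list price by the assumed daily profit; it ignores street pricing, rising difficulty, and differences in electricity rates:

```python
# Rough break-even estimate for an RTX 3070 Ti bought at MSRP,
# using the ~$3.50/day post-power-cost mining estimate from above.
msrp = 599.00         # USD, Founders Edition list price
daily_profit = 3.50   # USD/day, assumed after electricity costs

days_to_break_even = msrp / daily_profit
print(f"Break-even in ~{days_to_break_even:.0f} days (~{days_to_break_even / 30:.1f} months)")
# -> roughly 171 days, or about 5.7 months, in line with the "about six months" estimate
```

Of course, that math assumes profitability holds steady; it has been sliding and may keep doing so.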
The gamers among us will certainly hope so, but even without crypto coin mining, demand for GPUs continues to greatly exceed supply. By launching the RTX 3070 Ti, with its binned GA104 chips and GDDR6X memory, Nvidia continues to steadily increase the number of GPUs it’s selling. Nvidia is also producing more Turing GPUs right now, mostly for the CMP line of miner cards, and at some point, supply should catch up. Will that happen before the next-gen GPUs arrive? Probably, but only because the next-gen GPUs are likely to be pushed back thanks to the same shortages facing current-gen chips.
Okay, enough of the background information. Let’s take a look at the specifications for the RTX 3070 Ti, along with related Nvidia GPUs like the 3080, 3070, and the previous-gen RTX 2070 Super:
GPU Specifications
| Graphics Card | RTX 3080 | RTX 3070 Ti | RTX 3070 | RTX 2070 Super |
| Architecture | GA102 | GA104 | GA104 | TU104 |
| Process Technology | Samsung 8N | Samsung 8N | Samsung 8N | TSMC 12FFN |
| Transistors (Billion) | 28.3 | 17.4 | 17.4 | 13.6 |
| Die size (mm^2) | 628.4 | 392.5 | 392.5 | 545 |
| SMs / CUs | 68 | 48 | 46 | 40 |
| GPU Cores | 8704 | 6144 | 5888 | 2560 |
| Tensor Cores | 272 | 192 | 184 | 320 |
| RT Cores | 68 | 48 | 46 | 40 |
| Base Clock (MHz) | 1440 | 1575 | 1500 | 1605 |
| Boost Clock (MHz) | 1710 | 1765 | 1725 | 1770 |
| VRAM Speed (Gbps) | 19 | 19 | 14 | 14 |
| VRAM (GB) | 10 | 8 | 8 | 8 |
| VRAM Bus Width | 320 | 256 | 256 | 256 |
| ROPs | 96 | 96 | 96 | 64 |
| TMUs | 272 | 192 | 184 | 160 |
| TFLOPS FP32 (Boost) | 29.8 | 21.7 | 20.3 | 9.1 |
| TFLOPS FP16 (Tensor) | 119 (238) | 87 (174) | 81 (163) | 72 |
| RT TFLOPS | 58.1 | 42.4 | 39.7 | 27.3 |
| Bandwidth (GBps) | 760 | 608 | 448 | 448 |
| TDP (watts) | 320 | 290 | 220 | 215 |
| Launch Date | Sep 2020 | Jun 2021 | Oct 2020 | Jul 2019 |
| Launch Price | $699 | $599 | $499 | $499 |
The GeForce RTX 3070 Ti provides just a bit more theoretical computational performance than the 3070, thanks to the addition of two more SMs. It also has slightly higher clocks, giving it 7% more TFLOPS — and it still has 27% fewer TFLOPS than the 3080. More important by far is that the 3070 Ti goes from 14Gbps of GDDR6 and 448 GB/s of bandwidth to 19Gbps GDDR6X and 608 GB/s of bandwidth, a 36% improvement. In general, we expect performance to land between the 3080 and 3070, but closer to the 3070.
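The TFLOPS and bandwidth figures in the table above aren't magic numbers; they fall straight out of the core counts, boost clocks, memory speeds, and bus widths. Here's a small sketch of that math, assuming the standard two FP32 operations (one FMA) per CUDA core per clock:

```python
# Derive the headline spec-table figures from the raw specs.
def fp32_tflops(cuda_cores, boost_mhz):
    # 2 FP32 operations (one FMA) per core per clock
    return cuda_cores * 2 * boost_mhz / 1e6

def memory_bandwidth(speed_gbps, bus_width_bits):
    # Gbps per pin times bus width, divided by 8 bits per byte
    return speed_gbps * bus_width_bits / 8

for name, cores, boost, speed, bus in [
    ("RTX 3080",    8704, 1710, 19, 320),
    ("RTX 3070 Ti", 6144, 1765, 19, 256),
    ("RTX 3070",    5888, 1725, 14, 256),
]:
    print(f"{name}: {fp32_tflops(cores, boost):.1f} TFLOPS FP32, "
          f"{memory_bandwidth(speed, bus):.0f} GB/s")
# RTX 3080:    29.8 TFLOPS FP32, 760 GB/s
# RTX 3070 Ti: 21.7 TFLOPS FP32, 608 GB/s
# RTX 3070:    20.3 TFLOPS FP32, 448 GB/s
```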
Besides performance specs, it’s also important to look at power. It’s a bit shocking to see that the 3070 Ti has a 70W higher TDP than the 3070, and we’d assume nearly all of that goes into the GDDR6X memory. Some of it also allows for slightly higher clocks, but generally, that’s a significant increase in TDP just for a change in VRAM.
There’s still the question of whether 8GB of memory is enough. These days, we’d say it’s sufficient for any game you want to play, but there are definitely instances where you’ll run into memory capacity issues. Not surprisingly, many of those come in games promoted by AMD; it’s almost as if AMD has convinced developers to target 12GB or 16GB of VRAM at maximum quality settings. But a few judicious tweaks to settings (like dropping texture quality a notch) will generally suffice.
The difficulty is that there’s no good way to add more memory short of doubling it. The 256-bit interface means Nvidia can do 8GB or 16GB — nothing in between. And with the 3080 and 3080 Ti offering 10GB and 12GB, respectively, there was basically no chance Nvidia would equip a lesser GPU with more GDDR6X memory. (Yeah, I know, but the RTX 3060 12GB remains a bit of an anomaly in that department.)
GeForce RTX 3070 Ti Design: A Blend of the 3070 and 3080
Unlike the RTX 3080 Ti, Nvidia actually made some changes to the RTX 3070 Ti’s design. Basically, the 3070 Ti has a flow-through cooling fan at the ‘back’ of the card, similar to the 3080 and 3090 Founders Edition cards. In comparison, the 3070 just used two fans on the same side of the card. This also required some tweaks to the PCB layout, so the 3070 Ti doesn’t use the exact same boards as the 3070 and 3060 Ti. It’s not clear exactly how much the design tweak helps with cooling, but considering the 290W vs. 220W TDP, presumably Nvidia did plenty of testing before settling on the final product.
Overall, whether the change significantly improves the cooling or not, we think it does improve the look of the card. The RTX 3070 and 3060 Ti Founders Editions looked a bit bland, as they lacked even a large logo indicating the product name. The 3080 and above (FE models) include RGB lighting, though, which the 3070 Ti and below lack. Third party cards can, of course, do whatever they want with the GPU, and we assume many of them will provide beefier cooling and RGB lighting, along with factory overclocks.
One question we had going into this review was how well the card would cool the GDDR6X memory. The various Founders Edition cards with GDDR6X memory can all hit 110 degrees Celsius on the memory with various crypto mining algorithms, at which point the fans kick into high gear and the GPU throttles. Gaming tends to be less demanding, but we still saw 102C-104C on the 3080 Ti. The 3070 Ti doesn’t have that problem. Even with mining algorithms, the memory peaked at 100C, and temperatures in games were generally 8C–12C cooler. That’s the benefit of only having to cool 8GB of GDDR6X instead of 10GB, 12GB, or 24GB.
GeForce RTX 3070 Ti: Standard Gaming Performance
TOM’S HARDWARE GPU TEST PC
Our test setup remains unchanged from previous reviews, and like the 3080 Ti, we’ll be doing additional testing with ray tracing and DLSS — using the same tests as our AMD vs. Nvidia: Ray Tracing Showdown. We’re using the test equipment shown above, which consists of a Core i9-9900K, 32GB DDR4-3600 memory, 2TB M.2 SSD, and the various GPUs being tested — all of which are reference models here, except for the RTX 3060 (an EVGA model running reference clocks).
That gives us two sets of results. First is traditional rendering performance, using thirteen games at 1080p, 1440p, and 4K with ultra/maximum quality settings. Then we have ten more games with RT (and sometimes DLSS, where applicable). We’ll cover 1440p first, then 1080p, and finish with 4K — the resolution a top-tier GPU like this is most likely to drive, and where the card does best relative to the other GPUs, since CPU bottlenecks are almost completely eliminated at 4K but more prevalent at 1080p. If you want to check 1080p/1440p/4K medium performance, we’ll have those results in our best graphics cards and GPU benchmarks articles — though only for nine of the games.
The RTX 3070 Ti does best as a 1440p gaming solution, which remains the sweet spot in terms of image quality and performance requirements. Overall performance ended up 9% faster than the RTX 3070 and 13% slower than the RTX 3080, so the added memory bandwidth only goes so far toward removing bottlenecks. However, a few games benefit more, like Assassin’s Creed Valhalla, Dirt 5, Horizon Zero Dawn, Shadow of the Tomb Raider, and Strange Brigade — all of which show double-digit percentage improvements relative to the 3070.
Some of the games are also clearly hitting other bottlenecks, like the GPU cores. Borderlands 3, The Division 2, Far Cry 5, FFXIV, Metro Exodus, and Red Dead Redemption 2 all show performance gains closer to the theoretical 7% difference in compute that we get from core counts and clock speeds. Meanwhile, Watch Dogs Legion shows the smallest change in performance, improving just 3% compared to the RTX 3070.
The RTX 3070 Ti makes for a decent showing here, but we’re still looking at an MSRP increase of 20% for a slightly less than 10% increase in performance. Compared to AMD’s RX 6000 cards, the 3070 Ti easily beats the RX 6700 XT, but it comes in 6% behind the RX 6800 — which, of course, means it trails the RX 6800 XT as well.
AMD’s GPUs tend to sell at higher prices, even when you see them in places like the Newegg Shuffle. At the same time, RTX 30-series hardware on eBay remains extremely expensive, with the 3070 selling for around $1,300, compared to around $1,400 for the RX 6800. Considering the RTX 3070 Ti is faster than the RTX 3070, it remains to be seen where street pricing lands. Of course, the reduced hashrates for Ethereum mining on the 3070 Ti may also play a role.
Next up is 1080p testing. Lowering the resolution tends to make games more CPU limited, and that’s exactly what we see. The 3070 Ti was 7% faster than the 3070 this time and 11% slower than the 3080. It was also 7% faster than the 6700 XT and 6% slower than the 6800. While you can still easily play games at 1080p on the RTX 3070 Ti, the same is true of most of the other GPUs on our charts.
We won’t belabor the point, other than to note that our current test suite is slightly more tilted in favor of AMD GPUs (six AMD-promoted games compared to four Nvidia-promoted games, with three ‘agnostic’ games). We’ll make up for that when we hit the ray tracing benchmarks in a moment.
Not surprisingly, while 4K ultra gaming gave the RTX 3070 Ti its biggest lead over the RTX 3070 (11%), it also got its biggest loss (17%) against the 3080. 4K also narrowed the gap between the 3070 Ti and the RX 6800, as AMD’s Infinity Cache starts to hit its limits at 4K.
Technically, the RTX 3070 Ti can still play all of the test games at 4K, just not always at more than 60 fps. Nearly half of the games we tested came in below that mark, with Valhalla and Watch Dogs Legion being the two lowest scores — and they’re still in the mid-40s. The RTX 3070 was already basically tied with the previous generation RTX 2080 Ti, which means the RTX 3070 Ti is now clearly faster than the previous-gen halo card, at half the price.
GeForce RTX 3070 Ti: Ray Tracing and DLSS Gaming Performance
So far, we’ve focused on gaming performance using traditional rasterization graphics. We’ve also excluded using Nvidia’s DLSS technology in order to provide an apples-to-apples comparison. Now we’ll focus on ray tracing performance, with DLSS 2.0 enabled where applicable. We’re only using DLSS in Quality mode (2x upscaling) in the six games where it is supported. We’ll have to wait for AMD’s FSR to see if it can provide a reasonable alternative to DLSS 2.0 in the coming months, though Nvidia clearly has a lengthy head start. Note that these are the same tests we used in our recent AMD vs. Nvidia Ray Tracing Battle.
Nvidia’s RTX 3070 Ti does far better — at least against the AMD competition — in ray tracing games. It’s not a complete sweep, as the RX 6800 still leads in Godfall, but the 3070 Ti ties or wins in every other game. In fact, the 3070 Ti basically ties the RX 6800 XT in our ray tracing test suite, and that’s before we enable DLSS 2.0.
Even 1080p DXR generally ends up being GPU limited, so the rankings don’t change much from above. DLSS doesn’t help quite as much at 1080p, but otherwise, the 3070 Ti ends up right around 25% faster than the RX 6800 — the same as at 1440p. We’ve mentioned before that Fortnite is probably the best ‘neutral’ look at advanced ray tracing techniques, and the 3070 Ti is about 5–7% faster there. Turn on DLSS Quality and it’s basically double the framerate of the RX 6800.
GeForce RTX 3070 Ti: Power, Clocks, and Temperatures
We’ve got our Powenetics equipment working again, so we’ve added the 3080 Ti to these charts. Unfortunately, there was another slight snafu: We couldn’t get proper fan speeds this round. It’s always one thing or another, I guess. Anyway, we use Metro Exodus running at 1440p ultra (without RT or DLSS) and FurMark running at 1600×900 in stress test mode for our power testing. Each test runs for about 10 minutes, and we log the result to generate the charts. For the bar charts, we only average data where the GPU load is above 90% (to avoid skewing things in Metro when the benchmark restarts).
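For the curious, here's a minimal sketch of how that filtering-and-averaging step works; the CSV column names and log format here are hypothetical placeholders, not the actual Powenetics output:

```python
import csv

def average_board_power(log_path, load_threshold=90.0):
    """Average logged board power, but only over samples where GPU load exceeds
    the threshold -- this keeps the idle gaps between Metro Exodus benchmark
    iterations from dragging the average down."""
    readings = []
    with open(log_path, newline="") as f:
        for row in csv.DictReader(f):  # hypothetical columns: gpu_load_pct, board_power_w
            if float(row["gpu_load_pct"]) > load_threshold:
                readings.append(float(row["board_power_w"]))
    return sum(readings) / len(readings) if readings else 0.0

# e.g. average_board_power("rtx3070ti_metro_1440p.csv") -> ~282 W in our Metro run
```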
Nvidia gives the RTX 3070 Ti a 290W TDP, and it mostly makes use of that power. It averaged about 282W for our Metro testing, but that’s partly due to the lull in GPU activity between benchmark iterations. FurMark showed 291W of power use, right in line with expectations.
Core clocks were interesting, as the GeForce RTX 3070 Ti actually ended up with slightly lower clocks than the RTX 3070 in FurMark and Metro. On the other hand, both cards easily exceeded the official boost clocks by about 100 MHz. Custom third-party cards will likely hit higher clocks and performance, though also higher power consumption.
While we don’t have fan data (or noise data — sorry, I’m still trying to get unpacked from the move), the RTX 3070 Ti did end up hitting the highest temperatures of any of the GPUs in both Metro and FurMark. As we’ve noted before, however, none of the cards are running “too hot,” and we’re more concerned with memory temperatures. The 3070 Ti thankfully didn’t get above 100C on GDDR6X junction temperatures in our testing, and even that value occurred while testing crypto coin mining.
GeForce RTX 3070 Ti: Good but With Diminishing Returns
We have to wonder what things would have been like for the RTX 3070 Ti without the double whammy of the Covid pandemic and the cryptocurrency boom. If you look at the RTX 20-series, Nvidia started at higher prices ($599 for the RTX 2070 FE) and then dropped things $100 with the ‘Super’ updates a year later. Ampere has gone the opposite route: Initial prices were excellent, at least on paper, and every one of the cards sold out immediately. That’s still happening today, and the result is a price increase — along with improved performance — for the 3070 Ti and 3080 Ti.
Thankfully, the jump in pricing on the 3070 Ti relative to the 3070 isn’t too steep; $100 more for the switch to GDDR6X is almost palatable. The 3070 offers about 90% of the 3070 Ti’s performance for about 80% of the price and is arguably the better buy, but the real problem is the RTX 3080: it’s about 12–20% faster across our 13-game test suite and only costs $100 more (a 17% price increase).
Well, in theory anyway. Nobody is really selling RTX 3080 for $700, and they haven’t done so since it launched. The 3080 often costs over $1,000 even in the lottery-style Newegg Shuffle, and the typical price is still above $2,000 on eBay. It’s one of the worst cards to buy on eBay, based on how big the markup is. In comparison, the RTX 3070 Ti might only end up costing twice its MSRP on eBay, but that’s still $1,200. And it could very well end up costing more than that.
We’ll have to see what happens in the coming months. Hopefully, the arrival of two more desktop graphics cards in the form of the RTX 3080 Ti and RTX 3070 Ti will alleviate the shortages a bit. The hashrate limiter can’t hurt either, at least if you’re only interested in gaming performance, and the drop in mining profitability might help. But we’re far from being out of the shortage woods.
If you can actually find the RTX 3070 Ti for close to its $600 MSRP, and you’re in the market for a new graphics card, it’s a good option. Finding it will be the difficult part. This is bound to be a repeat of every AMD and Nvidia GPU launch of the past year. If you haven’t managed to procure a new card yet, you can try again (and again, and again…). But for those who already have a reasonable graphics card, there’s nothing really new to see here: slightly better performance and higher power consumption at a higher price. Let’s hope supply and prices improve by the time fall blows in.
Twitter user HXL has discovered the first photograph of Intel’s upcoming NUC 11 Extreme (codename Beast Canyon) system. More interestingly, although the pictures don’t show the processor, the leaker claims that the NUC features one of the chipmaker’s latest Tiger Lake B-series processors. Take this claim with a pinch of salt as it is unconfirmed, but it would make sense given Intel’s shifting target audience.
Intel briefly teased Beast Canyon at its Computex 2021 keynote. Beast Canyon is the successor to the chipmaker’s Ghost Canyon NUC. However, Beast Canyon marks a fundamental turn for NUCs as it’ll be the first device to offer support for a full-length discrete graphics card, making it more akin to a small form factor (SFF) system than a traditional NUC.
The Beast Canyon NUC will even come equipped with its own power supply, but Intel didn’t mention the capacity. Unless it’s a really generous capacity, it’ll probably limit the type of graphics card you can install in the chassis.
Like its predecessor, Beast Canyon will also leverage Intel’s “The Element” compute module. Everything from the processor and memory to display outputs will reside on the module itself, which then slots into a PCIe slot.
It’s reasonable to assume that Intel will offer Beast Canyon with different processor options. The one from the photograph is reportedly based on the Core i9-11900KB, which is the flagship chip from the Tiger Lake B-series lineup.
The Core i9-11900KB is a 10nm desktop chip that features BGA packaging. The Tiger Lake processor delivers eight cores, 16 threads and 24MB of L3 cache. The Willow Cove cores run with a 3.3 GHz base clock and flaunt a 5.3 GHz TVB (Thermal Velocity Boost) clock. Intel rates the Core i9-11900KB as a 65W part but allows OEMs to drop the TDP all the way down to 55W.
At Computex 2021, Intel confirmed that Beast Canyon would launch later this year. From the rumors that we’ve heard, we could be looking at a potential fourth-quarter release.
FanlessTech has spotted Noctua’s highly anticipated NH-P1 passive heatsink at Newegg for $100. Noctua hasn’t formally introduced the heatsink yet, but Newegg’s listing suggests that an official announcement shouldn’t be far behind.
The NH-P1 features a fanless design with six heatpipes that transfer heat from the processor to a massive radiator with widely spaced fins. Noctua claims 100% compatibility with memory slots and the first PCIe expansion slot on most ATX and microATX motherboards. For added cooling, or for consumers who want a semi-passive configuration, Noctua recommends pairing the passive CPU cooler with the brand’s own NF-A12x25 LS-PWM 120mm fan, which is barely audible.
Noctua advises consumers not to use the CPU cooler for overclocking or with processors that are space heaters. Being a passive cooler, the NH-P1’s performance depends on various factors, including ambient temperature and the other hardware inside your system. Therefore, Noctua doesn’t commit to a TDP (thermal design power) rating, instead suggesting that consumers consult the NH-P1’s processor compatibility list.
In fact, the NH-P1 will only thrive in cases with good natural convection or open-air bench tables. Noctua will release a list of recommended cases for the NH-P1 once it officially launches the CPU cooler.
Although Noctua didn’t slap a TDP label on the NH-P1, the cooling specialist mentioned processors, such as the Core i9-9900K and Ryzen 9 3950X. For perspective, the Core i9-9900K has a 95W PL1 rating and 210W PL2 rating.
Like countless other Noctua CPU coolers, the NH-P1 employs the company’s proprietary SecuFirm2+ mounting system, which provides an easy and quick setup. The cooler is compatible with Intel’s LGA115x, LGA1200 and LGA20xx sockets and AMD’s AM4, AM3(+), AM2(+), FM2(+) and FM1 sockets.
Noctua also includes a tube of its award-winning NT-H2 thermal compound with the NH-P1. Noctua’s NH-P1 is already available for purchase on Newegg for $100. The manufacturer backs the cooler with a limited six-year warranty.
A California man has filed for a class action lawsuit against PC manufacturer Dell, claiming that the company “intentionally misled and deceived” buyers of its Alienware Area 51-m R1 gaming laptop, which was advertised to be more upgradeable than other gaming notebooks.
The plaintiff, Robert Felter, who is based in San Francisco, alleges that Dell misleads customers to believe that the laptop would be upgradeable, possibly into future generations of components. The case, Felter v. Dell Technologies, Inc. (3:21-cv-04187) has been filed with the United States District Court in the Northern District of California.
The Alienware Area 51-m was announced at CES 2019 and launched soon after. (The complaint claims the announcement was made in the summer of 2019, which is incorrect.) Among the Area 51-m’s biggest touted innovations were a user-replaceable CPU and GPU.
At media briefings, Alienware representatives told the press that the CPU could be upgraded as long as it used Intel’s Z390 chipset. The laptop used Intel’s 9th Gen Core desktop processors, up to the Intel Core i9-9900K. Dell developed separate proprietary Dell Graphics Form Factor (DGFF) modules for the Nvidia graphics.
The lawsuit, however, claims that consumers were told that “core components” (meaning the CPU and GPU) could be replaced beyond the current generation of hardware.
“Dell’s advertisement to the public didn’t place any restrictions on the upgradeability of the laptop,” lawyer David W. Kani said in an email to Tom’s Hardware. “They also never disclosed that those with the highest spec CPU and/or GPU that their device would not be upgradeable.”
Dell did not respond to a request for comment prior to publishing. This article will be updated if and when it does.
The complaint reads that “Dell’s representations of the upgradability of the Area 51M R1 also extended to units that were equipped with the fastest, most advanced Core Components available to the market, thus creating a reasonable expectation with consumers that the upgradability of the Area 51M R1 extended to yet to be released INTEL CPUs and NVIDIA GPUs, and did in fact create such expectations with consumers.” Several times, the complaint refers to Dell’s claims of “unprecedented upgradeability.”
Those words indeed live on Dell’s web page for the Alienware Area-51M R1.
“Gamers have made it clear that they’ve noticed a lack of CPU and GPU upgradability in gaming laptops,” it reads. “The Area-51m was engineered with this in mind, finally allowing gamers to harness power comparable to even the highest-performance desktop… CPU upgrades can be done using standard desktop-class processors, while GPU upgrades can be done with GPU upgrade kits available on Dell.com or with the Alienware Graphics Amplifier.”
Upgrade kits for the graphics card finally launched in November of 2019 and included options for the Nvidia GeForce RTX 2070 and Nvidia GeForce RTX 2080. Those were the GPUs in the earliest sold Area-51m units, though later ones launched with the weaker RTX 2060 and GTX 1660 Ti. Those with an RTX 2070 could, in theory, upgrade to an RTX 2080, and those with lesser GPUs could move up the chain.
But in May of 2020, Alienware released the Alienware Area-51m R2, a refresh that added support for 10th Gen Intel Core desktop processors and a wider range of GPUs from the Nvidia GeForce GTX 1660 Ti up to the newer Nvidia GeForce RTX 2080 Super and an AMD option, the Radeon RX 5700M.
In June, Alienware laid bare the limits of the upgradeability of both machines. Like the earlier laptop that only supported 9th Gen Intel processors, the new one would only support Intel 10th Gen. The top-end RTX 2080 Super and RTX 2070 Super would be the end of the line of GPUs.
It’s the release of the second-generation Area-51m that is the crux of Felter’s argument.
“The Area 51M’s CPU was not upgradeable to the new INTEL 10th generation CPU, nor was its GPU upgradeable to the new NVIDIA RTX SUPER 2000 series,” the complaint states. “In fact, the only way Plaintiff could own a laptop with these newly released upgraded Core Components was to spend several thousand dollars more than what an upgrade would cost to purchase the then-newly released Alienware Area 51M R2 or a similarly equipped laptop from another manufacturer.”
In other words: To further upgrade the laptop, Felter would have to buy a new model.
Additionally, the plaintiff and his attorneys claim that because Dell includes Intel and Nvidia components in its machines and has access to their roadmaps in advance, the company knew the laptop could not be upgraded.
The case is an interesting one in the enthusiast space. At its essence, this boils down to a motherboard with Intel’s Z390 chipset as well as the proprietary graphics cards. Motherboards are upgraded at a regular cadence to work with the latest processors, though occasionally new processors will work on older boards. This could potentially set a sort of precedent about how far out a motherboard needs to support a CPU. In desktops, GPUs typically work for years, as long as it’s not using an outdated standard. But Dell’s graphics were in a proprietary form factor.
Felter is seeking damages, relief and attorneys’ fees for himself and those in Alaska, Arizona, California, Hawaii, Idaho, Montana, Nevada, Oregon, and Washington state who purchased the laptop on their own since its release in 2019. He is represented by attorneys Brian H. Mahany of Mahany Law and Steven I. Hochfelsen and David W. Kani of Hochfelsen & Kani, LLP. He is asking the court for a jury trial.
We have our first glimpse at the performance of Nvidia’s recently announced GeForce RTX 3070 Ti, thanks to two leaked benchmark runs from Ashes of the Singularity and Geekbench 5, brought to our attention by @leakbench on Twitter. However, the benchmark results are about as vague as it gets, giving us little indication of the card’s real performance.
Unfortunately for us, the Ashes of the Singularity score in particular tells us almost nothing about the performance of Nvidia’s new mid-range part, due to the system configuration and game quality settings.
The RTX 3070 Ti was paired with a Ryzen 9 3900X and 32GB of system memory, and the resolution used was 1080P with a high-quality preset (not the crazy preset). The 3070 Ti scores an average frame rate of 105.5 frames per second and a CPU frame rate of 105.9 fps.
While the frame rate looks good, this is one of the least useful Ashes results to date for judging GPU performance, as the resolution is locked to 1080P and the tester didn’t even use the maximum quality preset. On top of that, the Ryzen 9 3900X is a Zen 2 part, and at 1080P there should be a noticeable amount of CPU bottlenecking, enough to skew the results when comparing graphics cards specifically.
What we really need from the leaker is a better testbed with a more powerful CPU, like a Ryzen 5000-series chip or a highly overclocked Intel 10th or 11th Gen part. Most importantly, we’d like to see resolutions of 1440P or 4K with the game running at the crazy preset.
As it stands now, there aren’t enough benchmark results using the high preset with the same CPU and memory configuration to judge performance. For instance, the top result for the 1080P high preset belongs to a system pairing a Core i7-11700KF and 16GB of RAM with an RTX 3080, and it scores just 1–2 fps lower than the leaked RTX 3070 Ti run.
Geekbench
Hopefully, Geekbench 5’s result can give us a better idea of the RTX 3070 Ti’s performance. Paired with a Core i9-11900K, the RTX 3070 Ti scored 155,763 points in the OpenCL test.
For comparison, the RTX 3080 on Geekbench’s browser earned a score of 183,452 and the RTX 3070 scored 135,886. That sandwiches the RTX 3070 Ti right in between the vanilla 3080 and 3070: roughly 15% slower than the RTX 3080 and 15% faster than the RTX 3070, which is about where we’d expect it to land.
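The relative placement is simple arithmetic on those three OpenCL scores; a quick sketch:

```python
# Geekbench 5 OpenCL scores cited above.
scores = {"RTX 3080": 183452, "RTX 3070 Ti": 155763, "RTX 3070": 135886}

ti = scores["RTX 3070 Ti"]
behind_3080 = (1 - ti / scores["RTX 3080"]) * 100   # deficit versus the 3080
ahead_3070 = (ti / scores["RTX 3070"] - 1) * 100    # lead over the 3070

print(f"RTX 3070 Ti: {behind_3080:.0f}% slower than the 3080, "
      f"{ahead_3070:.0f}% faster than the 3070")
# -> roughly 15% slower than the RTX 3080 and 15% faster than the RTX 3070
```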
Take this result with a pinch of salt until full reviews of the RTX 3070 Ti come online. That’s especially true for Geekbench 5, which focuses on raw GPU compute performance, a metric that rarely maps directly onto the actual gaming performance of Nvidia and AMD’s graphics cards.
Last year’s Nvidia RTX 3080 was the GPU that finally made 4K gaming feel attainable. It delivered impressive performance at 4K, especially for its retail price of $699 — far less than the 2080 Ti cost a generation earlier. That was before the reality of a global chip shortage drove the prices of modern GPUs well above $1,000. Now that the street prices of RTX 3080s have stayed above $2,000 for months, Nvidia is launching its RTX 3080 Ti flagship priced at $1,199.
It’s a card that aims to deliver near identical levels of performance to the $1,499 RTX 3090, but in a smaller package and with just 12GB of VRAM — half what’s found on the RTX 3090. Nvidia is effectively competing with itself here, and now offering three cards at the top end. That’s if you can even manage to buy any of them in the first place.
I’ve spent the past week testing the RTX 3080 Ti at both 4K and 1440p resolutions. 4K gaming might have arrived originally with the RTX 2080 Ti, but the RTX 3080 Ti refines it and offers more headroom in the latest games. Unfortunately, it does so with a $1,199 price tag that I think will be beyond most people’s budgets even before you factor in the inevitable street price markup it will see during the current GPU shortage.
Hardware
If you put the RTX 3080 Ti and the RTX 3080 side by side, it would be difficult to tell the difference between them. They look identical, with the same ports and fan setup. I’m actually surprised this card isn’t a three-slot like the RTX 3090, or just bigger generally. The RTX 3080 Ti has one fan on either side of the card, with a push-pull system in place. The bottom fan pulls cool air into the card, which then exhausts on the opposite side that’s closest to your CPU cooler and rear case fan. A traditional blower cooler also exhausts the hot air out of the PCIe slot at the back.
This helped create a quieter card on the original RTX 3080, and I’m happy to report it’s the same with the RTX 3080 Ti. The RTX 3080 Ti runs at or close to its max fan RPM under heavy loads, but the hum of the fans isn’t too distracting. I personally own an RTX 3090, and while the fans rarely kick in at full speed, they’re certainly a lot more noticeable than the RTX 3080 Ti’s.
Nvidia has used the same RTX 3080 design for the 3080 Ti model.
That quiet performance might have a downside, though. During my week of testing with the RTX 3080 Ti, I noticed that the card seems to run rather hot. I recorded temperatures regularly around 80 degrees Celsius, compared to the 70 degrees Celsius temperatures on the larger RTX 3090. The fans also maxed out a lot during demanding 4K games on the RTX 3080 Ti in order to keep the card cool. I don’t have the necessary equipment to fully measure the heat output here, but when I went to swap the RTX 3080 Ti for another card after hours of testing, it was too hot to touch, and stayed hotter for longer than I’d noticed with either the RTX 3080 or RTX 3090. I’m not sure if this will result in problems in the long term, as we saw with the initial batch of 2080 Ti units having memory overheating issues, but most people will put this in a case and never touch it again. Still, I’m surprised at how long it stayed hot enough for me to not want to touch it.
As this is a Founders Edition card, Nvidia is using its latest 12-pin single power connector. There’s an ugly and awkward adapter in the box that lets you connect two eight-pin PCIe power connectors to it, but I’d highly recommend getting a single new cable from your PSU supplier to connect directly to this card. It’s less cabling, and a more elegant solution if you have a case window or you’re addicted to tidy cable management (hello, that’s me).
I love the look of the RTX 3080 Ti and the pennant-shaped board that Nvidia uses here. Just like the RTX 3080, there are no visible screws, and the regulatory notices are all on the output part of the card so there are no ugly stickers or FCC logos. It’s a really clean card, and I’m sorry to bring this up, but Nvidia has even fixed the way the number 8 is displayed. It was a minor mistake on the RTX 3080, but I’m glad the 8 has the correct proportions on the RTX 3080 Ti.
At the back of the card there’s a single HDMI 2.1 port and three DisplayPort 1.4a ports. Just like the RTX 3080, there are also LEDs that glow around the top part of the fan, and the GeForce RTX branding lights up, too. You can even customize the colors of the glowing part around the fan if you’re really into RGB lighting.
Just like the RTX 3080, this new RTX 3080 Ti needs a 750W power supply. The RTX 3080 Ti draws more power, too, at up to 350 watts under load compared to 320 watts on the RTX 3080. That’s the same amount of power draw as the larger RTX 3090, which is understandable given the performance improvements, but it’s worth being aware of how this might impact your energy bills (and the cost of the PC build needed to run it).
1440p testing
I’ve been testing the RTX 3080 Ti with Intel’s latest Core i9 processor. For 1440p tests, I’ve also paired the GPU with a 32-inch Samsung Odyssey G7 monitor. This monitor supports refresh rates up to 240Hz, as well as Nvidia’s G-Sync technology.
I compared the RTX 3080 Ti against both the RTX 3080 and RTX 3090 to really understand where it fits into Nvidia’s new lineup. I tested a variety of AAA titles, including Fortnite, Control, Death Stranding, Metro Exodus, Call of Duty: Warzone, Microsoft Flight Simulator, and many more. You can also find the same games tested at 4K resolution below.
All games were tested at max or ultra settings on the RTX 3080 Ti, and most exceeded an average of 100fps at 1440p. On paper, the RTX 3080 Ti is very close to an RTX 3090, and my testing showed that plays out in most games at 1440p. Games like Microsoft Flight Simulator, Assassin’s Creed: Valhalla, and Watch Dogs: Legion all have near-identical performance across the RTX 3080 Ti and RTX 3090 at 1440p.
Even Call of Duty: Warzone is the same without Nvidia’s Deep Learning Super Sampling (DLSS) technology enabled, and it’s only really games like Control and Death Stranding where there’s a noteworthy, but small, gap in performance.
However, the jump in performance from the RTX 3080 to the RTX 3080 Ti is noticeable across nearly every game, with the exception of Death Stranding and Fortnite, which both perform really well on the base RTX 3080.
RTX 3080 Ti (1440p)
| Benchmark | RTX 3080 Founders Edition | RTX 3080 Ti Founders Edition | RTX 3090 Founders Edition |
| Microsoft Flight Simulator | 46fps | 45fps | 45fps |
| Shadow of the Tomb Raider | 147fps | 156fps | 160fps |
| Shadow of the Tomb Raider (DLSS) | 154fps | 162fps | 167fps |
| CoD: Warzone | 124fps | 140fps | 140fps |
| CoD: Warzone (DLSS+RT) | 133fps | 144fps | 155fps |
| Fortnite | 160fps | 167fps | 188fps |
| Fortnite (DLSS) | 181fps | 173fps | 205fps |
| Gears 5 | 87fps | 98fps | 103fps |
| Death Stranding | 163fps | 164fps | 172fps |
| Death Stranding (DLSS quality) | 197fps | 165fps | 179fps |
| Control | 124fps | 134fps | 142fps |
| Control (DLSS quality + RT) | 126fps | 134fps | 144fps |
| Metro Exodus | 56fps | 64fps | 65fps |
| Metro Exodus (DLSS+RT) | 67fps | 75fps | 77fps |
| Assassin’s Creed: Valhalla | 73fps | 84fps | 85fps |
| Watch Dogs: Legion | 79fps | 86fps | 89fps |
| Watch Dogs: Legion (DLSS+RT) | 67fps | 72fps | 74fps |
| Watch Dogs: Legion (RT) | 49fps | 55fps | 56fps |
Assassin’s Creed: Valhalla performs 15 percent better on the RTX 3080 Ti than on the regular RTX 3080, and Metro Exodus shows a 14 percent improvement. Across the suite, the uplift ranges from around 4 percent all the way up to 15 percent, so the performance gap is very game dependent.
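Those per-game uplifts come straight from the 1440p table above; here's a short sketch of the same calculation for three representative titles:

```python
# Average 1440p frame rates (RTX 3080, RTX 3080 Ti) from the table above.
results_1440p = {
    "Assassin's Creed: Valhalla": (73, 84),
    "Metro Exodus": (56, 64),
    "Fortnite": (160, 167),
}

for game, (fps_3080, fps_3080_ti) in results_1440p.items():
    uplift = (fps_3080_ti / fps_3080 - 1) * 100
    print(f"{game}: +{uplift:.0f}% over the RTX 3080")
# Valhalla: +15%, Metro Exodus: +14%, Fortnite: +4%
```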
Even when using games with ray tracing, the RTX 3080 Ti still managed high frame rates when paired with DLSS. DLSS uses neural networks and AI supercomputers to analyze games and sharpen or clean up images at lower resolutions. In simple terms, it allows a game to render at a lower resolution and use Nvidia’s image reconstruction technique to upscale the image and make it look as good as native 4K.
Whenever I see the DLSS option in games, I immediately turn it on now to get as much performance as possible. It’s still very much required for ray tracing games, particularly as titles like Watch Dogs: Legion only manage to hit 55fps with ultra ray tracing enabled. If you enable DLSS, this jumps to 72fps and it’s difficult to notice a hit in image quality.
4K testing
For my 4K testing, I paired the RTX 3080 Ti with Acer’s 27-inch Nitro XV273K, a 4K monitor that offers up to 144Hz refresh rates and supports G-Sync. I wasn’t able to get any of the games I tested on both the RTX 3080 Ti and RTX 3090 to hit the frame rates necessary to really take advantage of this 144Hz panel, but some came close thanks to DLSS.
Metro Exodus manages a 14 percent improvement over the RTX 3080, and Microsoft Flight Simulator also sees a 13 percent jump. Elsewhere, other games see between a 4 and 9 percent improvement. These are solid gains for the RTX 3080 Ti, providing more headroom for 4K gaming over the original RTX 3080.
The RTX 3080 Ti comes close to matching the RTX 3090 performance at 4K in games like Watch Dogs: Legion, Assassin’s Creed: Valhalla, Gears 5, and Death Stranding. Neither the RTX 3080 Ti nor RTX 3090 is strong enough to handle Watch Dogs: Legion with ray tracing, though. Both cards manage around 30fps on average, and even DLSS only bumps this up to below 50fps averages.
RTX 3080 Ti (4K)
| Benchmark | RTX 3080 Founders Edition | RTX 3080 Ti Founders Edition | RTX 3090 Founders Edition |
| Microsoft Flight Simulator | 30fps | 34fps | 37fps |
| Shadow of the Tomb Raider | 84fps | 88fps | 92fps |
| Shadow of the Tomb Raider (DLSS) | 102fps | 107fps | 111fps |
| CoD: Warzone | 89fps | 95fps | 102fps |
| CoD: Warzone (DLSS+RT) | 119fps | 119fps | 129fps |
| Fortnite | 84fps | 92fps | 94fps |
| Fortnite (DLSS) | 124fps | 134fps | 141fps |
| Gears 5 | 64fps | 72fps | 73fps |
| Death Stranding | 98fps | 106fps | 109fps |
| Death Stranding (DLSS quality) | 131fps | 132fps | 138fps |
| Control | 65fps | 70fps | 72fps |
| Control (DLSS quality + RT) | 72fps | 78fps | 80fps |
| Metro Exodus | 34fps | 39fps | 39fps |
| Metro Exodus (DLSS+RT) | 50fps | 53fps | 55fps |
| Assassin’s Creed: Valhalla | 64fps | 70fps | 70fps |
| Watch Dogs: Legion | 52fps | 55fps | 57fps |
| Watch Dogs: Legion (DLSS+RT) | 40fps | 47fps | 49fps |
| Watch Dogs: Legion (RT) | 21fps | 29fps | 32fps |
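To put numbers on how close the RTX 3080 Ti gets to the RTX 3090 at 4K, here's a quick sketch using four entries from the table above:

```python
# Average 4K frame rates (RTX 3080 Ti, RTX 3090) from the table above.
results_4k = {
    "Watch Dogs: Legion": (55, 57),
    "Assassin's Creed: Valhalla": (70, 70),
    "Gears 5": (72, 73),
    "Death Stranding": (106, 109),
}

for game, (fps_3080_ti, fps_3090) in results_4k.items():
    gap = (fps_3090 / fps_3080_ti - 1) * 100
    print(f"{game}: RTX 3090 leads by {gap:.1f}%")
# Watch Dogs: Legion: 3.6%, Valhalla: 0.0%, Gears 5: 1.4%, Death Stranding: 2.8%
```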
Most games manage to comfortably rise above 60fps in 4K at ultra settings, with Microsoft Flight Simulator and Metro Exodus as the only exceptions. Not even the RTX 3090 could reliably push beyond 144fps at 4K without assistance from DLSS or a drop in visual settings. I think we’re going to be waiting on whatever Nvidia does next to really push 4K at these types of frame rates.
When you start to add ray tracing and ultra 4K settings, it’s clear that both the RTX 3080 Ti and RTX 3090 need to have DLSS enabled to play at reasonable frame rates across the most demanding ray-traced titles. Without DLSS, Watch Dogs: Legion manages an average of 29fps (at max settings), with dips below that making the game unplayable.
DLSS really is the key here across both 1440p and 4K. It was merely a promise when the 2080 Ti debuted nearly three years ago, but Nvidia has now managed to get DLSS into more than 50 popular games. Red Dead Redemption 2 and Rainbow Six Siege are getting DLSS support soon, too.
DLSS also sets Nvidia apart from AMD’s cards. While AMD’s RX 6800 XT is fairly competitive at basic rasterization at 1440p, it falls behind the RTX 3080 in the most demanding games at 4K — particularly when ray tracing is enabled. Even the $1,000 Radeon RX 6900 XT doesn’t fare much better at 4K. AMD’s answer to DLSS is coming later this month, but until it arrives we still don’t know exactly how it will compensate for ray tracing performance on AMD’s GPUs. AMD has also struggled to supply retailers with stock of its cards.
That’s left Nvidia in a position to launch the RTX 3080 Ti at a price point that really means it’s competing with itself, positioned between the RTX 3080 and RTX 3090. If the RTX 3090 wasn’t a thing, the RTX 3080 Ti would make a lot more sense.
Nvidia is also competing with the reality of the market right now, as demand has been outpacing supply for more than six months. Nvidia has introduced a hash rate limiter for Ethereum cryptocurrency mining on new versions of the RTX 3080, RTX 3070, and now this RTX 3080 Ti. It could help deter some scalpers, but we’ll need months of data on street prices to really understand if it’s driven pricing down to normal levels.
Demand for 30-series cards has skyrocketed as many rush to replace their aging GTX 1080 and GTX 1080 Ti cards. Coupled with Nvidia’s NVENC and professional tooling support, it’s also made the RTX 30-series a great option for creators looking to stream games, edit videos, or build games.
In a normal market, I would only recommend the RTX 3080 Ti if you’re really willing to spend an extra $500 for some extra gains in 1440p and 4K performance. But it’s a big price premium when the RTX 3090 exists at this niche end of the market, offering more performance and double the VRAM if you’re really willing to pay this much for a graphics card.
At $999 or even $1,099, the RTX 3080 Ti would tempt me a lot more, but $1,199 feels a little too pricey. For most people, an RTX 3080 makes a lot more sense if it were actually available at its standard retail price. Nvidia also has a $599 RTX 3070 Ti on the way next week, which could offer some performance gains to rival the RTX 3080.
Either way, the best GPU is the one you can buy right now, and let’s hope that Nvidia and AMD manage to make that a reality soon.
The Spectrix D50 Xtreme DDR4-5000 is one of those luxury memory kits that you don’t necessarily need inside your system. However, you’d purchase it in a heartbeat if you had the funds.
For
+ Good performance
+ Gorgeous aesthetics
Against
– Costs an arm and a leg
– XMP requires 1.6V
When a product has the word “Xtreme” in its name, you can tell it isn’t tailored towards the average consumer, and Adata’s XPG Spectrix D50 Xtreme memory is exactly that kind of product. A glance at the specifications confirms as much: unlike the vanilla Spectrix D50, the Xtreme version only comes in DDR4-4800 and DDR4-5000 flavors, and only in a 16GB (2x8GB) capacity. Unless you’re a very hardcore enthusiast, this memory likely isn’t on your radar.
Adata borrowed the design from Spectrix D50 and took it to another level for the Spectrix D50 Xtreme. The heat spreader retains the elegant look with geometric lines. The difference is that the Xtreme variant features a polished, mirror-like heat spreader. The reflective finish looks stunning, but it’s also a fingerprint and dust magnet, which is why Adata includes a microfiber cloth to tidy up.
Each memory module stands 43.9mm (1.73 inches) tall, so compatibility with large CPU air coolers shouldn’t be an issue. The Spectrix D50 Xtreme retains the RGB diffuser on top of the module. Adata provides its own XPG RGB Sync application to control the lighting, or, if you prefer, you can use your motherboard’s software; the Spectrix D50 Xtreme’s RGB illumination is compatible with the ecosystems from Asus, Gigabyte, MSI and ASRock.
Each Spectrix D50 Xtreme module has an 8GB capacity and sticks to a conventional single-rank design. It features a black, eight-layer PCB and Hynix H5AN8G8NDJR-VKC (D-die) integrated circuits (ICs).
The default data rate and timings for the Spectrix D50 Xtreme are DDR4-2666 and 19-19-19-43, respectively. Adata equipped the memory with two XMP profiles with identical 19-28-28-46 timings. The primary profile corresponds to DDR4-5000, while the secondary profile sets the memory to DDR4-4800. Both data rates require a 1.6V DRAM voltage to function properly. For more on timings and frequency considerations, see our PC Memory 101 feature, as well as our How to Shop for RAM story.
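One quick way to put those timings in context is to convert them into absolute first-word latency: a DDR4 kit’s clock runs at half its transfer rate, so the cycle time in nanoseconds is 2000 divided by the data rate, and CAS latency multiplied by that cycle time gives the figure. Here’s a minimal sketch of that arithmetic (the kit labels are just illustrative, pulled from the comparison table below):

```python
# First-word latency estimate: CAS latency x clock period, where the DDR4 clock
# runs at half the transfer rate (cycle time in ns = 2000 / data rate in MT/s).
def cas_latency_ns(data_rate_mts, cl):
    return 2000 / data_rate_mts * cl

for label, rate, cl in [
    ("Spectrix D50 Xtreme, primary XMP", 5000, 19),
    ("Spectrix D50 Xtreme, secondary XMP", 4800, 19),
    ("T-Force Xtreem ARGB", 3600, 14),
]:
    print(f"{label}: DDR4-{rate} CL{cl} -> {cas_latency_ns(rate, cl):.2f} ns")
```

By this rough measure, DDR4-5000 CL19 lands around 7.6 ns, not far from a DDR4-3600 C14 kit at roughly 7.8 ns, which is why raw data rate alone doesn’t tell the whole story.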
Comparison Hardware
Memory Kit | Part Number | Capacity | Data Rate | Primary Timings | Voltage | Warranty
Crucial Ballistix Max | BLM2K8G51C19U4B | 2 x 8GB | DDR4-5100 (XMP) | 19-26-26-48 (2T) | 1.50V | Lifetime
Adata XPG Spectrix D50 Xtreme | AX4U500038G19M-DGM50X | 2 x 8GB | DDR4-5000 (XMP) | 19-28-28-46 (2T) | 1.60V | Lifetime
Thermaltake ToughRAM RGB | R009D408GX2-4600C19A | 2 x 8GB | DDR4-4600 (XMP) | 19-26-26-45 (2T) | 1.50V | Lifetime
Predator Apollo RGB | BL.9BWWR.255 | 2 x 8GB | DDR4-4500 (XMP) | 19-19-19-39 (2T) | 1.45V | Lifetime
Patriot Viper 4 Blackout | PVB416G440C8K | 2 x 8GB | DDR4-4400 (XMP) | 18-26-26-46 (2T) | 1.45V | Lifetime
TeamGroup T-Force Dark Z FPS | TDZFD416G4000HC16CDC01 | 2 x 8GB | DDR4-4000 (XMP) | 16-18-18-38 (2T) | 1.45V | Lifetime
TeamGroup T-Force Xtreem ARGB | TF10D416G3600HC14CDC01 | 2 x 8GB | DDR4-3600 (XMP) | 14-15-15-35 (2T) | 1.45V | Lifetime
Our Intel platform simply can’t handle the Spectrix D50 Xtreme DDR4-5000 memory kit. Neither our Core i7-10700K nor our Core i9-10900K sample has an IMC (integrated memory controller) strong enough for a memory kit of this speed.
The Ryzen 9 5900X, on the other hand, had no problems with the memory. The AMD test system pairs a Gigabyte B550 Aorus Master with the F13j firmware and an MSI GeForce RTX 2080 Ti Gaming Trio to run our RAM benchmarks.
Unfortunately, we ran into a small problem that prevented us from testing the Spectrix D50 Xtreme at its advertised frequency. One limitation of our B550 motherboard is the inability to set memory timings above 27, while the Spectrix D50 Xtreme requires 19-28-28-46 to run at DDR4-5000 properly. Even after brute-forcing the DRAM voltage, we simply couldn’t get the kit to DDR4-5000 at 19-27-27-46; the highest stable data rate with those timings was DDR4-4866, which is what we used for testing.
AMD Performance
There’s always a performance penalty when you break the 1:1 ratio between the Infinity Fabric clock (FCLK) and the memory clock on Ryzen processors. Even so, the Spectrix D50 Xtreme came within a hair of surpassing the Xtreem ARGB kit, whose DDR4-3600 data rate is basically the sweet spot for Ryzen.
It’s important to bear in mind that the Spectrix D50 Xtreme was running at DDR4-4866 rather than its rated DDR4-5000. As small as it may seem, making up that 134 MT/s difference should put Adata’s offering really close to Crucial’s Ballistix Max DDR4-5100, the highest-specced memory kit that has passed through our labs so far.
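To put that penalty in concrete terms: the memory clock is half the DDR4 data rate, and Ryzen only keeps the Infinity Fabric synced 1:1 with it up to a per-chip ceiling, commonly somewhere in the 1,800 to 1,900 MHz range on Ryzen 5000, though it varies from sample to sample. The sketch below illustrates the check, with the ceiling value purely as an assumed placeholder:

```python
# Memory clock (MEMCLK) is half the DDR4 data rate; Ryzen keeps the Infinity Fabric
# synced 1:1 with it only up to a per-chip ceiling, after which the memory subsystem
# falls back to a desynchronized 2:1 mode that costs latency.
FCLK_CEILING_MHZ = 1900  # assumed/typical for Ryzen 5000; the real limit varies per sample

def fabric_mode(data_rate_mts, ceiling_mhz=FCLK_CEILING_MHZ):
    memclk = data_rate_mts / 2
    if memclk <= ceiling_mhz:
        return f"DDR4-{data_rate_mts}: MEMCLK {memclk:.0f} MHz, 1:1 sync possible"
    return f"DDR4-{data_rate_mts}: MEMCLK {memclk:.0f} MHz, 1:1 sync lost (latency penalty)"

for rate in (3600, 4866, 5000):
    print(fabric_mode(rate))
```

At DDR4-4866 the memory clock sits at 2,433 MHz, far beyond what the fabric can match, so the kit runs desynchronized and gives back some of its raw bandwidth advantage in latency.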
Overclocking and Latency Tuning
Due to the motherboard limitation, we couldn’t pursue overclocking on the Spectrix D50 Xtreme. However, in our experience, high-speed memory kits typically don’t have much gas left in the tank. Furthermore, the Spectrix D50 Xtreme already requires 1.6V to hit DDR4-5000, so it’s unlikely that we would have gotten anywhere without pushing insane amounts of voltage into the memory.
Lowest Stable Timings
Memory Kit | DDR4-4400 (1.45V) | DDR4-4500 (1.50V) | DDR4-4600 (1.55V) | DDR4-4666 (1.56V) | DDR4-4866 (1.60V) | DDR4-5100 (1.60V)
Crucial Ballistix Max DDR4-5100 C19 | N/A | N/A | N/A | N/A | N/A | 17-25-25-48 (2T)
Adata XPG Spectrix D50 Xtreme DDR4-5000 CL19 | N/A | N/A | N/A | N/A | 19-27-27-46 (2T) | N/A
Thermaltake ToughRAM RGB DDR4-4600 C19 | N/A | N/A | 18-24-24-44 (2T) | 20-26-26-45 (2T) | N/A | N/A
Patriot Viper 4 Blackout DDR4-4400 C18 | 17-25-25-45 (2T) | 21-26-26-46 (2T) | N/A | N/A | N/A | N/A
At DDR4-4866, the Spectrix D50 Xtreme was perfectly happy with 19-27-27-46 timings. However, it wouldn’t go any lower regardless of how much voltage we cranked into it. We’ll revisit the overclocking portion of the review once we source a more capable processor and motherboard for the job.
Bottom Line
The Spectrix D50 Xtreme DDR4-5000 C19 won’t offer you the best bang for your buck by any means. However, the memory will make your system look good and give you some bragging rights along the way. Just make sure you have a processor and motherboard that can tame the memory before pulling the trigger on a memory kit of this caliber.
With that said, the Spectrix D50 Xtreme DDR4-5000 C19 doesn’t come cheap: it retails for $849.99 on Amazon. There aren’t many DDR4-5000 memory kits out there, but the Spectrix D50 Xtreme is actually the cheapest of the lot. More budget-conscious consumers, however, should probably stick to a DDR4-3600 or even DDR4-3800 memory kit with the lowest timings possible. The Spectrix D50 Xtreme is more luxury than necessity.
Alienware is going thinner than ever with its newest gaming laptops. The X-series, made up of the Alienware x15 and x17, includes some of the most svelte machines that Dell’s gaming arm has ever produced.
Both the x15 and x17 are available in limited configurations today, with the full range available on June 15. The smaller laptop starts at $1,999.99, while the larger one begins at $2,099.99.
Spec | Alienware x15 R1 | Alienware x17 R1
CPU | Up to Intel Core i9-11900H | Up to Intel Core i9-11980HK
GPU | Up to Nvidia GeForce RTX 3080 (8GB GDDR6), 90W TGP, 1,365 MHz boost clock | Up to Nvidia GeForce RTX 3080 (16GB GDDR6), 150W TGP, 1,710 MHz boost clock
Display | 15.6 inches, up to 1080p/360 Hz with G-Sync or 1440p/240 Hz with G-Sync | 17.3 inches, up to 1080p/360 Hz with G-Sync or 1440p/120 Hz
RAM | Up to 32GB DDR4-3200 | Up to 64GB DDR4-3466 XMP
Storage | Up to 4TB RAID0 | Up to 4TB RAID0
Size | 14.16 x 10.92 x 0.63 inches | 15.72 x 11.79 x 0.82 inches
Battery | 87 WHr | 87 WHr
Starting Price | $1,999.99 | $2,099.99
At 0.63 inches thick (or 0.64 inches with the 1080p, 165Hz display), the x15 is the slimmest gaming laptop that Alienware has ever made, while the 0.82-inch-thick x17 is still the leanest the company has offered at that screen size. Both feature an update to Alienware’s “Legend” design, and the company claims many of the technologies it used to get the laptops this thin are patent-pending and “industry exclusive.”
To get this thin, Alienware needs to nail the cooling. There are several new technologies involved, but the one the company is boasting the most about is its thermal interface material, which it has dubbed Element 31. It’s a proprietary, gallium-silicon liquid metal material. The silicon should protect it from oxidation, meaning that it will last longer. The company is claiming a 25% improvement in overall thermal resistance over previous Alienware laptops. This is the special sauce that it hopes will get the X-series on our list of the best gaming laptops. (Note that Element 31 will only come in configurations with an RTX 3070 or RTX 3080).
The system is cooled by four fans, which Alienware says it optimized based on location. The QWER keys and number keys, popular in esports titles, are all near the front intake fan to keep them cool. (These fans also cool the SSDs). The laptops’ rear fans, for the CPU and GPU, exhaust out the back and intake from the top and bottom. The idea is to have positive pressure, with more cool air entering the laptop than leaving it. The 12-volt fans are designed for lower power and fan speeds, and Alienware claims they shouldn’t be much louder than its existing gaming notebooks.
To control power, you can bias performance toward the CPU or GPU in the BIOS or Alienware Command Center software.
Will any of this majorly affect a benchmark? Probably not, Alienware claims. But it’s promising more stability over long gaming sessions, keeping 11th Gen Core processors (up to a Core i9-11980HK on the x17 and a Core i9-11900H on the x15) and up to an Nvidia GeForce RTX 3080 (a max of 110W on the x15 and 165W on the x17) running cool under sustained load.
Of course, some of the changes are on the outside. The thin chassis has what Alienware is calling “Dark Core,” which is a remarkably fancy term for the simple act of putting a black keyboard deck on the x15 and x17’s white magnesium alloy chassis.
The x17 will come with an option for its custom Cherry MX ultra low-profile mechanical switches, which won’t be in the x15. Otherwise, both laptops will come with a new keyboard with 1.5 mm of travel, N-key rollover and per-key RGB lighting. Alienware is also bringing back RGB lighting on the touchpad, but that will only show up in models with an RTX 3080 GPU.
Both the 15.6-inch and 17.3-inch displays will come in 1080p options at up to 360 Hz, or 1440p options at 240 Hz with G-Sync on the x15 or at 120 Hz on the x17. Other panel options include ComfortView Plus to reduce blue light, Advanced Optimus and the option for infrared cameras to log in with Windows Hello.
At this size, almost all of the ports, with the exception of the power barrel, have been relegated to the back. Alienware has always kept a number of ports back there, so of all the changes, this one is the least dramatic.
Alienware is keen on giving Razer a run for its money when it comes to making a super-thin gaming laptop. Two of the configurations of Alienware’s new X15 flagship model are actually 15.9mm thick, almost the same as Razer’s just-refreshed 15.8mm-thick Blade 15 Advanced. That’s impressively thin, especially considering that Alienware doesn’t usually try to compete in this realm.
What’s also noteworthy is that, despite its thin build, the X15 looks like it will be a capable machine. Alienware is also announcing a bigger and thicker 17-inch X17 laptop that’s even more powerful. We’ll go into detail on both below.
Let’s start with the X15, which will cost $1,999 for the base model, available starting today. Packed into that entry model is Intel’s 11th Gen Core i7-11800H processor (eight cores and a boost clock speed of up to 4.6GHz), 16GB of RAM clocked at 3,200MHz (but not user-upgradeable due to size constraints), 256GB of fast NVMe storage (which is user-upgradeable, with two slots that support either M.2 2230 or 2280-sized SSDs), and Nvidia’s RTX 3060 graphics chip (90W maximum graphics power, and a base clock speed of 1,050MHz and boost clock of 1,402MHz). A 15.6-inch FHD display with a 165Hz refresh rate, 3ms response time, and up to 300 nits of brightness with 100-percent sRGB color gamut support comes standard.
Alienware hasn’t shared pricing for spec increases, but you can load the X15 with up to an Intel Core i9-11900H processor, a 2TB NVMe M.2 SSD (with a maximum 4TB of dual storage supported via RAID 0), and 32GB of RAM. To top it off, you can put in an RTX 3080 graphics card (the 8GB version, with 110W maximum graphics power, a base clock speed of 930MHz and a boost clock speed of 1,365MHz). The display can be upgraded to a 400-nit QHD G-Sync panel with a 240Hz refresh rate, 2ms response time, and 99-percent coverage of the DCI-P3 color gamut. The X15 has an 87Wh battery and includes a 240W “small form factor” adapter. At its lowest weight, the X15 comes in at five pounds, but it goes up to 5.2 pounds depending on the specs.
All of the X15’s ports, aside from a headphone jack and power input, are located on its back. There’s a USB-A 3.2 Gen 1 port, one USB-C 3.2 Gen 2 port, one Thunderbolt 4 port, a microSD card slot, and an HDMI 2.1 port that will allow the X15 to output a 4K signal at up to 120Hz.
If you’re all about getting a 17.3-inch screen, the X17 starts at $2,099 and has similar starting specs. It has a thicker chassis than the X15 at 20.9mm, and it’s heavier, starting at 6.65 pounds. But that extra heft apparently allows for more graphical and processing power, if you’re willing to pay for it. For example, its RTX 3060 card has a higher maximum graphics power of 130W. This pattern holds for the pricier GPU upgrades, too, especially the RTX 3080 (16GB), which can run at 165W of max graphics power with a boost clock speed of 1,710MHz. In the processor department, you can go up to an Intel Core i9-11980HK. Additionally, you can spec this one with up to 64GB of XMP RAM clocked at 3,466MHz.
As for the screen, there’s an upgrade option to get a 300-nit FHD G-Sync panel with a 360Hz refresh rate and 1ms response time, but you can go all the way up to a 500-nit 4K display with a 120Hz refresh rate and 4ms response time. Like the X15, the X17 has an 87Wh battery, but whether you get a 240W or 330W power supply will depend on the configuration that you buy.
The X17 has all of the same ports as the X15, along with one extra USB-A port, a Mini DisplayPort jack, and a 2.5G ethernet port (the X15 includes a USB-C to ethernet adapter).
Generally speaking, thinner laptops struggle with heat management. But Alienware claims its quad-fan design moves a lot of air, and in X15 and X17 models with RTX 3070 or 3080 chips, it touts a new “Element 31 thermal interface material” that apparently improves the thermal resistance of the internals compared to previous Alienware laptops. We’ll have to see how this fares when we try out a review unit. I’m curious how loud they might get in order to stay cool.
If you’re an Alienware enthusiast, be aware that the company’s mainstay graphics amplifier port is missing. We asked Alienware about this, and it provided this statement to The Verge:
Today’s latest flagship desktop graphics cards achieve graphical power beyond what the Alienware Graphics Amplifiers (as well as other external graphics amplifiers) can successfully port back through PCI (and Thunderbolt) connections. For Alienware customers who are already purchasing high-end graphics configurations, the performance improvements from our Alienware Graphics Amplifier would be limited. While improvements would be noticeable, in many cases it wouldn’t be enough to justify purchasing an external amplifier and flagship graphics card. So instead, we are using that additional space to offer extra ports and thermal headroom which provides a better experience for all gamers purchasing this product.
Wrapping up this boatload of specs, the X15 and X17 each have a 720p Windows Hello webcam, and configurations with the RTX 3080 have an illuminated trackpad that can be customized within Alienware’s pre-installed software. These laptops come standard with Alienware’s X-Series keyboard that has per-key lighting, n-key rollover, anti-ghosting, and 1.5mm of key travel. In the X17, you have the option to upgrade to Alienware’s Cherry MX ultra low-profile mechanical switches, which have a longer 1.8mm key travel.
Lastly, both laptops are available in the “Lunar Light” colorway, which is white on the outside shell and black on the inside.
Intel kicked off Computex 2021 by adding two new flagship 11th-Gen Tiger Lake U-series chips to its stable, including a new Core i7 model that’s the first laptop chip for the thin-and-light segment that boasts a 5.0 GHz boost speed. As you would expect, Intel also provided plenty of benchmarks to show off its latest silicon.
Intel also teased its upcoming Beast Canyon NUCs that are the first to accept full-size graphics cards, making them more akin to a small form factor PC than a NUC. These new machines will come with Tiger Lake processors. Additionally, the company shared a few details around its 5G Solution 5000, its new 5G silicon for Always Connected PCs that it developed in partnership with MediaTek and Fibocom. Let’s jump right in.
Intel 11th-Gen Tiger Lake U-Series Core i7-1195G7 and i5-1155G7
Intel’s two new U-series Tiger Lake chips, the Core i7-1195G7 and Core i5-1155G7, slot in as the new flagships for the Core i7 and Core i5 families. These two processors are UP3 models, meaning they operate in the 12-28W TDP range. These two new chips come with all the standard features of the Tiger Lake family, like the 10nm SuperFin process, Willow Cove cores, the Iris Xe graphics engine, and support for LPDDR4x-4266, PCIe 4.0, Thunderbolt 4 and Wi-Fi 6/6E.
Intel expects the full breadth of its Tiger Lake portfolio to span 250 designs by the holidays from the usual suspects, like Lenovo, MSI, Acer and ASUS, with 60 of those designs using the new 1195G7 and 1155G7 chips.
Intel Tiger Lake UP3 Processors
Processor | Cores / Threads | Graphics (EUs) | Operating Range (W) | Base Clock (GHz) | Single-Core Turbo (GHz) | Max All-Core Turbo (GHz) | Cache (MB) | Graphics Max Freq (GHz) | Memory
Core i7-1195G7 | 4C / 8T | 96 | 12 - 28 | 2.9 | 5.0 | 4.6 | 12 | 1.40 | DDR4-3200, LPDDR4x-4266
Core i7-1185G7 | 4C / 8T | 96 | 12 - 28 | 3.0 | 4.8 | 4.3 | 12 | 1.35 | DDR4-3200, LPDDR4x-4266
Core i7-1165G7 | 4C / 8T | 96 | 12 - 28 | 2.8 | 4.7 | 4.1 | 12 | 1.30 | DDR4-3200, LPDDR4x-4266
Core i5-1155G7 | 4C / 8T | 80 | 12 - 28 | 2.5 | 4.5 | 4.3 | 8 | 1.35 | DDR4-3200, LPDDR4x-4266
Core i5-1145G7 | 4C / 8T | 80 | 12 - 28 | 2.6 | 4.4 | 4.0 | 8 | 1.30 | DDR4-3200, LPDDR4x-4266
Core i5-1135G7 | 4C / 8T | 80 | 12 - 28 | 2.4 | 4.2 | 3.8 | 8 | 1.30 | DDR4-3200, LPDDR4x-4266
Core i3-1125G4* | 4C / 8T | 48 | 12 - 28 | 2.0 | 3.7 | 3.3 | 8 | 1.25 | DDR4-3200, LPDDR4x-3733
The four-core eight-thread Core i7-1195G7 brings the Tiger Lake UP3 chips up to a 5.0 GHz single-core boost, which Intel says is a first for the thin-and-light segment. Intel has also increased the maximum all-core boost rate up to 4.6 GHz, a 300 MHz improvement.
Intel points to additional tuning for the 10nm SuperFin process and tweaked platform design as driving the higher boost clock rates. Notably, the 1195G7’s base frequency declines by 100 MHz to 2.9 GHz, likely to keep the chip within the 12 to 28W threshold. As with the other G7 models, the chip comes with the Iris Xe graphics engine with 96 EUs, but those units operate at 1.4 GHz, a slight boost over the 1165G7’s 1.35 GHz.
The 1195G7’s 5.0 GHz boost clock rate also comes courtesy of Intel’s Turbo Boost Max Technology 3.0. This boosting tech works in tandem with the operating system scheduler to target the fastest core on the chip (‘favored core’) with single-threaded workloads, thus allowing most single-threaded work to operate 200 MHz faster than we see with the 1185G7. Notably, the new 1195G7 is the only Tiger Lake UP3 model to support this technology.
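Turbo Boost Max 3.0 depends on the processor advertising which physical cores are its fastest (‘favored’) ones so the scheduler knows where to park single-threaded work. As a rough illustration, on a typical Linux install with the stock cpufreq driver those favored cores usually report a slightly higher per-core maximum frequency in sysfs; the snippet below is a sketch under that assumption and simply prints nothing on systems that don’t expose the files:

```python
# Print each CPU's advertised maximum frequency (reported in kHz by sysfs); on chips
# with Turbo Boost Max 3.0, the favored cores typically show a higher ceiling.
from pathlib import Path

def max_freq_khz(cpu_dir):
    f = cpu_dir / "cpufreq" / "cpuinfo_max_freq"
    return int(f.read_text()) if f.exists() else None

cpus = sorted(Path("/sys/devices/system/cpu").glob("cpu[0-9]*"),
              key=lambda p: int(p.name[3:]))
for name, khz in sorted(((p.name, max_freq_khz(p)) for p in cpus),
                        key=lambda item: -(item[1] or 0)):
    if khz is not None:
        print(f"{name}: {khz / 1e6:.2f} GHz max turbo")
```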
Surprisingly, Intel says the 1195G7 will ship in higher volumes than the lower-spec’d Core i7-1185G7. That runs counter to our normal expectations that faster processors fall higher on the binning distribution curve — faster chips are typically harder to produce and thus ship in lower volumes. The 1195G7’s obviously more forgiving binning could be the result of a combination of the lower base frequency, which loosens binning requirements, and the addition of Turbo Boost Max 3.0, which only requires a single physical core to hit the rated boost speed. Typically all cores are required to hit the boost clock speed, which makes binning more challenging.
The four-core eight-thread Core i5-1155G7 sees more modest improvements over its predecessor, with boost clocks jumping an additional 100 MHz to 4.5 GHz, and all-core clock rates improving by 300 MHz to 4.3 GHz. We also see the same 100 MHz decline in base clocks that we see with the 1195G7. This chip comes with the Iris Xe graphics engine with 80 EUs that operate at 1.35 GHz.
Intel’s Tiger Lake Core i7-1195G7 Gaming Benchmarks
Intel shared its own gaming benchmarks for the Core i7-1195G7, but as with all vendor-provided benchmarks, you should view them with skepticism. Intel didn’t share benchmarks for the new Core i5 model.
Intel put its Core i7-1195G7 up against the AMD Ryzen 7 5800U, but the chart lists an important caveat here — Intel’s system operates between 28 and 35W during these benchmarks, while AMD’s system runs at 15 to 25W. Intel conducted these tests on the integrated graphics for both chips, so we’re looking at Iris Xe with 96 EUs versus AMD’s Vega architecture with eight CUs.
Naturally, Intel’s higher power consumption leads to higher performance, thus giving the company the lead across a broad spate of triple-A 1080p games. However, this extra performance comes at the cost of higher power consumption and thus more heat generation. Intel also tested using its Reference Validation Platform with unknown cooling capabilities (we assume they are virtually unlimited) while testing the Ryzen 7 5800U in the HP Probook 455.
Intel also provided benchmarks with DirectX 12 Ultimate’s new Sampler Feedback feature. This new DX12 feature reduces memory usage while boosting performance, but it requires GPU hardware-based support in tandem with specific game engine optimizations. That means this new feature will not be widely available in leading triple-A titles for quite some time.
Intel was keen to point out that its Xe graphics architecture supports the feature, whereas AMD’s Vega graphics engine does not. UL has a new 3DMark Sampler Feedback benchmark under development, and Intel used its release candidate to show that Iris Xe graphics offers up to 2.34X the performance of AMD’s Vega graphics with the feature enabled.
Intel’s Tiger Lake Core i7-1195G7 Application Benchmarks
Here we can see Intel’s benchmarks for applications, too, but the same rules apply — we’ll need to see these benchmarks in our own test suite before we’re ready to claim any victors. Again, you’ll notice that Intel’s system operates at a much higher 28 to 35W power range on a validation platform while AMD’s system sips 15 to 25W in the HP Probook 455 G8.
As we’ve noticed lately, Intel now restricts its application benchmarks to features that it alone supports at the hardware level. That includes AVX-512 based benchmarks that leverage the company’s DL Boost suite that has extremely limited software support.
Intel’s benchmarks paint convincing wins across the board. However, be aware that the AI-accelerated workloads on the right side of the chart aren’t indicative of what you’ll see with the majority of productivity software. At least not yet. For now, unless you use these specific pieces of software very frequently in these specific tasks, these benchmarks aren’t very representative of the overall performance deltas you can expect in most software.
In contrast, the Intel QSV benchmarks do have some value. Intel’s Quick Sync Video is broadly supported, and the Iris Xe graphics engine supports hardware-accelerated 10-bit video encoding. That’s a feature that, as Intel rightly points out, isn’t supported on Nvidia’s MX-series GPUs either.
Intel’s support for hardware-accelerated 10-bit encoding does yield impressive results, at least in its benchmarks, showing a drastic ~8X reduction in a Handbrake 4K 10-bit HEVC to 1080P HEVC transcode. Again, bear in mind that this is with the Intel chip running at a much higher power level. Intel also shared a chart highlighting its broad support for various encoding/decoding options that AMD doesn’t support.
Intel Beast Canyon NUC
Intel briefly showed off its upcoming Beast Canyon NUC that will sport 65W H-Series Tiger Lake processors and be the first NUC to support full-length graphics cards (up to 12 inches long).
The eight-litre Beast Canyon certainly looks more like a small form factor system than what we would expect from the traditional definition of a NUC, and as you would expect, it comes bearing the Intel skull logo. Intel’s Chief Performance Strategist Ryan Shrout divulged that the system will come with an internal power supply. Given the size of the unit, that means there will likely be power restrictions for the GPU. We also know the system uses standard air cooling.
Intel is certainly finding plenty of new uses for its Tiger Lake silicon. The company recently listed new 10nm Tiger Lake chips for desktop PCs, including a 65W Core i9-11900KB and Core i7-11700KB, and told us that these chips would debut in small form factor enthusiast systems. Given that Intel specifically lists the H-series processors for Beast Canyon, it doesn’t appear these chips will come in the latest NUC. We’ll learn more about Beast Canyon as it works its way to release later this year.
Intel sold its modem business to Apple back in 2019, leaving a gap in its Always Connected PC (ACPC) initiative. In the interim, Intel has worked with MediaTek to design and certify new 5G modems with carriers around the world. The M.2 modules are ultimately produced by Fibocom. The resulting Intel 5G Solution 5000 is a 5G M.2 device that delivers up to five times the speed of the company’s Gigabit LTE solutions. The solution is compatible with both Tiger and Alder Lake platforms.
Intel claims that it leads the ACPC space with three out of four ACPCs shipping with LTE (more than five million units thus far). Intel’s 5G Solution 5000 is designed to extend that to the 5G arena with six designs from three OEMs (Acer, ASUS and HP) coming to market in 2021. The company says it will ramp to more than 30 designs next year.
Intel says that while it will not be the first to come to market with a 5G PC solution, it will be the first to deliver them in volume, but we’ll have to see how that plays out in the face of continued supply disruptions due to the pandemic.
Update 28/05/2021 3:13 pm PT: Intel has provided us with the following statement that sheds more light on the latest Tiger Lake desktop processors:
“Intel has partnered with customers interested in expanding their product portfolio with enthusiast, small form-factor desktop designs. The Intel Core i9-11900KB processor is a BGA solution built with unique specifications and performance specifically for these designs.”
Update 28/05/2021 11:13 am PT: Intel has updated the product pages for the Tiger Lake B-series processors to confirm that they are indeed desktop processors. We’ve amended the article to reflect the change.
Original Article:
If you think Intel was done with Tiger Lake, then you have another thing coming. The chipmaker has unceremoniously posted four new Tiger Lake chips (via momomo_us) in its ARK database. Apparently, the processors are already launched.
The quartet of new processors is listed under the Tiger Lake family with the 11th Generation moniker. However, they carry a “B” suffix, a designation Intel hasn’t used until now, and we’re unsure what the letter stands for. The product pages for the Core i9-11900KB, Core i5-11500B, Core i7-11700B and Core i3-11100B list the chips as desktop processors. Nevertheless, the “B” is rumored to stand for BGA (Ball Grid Array), which makes sense since Intel doesn’t specify a socket for the B-series parts; these processors may well be soldered to the motherboard via a BGA package.
The core configurations for the listed Tiger Lake processors stick to Intel’s usual segmentation. The Core i9 and Core i7 both come with eight cores and 16 threads, with clock speeds as the main differentiating factor, while the Core i5 and Core i3 SKUs arrive with six-core, 12-thread and four-core, eight-thread setups, respectively. Notably, the Tiger Lake B-series processors also appear to benefit from Thermal Velocity Boost (TVB).
Intel Tiger Lake B-Series Specifications
Processor | Cores / Threads | Base / Boost / TVB Clocks (GHz) | L3 Cache (MB) | TDP (W) | Graphics | Graphics Base / Boost Clocks (MHz) | RCP
Core i9-11900KB | 8 / 16 | 3.3 / 4.9 / 5.3 | 24 | 65 | Intel UHD Graphics | 350 / 1,450 | $417
Core i7-11700B | 8 / 16 | 3.2 / 4.8 / 5.3 | 24 | 65 | Intel UHD Graphics | 350 / 1,450 | ?
Core i5-11500B | 6 / 12 | 3.3 / 4.6 / 5.3 | 12 | 65 | Intel UHD Graphics | 350 / 1,450 | ?
Core i3-11100B | 4 / 8 | 3.6 / 4.4 / 5.3 | 12 | 65 | Intel UHD Graphics | 350 / 1,400 | ?
Since the B-series chips all enjoy a 65W TDP, it stands to reason that they are faster than Intel’s recently announced 45W Tiger Lake-H processors. The 20W margin gives the B-series access to TVB, after all, which can be a difference maker in certain workloads. According to Intel’s specification sheets, only the Core i9-11900KB and Core i7-11700B can be configured down to 55W; the Core i5-11500B and Core i3-11100B have a fixed 65W TDP.
The Core i9-11900KB is the only chip out of the lot that comes with an unlocked multiplier. The octa-core processor appears to feature a 3.3 GHz base clock, 4.9 GHz boost clock and 5.3 GHz TVB boost clock. Despite the Core i9-11900KB and the Core i9-11980HK having the same maximum 65W TDP, the former leverages TVB to boost to 5.3 GHz, 300 MHz higher than the latter.
Comparing tier to tier, we’re seeing higher base clocks on the B-series SKUs; the difference ranges from 400 MHz to 700 MHz, depending on which models you’re looking at. Obviously, TVB gives the B-series higher boost clocks on paper, but if we don’t take TVB into consideration, the improvement is minimal. For example, the Core i7-11700B has a 4.8 GHz boost clock, only 200 MHz higher than the Core i7-11800H, and the Core i5-11500B is rated for a 4.6 GHz boost clock, 100 MHz faster than a Core i5-11400H.
It seems that Intel only made improvements to the processing aspect of the B-series. The iGPU and Tiger Lake’s other features look untouched. Like Tiger Lake-H, the B-series also comes with native support for DDR4-3200 memory and a maximum capacity of 128GB. However, the B-series seems to offer less memory bandwidth. For comparison, Tiger Lake-H delivers up to 51.2 GBps of maximum memory bandwidth, while the B-series tops out at 45.8 GBps.
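That 51.2 GBps ceiling falls straight out of the DDR4 spec: each channel is 64 bits (8 bytes) wide, so peak bandwidth is simply the transfer rate multiplied by 8 bytes and the channel count. A quick sanity check (the 45.8 GBps B-series figure is Intel’s own number, not something derived here):

```python
# Peak theoretical DDR4 bandwidth: transfers per second x 8 bytes per 64-bit channel x channels.
def peak_bandwidth_gbps(data_rate_mts, channels=2, bytes_per_transfer=8):
    return data_rate_mts * bytes_per_transfer * channels / 1000

print(peak_bandwidth_gbps(3200))  # 51.2 -> matches Tiger Lake-H's dual-channel DDR4-3200 spec
```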
It’s unknown what Intel’s intentions are for the Tiger Lake B-series lineup. Given the 65W TDP, it’s reasonable to think that Intel launched the new processors to compete with AMD’s Ryzen 5000G (codename Cezanne) desktop APUs that will eventually make their way to the DIY market.
(Pocket-lint) – The ‘Style Edition’ of Acer’s Predator Triton series returns in a 16-inch format, bringing gaming/creator levels of performance into an altogether more discreet, less flashy clamshell than the ‘gaming norm’.
The Predator Triton 500 SE arrives hot on the heels of the smaller-scale Triton 300 SE becoming available to buy. So if the smaller model doesn’t quite pack enough of a punch then is the larger device worth waiting for – and worth saving up for?
Design & Display
16-inch Mini LED panel
2560 x 1600 resolution (WQXGA)
1600 nits brightness maximum
240Hz refresh rate
16:10 aspect ratio
Built-in fingerprint sensor
Thickness: 19.9mm
DTS:X Ultra audio
The 500 SE is, as its 16-inch diagonal panel would dictate, a larger machine than the original 14-inch 300 SE. Not only that, the 500 SE is a rather more developed device, its screen embodying the latest Mini LED technology for a much brighter experience.
Mini LED – a technology used by some high-end TVs – packs many more, smaller LEDs behind the panel for a more intense brightness, simply because there are far more illuminators than earlier backlight technologies could cram into the same space.
In the case of the Predator Triton 500 SE that means a maximum of 1600 nits – about as bright as the most capable flagship phones can manage. It’s brighter than most high-end OLED tellies, too, so this panel has the guns to really deliver a strong image.
Not only that, it’s a WQXGA resolution, bringing greater sharpness potential to your games, movies and content. All across a 16:10 aspect ratio, which is versatile for all kinds of content and not ‘tall’ like some older laptops.
The screen, then, is the Triton 500 SE’s main event, no doubt. But the selling point of this laptop is its design – the idea being that its silvery colour is subtle enough to not scream ‘gaming laptop!‘. The lid has a simple raised Predator symbol in the top corner, with no in-your-face text or other logo prints anywhere else to be seen.
However, just as we said of the smaller-scale Style Edition original, the Triton 500 SE’s panel just feels a bit, well, flimsy. There’s too much flex to it; the lid looks and feels too plasticky – when it really shouldn’t at this end of the market.
It’s all pretty discreet, although switch on the RGB lighting under the keyboard and there’s no hiding it. And you only need to look at the large vents to the rear to know that it’s ready to pass a lot of air through for the sake of cooling. Still, at 19.9mm thick, it’s really not that massive for such a device.
11th Gen Intel Core i7 / Core i9 processor options
Nvidia RTX 3070 / 3080 GPU options
Up to 4TB PCIe storage / 64GB RAM
5th Gen AeroBlade fan cooling tech
Turbo button for overclocking
Killer Wi-Fi 6 (AX1650i)
Predator Sense
In terms of power available the Triton 500 SE delivers a lot more than the 300 SE can muster. The 16-inch model packs in 11th Gen Intel Core i7 and Nvidia RTX 3070 for its circa two-grand asking price (£1,999 in the UK). That’s nearer three-grand (£2,999 in the UK) if you opt for the Core i9 and RTX 3080. No small chunk of change, more just a big chunk of awesome power.
All of that obviously requires more cooling than your average laptop, hence those big vents to the back and sides of the device. We’ve found the fans kick in readily, though, meaning there’s quite a bit of potential noise. There are additional fan controls within the Predator Sense software – which has its own dedicated activation button – to take extra command, including maxing them out during gaming sessions.
There’s also a dedicated Turbo button to the top left above the keyboard that pushes an overclock – and that’ll send those fans into a frenzy. The cooling setup is called AeroBlade 3D – now in its fifth generation – a system that uses the fans to pull air in over the hottest components (CPU, GPU, RAM) and hold air in chambers to aid the cooling process.
We’ve not had time to test this laptop under full pressure, having merely seen it at a pre-launch Acer event to get a sense of how well it will handle serious tasks. Being a gaming laptop with Intel architecture, we wouldn’t assume the battery will last especially long – and you’ll need it plugged into the wall to get maximum performance anyway – but Acer does claim it can manage up to 12 hours in altogether more work-like conditions.
Interestingly, there are some pretty serious ports built into the design, from a dedicated Ethernet port for the best connectivity to a full-size SD card reader – a really rare sight on laptops these days. As for speeds, the USB-C ports are Thunderbolt 4, so there’s certainly no slack there – a bit like the Predator Triton 500 SE’s overall ethos, really.
First Impressions
If Acer’s original 14-inch ‘Style Edition’ Predator Triton didn’t quite deliver on scale or power, then the Predator Triton 500 SE is here to up the ante. It’s got a bigger, brighter and meaner screen, plus power options that are far more considerable – but then so is the price tag, so you’ll need to get saving.
The design – pretty much pitched as ‘gaming laptop for the business person’ – is more discreet than your gaming laptop average, but there’s still all the RGB lighting, cooling vents, ports and Turbo overclocking that you could want.
It’s good to see something a bit different to diversify the gaming laptop and creators market. Although, as we said of the original SE model, the 500 SE ought to up its game when it comes to screen sturdiness – especially at this price point.
Writing by Mike Lowe.