(Pocket-lint) – Samsung revealed two new models of the Samsung Galaxy Note earlier in 2020 – the Galaxy Note 20 Ultra and the regular Galaxy Note 20.
As fans of the series will recall, in 2019, Samsung offered two sizes of this phone, taking the Note 10 smaller while pushing the Note 10+ as the larger size model. In reality, it was the Note 10+ that was the true successor to the Note crown, while the “normal” model slipped into a smaller and more affordable position.
That gap between the Note 20 and the Ultra model has become wider in 2020. Here’s how they compare.
Design
Note 20 Ultra: 164.8 x 77.2 x 8.1mm, 208g, Gorilla Glass Victus back
Note 20: 161.6 x 75.2 x 8.3mm, 192g, polycarbonate back
Never has so much been written about design when it comes to two phones in the same family. In the past, Samsung has often offered much the same design between regular and plus models. That changed with the launch of the S20 Ultra – and the Note 20 Ultra is different to the regular Note 20 too.
While the difference in size is to be expected because the displays are a different size, the design itself is quite different too. The Note 20 Ultra has flattened ends and squared corners, while the Note 20 has softer curved corners.
The Note 20 also moves to a plastic back – or “glasstic” as Samsung calls it – rather than glass. This is quite a move, considering that Samsung has been using glass for its rear panels for some time. It also means the Note 20 is positioned quite differently to the Note 20 Ultra with the Ultra being the far more premium model.
Display
Note 20 Ultra: 6.9in, 3088 x 1440 pixel (496ppi), 120Hz
Note 20: 6.7in, 2400 x 1080 pixels (393ppi), 60Hz
While the displays are a different size, there’s a big difference in technology too. The Ultra gets a 6.9-inch AMOLED display with a 120Hz refresh rate and Quad HD+ resolution. It’s pretty much as flagship as you can get.
The Note 20 has the same display as the Note 10 Lite: a 6.7-inch Full HD+ AMOLED at 60Hz, and it's flat – so it misses Samsung's signature flagship curved edges.
It's a pretty big difference, although there will be many who don't mind the lower resolution or refresh rate. What's important is that it still offers the S Pen features on a big display – and that's the hallmark of the Galaxy Note family.
What is different from previous years is that the Note 20 doesn't get the smaller display that the Note 10 offered, which was 6.3 inches.
Hardware
Note 20 Ultra: Qualcomm SD865 Plus or Exynos 990, 8GB/12GB RAM, 128GB/256GB/512GB storage, 4500mAh
Note 20: Qualcomm SD865 Plus or Exynos 990, 8GB RAM, 128GB/256GB storage, 4300mAh
When it comes to the core hardware, we return to some sort of parity between the two Note models. Both are powered either by the Qualcomm Snapdragon 865 Plus or the Exynos 990, obviously using Qualcomm in some regions and Exynos in others as we’ve previously seen from Samsung.
The Ultra comes with 8GB of RAM in the LTE model and 12GB RAM in the 5G model, while the Note 20 sticks to 8GB in both, reinforcing a different positioning of these phones. Storage options differ depending on LTE or 5G too.
The Note 20 LTE comes in one model with 256GB storage, while the 5G model is offered in 128GB and 256GB options, all region dependent. The Note 20 Ultra comes in 128GB, 256GB and 512GB storage options in the 5G model and 256GB and 512GB storage options in the LTE model, again region dependent. Only the Ultra offers microSD support for storage expansion, like the Note 10+ did.
When it comes to batteries, the Note 20 Ultra has a 4500mAh capacity while the Note 20 has a 4300mAh capacity. Despite the smaller cell, the Note 20 should still deliver good stamina, because its lower-resolution, 60Hz display places lighter demands on the battery.
Cameras
Note 20 Ultra
Main: 108MP f/1.8
Ultra-wide: 12MP f/2.2
Zoom: 12MP f/3.0 5x, 50X SpaceZoom
Note 20
Main: 12MP f/1.8
Ultra-wide: 12MP f/2.2
Zoom: 64MP f/2.0 3x, 30X SpaceZoom
If you're a Samsung fan, then the cameras in these respective devices might look familiar. At first glance they are similar to the load-outs of the S20 Ultra and S20 models, although the 48-megapixel zoom camera of the S20 Ultra has been swapped for a 12-megapixel zoom camera with 5x optical, now giving you 50X Space Zoom rather than the 100X zoom of the S20 Ultra.
The regular Note 20 also gets a respectable camera load-out. It has a system very similar to the Galaxy S20, with a 12-megapixel main sensor with big pixels. It also offers zoom, but only 30X digital – which is 3x optical. It uses the 64-megapixel sensor here to enable the 8K video capture (as it did on the S20), while the Ultra uses the 108-megapixel sensor for 8K.
Both phones also offer an ultra-wide camera, which is the same on each. They also both have the same 10-megapixel front selfie camera.
What's clear here is that this is an area where Samsung doesn't appear to be dropping the Note 20 too far. Sure, it's not the same as the Ultra, but then increasing the resolution just so you can combine pixels back to 12-megapixels doesn't automatically make for a better camera – a lot comes down to the computation behind the lens, and the two phones are pretty much the same in that regard.
Summing up
The two Galaxy Note 20 models are radically different this year, with Samsung seemingly aiming to open up a wider gap between these two devices than it did in 2019. That might be a reflection of how the Galaxy Note 10, or the Galaxy Note 10 Lite, was received.
The Note 20 picks up some of what the Note 10 Lite offered but sticks to some of the premium aspects in the core hardware and the camera. This is reflected in the price of the handset somewhat. Even without the top specs, that larger display is much more useful for the S Pen.
The Note 20 Ultra is rather more predictable. It is the true flagship with a high price to match and the best of everything Samsung has to offer. At its heart, that's what the Galaxy Note should be – but with so many big-screen, affordable phones around, we suspect that's what's driven Samsung to make the regular Note 20 a little more ordinary.
(Pocket-lint) – Samsung revealed the Galaxy Note 20 Ultra on 5 August 2020, alongside the Note 20, but it’s the Ultra model that has the top specifications, just as the S20 Ultra does for the Galaxy S range.
How does the top Note model compare to the top Galaxy S models though? Should you go with the Galaxy S20 Ultra, Galaxy S20+ or the Galaxy Note 20 Ultra if you’re in the market for the best Samsung has to offer?
Here are the similarities and differences between the Galaxy S20 Ultra, Galaxy S20+ and the Note 20 Ultra to help you decide which is right for you.
Design
Note 20 Ultra: 164.8 x 77.2 x 8.1mm, 208g
S20 Ultra: 166.9 x 76 x 8mm, 220g
S20+: 161.9 x 73.7 x 7.8mm, 186g
The Samsung Galaxy Note 20 Ultra, Galaxy S20 Ultra and Galaxy S20+ all feature metal and glass designs, with curved edges, centralised punch hole cameras at the top of their displays and pronounced camera housings in the top left corner on the rear.
As you might expect, the Note 20 Ultra has a slightly different look to the Galaxy S20 Ultra and the Galaxy S20+. It is squarer in its approach, has a built-in S Pen and it features a more prominent camera system on the rear. Meanwhile, the Galaxy S20 Ultra and S20+ are almost identical, with rounder edges than the Note, though the S20 Ultra has a wider camera housing on the rear.
All devices have microSD slots, USB Type-C charging ports, no 3.5mm headphone jack and they all have an IP68 water and dust resistance. The Galaxy S20 Ultra is the largest and heaviest, followed by the Note 20 Ultra and then the Galaxy S20+.
Display
Note 20 Ultra: 6.9-inch, AMOLED, 3088 x 1440 (496ppi), 120Hz
S20 Ultra: 6.9-inch, AMOLED, 3200 x 1440 (509ppi), 120Hz
S20+: 6.7-inch, AMOLED, 3200 x 1440 (524ppi), 120Hz
The Samsung Galaxy Note 20 Ultra, S20 Ultra and S20+ all have Infinity-O Dynamic AMOLED displays with HDR10+ certification and 120Hz refresh rates, though the Note 20 Ultra is said to be brighter.
The Galaxy Note 20 Ultra and S20 Ultra both have 6.9-inch screens, while the S20+ is a little smaller at 6.7 inches. All have a Quad HD+ resolution, making the S20+ the sharpest in terms of pixels per inch, but in reality the difference is not something the human eye would be able to see easily, if at all.
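As an aside, those pixel-density figures follow directly from the resolution and panel diagonal; here's a quick Python sketch of the arithmetic using the sizes quoted above (small differences from the quoted ppi come down to rounding of the diagonal):

```python
import math

def ppi(width_px: int, height_px: int, diagonal_inches: float) -> float:
    """Pixel density = diagonal length in pixels / diagonal length in inches."""
    return math.hypot(width_px, height_px) / diagonal_inches

# Figures from the spec lines above.
print(round(ppi(3088, 1440, 6.9)))  # Note 20 Ultra -> ~494 (quoted 496)
print(round(ppi(3200, 1440, 6.9)))  # S20 Ultra     -> ~509
print(round(ppi(3200, 1440, 6.7)))  # S20+          -> ~524
```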
Hardware
The Samsung Galaxy Note 20 Ultra, S20 Ultra and S20+ all feature an under-display fingerprint sensor, come in 5G and LTE variants, and all have microSD support for storage expansion.
The Note 20 Ultra runs on the Qualcomm Snapdragon 865 Plus or the Exynos 990, while the S20 Ultra and the S20+ run on the Qualcomm Snapdragon 865 or the Exynos 990, region dependent.
In terms of RAM and storage, the Note 20 Ultra 5G has 12GB of RAM with storage options of 128GB, 256GB and 512GB. The 4G model has 8GB of RAM with storage options of 256GB and 512GB. Not all variants will be available in all countries though.
The S20 Ultra meanwhile, offers 16GB of RAM with 512GB storage in its 5G model, while the 4G model has 12GB of RAM and 128GB or 256GB of storage. The S20+ has 12GB of RAM in its 5G model with storage options of 128GB, 256GB and 512GB, like the Note 20 Ultra. The 4G model has 8GB of RAM and 128GB storage.
The Note 20 Ultra offers S Pen functionality too, as well as the ability to connect to DeX wirelessly, and it has Ultra Wideband technology on board.
Battery
Note 20 Ultra: 4500mAh
S20 Ultra: 5000mAh
S20+: 4500mAh
The Samsung Galaxy Note 20 Ultra and Galaxy S20+ have a 4500mAh battery under their hoods, while the S20 Ultra has a 5000mAh battery capacity. All offer similar overall performance.
All devices offer wireless charging and fast charging.
Cameras
The Samsung Galaxy Note 20 Ultra has a triple rear camera system, comprised of a 12-megapixel ultra wide-angle, a 108-megapixel wide-angle and a 12-megapixel telephoto. There is also a laser autofocus sensor on board to help achieve the 5x optical zoom and 50X Space Zoom.
The S20 Ultra, meanwhile, has a 12-megapixel ultra-wide, a 108-megapixel wide-angle and a 48-megapixel telephoto sensor. It has a DepthVision sensor rather than laser autofocus, and it is capable of 100X zoom, although the long zoom isn't really that useful, so it's understandable why Samsung dropped it for the Note 20 Ultra.
The S20+ has a 12-megapixel ultra-wide, a 12-megapixel wide-angle and a 64-megapixel telephoto, and it offers 30X zoom. The large telephoto sensors on the Galaxy S20 models are really there to support 8K video.
The Note 20 Ultra has a 10-megapixel front camera, as does the S20+, while the S20 Ultra has a 40-megapixel front camera.
Pricing and conclusion
The Samsung Galaxy S20 Ultra started at £1199 or $1399.99 when it first launched. The S20+ started at £999 or $1199.99 when it first launched. The Note 20 Ultra starts at £1179 or $1299, placing it in between the S20 Ultra and the S20+.
On paper, the Galaxy S20 Ultra remains the device with the top hardware specs, offering more RAM and a larger battery capacity than the Note 20 Ultra, but you miss out on the extra power from the SD865 Plus processor (in those regions), the S Pen and the Ultra Wideband technology. If it’s the S Pen you want, then it’s the Note that you choose.
The Galaxy S20+, meanwhile, offers plenty of reasons to buy it, including the same battery capacity as the Note 20 Ultra and a similar hardware loadout. It is also a little cheaper if the S Pen doesn't bother you, and the only area where it really can't match the S20 Ultra is zoom performance.
(Pocket-lint) – Samsung announced the Galaxy S20 Fan Edition, or Galaxy S20 FE, in September 2020. It's a lighter take on the Galaxy S20 family, offering many of the important specs, but making a few cuts so it's a little more affordable.
It goes head-to-head with the Samsung Galaxy S20+, which was our pick of the previous models, and the device that potentially has the greatest appeal. Has it now been undercut?
Let’s take a look at how these phones compare.
Prices and availability
Galaxy S20 FE: £599 (4G), £699 (5G)
Galaxy S20+: £999 (5G)
Price comparisons are a little tricky given that there are so many different versions of the Galaxy S20+ globally, but the easy version is this: the Galaxy S20 FE is cheaper, no matter which you choose.
Even with discounts from the original launch price, the S20+ is still more expensive than the S20 FE. Not all regions get all models of the S20+ and not all regions get all versions of the S20 FE, but whichever way you cut it, the FE costs less.
Build and dimensions
Samsung Galaxy S20 FE: 159.8 x 74.5 x 8.4mm, 190g
Samsung Galaxy S20+: 161.9 x 73.7 x 7.8mm, 186g
The sizes of the S20+ and the S20 FE are surprisingly close. There are a few millimetres' difference, with the FE being slightly shorter – explained by the smaller display – and slightly wider, likely because the display is flat. It's also a little thicker, not that you'd notice. There are wider bezels on the S20 FE, again most likely due to the flat display, so it doesn't look quite as premium as the S20+.
Both of these phones offer IP68 waterproofing, both offer stereo speakers supporting Dolby Atmos and both have a similar camera arrangement on the back of the phone.
The major difference is that the rear of the S20 FE is glasstic – plastic – rather than the glass of the S20+. This might make it more durable, and it might mean it doesn't feel as premium, but it does have a matte finish, so it's less likely to gather fingerprints.
The Galaxy S20 FE also comes in a range of colours – blue, red, lavender, mint, white, orange – whereas the Galaxy S20+ is all about the serious grey, black and light blue models.
In reality, there’s very little difference.
Display
Samsung Galaxy S20 FE: 6.5-inch, 120Hz, AMOLED, Full HD+
Samsung Galaxy S20+: 6.7-inch, 120Hz, AMOLED, Quad HD+
When it comes to the display, both use the same type of panel, AMOLED in both cases with a punch hole for the front camera. Technically, Samsung says that the S20 FE has a Super AMOLED X2, while the Galaxy S20+ has a Dynamic AMOLED X2.
The real difference is in the resolution. The Samsung Galaxy S20+ offers Quad HD+, that's 3200 x 1440 pixels (524ppi), while the Galaxy S20 FE offers 2400 x 1080 pixels (404ppi). Technically, the S20+ can render finer detail – but Samsung's default setting on the S20+ is Full HD+ anyway and many people never use the full resolution, so it's arguably no big loss.
Both phones also offer 120Hz, and that's something fans do want, so seeing it in the cheaper device is welcome.
As we mentioned above, the display on the Galaxy S20 FE is slightly smaller at 6.5 inches, a small reduction of 0.2 inches over the S20+ which won't make a huge difference in reality. The S20 FE is also flat, so there are no curves to the edges.
This might actually be a benefit: although curves look good and make it slightly easier to grip a large phone, they can lead to some reduction in touch sensitivity towards the edges. Give us a flat display for gaming any day of the week.
Hardware
When it comes to the core hardware, the story really reveals itself. The headline is that the Galaxy S20 FE 5G is powered by the Qualcomm Snapdragon 865 globally. The 4G version will use the Exynos 990, but it won't be available in all regions (such as the US).
The Galaxy S20+ is much more complicated: there are 4G and 5G versions with both the Snapdragon 865 and the Exynos 990. In Europe, it's the Exynos 990 version that's been available – so the S20 FE is a chance to get a Qualcomm-powered Samsung device, but make sure you buy the 5G version.
The S20 FE also sees a reduction in RAM to 6GB, paired with 128GB of storage. In reality, the reduction in RAM is unlikely to have a big impact on how the phone runs. MicroSD storage expansion is supported on both devices.
They both also have the same battery capacity at 4500mAh. The S20 FE has slightly greater endurance thanks to its lower-spec display, but there isn't much difference.
Cameras
Galaxy S20 FE:
Main: 12MP, f/1.8, 1.8µm, OIS
Tele: 8MP, f/2.4, 1.0µm, OIS, 3x
Ultra-wide: 12MP, f/2.2, 1.12µm
Front: 32MP, f/2.2, 0.8µm, FF
Galaxy S20+:
Main: 12MP, f/1.8, 1.8µm, OIS
Tele: 64MP, f/2.0, 0.8µm, OIS, 3x
Ultra-wide: 12MP, f/2.2, 1.4µm
DepthVision
Front: 10MP, f/2.2, 1.22µm, AF
There’s a lot in common between the Samsung Galaxy S20+ and the Galaxy S20 FE cameras. Broadly they have the same selection, based around the same main camera. That’s a 12-megapixel camera with nice large pixels to absorb lots of light without the nonsense of pixel combining that’s popular elsewhere.
It's joined by ultra-wide and telephoto cameras, but here the specs differ. Starting with the telephoto, the big switch is from a 64-megapixel sensor to an 8-megapixel sensor. It's a totally different hardware approach, but both offer 3x optical zoom with OIS, and both then add 10x digital zoom on top for Samsung's 30X Space Zoom feature.
Why the switch? The 64-megapixel camera on the S20+ also handles 8K video capture, so we suspect the change comes down to the S20 FE not offering 8K capture.
The ultra-wide is also a different camera, and with the switch to a sensor with smaller pixels in the S20 FE, it doesn't quite match the performance of the S20+.
The S20+ has a decent 10-megapixel selfie camera. For some reason, Samsung has moved to a 32-megapixel front camera on the S20 FE. There doesn't seem to be any logic to this move that we can see, and it's fixed focus rather than autofocus, so it's a little weaker.
Finally, the S20+ also has a DepthVision sensor, but we don’t really think it does very much, so it won’t be missed on the S20 FE.
Conclusion
Given that the Samsung Galaxy S20 FE is the cheaper phone, it has a lot going for it. So what do you actually miss out on? There are some small camera changes, although with the same main camera, the experience is going to be broadly the same.
There are some minor spec changes, like less RAM, although that doesn't have a huge impact in use; meanwhile, the option for those in Europe to get a Qualcomm device instead of Exynos is likely to be popular.
There are changes in the display: the flat display may actually suit some, and again the reduction in resolution will only bother some people, making very little difference to things like games or media consumption.
Finally, there’s the plastic back. Sure, you won’t have the most premium finish, but at the same time, you’ll have more cash in your pocket still. Considering this, the Samsung Galaxy S20 FE looks like a win to us, a respectable push back against the rising power of mid-range devices and the antidote to over-specced and over-priced flagships.
The MSI GeForce RTX 3090 Suprim X is the company’s new flagship air-cooled graphics card; it also introduces the new Suprim brand extension denoting the highest grade of MSI engineering and performance tuning. Last week, we brought you our review of the RTX 3080 Suprim X and today, we have with us its bigger sibling. Actually, both the RTX 3080 Suprim X and RTX 3090 Suprim X are based on a nearly identical board design, but the silicon underneath—the mighty RTX 3090—and support for NVLink SLI is what’s new. The Suprim X series is positioned a notch above the company’s Gaming X Trio and likely a replacement for the company’s Gaming Z brand, which probably had too many similarities in board design to the Gaming X to warrant a price increase. MSI is also giving its product stack a new class of graphics cards to compete against the likes of the EVGA air-cooled FTW3 Ultra. It’s also taking a crack at NVIDIA’s Founders Edition in the aesthetics department.
With the RTX 30-series “Ampere,” NVIDIA reshaped the upper end of its GeForce GPU family. The RTX 3080 is designed to offer premium 4K UHD gaming with raytracing and is already being referred to as the company’s flagship gaming graphics card. NVIDIA has been extensively comparing the RTX 3080 to the RTX 2080 Ti, which it convincingly beats. The new RTX 3090, on the other hand, is what NVIDIA is positioning as its new “halo” product with comparisons to the $2,500 TITAN RTX, while being $1,000 cheaper, starting at $1,500. Both the RTX 3080 and RTX 3090 share a common piece of silicon, with the RTX 3090 almost maxing it out, while the RTX 3080 is quite cut down.
The GeForce Ampere graphics architecture represents the 2nd generation of the company’s RTX technology, which combines conventional raster 3D graphics with certain real-time raytraced elements, such as lighting, shadows, reflections, ambient-occlusion, and global-illumination, to radically improve realism in games. Processing these in real time requires fixed-function hardware as they’re extremely taxing on programmable shaders. The GeForce Ampere architecture hence combines the new Ampere CUDA core, which can handle concurrent FP32+INT32 math operations, significantly increasing performance over past generations; the new 2nd generation RT core, which in addition to double the ray intersection and BVH performance over Turing RT cores offers new hardware to accelerate raytraced motion blur; and the 3rd generation Tensor core, which leverages the sparsity phenomenon in AI deep-learning neural networks to increase AI inference performance by an order of magnitude over the previous generation.
NVIDIA is equipping the RTX 3090 with a mammoth 24 GB of video memory and targets it at creators as well, not just gamers. Creators can pair it with NVIDIA’s feature-rich GeForce Studio drivers, while gamers can go with GeForce Game Ready drivers. That said, the RTX 3090 isn’t strictly a creator’s card, either. NVIDIA is taking a stab at the new 8K resolution for gaming, which is four times the pixels of 4K and sixteen times Full HD—not an easy task even for today’s GPUs. The company hence innovated the new 8K DLSS feature, which leverages AI super-resolution to bring higher fidelity gaming than previously thought possible, for 8K.
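The "four times the pixels of 4K and sixteen times Full HD" claim is simple arithmetic; a quick sketch:

```python
# Pixel counts for common resolutions (width x height).
resolutions = {
    "Full HD": (1920, 1080),
    "4K UHD":  (3840, 2160),
    "8K UHD":  (7680, 4320),
}
pixels = {name: w * h for name, (w, h) in resolutions.items()}

print(pixels["8K UHD"])                      # 33177600 pixels (~33 MP per frame)
print(pixels["8K UHD"] / pixels["4K UHD"])   # 4.0
print(pixels["8K UHD"] / pixels["Full HD"])  # 16.0
```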
As we mentioned earlier, the RTX 3090 is based on the 8 nm “GA102” silicon, nearly maxing it out. All but one of the 42 TPCs (84 streaming multiprocessors) are enabled, resulting in a CUDA core count of 10,496, along with 328 Tensor cores, 82 RT cores, 328 TMUs, and 112 ROPs. To achieve 24 GB, the RTX 3090 maxes out the 384-bit wide memory bus on the “GA102” and uses the fastest 19.5 Gbps GDDR6X memory, which gives the card an astounding 936 GB/s of memory bandwidth.
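That bandwidth figure is just the per-pin data rate multiplied by the bus width; here's a quick check, with the RTX 3080's publicly quoted 19 Gbps over a 320-bit bus included for comparison:

```python
def peak_bandwidth_gb_s(data_rate_gbps: float, bus_width_bits: int) -> float:
    """Peak memory bandwidth in GB/s: per-pin data rate (Gbps) x bus width / 8 bits per byte."""
    return data_rate_gbps * bus_width_bits / 8

print(peak_bandwidth_gb_s(19.5, 384))  # RTX 3090: 936.0 GB/s
print(peak_bandwidth_gb_s(19.0, 320))  # RTX 3080: 760.0 GB/s
```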
The MSI GeForce RTX 3090 Suprim X is designed to give NVIDIA’s RTX 3090 Founders Edition a run for its money in a beauty contest, with lavish use of brushed aluminium in the construction of the cooler shroud, perfect symmetry throughout the card, and sharp edges beautifully finished off with RGB LED elements. The amount of illumination on this card is similar to the Gaming X Trio, but more tastefully designed. The RTX 3090 Suprim X also features MSI’s highest factory overclock for the RTX 3090, with the core ticking at 1860 MHz (vs. 1695 MHz reference and 1785 MHz on the Gaming X Trio). MSI is pricing the RTX 3090 Suprim X at $1,750, a $250 premium over the NVIDIA baseline pricing and $160 pricier than the RTX 3090 Gaming X Trio. In this review, we find out if it’s worth spending the extra money on this card over the Gaming X Trio, or even NVIDIA’s Founders Edition.
The AMD Radeon RX 6800 XT and Radeon RX 6800 have arrived, joining the ranks of the best graphics cards and making some headway into the top positions in our GPU benchmarks hierarchy. Nvidia has had a virtual stranglehold on the GPU market for cards priced $500 or more, going back to at least the GTX 700-series in 2013. That’s left AMD to mostly compete in the high-end, mid-range, and budget GPU markets. “No longer!” says Team Red.
Big Navi, aka Navi 21, aka RDNA2, has arrived, bringing some impressive performance gains. AMD also finally joins the ray tracing fray, both with its PC desktop graphics cards and the next-gen PlayStation 5 and Xbox Series X consoles. How do AMD’s latest GPUs stack up to the competition, and could this be AMD’s GPU equivalent of the Ryzen debut of 2017? That’s what we’re here to find out.
We’ve previously discussed many aspects of today’s launch, including details of the RDNA2 architecture, the GPU specifications, features, and more. Now, it’s time to take all the theoretical aspects and lay some rubber on the track. If you want to know more about the finer details of RDNA2, we’ll cover that as well. If you’re just here for the benchmarks, skip down a few screens because, hell yeah, do we have some benchmarks. We’ve got our standard testbed using an ‘ancient’ Core i9-9900K CPU, but we wanted something a bit more for the fastest graphics cards on the planet. We’ve added more benchmarks on both Core i9-10900K and Ryzen 9 5900X. With the arrival of Zen 3, running AMD GPUs with AMD CPUs finally means no compromises.
Update: We’ve added additional results to the CPU scaling charts. This review was originally published on November 18, 2020, but we’ll continue to update related details as needed.
AMD Radeon RX 6800 Series: Specifications and Architecture
Let’s start with a quick look at the specifications, which have been mostly known for at least a month. We’ve also included the previous generation RX 5700 XT as a reference point.
| Graphics Card | RX 6800 XT | RX 6800 | RX 5700 XT |
| --- | --- | --- | --- |
| GPU | Navi 21 (XT) | Navi 21 (XL) | Navi 10 (XT) |
| Process (nm) | 7 | 7 | 7 |
| Transistors (billion) | 26.8 | 26.8 | 10.3 |
| Die size (mm^2) | 519 | 519 | 251 |
| CUs | 72 | 60 | 40 |
| GPU cores | 4608 | 3840 | 2560 |
| Ray Accelerators | 72 | 60 | N/A |
| Game Clock (MHz) | 2015 | 1815 | 1755 |
| Boost Clock (MHz) | 2250 | 2105 | 1905 |
| VRAM Speed (MT/s) | 16000 | 16000 | 14000 |
| VRAM (GB) | 16 | 16 | 8 |
| Bus width (bits) | 256 | 256 | 256 |
| Infinity Cache (MB) | 128 | 128 | N/A |
| ROPs | 128 | 96 | 64 |
| TMUs | 288 | 240 | 160 |
| TFLOPS (boost) | 20.7 | 16.2 | 9.7 |
| Bandwidth (GB/s) | 512 | 512 | 448 |
| TBP (watts) | 300 | 250 | 225 |
| Launch Date | Nov. 2020 | Nov. 2020 | Jul. 2019 |
| Launch Price | $649 | $579 | $399 |
When AMD fans started talking about “Big Navi” as far back as last year, this is pretty much what they hoped to see. AMD has just about doubled every important aspect of its architecture, plus added a huge amount of L3 cache and Ray Accelerators to handle ray/triangle intersection calculations for ray tracing. Clock speeds are also higher, and — spoiler alert! — the 6800 series cards actually exceed the Game Clock and can even go past the Boost Clock in some cases. Memory capacity has doubled, ROPs have doubled, TFLOPS has more than doubled, and the die size is also more than double.
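The TFLOPS row in the table above follows from shader count and boost clock, at two FP32 operations per shader per clock (one fused multiply-add); a quick sanity check:

```python
def fp32_tflops(shaders: int, boost_clock_mhz: int) -> float:
    """Peak FP32 throughput: shaders x clock (MHz) x 2 ops/clock, converted to TFLOPS."""
    return shaders * boost_clock_mhz * 2 / 1e6

print(round(fp32_tflops(4608, 2250), 1))  # RX 6800 XT -> 20.7
print(round(fp32_tflops(3840, 2105), 1))  # RX 6800    -> 16.2
print(round(fp32_tflops(2560, 1905), 1))  # RX 5700 XT -> 9.8 (the table rounds to 9.7)
```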
Support for ray tracing is probably the most visible new feature, but RDNA2 also supports Variable Rate Shading (VRS), mesh shaders, and everything else that’s part of the DirectX 12 Ultimate spec. There are other tweaks to the architecture, like support for 8K AV1 decode and 8K HEVC encode. But a lot of the underlying changes don’t show up as an easily digestible number.
For example, AMD says it reworked much of the architecture to focus on a high speed design. That’s where the greater than 2GHz clocks come from, but those aren’t just fantasy numbers. Playing around with overclocking a bit — and the software to do this is still missing, so we had to stick with AMD’s built-in overclocking tools — we actually hit clocks of over 2.5GHz. Yeah. I saw the supposed leaks before the launch claiming 2.4GHz and 2.5GHz and thought, “There’s no way.” I was wrong.
AMD’s cache hierarchy is arguably one of the biggest changes. Besides a shared 1MB L1 cache for each cluster of 20 dual-CUs, there’s a 4MB L2 cache and a whopping 128MB L3 cache that AMD calls the Infinity Cache. It also ties into the Infinity Fabric, but fundamentally, it helps optimize memory access latency and improve the effective bandwidth. Thanks to the 128MB cache, the framebuffer mostly ends up being cached, which drastically cuts down memory access. AMD says the effective bandwidth of the GDDR6 memory ends up being 119 percent higher than what the raw bandwidth would suggest.
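To put that 119 percent figure in context, here's a rough, hypothetical model (a simplification for illustration, not AMD's published methodology): if a fraction of memory requests hits the Infinity Cache, the GDDR6 only has to serve the misses, so it looks proportionally wider to the GPU.

```python
# Raw GDDR6 bandwidth on Navi 21: 16 Gbps effective x 256-bit bus / 8 bits per byte.
raw_gddr6_gb_s = 16 * 256 / 8          # 512 GB/s
claimed_uplift = 1.19                  # AMD's "119 percent higher" effective-bandwidth claim

effective_gb_s = raw_gddr6_gb_s * (1 + claimed_uplift)
# Simplified model: if cache hits are essentially "free", effective = raw / (1 - hit_rate),
# so the claim implies a hit rate of roughly 1 - 1/(1 + uplift).
implied_hit_rate = 1 - 1 / (1 + claimed_uplift)

print(f"effective bandwidth ~{effective_gb_s:.0f} GB/s")   # ~1121 GB/s
print(f"implied cache hit rate ~{implied_hit_rate:.0%}")   # ~54%
```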
The large cache also helps to reduce power consumption, which all ties into AMD’s targeted 50 percent performance per Watt improvements. This doesn’t mean power requirements stayed the same — RX 6800 has a slightly higher TBP (Total Board Power) than the RX 5700 XT, and the 6800 XT and upcoming 6900 XT are back at 300W (like the Vega 64). However, AMD still comes in at a lower power level than Nvidia’s competing GPUs, which is a bit of a change of pace from previous generation architectures.
It’s not entirely clear how AMD’s Ray Accelerators stack up against Nvidia’s RT cores. Much like Nvidia, AMD is putting one Ray Accelerator into each CU. (It seems we’re missing an acronym. Should we call the ray accelerators RA? The sun god, casting down rays! Sorry, been up all night, getting a bit loopy here…) The thing is, Nvidia is on its second-gen RT cores that are supposed to be around 1.7X as fast as its first-gen RT cores. AMD’s Ray Accelerators are supposedly 10 times as fast as doing the RT calculations via shader hardware, which is similar to what Nvidia said with its Turing RT cores. In practice, it looks as though Nvidia will maintain a lead in ray tracing performance.
That doesn’t even get into the whole DLSS and Tensor core discussion. AMD’s RDNA2 chips can do FP16 via shaders, but they’re still a far cry from the computational throughput of Tensor cores. That may or may not matter, as perhaps the FP16 throughput is enough for real-time inference to do something akin to DLSS. AMD has talked about FidelityFX Super Resolution, which it’s working on with Microsoft, but it’s not available yet, and of course, no games are shipping with it yet either. Meanwhile, DLSS is in a couple of dozen games now, and it’s also in Unreal Engine, which means uptake of DLSS could explode over the coming year.
Anyway, that’s enough of the architectural talk for now. Let’s meet the actual cards.
Meet the Radeon RX 6800 XT and RX 6800 Reference Cards
We’ve already posted an unboxing of the RX 6800 cards, which you can see in the above video. The design is pretty traditional, building on previous cards like the Radeon VII. There’s no blower this round, which is probably for the best if you’re worried about noise levels. Otherwise, you get a similar industrial design and aesthetic with both the reference 6800 and 6800 XT. The only real change is that the 6800 XT has a fatter heatsink and weighs 115g more, which helps it cope with the higher TBP.
Both cards are triple fan designs, using custom 77mm fans that have an integrated rim. We saw the same style of fan on many of the RTX 30-series GPUs, and it looks like the engineers have discovered a better way to direct airflow. Both cards have a Radeon logo that lights up in red, but it looks like the 6800 XT might have an RGB logo — it’s not exposed in software yet, but maybe that will come.
Otherwise, you get dual 8-pin PEG power connections, which might seem a bit overkill on the 6800 — it’s a 250W card, after all, why should it need the potential for up to 375W of power? But we’ll get into the power stuff later. If you’re into collecting hardware boxes, the 6800 XT box is also larger and a bit nicer, but there’s no real benefit otherwise.
The one potential concern with AMD’s reference design is the video ports. There are two DisplayPort outputs, a single HDMI 2.1 connector, and a USB Type-C port. It’s possible to use four displays with the cards, but the most popular gaming displays still use DisplayPort, and very few options exist for the Type-C connector. There also aren’t any HDMI 2.1 monitors that I’m aware of, unless you want to use a TV for your monitor. But those will eventually come. Anyway, if you want a different port selection, keep an eye on the third party cards, as I’m sure they’ll cover other configurations.
And now, on to the benchmarks.
Radeon RX 6800 Series Test Systems
It seems AMD is having a microprocessor renaissance of sorts right now. First, it has Zen 3 coming out and basically demolishing Intel in every meaningful way in the CPU realm. Sure, Intel can compete on a per-core basis … but only up to 10-core chips without moving into HEDT territory. The new RX 6800 cards might just be the equivalent of AMD’s Ryzen CPU launch. This time, AMD isn’t making any apologies. It intends to go up against Nvidia’s best. And of course, if we’re going to test the best GPUs, maybe we ought to look at the best CPUs as well?
For this launch, we have three test systems. First is our old and reliable Core i9-9900K setup, which we still use as the baseline and for power testing. We’re adding both AMD Ryzen 9 5900X and Intel Core i9-10900K builds to flesh things out. In retrospect, trying to do two new testbeds may have been a bit too ambitious, as we have to test each GPU on each testbed. We had to cut a bunch of previous-gen cards from our testing, and the hardware varies a bit among the PCs.
For the AMD build, we’ve got an MSI X570 Godlike motherboard, which is one of only a handful that supports AMD’s new Smart Memory Access technology. Patriot supplied us with two kits of single bank DDR4-4000 memory, which means we have 4x8GB instead of our normal 2x16GB configuration. We also have the Patriot Viper VP4100 2TB SSD holding all of our games. Remember when 1TB used to feel like a huge amount of SSD storage? And then Call of Duty: Modern Warfare (2019) happened, sucking down over 200GB. Which is why we need 2TB drives.
Meanwhile, the Intel LGA1200 PC has an Asus Maximus XII Extreme motherboard, 2x16GB DDR4-3600 HyperX memory, and a 2TB XPG SX8200 Pro SSD. (I’m not sure if it’s the old ‘fast’ version or the revised ‘slow’ variant, but it shouldn’t matter for these GPU tests.) Full specs are in the table below.
Anyway, the slightly slower RAM might be a bit of a handicap on the Intel PCs, but this isn’t a CPU review — we just wanted to use the two fastest CPUs, and time constraints and lack of duplicate hardware prevented us from going full apples-to-apples. The internal comparisons among GPUs on each testbed will still be consistent. Frankly, there’s not a huge difference between the CPUs when it comes to gaming performance, especially at 1440p and 4K.
Besides the testbeds, I’ve also got a bunch of additional gaming tests. First is the suite of nine games we’ve used on recent GPU reviews like the RTX 30-series launch. We’ve done some ‘bonus’ tests on each of the Founders Edition reviews, but we’re shifting gears this round. We’re adding four new/recent games that will be tested on each of the CPU testbeds: Assassin’s Creed Valhalla, Dirt 5, Horizon Zero Dawn, and Watch Dogs Legion — and we’ve enabled DirectX Raytracing (DXR) on Dirt 5 and Watch Dogs Legion.
There are some definite caveats, however. First, the beta DXR support in Dirt 5 doesn’t look all that different from the regular mode, and it’s an AMD promoted game. Coincidence? Maybe, but it’s probably more likely that AMD is working with Codemasters to ensure it runs suitably on the RX 6800 cards. The other problem is probably just a bug, but AMD’s RX 6800 cards seem to render the reflections in Watch Dogs Legion with a bit less fidelity.
Besides the above, we have a third suite of ray tracing tests: nine games (or benchmarks of future games) and 3DMark Port Royal. Of note, Wolfenstein Youngblood with ray tracing (which uses Nvidia’s pre-VulkanRT extensions) wouldn’t work on the AMD cards, and neither would the Bright Memory Infinite benchmark. Also, Crysis Remastered had some rendering errors with ray tracing enabled (on the nanosuits). Again, that’s a known bug.
Radeon RX 6800 Gaming Performance
We’ve retested all of the RTX 30-series cards on our Core i9-9900K testbed … but we didn’t have time to retest the RTX 20-series or RX 5700 series GPUs. The system has been updated with the latest 457.30 Nvidia drivers and AMD’s pre-launch RX 6800 drivers, as well as Windows 10 20H2 (the October 2020 update to Windows). It looks like the combination of drivers and/or Windows updates may have dropped performance by about 1-2 percent overall, though there are other variables in play. Anyway, the older GPUs are included mostly as a point of reference.
We have 1080p, 1440p, and 4K ultra results for each of the games, as well as the combined average of the nine titles. We’re going to dispense with the commentary for individual games right now (because of a time crunch), but we’ll discuss the overall trends below.
9 Game Average
Borderlands 3
The Division 2
Far Cry 5
Final Fantasy XIV
Forza Horizon 4
Metro Exodus
Red Dead Redemption 2
Shadow of the Tomb Raider
Strange Brigade
AMD’s new GPUs definitely make a good showing in traditional rasterization games. At 4K, Nvidia’s 3080 leads the 6800 XT by three percent, but it’s not a clean sweep — AMD comes out on top in Borderlands 3, Far Cry 5, and Forza Horizon 4. Meanwhile, Nvidia gets modest wins in The Division 2, Final Fantasy XIV, Metro Exodus, Red Dead Redemption 2, Shadow of the Tomb Raider, and the largest lead is in Strange Brigade. But that’s only at the highest resolution, where AMD’s Infinity Cache may not be quite as effective.
Dropping to 1440p, the RTX 3080 and 6800 XT are effectively tied — again, AMD wins several games, Nvidia wins others, but the average performance is the same. At 1080p, AMD even pulls ahead by two percent overall. Not that we really expect most gamers forking over $650 or $700 or more on a graphics card to stick with a 1080p display, unless it’s a 240Hz or 360Hz model.
Flipping over to the vanilla RX 6800 and the RTX 3070, AMD does even better. On average, the RX 6800 leads by 11 percent at 4K ultra, nine percent at 1440p ultra, and seven percent at 1080p ultra. Here the 8GB of GDDR6 memory on the RTX 3070 simply can’t keep pace with the 16GB of higher clocked memory — and the Infinity Cache — that AMD brings to the party. The best Nvidia can do is one or two minor wins (e.g., Far Cry 5 at 1080p, where the GPUs are more CPU limited) and slightly higher minimum fps in FFXIV and Strange Brigade.
But as good as the RX 6800 looks against the RTX 3070, we prefer the RX 6800 XT from AMD. It only costs $70 more, which is basically the cost of one game and a fast food lunch. Or put another way, it’s 12 percent more money, for 12 percent more performance at 1080p, 14 percent more performance at 1440p, and 16 percent better 4K performance. You also get AMD’s Rage Mode pseudo-overclocking (really just increased power limits).
Radeon RX 6800 CPU Scaling and Overclocking
Our traditional gaming suite is due for retirement, but we didn’t want to toss it out at the same time as a major GPU launch — it might look suspicious. We didn’t have time to do a full suite of CPU scaling tests, but we did run 13 games on the five most recent high-end/extreme GPUs on our three test PCs. Here’s the next series of charts, again with commentary below.
13-Game Average
Assassin’s Creed Valhalla
Borderlands 3
The Division 2
Dirt 5
Far Cry 5
Final Fantasy XIV
Forza Horizon 4
Horizon Zero Dawn
Metro Exodus
Red Dead Redemption 2
Shadow of the Tomb Raider
Strange Brigade
Watch Dogs Legion
These charts are a bit busy, perhaps, with five GPUs and three CPUs each, plus overclocking. Take your time. We won’t judge. Nine of the games are from the existing suite, and the trends noted earlier basically continue.
Looking just at the four new games, AMD gets a big win in Assassin’s Creed Valhalla (it’s an AMD promotional title, so future updates may change the standings). Dirt 5 is also a bit of an odd duck for Nvidia, with the RTX 3090 actually doing quite badly on the Ryzen 9 5900X and Core i9-10900K for some reason. Horizon Zero Dawn ends up favoring Nvidia quite a bit (but not the 3070), and lastly, we have Watch Dogs Legion, which favors Nvidia a bit (more at 4K), but it might have some bugs that are currently helping AMD’s performance.
Overall, the 3090 still maintains its (gold-plated) crown, which you’d sort of expect from a $1,500 graphics card that you can’t even buy right now. Meanwhile, the RX 6800 XT mixes it up with the RTX 3080, coming out slightly ahead overall at 1080p and 1440p but barely trailing at 4K. Meanwhile, the RX 6800 easily outperforms the RTX 3070 across the suite, though a few games and/or lower resolutions do go the other way.
Oddly, my test systems ended up with the Core i9-10900K and even the Core i9-9900K often leading the Ryzen 9 5900X. The 3090 did best with the 5900X at 1080p, but then went to the 10900K at 1440p and both the 9900K and 10900K at 4K. The other GPUs also swap places, though usually the difference between CPUs is pretty negligible (and a few results just look a bit buggy).
It may be that the beta BIOS for the MSI X570 board (which enables Smart Memory Access) still needs more tuning, or that the differences in memory came into play. I didn’t have time to check performance without enabling the large PCIe BAR feature either. But these are mostly very small differences, and any of the three CPUs tested here are sufficient for gaming.
As for overclocking, it’s pretty much what you’d expect. Increase the power limit, GPU core clocks, and GDDR6 clocks and you get more performance. It’s not a huge improvement, though. Overall, the RX 6800 XT was 4-6 percent faster when overclocked (the higher results were at 4K). The RX 6800 did slightly better, improving by 6 percent at 1080p and 1440p, and 8 percent at 4K. GPU clocks were also above 2.5GHz for most of the testing of the RX 6800, and its lower default boost clock gave it a bit more room for improvement.
Radeon RX 6800 Series Ray Tracing Performance
So far, most of the games haven’t had ray tracing enabled. But that’s the big new feature for RDNA2 and the Radeon RX 6000 series, so we definitely wanted to look into ray tracing performance more. Here’s where things take a turn for the worse because ray tracing is very demanding, and Nvidia has DLSS to help overcome some of the difficulty by doing AI-enhanced upscaling. AMD can’t do DLSS since it’s Nvidia proprietary tech, which means to do apples-to-apples comparisons, we have to turn off DLSS on the Nvidia cards.
That’s not really fair because DLSS 2.0 and later actually look quite nice, particularly when using the Balanced or Quality modes. What’s more, native 4K gaming with ray tracing enabled is going to be a stretch for just about any current GPU, including the RTX 3090 — unless you’re playing a lighter game like Pumpkin Jack. Anyway, we’ve looked at ray tracing performance with DLSS in a bunch of these games, and performance improves by anywhere from 20 percent to as much as 80 percent (or more) in some cases. DLSS may not always look better, but a slight drop in visual fidelity for a big boost in framerates is usually hard to pass up.
We’ll have to see if AMD’s FidelityFX Super Resolution can match DLSS in the future, and how many developers make use of it. Considering AMD’s RDNA2 GPUs are also in the PlayStation 5 and Xbox Series S/X, we wouldn’t count AMD out, but for now, Nvidia has the technology lead. Which brings us to native ray tracing performance.
10-game DXR Average
3DMark Port Royal
Boundary Benchmark
Call of Duty Black Ops Cold War
Control
Crysis Remastered
Dirt 5
Fortnite
Metro Exodus
Shadow of the Tomb Raider
Watch Dogs Legion
Well. So much for AMD’s comparable performance. AMD’s RX 6800 series can definitely hold its own against Nvidia’s RTX 30-series GPUs in traditional rasterization modes. Turn on ray tracing, even without DLSS, and things can get ugly. AMD’s RX 6800 XT does tend to come out ahead of the RTX 3070, but then it should — it costs more, and it has twice the VRAM. But again, DLSS (which is supported in seven of the ten games/tests we used) would turn the tables, and even the DLSS quality mode usually improves performance by 20-40 percent (provided the game isn’t bottlenecked elsewhere).
Ignoring the often-too-low framerates, overall, the RTX 3080 is nearly 25 percent faster than the RX 6800 XT at 1080p, and that lead only grows at 1440p (26 percent) and 4K (30 percent). The RTX 3090 is another 10-15 percent ahead of the 3080, which is very much out of AMD’s reach if you care at all about ray tracing performance — ignoring price, of course.
The RTX 3070 comes out with a 10-15 percent lead over the RX 6800, but individual games can behave quite differently. Take the new Call of Duty: Black Ops Cold War. It supports multiple ray tracing effects, and even the RTX 3070 holds a significant 30 percent lead over the 6800 XT at 1080p and 1440p. Boundary, Control, Crysis Remastered, and (to a lesser extent) Fortnite also have the 3070 leading the AMD cards. But Dirt 5, Metro Exodus, Shadow of the Tomb Raider, and Watch Dogs Legion have the 3070 falling behind the 6800 XT at least, and sometimes the RX 6800 as well.
There is a real question about whether the GPUs are doing the same work, though. We haven’t had time to really dig into the image quality, but Watch Dogs Legion for sure doesn’t look the same on AMD compared to Nvidia with ray tracing enabled. Check out these comparisons:
Apparently Ubisoft knows about the problem. In a statement to us, it said, “We are aware of the issue and are working to address it in a patch in December.” But right now, there’s a good chance that AMD’s performance in Watch Dogs Legion at least is higher than it should be with ray tracing enabled.
Overall, AMD’s ray tracing performance looks more like Nvidia’s RTX 20-series GPUs than the new Ampere GPUs, which was sort of what we expected. This is first gen ray tracing for AMD, after all, while Nvidia is on round two. Frankly, looking at games like Fortnite, where ray traced shadows, reflections, global illumination, and ambient occlusion are available, we probably need fourth gen ray tracing hardware before we’ll be hitting playable framerates with all the bells and whistles. And we’ll likely still need DLSS, or AMD’s Super Resolution, to hit acceptable frame rates at 4K.
Radeon RX 6800 Series: Power, Temps, Clocks, and Fans
We’ve got our usual collection of power, temperature, clock speed, and fan speed testing using Metro Exodus running at 1440p, and FurMark running at 1600×900 in stress test mode. While Metro is generally indicative of how other games behave, we loop the benchmark five times, and there are dips where the test restarts and the GPU gets to rest for a few seconds. FurMark, on the other hand, is basically a worst-case scenario for power and thermals. We collect the power data using Powenetics software and hardware, which uses GPU-Z to monitor GPU temperatures, clocks, and fan speeds.
GPU Total Power
AMD basically sticks to the advertised 300W TBP on the 6800 XT with Metro Exodus, and even comes in slightly below the 250W TBP on the RX 6800. Enabling Rage Mode on the 6800 XT obviously changes things, and you can also see our power figures for the manual overclocks. Basically, Big Navi can match the RTX 3080 when it comes to power if you increase the power limits.
FurMark pushes power on both cards a bit higher, which is pretty typical. If you check the line graphs, you can see our 6800 XT OC starts off at nearly 360W in FurMark before it throttles down a bit and ends up at closer to 350W. There are some transient power spikes that can go a bit higher as well, which we’ll discuss more later.
GPU Core Clocks
Looking at the GPU clocks, AMD is pushing some serious MHz for a change. This is now easily the highest clocked GPU we’ve ever seen, and when we manually overclocked the RX 6800, we were able to hit a relatively stable 2550 MHz. That’s pretty damn impressive, especially considering power use isn’t higher than Nvidia’s GPUs. Both cards also clear their respective Game Clocks and Boost Clocks, which is a nice change of pace.
GPU Core Temp
GPU Fan Speed
Temperatures and fan speeds are directly related to each other. Ramp up fan speed — which we did for the overclocked 6800 cards — and you can get lower temperatures, at the cost of higher noise levels. We’re still investigating overclocking as well, as there’s a bit of odd behavior so far. The cards will run fine for a while, and then suddenly drop into a weak performance mode where performance might be half the normal level, or even worse. That’s probably due to the lack of overclocking support in MSI Afterburner for the time being. By default, though, the cards have a good balance of cooling with noise. We’ll get exact SPL readings later (still benchmarking a few other bits), but it’s interesting that all of the new GPUs (RTX 30-series and RX 6000) have lower fan speeds than the previous generation.
We observed some larger-than-expected transient power spikes with the RX 6800 XT, but to be absolutely clear, these transient power spikes shouldn’t be an issue — particularly if you don’t plan on overclocking. However, it is important to keep these peak power measurements in mind when you spec out your power supply.
Transient power spikes are common but are usually of such short duration (in the millisecond range) that our power measurement gear, which records measurements at roughly a 100ms granularity, can’t catch them. Typically you’d need a quality oscilloscope to measure transient power spikes accurately, but we did record several spikes even with our comparatively relaxed polling.
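As a purely hypothetical illustration of why that matters: a 5 ms, 500 W spike on top of a 300 W sustained load barely moves a single 100 ms average, even though the instantaneous peak is what a tight OCP limit reacts to. The numbers below are made up for illustration, not measurements.

```python
# Hypothetical numbers showing how a short spike averages out over a 100 ms poll.
sustained_w = 300.0   # steady draw during the polling window (W)
spike_w = 500.0       # instantaneous draw during the transient (W)
spike_ms = 5.0        # transient duration (ms)
window_ms = 100.0     # polling interval of the measurement gear (ms)

average_w = (spike_w * spike_ms + sustained_w * (window_ms - spike_ms)) / window_ms
print(f"reported average over one poll: {average_w:.0f} W")  # 310 W, despite a 500 W peak
```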
The charts above show total power consumption of the RX 6800 XT at stock settings, overclocked, and with Rage Mode enabled. In terms of transient power spikes, we don’t see any issues at all with Metro Exodus, but we see brief peaks during FurMark of 425W with the manually overclocked config, 373W with Rage Mode, and 366W with the stock setup. Again, these peaks were measured within one 100ms polling cycle, which means they could certainly trip a PSU’s over-power protection if you’re running close to max power delivery.
To drill down on the topic, we split out our power measurements from each power source, which you’ll see above. The RX 6800 XT draws power from the PCIe slot and two eight-pin PCIe connectors (PEG1/PEG2).
Power consumption over the PCIe slot is well managed during all the tests (as a general rule of thumb, this value shouldn’t exceed 71W, and the 6800 XT is well below that mark). We also didn’t catch any notable transient spikes during our real-world Metro Exodus gaming test at either stock or overclocked settings.
However, during our FurMark test at stock settings, we see a power consumption spike to 206W on one of the PCIe cables for a very brief period (we picked up a single measurement of the spike during each run). After overclocking, we measured a simultaneous spike of 231W on one cable and 206W on the other for a period of one measurement taken at a 100ms polling rate. Naturally, those same spikes are much less pronounced with Rage Mode overclocking, measuring only 210W and 173W. A PCIe cable can easily deliver ~225W safely (even with 18AWG), so these transient power spikes aren’t going to melt connectors, wires, or harm the GPU in any way — they would need to be of much longer duration to have that type of impact.
But the transient spikes are noteworthy because some CPUs, like the Intel Core i9-9900K and i9-10900K, can consume more than 300W, adding to the total system power draw. If you plan on overclocking, it would be best to factor the RX 6800 XT’s transient power consumption into the total system power.
Power spikes of 5-10ms can trip the overcurrent protection (OCP) on some multi-rail power supplies because they tend to have relatively low OCP thresholds. As usual, a PSU with a single 12V rail tends to be the best solution because they have much better OCP mechanisms, and you’re also better off using dedicated PCIe cables for each 8-pin connector.
Radeon RX 6800 Series: Prioritizing Rasterization Over Ray Tracing
It’s been a long time since AMD had a legitimate contender for the GPU throne. The last time AMD was this close … well, maybe Hawaii (Radeon R9 290X) was competitive in performance at least, while using quite a bit more power. That’s sort of been the standard disclaimer for AMD GPUs for quite a few years. Yes, AMD has some fast GPUs, but they tend to use a lot of power. The other alternative was best illustrated by one of the best budget GPUs of the past couple of years: AMD isn’t the fastest, but dang, look how cheap the RX 570 is! With the Radeon RX 6800 series, AMD is mostly able to put questions of power and performance behind it. Mostly.
The RX 6800 XT ends up just a bit slower than the RTX 3080 overall in traditional rendering, but it costs less, and it uses a bit less power (unless you kick on Rage Mode, in which case it’s a tie). There are enough games where AMD comes out ahead that you can make a legitimate case for AMD having the better card. Plus, 16GB of VRAM is definitely helpful in a few of the games we tested — or at least, 8GB isn’t enough in some cases. The RX 6800 does even better against the RTX 3070, generally winning most benchmarks by a decent margin. Of course, it costs more, but if you have to pick between the 6800 and 3070, we’d spend the extra $80.
The problem is, that’s a slippery slope. At that point, we’d also spend an extra $70 to go to the RX 6800 XT … and $50 more for the RTX 3080, with its superior ray tracing and support for DLSS, is easy enough to justify. Now we’re looking at a $700 graphics card instead of a $500 graphics card, but at least it’s a decent jump in performance.
Of course, you can’t buy any of the Nvidia RTX 30-series GPUs right now. Well, you can, if you get lucky. It’s not that Nvidia isn’t producing cards; it’s just not producing enough cards to satisfy the demand. And, let’s be real for a moment: There’s not a chance in hell AMD’s RX 6800 series are going to do any better. Sorry to be the bearer of bad news, but these cards are going to sell out. You know, just like every other high-end GPU and CPU launched in the past couple of months. (Update: Yup, every RX 6800 series GPU sold out within minutes.)
What’s more, AMD is better off producing more Ryzen 5000 series CPUs than Radeon RX 6000 GPUs. Just look at the chip sizes and other components. A Ryzen 9 5900X has two roughly 80mm² compute dies plus a 12nm IO die in a relatively compact package, and AMD is currently selling every single one of those CPUs for $550, or $800 for the 5950X. The Navi 21 GPU, by comparison, is made on the same TSMC N7 wafers but uses 519mm² of silicon, plus it needs GDDR6 memory, a beefy cooler and fan, and all sorts of other components. And it still only sells for roughly the same price as the 5900X.
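As a rough sketch of that argument (our own arithmetic, using the die sizes quoted above and assuming AMD captured the full price of a $650-ish card, which it doesn't), the same 7nm silicon simply earns more as CPUs:

# Back-of-the-envelope revenue per mm^2 of TSMC N7 silicon.
# Ignores the 12nm IO die, GDDR6, cooler and board/partner margins.
products = {
    "Ryzen 9 5900X":   {"n7_mm2": 2 * 80, "price_usd": 550},
    "RX 6800 XT card": {"n7_mm2": 519,    "price_usd": 650},  # assumed card price for illustration
}
for name, p in products.items():
    print(f"{name}: ~${p['price_usd'] / p['n7_mm2']:.2f} per mm^2 of 7nm silicon")
# Prints roughly $3.44/mm^2 for the CPU versus ~$1.25/mm^2 for the graphics card.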
Which isn’t to say you shouldn’t want to buy an RX 6800 card. It’s really going to come down to personal opinions on how important ray tracing will become in the coming years. The consoles now support the technology, but even the Xbox Series X can’t keep up with an RX 6800, never mind an RTX 3080. Plus, while some games like Control make great use of ray tracing effects, in many other games, the ray tracing could be disabled, and most people wouldn’t really miss it. We’re still quite a ways off from anything approaching Hollywood levels of fidelity rendered in real time.
In terms of features, Nvidia still comes out ahead. Faster ray tracing, plus DLSS — and whatever else those Tensor cores might be used for in the future — seems like the safer bet long term. But there are still a lot of games forgoing ray tracing effects, or games where ray tracing doesn’t make a lot of sense considering how it causes frame rates to plummet. Fortnite in creative mode might be fine for ray tracing, but I can’t imagine many competitive players being willing to tank performance just for some eye candy. The same goes for Call of Duty. But then there’s Cyberpunk 2077 looming, which could be the killer game that ray tracing hardware needs.
We asked earlier if Big Navi, aka RDNA2, was AMD’s Ryzen moment for its GPUs. In a lot of ways, it’s exactly that. The first generation Ryzen CPUs brought 8-core CPUs to mainstream platforms, with aggressive prices that Intel had avoided. But the first generation Zen CPUs and motherboards were raw and had some issues, and it wasn’t until Zen 2 that AMD really started winning key matchups, and Zen 3 finally has AMD in the lead. Perhaps it’s better to say that Navi, in general, is AMD trying to repeat what it did on the CPU side of things.
RX 6800 (Navi 21) is literally a bigger, enhanced version of last year’s Navi 10 GPUs. It’s up to twice the CUs, twice the memory, and is at least a big step closer to feature parity with Nvidia now. If you can find a Radeon RX 6800 or RX 6800 XT in stock any time before 2021, it’s definitely worth considering. RX 6800 and Big Navi aren’t priced particularly aggressively, but they do slot in nicely just above and below Nvidia’s competing RTX 3070 and 3080.
The number of voice assistants populating our homes has steadily increased, yet in practice we use one, or at most two, of them. The convenience, after all, lies precisely in having a single virtual butler of reference to control different systems.
That is why, if we already use Amazon Alexa or Google Assistant, we are unlikely to turn to a third assistant, for example the one made by the brand of our phone or our TV.
Samsung TV: Google Assistant arrives
After trying to push its own Bixby, Samsung has bowed to the evidence, as other brands already have, and Google Assistant is now making its way onto the Korean manufacturer's TVs, alongside Alexa.
Promised some time ago, the firmware update that brings Google Assistant to Samsung TVs is now arriving on sets in the UK, France, Germany and Italy. Specifically, the following TVs will receive the update that adds Google Assistant:
QLED TVs (4K and 8K)
Crystal UHD TVs
The Frame
The Serif
The Sero
The Terrace
If you are looking for a TV, take a look at our buying guide with the best TVs currently on the market.
Looking at this new model next to its “little” sister, the Radeon RX 6800, it is easy to confuse the two, since the reference designs are identical until you view them in profile: the Radeon RX 6800 XT is somewhat thicker, because it has to cool up to 300 W of power draw, and in those 55 extra watts lies the big difference between the two.
The difference is therefore in the numbers: in the compute units, in the ray tracing units, in the maximum frequencies each supports, and in other functional extras such as access to a more aggressive automatic overclocking profile than its little sister. The rest of the features and characteristics are identical between the two models.
New features of the AMD RDNA 2 generation
This second generation of AMD's RDNA architecture uses the same 7nm FinFET manufacturing process already employed by the Radeon RX 5700 and Radeon RX 5700 XT; in fact, it also uses the same communication interface, the PCI Express 4.0 support introduced with the first RDNA generation.
AMD now has far more experience with this process. It has just sprung a surprise with the big improvements in its Zen 3 processors on this very node, and this GPU generation likewise extracts much more from it, with improvements I would sum up as the three pillars of RDNA 2.
These three pillars of the RDNA 2 architecture are the increase in compute units, the increase in operating frequency and the introduction of the Infinity Cache. In my colleagues' architecture deep-dive you can find much more detail about this generation's changes; here I will focus on the fundamentals needed to put the results of this review in context.
The increase in RDNA 2 compute units not only raises the card's throughput in classic shading workloads; it also adds a dedicated ray tracing unit to each compute unit, always within the DXR standard that Microsoft introduced some time ago in the DirectX 12 API and that many games on the market already use. The increase in compute units (CUs) is around 50 percent in the most basic model of this new generation and a full doubling in the most powerful model of the range.
The second important pillar of the performance improvement is these cards' ability to sustain higher operating frequencies in games at power levels similar to the previous generation, specifically between 250 and 300 watts of board power. Some models offer boost frequencies up to 432 MHz higher than the previous generation, and this rests, to a large extent, on the temperatures this new architecture is able to tolerate.
This generation of AMD graphics chips can withstand junction ("T-junction") temperatures of up to 120 degrees Celsius before throttling. A matrix of temperature sensors now reports the temperature at different points of the package, allowing frequencies that were previously out of reach. This finer control and higher temperature tolerance translate into higher sustained frequencies, up to 2200 MHz in boost, with more controlled active cooling and therefore less overall noise.
The third pillar of this architecture is AMD's Infinity Cache: 128 MB of ultra-fast memory built into the GPU to maximise bandwidth for repetitive operations inside the card. It works much like the level-3 cache of a modern processor, and its large capacity multiplies the card's effective bandwidth roughly threefold, as we will detail in our analysis of the reference model.
These three fundamental pillars allow the whole of AMD's new high end, from the Radeon RX 6800 through this Radeon RX 6800 XT up to the Radeon RX 6900 XT, to handle 4K gaming, and even 1440p gaming with ray tracing active.
Radeon RX 6800 XT GPU Specifications
The Radeon RX 6800 XT is one of the two cards of this generation with a power budget of up to 300 W. Thanks to this, it brings significant performance gains over the RX 6800, even though the two share many of their most basic characteristics, such as memory bus, Infinity Cache size, VRAM size and even, as we will see later, the reference card format. With all these points in common, the XT is still significantly faster and also somewhat more expensive.
It has a frame buffer of 16 GB of high-speed GDDR6 memory running at a real 2 GHz for an effective data rate of 16 Gbps. Combined with its 256-bit bus, that yields up to 512 GB/s of raw bandwidth, which is multiplied several times over in specific operations thanks to the 128 MB of Infinity Cache integrated into the card.
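The raw figure is easy to verify, and a simple hit-rate model shows why AMD talks about the cache "multiplying" bandwidth. The ~2 TB/s internal cache bandwidth and the 60% hit rate below are our own assumptions for illustration, not published figures.

# Raw GDDR6 bandwidth: bus width (bits) / 8 * data rate (Gbps) = GB/s
bus_bits, data_rate_gbps = 256, 16
raw_bw = bus_bits / 8 * data_rate_gbps
print(f"raw GDDR6 bandwidth: {raw_bw:.0f} GB/s")          # 512 GB/s

# Effective bandwidth as a hit-rate weighted mix of cache and DRAM traffic.
def effective_bw(hit_rate, cache_bw_gbs, dram_bw_gbs):
    return hit_rate * cache_bw_gbs + (1 - hit_rate) * dram_bw_gbs

print(f"at an assumed 60% hit rate: ~{effective_bw(0.60, 2000, raw_bw):.0f} GB/s effective")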
Both this card and its sisters in the range support PCI Express 4.0 and, with a compatible AMD platform (at launch, a Ryzen 5000 processor on a 500-series motherboard) and the appropriate BIOS, Smart Access Memory. This technology enlarges the BAR through which the CPU addresses GPU memory from the usual 256 MB to the card's entire VRAM, which makes the interface more efficient and delivers some extra performance, especially at low resolutions.
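If you want to see what Smart Access Memory actually changes, the GPU's BAR sizes are visible from the operating system. The sketch below uses standard Linux sysfs paths; the PCI address is only an example you would replace with your own card's. With resizable BAR active, one of the listed BARs grows from the usual 256 MB to roughly the full VRAM size.

from pathlib import Path

# Example PCI address; find yours via the device links under /sys/class/drm.
dev = Path("/sys/bus/pci/devices/0000:0b:00.0")

for i, line in enumerate(dev.joinpath("resource").read_text().splitlines()):
    start, end, _flags = (int(field, 16) for field in line.split())
    size = end - start + 1 if end else 0
    if size >= 1 << 20:                      # only show BARs of at least 1 MiB
        print(f"BAR{i}: {size // (1 << 20)} MiB")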
The power budget of this model reaches 300 W, 55 W more than the Radeon RX 6800 and around 75 W more than the previous generation's Radeon RX 5700 XT. It also tolerates the high junction temperatures described above, which lets it sustain much higher frequencies than previous generations. It is fed by two 8-pin PEG connectors, which, together with the 75 W available from the PCI Express 4.0 x16 slot, gives a theoretical budget of 375 W: 150 W per connector plus 75 W from the slot.
The GPU is configured with 72 compute units, each with its own Ray Accelerator (72 in total), for 4608 shading units at a game clock of 2015 MHz, 200 MHz higher than the Radeon RX 6800, and a maximum boost frequency of 2250 MHz.
Other important numbers are the 128 ROPs and 288 texture units, up from the 96 and 240 of its little sister the Radeon RX 6800; these will substantially improve performance in older games, many of them still widely played. With the same 26.8 billion transistors, this chip delivers around 20.7 TFLOPS of single-precision (FP32) compute and up to twice that at half precision.
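Those throughput figures follow directly from the unit counts and clocks above; a quick sanity check, using RDNA 2's 64 shaders per CU and counting a fused multiply-add as two operations:

cus, shaders_per_cu, ops_per_clock = 72, 64, 2    # FMA = 2 FLOPs per shader per clock
boost_ghz = 2.25
fp32_tflops = cus * shaders_per_cu * ops_per_clock * boost_ghz / 1000
print(f"RX 6800 XT peak FP32: ~{fp32_tflops:.1f} TFLOPS")   # ~20.7 TFLOPS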
Connectivity is latest-generation: two DisplayPort 1.4 outputs with 8K support, one HDMI 2.1 connector with VRR (Variable Refresh Rate) and a fully functional USB-C port with DisplayPort Alternate Mode, USB 3.1 and USB-C Power Delivery of up to 32 W. This connector can drive the latest generation of FreeSync monitors and also VR headsets, carrying data, video and power over a single USB-C cable.
Reference Design
The first two AMD cards of this generation share a very similar reference design; in fact, the only difference is the thickness of the heatsink. The Radeon RX 6800 occupies the traditional two slots, while the Radeon RX 6800 XT is two and a half slots thick.
That extra thickness, from roughly 42 to 55 mm, gives the Radeon RX 6800 XT enough additional dissipation surface to absorb the extra 55 W of power that lets this chip claim the interesting slice of performance separating one model from the other.
The thickness rises to 55 mm, but the other dimensions, 288 mm long and 120 mm high, stay the same, so these are quite compact cards that can be mounted almost anywhere. The extra thickness should not be a problem even in the most compact cases, because this kind of bulk is common on many custom cards and increasingly on reference models too.
Among the generational changes to the reference design, the cooler gains a larger vapour chamber, the power connectors move to one of the corners, and heat transfer from the memory and VRM improves thanks to new high-conductivity thermal pads.
Active cooling is now handled by three 85 mm axial fans with independently adjustable speeds and a semi-passive mode that stops them when the card is idle, specifically below 67 degrees. This keeps the system very quiet at idle, extends fan life and keeps the card clean for longer.
The design is more modern and elaborate than in previous editions, with a black heatsink fully visible from the side and a closed rear backplate that gives the card a great finish and improves its rigidity. The silver, black and red colour scheme is completed by a side Radeon logo illuminated by red LEDs.
In fact, the illuminated side logo is the only quick way to tell the two models apart: on the Radeon RX 6800 the diffuser that shapes the Radeon logo is red, while on the Radeon RX 6800 XT it is white, which suggests AMD may have considered letting users customise the logo colour with a white diffuser and RGB LEDs. Be that as it may, we found no colour customisation option in the card's control panel, so this is probably just a theory of mine.
On the front edge, which is finished without a grille, we find the two full-size DisplayPort connectors, the full-size HDMI and the USB-C port, which sits closest to the motherboard. There is no dual-BIOS switch this generation, at least not on these reference models, but the aesthetic finish, combined with the performance results, leads me to conclude that the new reference Radeons have improved enormously over the previous generation.
Noise, power consumption and temperature at stock speeds
Curiously, this model runs even cooler than the Radeon RX 6800. The reason is undoubtedly that the extra 11 mm of heatsink the Radeon RX 6800 XT enjoys makes the card somewhat cooler than its sister in normal use. We also found the same behaviour with the advertised frequencies: the cards deliver more than they promise, taking advantage of the greater cooling capacity.
This already happened with Nvidia's RTX 3000 series, and it is clear that the bulk of a card's overclock now comes well defined from the factory, dictated above all by the cooling system the card ships with.
As with its little sister, the behaviour is very good: the extra frequency, around 2350 MHz in normal operation, does not overheat the unit. In fact it runs with a cooler hot spot of around 85 degrees and an average GPU temperature of 68 to 75 degrees.
This translates into lower fan speeds, around 1440 rpm in normal use, and somewhat less noise, at around 36 dBA. The trade-off is that the overclocking headroom is somewhat smaller, as we will see next, but this is certainly a card with outstanding performance and excellent behaviour in everyday use.
Idle and load temperature charts
Overclock
In our overclocking tests we do not usually raise voltages, because we feel it subjects review samples, already stressed for days, to unnecessary risk. I mention this partly to explain why the frequency gain on this model is even lower than on the Radeon RX 6800, despite the greater cooling capacity of its larger heatsink.
In this case we achieved just a 13% increase in frequency, though with a little more care over voltages we could surely extract more. Even so, that increase gives us clock peaks of around 2650 MHz with the card staying below 40 dBA at a fan speed of about 1800 rpm.
The Radeon RX 6800 XT also has a software profile that can provide extra speed; in our tests our manual overclock never exceeded a 3% performance improvement, and that is because we always used a cooling profile we consider suitable for continuous use of the card.
The fans can spin at up to 3000 rpm, at least, so there is plenty of cooling headroom that would allow more voltage, more MHz and also much more noise, which is something this editor always tries to avoid. A card should be something we can use with confidence, and 11% more performance does not justify an obnoxiously loud graphics card, all the more so when these reference models have improved so much in that respect.
DXR, AMD Radeon Boost and AMD Smart Access Memory
To gauge what this card can do with some of its new (and not so new) tricks, we took a handful of games that can exploit them and ran a brief comparison. We already knew about the Radeon Boost gains, but now we also have the option of ray tracing in some games, with very good results up to 1440p, and we have likewise seen the slight but real performance improvement from full memory access over PCI Express 4.0 with Smart Access Memory.
Testing machine:
Processor: AMD Ryzen 9 3950X
Memory: 16 GB DDR4
Power supply: Seasonic Connect 750 W
Storage: Corsair MP600 NVMe SSD
Performance
Benchmark suite, run at ultra settings at 1080p, 1440p and 2160p: Ashes of the Singularity, DOOM (Vulkan), Halo Wars 2, Ghost Recon Wildlands, Total War: Warhammer, Battlefield 1, Star Wars Battlefront II, Battlefield V, Doom Eternal and Flight Simulator, plus 3DMark Fire Strike, 3DMark Fire Strike Ultra, VRMark Orange Room and VRMark Cyan Room (results shown in the charts).
Gameplay video: Doom Eternal at 4K, ultra quality.
Analysis and conclusion
The Radeon RX 6800 XT is not the best AMD can offer this generation, but it is a card that has surprised us a great deal, because it holds its own against as capable a rival as Nvidia's RTX 3080. My feeling is that this will not be enough and that Nvidia will be able to answer AMD's bid with a superior card in the coming weeks; that is a personal impression based on what we have seen from Nvidia in recent generations.
That said, the card is frankly fast, with very high frame rates at low and medium resolutions, which makes it perfect for very fast, latest-generation monitors. It also lets you play fluidly at 4K in most games, and it fights Nvidia well in this regard, being faster in almost all of our tests. It also delivers quality gameplay with ray tracing at resolutions up to 1440p, and does so with a reference model that runs cool and very quiet.
It is without doubt an excellent card, one that will be surpassed in the next few days by the upcoming Radeon RX 6900 XT, but it perhaps leaves us with the feeling that AMD needed to be even faster this generation if it wants to do to Nvidia what it has done to Intel in recent years.
Google Assistant is now available on Samsung’s 2020 TVs in the UK, France, Germany, and Italy, and will be available in 12 countries by the end of the year, Samsung has announced. This follows the launch of Google’s voice assistant on Samsung’s TVs in the US last month. Samsung says it’ll roll out in Spain, Brazil, India and South Korea by late November. The voice assistant is available alongside Amazon’s Alexa and Samsung’s own Bixby voice assistants.
According to Samsung, Google’s voice assistant can be used to control the TV directly with commands like changing channels or adjusting the volume, or else it can control other Google Assistant-compatible smart home devices like thermostats or lights. You can also ask it for information on the weather or to play music, and it integrates with other Google services like Search, Photos, Maps, and Calendar.
Samsung’s 2019 TVs previously integrated with Google Assistant, but back then users had to give their commands via a separate device equipped with the voice assistant like a Google Home smart speaker. With the new models, however, users can issue voice commands to their TVs directly by holding down the microphone button on its remote.
With support for both Alexa and Google Assistant now onboard, Samsung’s own Bixby smart assistant looks increasingly sidelined on its smart TVs. We can’t imagine many people would choose to use it when Google and Amazon’s popular assistants are now built in.
If you’re in a compatible country and want to enable Google’s voice assistant, you can do so by heading into its Settings menu, selecting “General,” then “Voice,” and finally “Voice Assistant.” From there, Google Assistant can be selected as a voice assistant. Samsung says it’s rolling out support on the following models: “all 2020 4K and 8K QLED TVs, Crystal UHD TVs, The Frame, The Serif, The Sero and The Terrace.”
The MSI GeForce RTX 3080 Suprim X graphics card is the company's new premium custom-design implementation of the RTX 3080 "Ampere." With it, the company is debuting its new "Suprim" brand extension, which is positioned a notch above MSI's Gaming X Trio line of graphics cards. It's likely that Suprim X is the new Gaming Z, as the company probably had to fight the perception in the past that Gaming Z was just an overclocked Gaming X (when in fact it would usually come with certain physical improvements). In addition to a more upmarket design than the Gaming X Trio, the RTX 3080 Suprim X comes with higher clock speeds and a handful of exclusive features. MSI is reserving the Suprim X treatment for high-end GPUs, beginning with the RTX 3090, RTX 3080, and RTX 3070. In this review, we take a look at the RTX 3080 Suprim X; we'll also have a review of the RTX 3090 Suprim for you soon.
The GeForce RTX 3080 is the first “Ampere” graphics card, based on NVIDIA’s first consumer graphics architecture in close to two years. The company was first to introduce real-time raytracing for gaming with its path-breaking RTX 20-series “Turing” in 2018, and “Ampere” is an exercise in making raytracing have much less of a performance impact than it did with the previous generation. While purely raytraced interactive 3D remains a very long-term engineering goal due to the enormity of compute power required, NVIDIA realized that conventional raster 3D can be combined with certain raytraced elements, such as lighting, shadows, reflections, global illumination, and ambient occlusion, to uplift graphics realism way beyond what’s possible with raster 3D, even with the latest DirectX 12 feature-set, and innovated the RTX technology. With “Ampere,” NVIDIA is introducing its 2nd generation, which adds even more hardware-accelerated RTX effects, and improves performance. The RTX 3080 is designed to make AAA gaming with raytracing possible at 4K UHD at 60 Hz.
The new GeForce “Ampere” graphics architecture marks the debut of the 2nd generation RTX, which combines new “Ampere” CUDA cores that double throughput over the previous generation by leveraging concurrent FP32+INT32 math operations per clock cycle; new 2nd generation RT cores which double the BVH traversal and intersection performance over “Turing” RT cores, and add new fixed-function hardware that enables raytraced motion-blur; and the new 3rd generation Tensor core, inspired by the cores that power the A100 Tensor Core HPC processor, which leverage the sparsity phenomenon in deep-learning neural networks, to increase AI inference performance by an order of magnitude over the previous generation. NVIDIA leverages AI to power its raytracing denoiser, and to enable its DLSS performance enhancement. With “Ampere,” NVIDIA is introducing 8K DLSS, letting you play at 8K resolutions.
The GeForce RTX 3080 features close to triple the number of CUDA cores as the RTX 2080, at 8,704 vs. 2,944. It also features 68 2nd Gen RT cores, 272 3rd Gen Tensor cores, 272 TMUs, and 96 ROPs. To keep all this compute muscle fed with a steady stream of data, NVIDIA partnered with Micron Technology to innovate an exclusive new memory technology it calls GDDR6X, which on the RTX 3080 offers staggering data rates of 19 Gbps (compared to the up to 16 Gbps of conventional GDDR6). NVIDIA is also widening the memory bus to 320-bit and increasing the memory amount to 10 GB compared to the previous generation. The RTX 3080 enjoys 760 GB/s of memory bandwidth. There are several other next-gen technologies, such as support for the new PCI-Express 4.0 x16 bus, the latest DisplayPort 1.4a and HDMI 2.1 connectors that support 8K over a single cable, and AV1 decode acceleration. The RTX 3080 is based on the same "big" silicon as the RTX 3090, the "GA102," built on the Samsung 8 FFN silicon fabrication process.
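Those headline numbers are easy to cross-check from the core count, the reference boost clock (1710 MHz, as noted later in this review) and the memory configuration. A quick sketch, counting the doubled-throughput "Ampere" CUDA core as two FP32 operations per clock:

cuda_cores, boost_ghz = 8704, 1.71            # reference boost clock of the RTX 3080
fp32_tflops = cuda_cores * 2 * boost_ghz / 1000
bus_bits, data_rate_gbps = 320, 19
bandwidth_gbs = bus_bits / 8 * data_rate_gbps
print(f"peak FP32: ~{fp32_tflops:.1f} TFLOPS, memory bandwidth: {bandwidth_gbs:.0f} GB/s")
# ~29.8 TFLOPS and 760 GB/s, matching the figures above.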
The MSI GeForce RTX 3080 Suprim X is possibly the only custom-design card that can give the NVIDIA Founders Edition a run for its money in a beauty contest. Top-grade brushed aluminium, crafted to perfection, finishes off the cooler shroud and backplate. Underneath sits a meaty aluminium fin-stack heatsink that pulls heat through squared "core pipe" heat-pipes, which make better contact with the GPU via a nickel-plated copper base plate and high-conductivity thermal compound. The fin-stack is ventilated by a trio of evenly sized TorX 4.0 fans that feature webbed impellers for guided axial airflow, and double ball bearings. The cooler also features a flattened heat pipe dedicated to pulling heat from the memory chips on the PCB.
MSI is giving the RTX 3080 Suprim X its highest factory overclock thus far, with the GPU Boost set at 1905 MHz (compared to 1710 MHz reference and 1815 MHz of the RTX 3080 Gaming X Trio). It’s also the most expensive air-cooled RTX 3080 from MSI, priced at $900, a $200 premium over the Founders Edition, and $140 premium over the Gaming X Trio. In this review, we take a very close look at the MSI GeForce RTX 3080 Suprim X to tell you if this huge premium is justified.
If you want to buy a new television set, you will find plenty of offers. The large selection makes things cheaper, but not necessarily easier. You should therefore settle on some basic decision criteria: what size and what features are the minimum, and what is the price limit?
The question of resolution currently barely arises: anything above a 32-inch diagonal (80 centimetres) should have 4K resolution, i.e. 3840 × 2160 pixels. Even more pixels, as on 8K displays, are currently of little use because the content is missing. Nor should you count on supposed future-proofing; if you buy an 8K TV today, you will probably not be well equipped for the coming years, because too much is still changing in video formats and interfaces.
Some people may not be looking for a television at all, but simply a large display: a "dumb" TV, i.e. one without smart functions, can be made ready for Netflix & Co. with a streaming client. Such televisions do exist, but you will hardly find them on the shelves of Saturn, Expert & Co. Instead, there are smart TVs that have built-in streaming apps in addition to their tuners.
Large displays without smart functions and without TV tuners are usually significantly more expensive than their smart colleagues, because they are designed for 24/7 operation. There are a few "dumb" TVs with tuners but no internet connection, but these have disadvantages compared to smart TVs: they typically lack the dedicated processors for image processing, such as motion compensation, effective noise filters or sophisticated picture presets. Smart TVs generally have much more to offer here.
Instead of using the built-in streaming apps, you can upgrade the TV display with a streaming client like Google’s Chromecast.
Help with buying a TV: what you really need and what you can do without
The appropriate display technology
When shopping for a TV you will be confronted with various technologies: LCD, QLED, LED, OLED or micro-LED. The first three are televisions with liquid crystal screens; micro-LED can refer either to an LCD TV or to a completely new display variant, while OLED TVs use an organic display.
If you don't want to spend more than a few hundred euros, the question of LCD versus OLED hardly arises: OLED TVs are generally more expensive. LCD TVs are recommended for very bright rooms because, thanks to their higher maximum luminance, they can still present images with sufficient contrast. Beware, though: on the cheapest liquid crystal screens the manufacturers skimp on the backlight, so this does not apply to those devices.
OLED TVs are in principle very high-contrast because their pixels simply remain off at the points where the picture content is black. Color perception also benefits from the rich black. However, the maximum brightness of OLED TVs is lower and the black level increases due to reflections on the screen in a bright environment. Therefore, the display on the OLED TV fades in the light-flooded living room. OLED displays offer the greatest viewing angles, i.e. images with high contrast and color, even if you look at them from the side.
With micro-LED technology you have to distinguish between conventional LCDs, in which many small diodes behind the panel illuminate the LC layer, and displays in which the LEDs themselves serve as pixels. The latter are still a long way off. You can, however, already buy LCD TVs with a direct LED backlight and local dimming (FALD): if the TV has a large number of LED zones, this delivers impressive contrast, which particularly benefits HDR content. The black level of the individual LCD pixels is no better as a result, though, and the FALD backlight does not help against the viewing-angle dependency of many LCD TVs either.
The thermal image makes clear which television uses a direct LED backlight (left) and which uses edge-lit LED strips (right).
High-contrast display
Modern televisions can almost always reproduce high-contrast HDR content. However, the result depends heavily on the device, and TVs support different HDR formats depending on manufacturer and model: HDR10 and HLG are handled by almost all of them, but on HDR10+ and Dolby Vision the manufacturers are divided. With HDR10+ and Dolby Vision the video data is displayed dynamically, adapted to the respective image content: Samsung relies on HDR10+, Sony and LG on Dolby Vision, while manufacturers such as Panasonic and Philips sidestep possible conflicts and simply support both.
Even when no HDR content is being played, for example when watching regular TV, most sets can still upconvert the picture to HDR. The result is often garishly coloured, overexposed images.
c't 25/2020: In c't 25/2020 the editors provide buying advice for TVs and a test of smart TVs. Plenty of practical tips and a test of current e-mail clients should help you stay on top of the daily flood of e-mail. The c't editors have also uncovered a data leak at navigation specialist TomTom, and they analyse the surveillance features of Office 365. There are many other tests too, not to mention a whole bag of nerdy gift tips for the upcoming Christmas. c't 25/2020 is available now in the Heise shop and at well-stocked newsstands.
An Ultra-impressive flagship phone from Samsung, but there is a worthy alternative
For
Big, colourful screen
Great camera and zoom
Smart design
Against
Beaten for audio performance
Rivals deliver more detailed video
If there’s one thing we know about the Samsung Galaxy Note 20 Ultra, it’s that it is Samsung’s flagship phone for 2020. Further than that, the company’s phone line-up, in line with many other big brands, has become increasingly confusing – a sign of the difficulty (and desperation) in trying to find new niches in a crowded market.
The Galaxy Note 20 Ultra is definitely the top dog, but further down the range, it gets a little confusing. The Note phones still feature the S Pen stylus, and there are still two phones in the range. But while it used to be a simple matter of screen size, the two Note phones are now quite different. The Galaxy Note 20 Ultra has superior spec to the smaller Note 20, with a bigger, higher resolution screen, a glass back as opposed to plastic, a better camera, an SD card slot, more RAM and a larger storage option.
Then there are the S20 phones: the Galaxy S20 and Galaxy S20 Ultra. These non-Note models are traditionally a step below, offering almost flagship specs for a more affordable price. Yet this time, the specs on the Note 20 Ultra and the S20 Ultra are pretty similar. There's a squared-off design and new Gorilla Glass 7 on the back, but otherwise, it's the same screen resolution, same 120Hz refresh rate, same processor (in the UK) and even the same front cameras. What's more, the S20 Ultra is more expensive. It's all a little confusing.
Nevertheless, the Galaxy Note 20 Ultra is, without doubt, Samsung’s headline-grabber. So, should you be grabbing one? And can you even get your hands around it?
Price
The most affordable Galaxy Note 20 Ultra still costs a hefty £1179 ($1299, AU$1849), which gets you 256GB of storage in the UK and Australia, but only 128GB in the US. If you want 512GB of internal storage, you’re looking at a price of £1279 ($1449, AU$2199).
Features
Thanks to the barely-there bezels, the Note 20 Ultra has a 16.4 x 7.7 x 0.8cm chassis, which weighs just 208g. The (world first) Gorilla Glass 7 back and front helps give it a weighty, premium feel and ensures it’s pretty robust when it comes to scratches.
The back is frosted for a smart matt finish, which is far less prone to showing grubby finger smudges than the Note 10. The Note 20 Ultra is available in Mystic Bronze, Mystic Black and Mystic White. We like these new shades and the matt finish, with Mystic Bronze our pick of the bunch.
There's a huge camera bump, too. The triple lens really sticks out and while it does look smarter than the Note 10 and S20, it makes for a sizeable dent in the design. This is most noticeable when the phone is placed 'flat' on a surface: it rocks on the lens. It also makes for an aggressive vibration sound as the phone wobbles on the lens.
Like many big smartphones on the market now, the Note 20 Ultra is almost impossible to use one-handed. More surprisingly, the curved edges of the display cause some issues. Reaching for the top of the phone, or simply holding it with one hand and navigating with another, it is too easy to unintentionally touch the screen. Nudging the phone halfway up the screen while you’re typing leads to all sorts of jumps and restarts. Are we just clumsy or is the phone a little too sensitive? Perhaps software updates will iron this out.
The Note range gets a processor upgrade, but it’s the same Exynos 990 chip as on the S20 in the UK and Asia. This will leave some disappointed, including those who see Samsung’s Exynos offering as inferior to Qualcomm’s Snapdragon chips. The twist in the tale is that Note 20 Ultra models in the US will get the Snapdragon 865+ chip, due to Samsung’s preference for diversifying when it comes to parts.
Of course, according to Samsung, there is no difference in performance between the Exynos and Snapdragon. Benchmarks may reveal some, but from our experience in day to day use, for the vast majority of people, it simply won’t be noticeable.
The S Pen has had an upgrade, however, proving faster and coming with some new Air Gesture features that allow you to do your best Yoda impression and control the phone without any physical contact with the screen. The clever functionality of the stylus remains a key feature for the Note range and for those who master its many functions, from writing to drawing to clicking and pointing, it can be a real game-changer.
The battery has been boosted to 4500mAh, which feels sufficient, though the large screen uses up a lot of power. The phone will last around a day of average use which, while pretty standard for flagship smartphones, isn’t extraordinary.
Camera technology has become the key battleground for phones in recent years, with the number of lenses and megapixels rising at a rapid rate. The Note 20 Ultra continues the trend, with a 108MP wide lens (first seen on the S20 Ultra), as well as a 12MP telephoto, with 5x optical zoom, and a 12MP ultra-wide. There’s also a 10MP front camera with dual pixel autofocus.
The headline feature is the 5x optical zoom and up to 50x digital zoom. And it is something of a game-changer. It really does allow you to play secret agent and focus in closely on objects and indeed people far out of your natural eyesight. It’s easy to use and the quality holds up well, with anything up to 30x zoom remaining sharp, while even the maximum zoom is still functional.
Samsung has also introduced a whole host of swipes and gestures for easy access to the camera – a simple swipe to flip between front and back cameras makes a lot of sense. Small but well thought out upgrades such as this are welcome.
As for the results, photos look colourful, detailed, clean and sharp. The over-saturated colours of previous Samsung phones have gone and you’d be hard-pressed to pick these photos out of a line-up against the likes of the Apple iPhone 11 Pro Max or Google Pixel 5. Zoom is no doubt a key strength here, while perhaps low-lit scenes and the selfie camera could be pipped by Apple’s optics, but it’s a close call.
While you can shoot in 8K, we’d recommend sticking to 4K or even Full HD for the best results, the lower resolutions delivering more stable and less storage-hungry videos. Again, a realistic delivery of colours means natural skin tones in front of faithful landscapes. Occasionally we sense a touch more colour in the green grass or deep blue skies than might be necessary, but overall the video quality is excellent.
Screen
You may be able to shoot in 4K or even 8K but, unlike the Sony Xperia 1 II, the Note 20 Ultra doesn't feature a 4K resolution screen. The 6.9in AMOLED Edge screen sports a 3088 x 1440, "WQHD+" resolution.
Aside from pixels, the Adaptive 120Hz feature means the phone will switch automatically between 60Hz and 120Hz to best suit the content, which is a neat feature, but not the variable refresh rate holy grail some superusers wanted to see.
The good news is the Galaxy Note 20 Ultra delivers bright, vivid video, with motion handled smoothly. Watching The Sinner on Netflix, dark scenes are well lit, revealing enough detail, while good contrast levels make for an engaging image. More colourful scenes, such as those served up by live sport, show the Note 20 Ultra sometimes errs on the side of over-saturation, but it’s likely just a matter of personal preference.
Compared to the class-leading Xperia 1 II you don’t get the level of precise detail and sharp edges that the 4K screen affords, nor does this Note manage the rich, filmic presentation. But up against any other Android phone, and in isolation, it more than holds its own, and the big display ensures there are times when this display will really steal the show.
Sound
When it comes to audio, there are a few design tweaks. Samsung has chosen to flip the volume and on/off buttons from the right side to the left, and has done the same with the speaker at the base of the phone.
AKG are on board once more to help with the audio tuning and the Galaxy Note 20 Ultra supports surround sound with Dolby Atmos technology (Dolby Digital, Dolby Digital Plus included). That said, the lack of aptX HD Bluetooth support seems strange and disappointing. Of course there’s no 3.5mm headphone jack – Samsung would rather you connect its Galaxy Buds Live.
Nevertheless, the Galaxy Note 20 Ultra continues the fine sonic work of previous S phones, delivering good detail, solid bass and natural, open treble. Music is entertaining and dynamic, with a level of fidelity worthy of a flagship phone.
Switch to the latest iPhone or the Award-winning Sony Xperia 1 II, and you will hear more, however. Apple’s refinement remains impressive while the Sony handset delivers a clear step up in terms of resolution. If you want to be immersed in the music and not miss a breath, let alone a beat, the premium Xperia 1 II uncovers more detail at both ends of the sonic spectrum, making for a more musical delivery.
Verdict
The Galaxy Note 20 Ultra delivers on the Ultra promise. It’s big and rather expensive, but in return, you can enjoy a great screen, a feature-packed camera and good sound.
In an ultra-competitive market, with a huge choice of phones (simply from Samsung alone), it can be hard for every handset to stand apart. But thanks to the S Pen and ‘power user’ specs, the huge, colourful screen, and that crazy zoom on the camera, it’s clear to see that Samsung has managed that with the Note 20 Ultra.
That said, if you’re prepared to pay for best-in-class audio and video performance, it’s beaten by the Sony Xperia 1 II, making it a four-star phone in our book.
Many still remember the days of bulky tube monitors. LCD technology for monitors has only around 25 years under its belt, and even then it caught on quite hesitantly. Today a high-resolution flat screen is such a normal part of office equipment that you would sooner notice its absence. Time to look back at a development spanning less than three decades.
In spring 1994, c't reported on a 10.4-inch colour monitor with VGA resolution (640 × 480 pixels) that was offered for a proud 10,000 D-Mark. The exorbitant prices, despite the miserable resolution and enormous viewing-angle dependence, were due to immature LCD production; at the time there were rumours of reject rates of 70 percent.
In the following two years the first LCD monitors with a 15-inch diagonal came to market, and the resolution increased to 1024 × 768 pixels (XGA). At over 4800 D-Mark, the devices were still extremely expensive. In the 1990s, voluminous tube monitors (CRT, Cathode Ray Tube) dominated desks: good devices with a 20-inch diagonal put 1280 × 1024 pixels (SXGA) on a visible image area of about 18 inches, since part of the picture surface disappeared behind the tube's surround.
LCD monitors: initially only analogue
At first, LC displays could not play out their advantage over CRTs, a flicker-free, razor-sharp picture, because they were driven in analogue mode just like CRTs. The monitor manufacturers had to re-digitise and synchronise the analogue video signals coming from the graphics card, and quite a few LCD monitors flickered as a result.
At CeBIT 1998 the first graphics cards with digital signal outputs alongside the analogue ones were finally presented. In the same year, c't tested LCD monitors on a large scale for the first time: twenty devices with diagonals between 13.8 and 15 inches, for 2700 to 4800 D-Mark. Their viewing angles were extremely narrow; viewed from the side the image looked milky, and viewed from below it was often inverted. The flat displays were celebrated anyway.
The first purely digital LCD also took part in that comparison test: Siemens-Nixdorf supplied it together with a matching graphics card. It showed 1024 × 768 pixels on just 35 centimetres (13.8 inches) of diagonal and cost a proud 3700 D-Mark.
The first purely digital monitor from Siemens-Nixdorf, the MCF 3501 T, was delivered with a graphics card.
The first 18-inchers arrived at the c't laboratory in mid-1998. They came from NEC and Eizo and, at around 6000 D-Mark, cost four to six times as much as a comparably large tube monitor. Their screen size was rated as great, the image quality only as middling. In the following years prices fell rapidly, helped along by the supermarket chains; monitor offers provoked queues outside Aldi, Plus & Co. at the time. By mid-2001 a 15-incher cost only 700 D-Mark. However, most of the devices used viewing-angle-dependent TN panels and only had an analogue signal input. Only from about 2003 did monitors with viewing-angle-stable VA and IPS technology become affordable.
Digital connections through the years: P&D, DFP, DVI, HDMI, DP, USB-C (from left)
Growing diagonals and, above all, higher resolutions finally forced the switch to digital inputs. As is usual with fundamental changes, the switch produced a jumble of digital connection options, including P&D, DFP and DVI. For ADC, Apple's own DVI variant, you even needed a separate, very bulky power supply. In consumer electronics the High Definition Multimedia Interface (HDMI) ultimately prevailed, while DisplayPort asserted itself in the PC sector. Many PC monitors also offer HDMI, and some even VGA.
Some LCD manufacturers from the very beginning are still represented on the market today, such as BenQ, Philips and Samsung. But do you still know Belinea, Natcomp or Highscreen?
LEDs instead of cold cathodes
A milestone was the change of backlight: from 2006, small light-emitting diodes (LEDs) began to illuminate the LCD background instead of cold cathode lamps (CCFL, Cold Cathode Fluorescent Lamp). At first the manufacturers used colourful RGB LEDs in the backlight, which made the monitors extremely colourful, but also extremely expensive.
The first inexpensive blue LEDs, whose light was converted into "white" light via yellow phosphor caps, appeared in monitors from about 2008. While LED backlights had long been common in notebooks, the complete switch to the more energy-efficient LED technology in PC monitors only happened after 2010. With the mercury-containing CCFL tubes, the shimmer in the screen background also disappeared, the monitor housings became lighter and the displays even thinner.
3D displays and wide formats
From 2009 there was a 3D intermezzo: monitors whose stereo images users viewed with red-green glasses, bulky shutter glasses or lighter polarised filter glasses, plus cumbersome driver settings. In the same year the format of flat screens changed from 4:3 or 5:4 to wide formats like 16:9 or 16:10. This was not due to users' changed viewing habits but simply to new panel factories: to keep the fabs for wide-format flat-screen televisions busy, monitor panels were produced there as well.
At the end of the 2000s, 3D monitors also met with great approval at c't.
More pixels
The Full HD resolution of 1920 × 1080 pixels, long common on TVs, then caught on in monitors as well. From about 2010, 27- and 30-inch screens were offered with even more pixels, namely 2560 × 1440 (16:9) or 2560 × 1600 (16:10).
In 2012 the first high-end devices with ultra-high resolution (UHD, 4K), such as Eizo's FDH3601 for 25,000 Euro, reached the market; from 2013 the 4K resolution became more affordable for monitors. The initial connection problems soon subsided and prices also fell quickly: two years later, 4K monitors with TN panels cost just 500 Euro.
With the resolution, the screen diagonals also grew; 32-inchers with 80 centimetres of diagonal suddenly became conceivable for the desk. In spring 2015 c't therefore checked whether the cheaper 4K TVs could also be pressed into service as large monitors.
From about 2013, extra-wide monitors with 2560 × 1080 or even finer 3440 × 1440 pixels came into fashion. Many of these screens were slightly curved to keep the distance between the screen surface and the viewer's eyes constant. The wide displays were initially smiled at; today they are indispensable.
In 2015, Apple, Dell and HP presented the first 5K displays with just under 15 million pixels; two years later Dell topped that with the first 8K monitor, which more than doubled the pixel count to around 33 million. Such giants are still reserved for a few very expensive devices.
Displays for gamers
Likewise in 2015, gaming fans were delighted by the first monitors whose image output ran synchronised with the frames delivered by the graphics card. For Nvidia's G-Sync the monitors required a fan-cooled module to handle the synchronisation; for AMD's FreeSync, a variant of the Adaptive Sync specified by VESA, no expensive module was needed. Initially there were no displays that supported both technologies; this camp-building only ended in 2019, when Nvidia relented and presented so-called G-Sync Compatible devices.
In 2017 the USB-C port found its way into monitors as a video input. Since it also carries USB data and power in addition to DisplayPort video signals, such monitors can serve as docking stations for notebooks.
High contrast for videos
High-contrast display was initially reserved for TVs until it also appeared in PC monitors in 2017. However, the first tests of HDR monitors were sobering: differences between the picture with and without HDR could barely be made out. VESA only specified high-contrast reproduction for PC monitors at the end of 2017, in the DisplayHDR standard.
The next step in increasing contrast comes from mini-LEDs, which are distributed evenly across the back of the display instead of along its edge and can be controlled individually. Asus led the way with its 32-inch ProArt monitor, packed with countless tiny LEDs for full-array local dimming (FALD); at the beginning of 2020 Apple followed in this country with the Pro Display XDR. The prices of these high-end monitors, between 3000 and 6000 Euro, are on a par with what buyers had to put on the table for the first LCD monitors in the mid-1990s.
Outlook: LEDs or OLEDs?
Mini-LED backlights will eventually become mainstream in monitors. The alternative would be organic displays, but OLEDs will probably not appear in monitors for the foreseeable future: scaling to smaller diagonals is proceeding only very hesitantly, even in the TV sector.
Real LED displays, in which each pixel is realised with three light-emitting diodes, could on the other hand first find their way into small, very special and therefore very expensive monitors. Until then, LCD technology will continue to dominate the world of monitors for a long time to come.
The next generation of Xbox gaming is a little more complicated than what we're used to. For starters, Microsoft has released not one but two new consoles this week: the Xbox Series X and the Xbox Series S. Many of the initial crop of first-party games are also designed to be playable on its last-generation Xbox, the Xbox One, as well as Windows PCs. And that's before we get into Microsoft's game streaming service, xCloud, which could mean you won't need any Xbox hardware at all to play many of the latest games.
Each new generation tends to deliver big changes for console gaming, and Microsoft's successors to the Xbox One are no different. Games look better thanks to more powerful graphics hardware and built-in support for more realistic lighting technology, and in some cases feel more responsive thanks to support for frame rates of up to 120fps. They also load quicker because both consoles now include fast solid-state storage, a big improvement over the mechanical hard drive included in the Xbox One.
But Microsoft's approach to this new generation is a big departure from how console launches have worked previously. Typically, we've seen Sony and Microsoft release just one new piece of hardware at launch, and each one tends to come with an exclusive library of games that you have to buy the new console in order to play. While Sony, too, has operated a game streaming service for years, it has typically only used PlayStation Now to offer access to older titles, rather than brand-new releases like xCloud is promising.
Microsoft’s new consoles give you a lot more freedom with how you play its new games, but depending on where you choose to play them, you won’t get exactly the same experience. The Xbox Series X is a much more powerful machine than the Series S or the current Xbox One, for example, which has a big impact on performance.
Microsoft’s two new consoles
This week, Microsoft released its two new Xbox consoles. There’s the $499 (£449, €499) Xbox Series X, and a cheaper $299 (£249, €299) Xbox Series S. You can read our reviews of both of them by following the links below.
It’s not unusual for console manufacturers to offer a couple of different hardware options at launch, but normally, the differences are minor. The PS3, for example, was initially available in two models. There was a version with a 60GB hard drive as well as a cheaper version with a smaller 20GB hard drive, no Wi-Fi support, and fewer ports. Meanwhile, Microsoft also originally sold a “Core” version of the Xbox 360 in 2005, which included compromises like including a wired rather than wireless controller and omitting a hard drive.
The differences between the Xbox Series S and Series X are more substantial and have a big impact on how games look. While Microsoft says the Series X is targeting running games at 60fps at a full 4K resolution, the Series S instead targets a lower 1440p resolution at 60fps. It’s a big power disparity, similar to what we saw between the Xbox One and the Xbox One X, but this time, the two consoles were available on day one, rather than releasing years apart.
Microsoft has a good rundown of the main differences between the Xbox Series X and the Series S on its website. Both have 8-core CPUs, although the X has a slightly higher maximum clock speed of 3.8GHz, rather than 3.6GHz on the Series S. Both support expandable storage of up to 1TB via an expansion card, both output over HDMI 2.1, and both are backwards compatible with “thousands” of Xbox One, Xbox 360, and original Xbox games. Both support hardware-accelerated ray tracing for more realistic lighting in games, both support Dolby’s high-end Atmos audio technology, and both will support the Dolby Vision HDR standard. They’re also both backwards compatible with all officially licensed Xbox One accessories like controllers and headsets — although there are no plans to support the Kinect camera.
There are, however, big differences between the two. The Series X has a 4K Ultra HD Blu-ray drive, but the Series S is digital-only, so you'll have to download your games rather than buy them on disc. And yet, the disc-based X also has double the amount of internal storage with 1TB as opposed to 512GB. We found the storage in the Series S filled up quickly as a result. The Series X also has more RAM at 16GB compared to 10GB in the Series S. Physically, the Series S is also a lot smaller than the Series X; Microsoft calls the console its "smallest Xbox ever." Despite the size differences, we've found both consoles have good cooling systems and run cool and quiet when in use, so long as you don't try blowing vape smoke into them.
Although they have different amounts of storage, both consoles use fast solid-state drives. For starters, that means that games load very quickly. We’ve found that many games that took over a minute to load on the Xbox One X now boot up in seconds. Games like Destiny 2 and Sea of Thieves, for example, load in half the time on the Series X as they did on the One X, and we found The Outer Worlds loaded in just six seconds on the new console.
Xbox Series X load times
Game | Xbox Series X | Xbox One X
CoD: Warzone | 16 seconds | 21 seconds
Red Dead Redemption 2 | 52 seconds | 1 min, 35 seconds
The Outer Worlds | 6 seconds | 27 seconds
Evil Within 2 | 33 seconds | 43 seconds
Sea of Thieves | 20 seconds | 1 min, 21 seconds
Warframe | 25 seconds | 1 min, 31 seconds
AC: Odyssey | 30 seconds | 1 min, 7 seconds
No Man's Sky | 1 min, 27 seconds | 2 mins, 13 seconds
Destiny 2 | 43 seconds | 1 min, 52 seconds
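Averaging the table above gives a feel for the overall gain; this little sketch (our own arithmetic on the measured times) works out the per-game speed-up:

# (Series X, One X) load times in seconds, from the table above.
load_times = {
    "CoD: Warzone": (16, 21), "Red Dead Redemption 2": (52, 95),
    "The Outer Worlds": (6, 27), "Evil Within 2": (33, 43),
    "Sea of Thieves": (20, 81), "Warframe": (25, 91),
    "AC: Odyssey": (30, 67), "No Man's Sky": (87, 133), "Destiny 2": (43, 112),
}
speedups = [one_x / series_x for series_x, one_x in load_times.values()]
print(f"Series X loads these games ~{sum(speedups) / len(speedups):.1f}x faster on average")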
This fast storage also helps enable a feature called Quick Resume on both consoles, which allows you to switch between games incredibly quickly in a lot of cases. The big problem right now is that it’s not supported by every game, although Microsoft is working to enable it across more titles. When it works, though, Quick Resume is one of the consoles’ best new additions, and Sony’s PS5 doesn’t have an equivalent feature.
One of the most significant differences between the Series S and Series X is found in the graphics department. Although both consoles use AMD’s RDNA 2 graphics architecture, the Series X has 52 compute units. That’s not only more than double the 20 compute units you’ll find in the Series S, but they’re also clocked faster at 1.825GHz compared to 1.565GHz. In total, that means the Series X has 12.15 teraflops of graphical horsepower according to Microsoft, compared to 4 teraflops for the Series S.
The Xbox Series X is technically a shade more powerful than the PS5 in the graphics department. While Sony’s consoles are also based on AMD’s RDNA 2 architecture, both models of the PS5 clock in with 10.28 teraflops of GPU power. They have fewer compute units (36), but their maximum clock speed is higher at 2.23GHz. They also have 8-core CPUs, clocked at 3.5GHz. However, it’s important to note that the PS5’s CPU and GPU clock speeds are variable based on the total workload, so it’s not quite an apples-to-apples comparison with the new Xbox consoles. This approach could benefit the PS5 in certain scenarios but limit it in others. Otherwise, the PS5’s specs on paper are similar to the Series X. It has 16GB of RAM, 825GB of storage, and a 4K Ultra HD Blu-ray drive.
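Those headline teraflop figures are straightforward arithmetic. Here’s a rough sketch, assuming the standard RDNA 2 layout of 64 shader cores per compute unit, each performing two floating-point operations per clock; those per-CU figures are background knowledge rather than something quoted in this article.

```python
# FP32 throughput = compute units x 64 shaders per CU x 2 FLOPs per clock x clock speed (GHz)
def teraflops(compute_units: int, clock_ghz: float) -> float:
    return compute_units * 64 * 2 * clock_ghz / 1000  # divide by 1000 to convert GFLOPS to TFLOPS

print(f"Xbox Series X: {teraflops(52, 1.825):.2f} TFLOPS")  # ~12.15
print(f"Xbox Series S: {teraflops(20, 1.565):.2f} TFLOPS")  # ~4.01
print(f"PlayStation 5: {teraflops(36, 2.23):.2f} TFLOPS")   # ~10.28, at its maximum clock
```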
There aren’t many cross-platform titles that allow us to see how the performance of the PS5 and Series X compare in practice, but an analysis of Devil May Cry 5 by Digital Foundry sees Sony and Microsoft’s consoles performing very similarly. In some modes the Series X offers slightly faster performance, while the PS5 is ahead in others.
Like Microsoft, Sony also has a step-down digital-only version of its next console, but here, the differences are a lot more basic. The lack of a disc drive means that the digital console is a little slimmer, but otherwise, PlayStation CEO Jim Ryan tells CNET that its two consoles are “identical products.” That means we shouldn’t see the same power disparity as Microsoft has.
Xbox Series X vs Series S vs PS5

| Categories | Xbox Series X | Xbox Series S | PS5 | PS5 (digital-only) |
| --- | --- | --- | --- | --- |
| CPU | 8-core AMD Zen 2 CPU @ 3.8GHz (3.6GHz with SMT enabled) | 8-core AMD Zen 2 CPU @ 3.6GHz (3.4GHz with SMT enabled) | 8x Zen 2 cores @ 3.5GHz with SMT (variable frequency) | 8x Zen 2 cores @ 3.5GHz with SMT (variable frequency) |
| Backwards compatibility | “Thousands” of Xbox One, Xbox 360, and original Xbox games; Xbox One accessories | “Thousands” of Xbox One, Xbox 360, and original Xbox games; Xbox One accessories | “Overwhelming majority” of PS4 games | “Overwhelming majority” of PS4 games |
| Disc drive | 4K UHD Blu-ray | None | 4K UHD Blu-ray | None |
| Display out | HDMI 2.1 | HDMI 2.1 | HDMI 2.1 | HDMI 2.1 |
| MSRP | $499/£449/€499 | $299/£249/€299 | $499/£449/€499 | $399/£349/€399 |
The difference in power generally means early Series S and Series X games run at different resolutions but often perform similarly. For example, Watch Dogs: Legion targets 4K at 30fps on the Series X and 1080p at 30fps on the Series S, and both versions support ray tracing for better-looking reflections (check out both in action here).
Similarly, Sea of Thieves and Forza Horizon run at 60fps at 1080p on the Series S, compared to 4K 60fps on the Series X.
Despite the differences in resolution, Microsoft says both consoles target 60 frames per second and can support up to 120fps. Rocket League, for example, will have a performance mode on both consoles that allows it to run at 120fps, albeit at a reduced resolution compared to its 60fps mode. That said, some games target different frame rates across the two consoles: Destiny 2’s Crucible mode can run at 120Hz on the Series X, but not on the Series S.
For now, however, the trend has been for games to feel just as smooth to play regardless of the console, but to look less detailed on the cheaper machine because of the lower resolution. That might not matter much if you’re playing on an older 1080p TV, but it’ll be more apparent on a modern 4K set.
Although Microsoft has said the Series S targets 1440p, some early Series S games are running at 1080p. Yakuza: Like a Dragon and Gears Tactics target 1440p, but others like Sea of Thieves, Forza Horizon 4, Fortnite, and Watch Dogs: Legion are 1080p. That may change as developers get more comfortable working with the new hardware, but based on past experience it might not. For example, Microsoft billed the Xbox One X as being capable of 4K gaming at 60fps but many of the most popular games around didn’t run at full 4K. Fortnite, for example, runs at a maximum of 1728p on the Xbox One X, while Doom: Eternal tops out at 1800p.
Although your existing Xbox One controllers will work on the Xbox Series X and Series S, there’s also an updated controller for the new consoles, which is available in white, black, and blue. Although it’s broadly similar to the design Microsoft has used for its previous controllers, it’s slightly smaller and has a dedicated share button to simplify the process of uploading screenshots and video clips. Its D-pad is also a circle like the recent Xbox Elite Series 2 controller, rather than a cross like it was on the Xbox One.
New games, new hardware
New hardware needs new games to make the most of it, and Microsoft and its partners have announced a host of games that are coming to its new console. The biggest of these is Halo: Infinite, the latest entry in the long-running sci-fi first-person shooter franchise that’s become synonymous with the Xbox brand since its debut way back in 2001.
Unfortunately, Microsoft recently delayed Halo: Infinite, meaning it will now release in 2021, rather than arriving alongside the new console. News of the delay, which Microsoft attributed in part to the pandemic, came after the game’s visuals were met with criticism after their initial unveiling, prompting developer 343 Industries to admit, “We do have work to do to address some of these areas and raise the level of fidelity and overall presentation for the final game.”
With other Xbox staples like Fable and Forza Motorsport without release dates, the delay has left third-party publishers to fill in the rest of the launch lineup, including Assassin’s Creed Valhalla, Dirt 5, Watch Dogs Legion, and Yakuza: Like a Dragon. Here’s a guide to the best launch day games, and here’s what the months ahead are looking like in terms of new releases.
These games support different Xbox Series X and Series S features. Watch Dogs: Legion, for example, runs in 4K on the Series X and supports ray tracing for more realistic-looking lighting on both consoles, but there’s no ray-tracing support in Assassin’s Creed Valhalla. Another interesting title in the launch lineup is Dirt 5, which can run at up to 120fps on the Xbox Series X. A high frame rate like this is especially important in a fast-paced racing game, and it means Dirt 5 feels more responsive to play on compatible TVs.
One common feature a lot of these games share is that they’ll also be available for current-gen consoles like the Xbox One and PS4. What was more surprising was when Microsoft said that would be true even for its own flagship games. If Microsoft keeps that promise, it would be a big departure from how console manufacturers have treated their flagship games in the past, when exclusives have been an essential part of the sales pitch for new hardware.
New games, old hardware
Microsoft has said you won’t have to buy new hardware to enjoy its upcoming first-party titles because many of them will also come to Xbox One. Here’s how Xbox chief Phil Spencer described the company’s approach back in July, where he said that every Xbox Game Studios game in the next couple of years will be playable on the Xbox One.
You won’t be forced into the next generation. We want every Xbox player to play all the new games from Xbox Game Studios. That’s why Xbox Game Studios titles we release in the next couple of years—like Halo Infinite—will be available and play great on Xbox Series X and Xbox One. We won’t force you to upgrade to Xbox Series X at launch to play Xbox exclusives.
And if you’re more of a PC gamer and don’t own an Xbox One, then Microsoft also typically releases its major titles there as well, and it says it plans to continue this policy this year.
There are some caveats you should be aware of. First is that these promises only cover Microsoft’s first-party titles, aka those published by Xbox Game Studios. Microsoft isn’t making any promises about how other publishers like EA, Ubisoft, or Activision will handle their new games.
Even then, Microsoft has been pretty explicit about the fact that this only covers its own games that will release across the “next couple of years,” and there are signs that some high-profile games that have already been announced might not be coming to the Xbox One. After Microsoft’s high-profile Xbox event in July, we noted that a majority of the title cards for Microsoft’s first-party games, including Forza Motorsport and Fable, didn’t mention that they’d be coming to the Xbox One.
Finally, in case this wasn’t obvious, you’re probably going to see a very different-looking game if you’re choosing to play on a base Xbox One from 2013 compared to a shiny new Xbox Series X.
There’s even been some concern that trying to continue supporting the Xbox One could hold back Microsoft’s next-generation games, which could give Sony an advantage since it can focus all of its attention on the new hardware. Spencer, as well as developers we’ve spoken to, has said this shouldn’t be a problem, but so-called “cross-gen” games have never made the most of the latest hardware in previous generations.
New games, no hardware
Say you don’t own an Xbox or a gaming PC, but you do have an Android phone. Does Microsoft have any next-gen gaming options for you? Thanks to game streaming, it does. On September 15th, Microsoft added game streaming to Xbox Game Pass Ultimate, which costs $14.99 a month. The feature, previously known as xCloud, could give you a way to play many of the biggest Xbox Series X games without owning any gaming hardware at all. You can stream them to a device as simple as an Android phone, for example (but not iOS, which we’ll get into in a second).
Game streaming isn’t an entirely new idea — Sony launched its PlayStation Now service way back in 2014 to a muted response — but Microsoft is taking a much more interesting approach. Rather than focusing on older titles, as Sony did with PlayStation Now, Microsoft says its new games will be available to stream the day they release and lists recent first-party titles like Forza Horizon 4, Gears of War 5, Tell Me Why, The Outer Worlds, and Ori and the Will of the Wisps as being among the 150-plus games available to stream at launch.
There are currently a couple of compromises to this approach, as we found recently when we tested the service for ourselves. For starters, load times and lag are noticeable, and worse than competing cloud gaming services from Google and Nvidia. Getting into gameplay can take between a minute and a minute and a half, and fast-paced games can feel sluggish. Microsoft says that the servers powering the service will be upgraded to Series S/X hardware next year, but as it stands the service feels unfinished.
xCloud also currently isn’t available on every platform. At the moment, xCloud is available for Android, but the restrictions Apple places on game streaming services mean that it’s yet to come to iOS. That should change next year, however, since Microsoft is planning to develop a web version of the service that will be able to run on Apple’s devices.
Since xCloud will be included with an Xbox Game Pass Ultimate subscription, it’s offered alongside a huge array of content beyond game streaming. Xbox Game Pass Ultimate’s $14.99-a-month subscription also lets you download and play over 100 games directly on your Xbox or Windows 10 PC and includes EA Play. It also includes an Xbox Live Gold subscription, which gives access to online multiplayer on Xbox.
PlayStation Now is still around, of course, but Sony isn’t promoting it as a way to play its recent games. It might have a huge catalog of over 800 titles, but it doesn’t feel like a serious attempt to compete with Microsoft’s game streaming, even after a recent price cut to $9.99 a month.
The backwards compatibility question
The ability to play a previous generation’s games on your new hardware (so-called “backwards compatibility”) has varied between different consoles and generations. Nintendo’s Wii U could happily play every Wii game, and the Wii could play every GameCube game before it. In contrast, the PS4 can’t natively play any games that were released for previous PlayStations — although some can be streamed via PlayStation Now.
With its new consoles, Microsoft has outlined three ways your old games will eventually be playable on its new hardware. Some games will be backwards compatible, some will receive enhancements, and others will receive a free upgrade when newer versions are released.
With the Xbox Series X, Microsoft is making big promises about your ability to play your old Xbox games on its new hardware. For starters, “thousands” of games released for the original Xbox, Xbox 360, and Xbox One are playable on the new consoles, and Microsoft has got a handy tool to let you browse them all. That includes almost every game released for the Xbox One, barring those that required its Kinect camera accessory.
The Xbox Series S can still play older games, but it doesn’t include their Xbox One X enhancements like higher resolutions. So in most cases, you’ll essentially be playing the version of the game that was designed for the less-powerful Xbox One S. That said, in some cases those older games can still benefit from more modern hardware such as the faster solid-state drive, and games with dynamic resolution scaling can run at higher resolutions. Backwards compatible original Xbox and Xbox 360 games run at an enhanced 1440p resolution.
That’s the baseline, but in some cases, Microsoft says that games will be enhanced, running at higher resolutions and frame rates than they originally shipped with, and with support for new technologies like HDR. In particular, Microsoft says games can be updated to run at double their original frame rate on both the Series S and Series X. We’ve already seen Microsoft achieve impressive results with some of this technology.
Finally, there’s Smart Delivery, which is essentially a free upgrade program that means you won’t have to re-buy an Xbox One game — like Assassin’s Creed: Valhalla, Cyberpunk 2077, or Doom Eternal — if it also gets released on the new hardware. Although this will theoretically offer the biggest upgrade, the feature is being selectively used. If you previously bought the original Control for Xbox One, for example, you won’t get a free upgrade to the next-gen version. That’s reserved for owners of Control’s new Ultimate Edition.
Sony has promised more modest improvements for PS4 games running on the PS5. It’s confirmed that the “overwhelming majority” of PS4 games will run on its new hardware, and says that some will have better loading speeds and more stable frame rates. Some developers have said they’ll offer free upgrades to the PS5 versions of their games.
Paying the price
If you want to continue to pay for your hardware and games upfront, that’s still an option with Microsoft’s new Xboxes. As mentioned above, the Xbox Series X retails for $499, while the Series S costs $299. Major releases, meanwhile, seem to be priced similarly to current-gen titles or at a $10 premium. The PS5 costs $399 for its disc-free model and $499 for the model with a 4K Blu-ray drive.
But going into this generation, Microsoft is making a big bet on people wanting to spend their money on games in monthly installments. For the Xbox Series X, that means paying $34.99 a month for 24 months via its Xbox All Access bundle (total cost: $839.76), while the Series S is available for $24.99 a month (total cost: $599.76). All Access will be available in 12 countries this year: Australia, Canada, Denmark, Finland, France, New Zealand, Norway, Poland, South Korea, Sweden, the UK, and the US.
That’s more expensive than buying the console upfront, but Xbox All Access includes Xbox Game Pass Ultimate, a subscription service that gets you access to over 100 Xbox One titles, including big recent releases like Tell Me Why, Ori and the Will of the Wisps, and Forza Horizon 4. It also bundles in free games via EA Play, Xbox Live Gold (a subscription that comes with its own monthly free games as well as access to online multiplayer), and game streaming via xCloud. Oh, and it gives access to over 100 Windows 10 games as well, such as the recently released Microsoft Flight Simulator.
If you’d rather buy your hardware outright and pay for one of Microsoft’s game services separately, Xbox Game Pass is available in a couple of different variations. Subtract the cost of those subscriptions from Xbox All Access’s monthly payments, and the console hardware itself works out to just $10 or $20 a month (see the quick calculation below).
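As a sanity check on those figures, here’s a minimal sketch of the math using only the US prices quoted in this article; the variable names are ours, not Microsoft’s.

```python
# Xbox All Access: 24 monthly payments that bundle the console with Xbox Game Pass Ultimate.
GAME_PASS_ULTIMATE = 14.99  # standalone monthly price of Xbox Game Pass Ultimate
MONTHS = 24

for console, monthly, upfront in [("Series X", 34.99, 499), ("Series S", 24.99, 299)]:
    total = monthly * MONTHS                        # what you pay over two years
    hardware_share = monthly - GAME_PASS_ULTIMATE   # what's left once the bundled subscription is accounted for
    print(f"{console}: {total:.2f} total, or about {hardware_share:.2f}/month for the hardware "
          f"(vs {upfront} upfront)")
```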
Xbox Game Pass comparison

| Categories | Xbox Game Pass Ultimate | Xbox Game Pass | Xbox Game Pass for PC |
| --- | --- | --- | --- |
| Platform | PC/Xbox | Xbox | PC |
| Games included | 250+ games | 250+ games | 200+ games |
| Xbox Live Gold | Yes | No | No |
| xCloud | Yes | No | No |
| EA Play | Yes | No | Yes |
| Monthly price | $14.99/£10.99/€12.99 | $9.99/£7.99/€9.99 | $9.99/£7.99/€9.99 |
Suffice it to say, if you don’t have the cash to make a big upfront purchase, then Microsoft still wants to get you on board for its next generation of consoles. You won’t own any of the games you can play (aside from the older Xbox 360 games you can download with Xbox Live’s Games with Gold service), but that’s the trade-off you make.
Microsoft’s plans for the next generation of gaming are sprawling. Two consoles that are available via subscription and can play a huge chunk of your existing Xbox games, a new roster of games that will be playable on your existing Xbox One, a continuing focus on PC gaming, and a game streaming service mean that, no matter what hardware you own, there’s a decent chance you’ll be able to pay Microsoft to play its games.
We’ve written before about how the focus on trying to sell subscriptions rather than premium hardware means that the “true next-gen Xbox” is the subscription itself, rather than the hardware it plays on. Microsoft is casting its net wide, and it doesn’t want any hardware requirements to get in the way of you subscribing.
Sony, meanwhile, is doing what it’s always done: it’s making a new console, developing exclusive games for it, and selling it. It’s hard to argue too much with the approach when it’s done so well for the company so far, especially with the PS4.
As of this writing, the PS4 has reportedly outsold the Xbox One by a factor of over two to one, so it’s hard to see why Sony would want to change its strategy too much. Microsoft is coming into this next generation as an underdog, and it’s doing everything in its power to change the rules of the game.
Update November 12th, 1:30PM ET: Added hands on impressions now that the Xbox Series S and Series X have launched.
Correction: An earlier version of this article stated that the PS5 will have 16GB of GDDR5 RAM. This is incorrect. It actually has 16GB of GDDR6 RAM.
Get ready for a whole new generation of USB. In 2019, the USB Promoter Group announced a new standard, “USB4” (the official spelling lacks a space, but we’re using one in this article to reflect the way readers search). New standards take time to catch on, though, and here in late 2020 we’re still waiting for devices to arrive. However, with current Tiger Lake laptops offering USB 4 support and Apple announcing built-in USB 4 for its upcoming M1 Arm-based laptop chips, a new generation of peripherals is likely around the corner.
USB 4 promises a host of benefits that include faster transfer speeds, better management of video and optional compatibility with Thunderbolt 3.
In a world where there are four different versions of USB 3.2, two types of USB 3.1 and a host of connector types and power specs, the idea of a new standard might seem overwhelming. However, there’s also a lot to look forward to. Here’s everything you need to know about USB 4.
Main Benefits of USB 4
The new USB 4 standard will have four main benefits over prior versions of USB.
40 Gbps Maximum Speed: By using two-lane cables, some devices will be able to operate at up to 40 Gbps, the same speed as Thunderbolt 3.
DisplayPort Alt Mode 2.0: USB 4 will support DisplayPort 2.0 over its alternative mode. DisplayPort 2.0 can support 8K resolution at 60 Hz with HDR10 color.
Compatible with Thunderbolt 3 devices: Some, but not necessarily all USB 4 implementations will also work with Thunderbolt 3 devices.
Better Resource Allocation for Video: If you’re using a USB 4 port to transport both video and data at the same time, the port will allocate bandwidth accordingly. So, if the video only needs 20 percent of the bandwidth to drive your 1080p monitor that’s also a hub, the other 80 percent will be free for transferring files from your external SSD.
Will Use Type-C Ports
This almost goes without saying: USB 4 will only operate over the Type-C connector. Don’t expect to see a USB 4 device or hub with old-fashioned Type-A ports. That’s no surprise, as other recent standards such as USB Power Delivery only work on Type-C. If you do connect to, for example, a Type-A 3.2 port by using an adapter, the speed and power will drop down to the lowest common denominator.
Compatible With Thunderbolt 3, Sort-Of
Intel made news when it said it had given the Thunderbolt 3 protocol to the USB Promoter Group, allowing devices with USB 4 ports to potentially be compatible with Thunderbolt 3 devices, and USB 4 devices to attach to Thunderbolt 3 ports. That’s good news for everyone, especially laptop users who want to play games by connecting an eGPU (external graphics card).
Though there are a number of Thunderbolt 3 eGPUs out there, few laptops and desktops come with Thunderbolt 3 and almost no motherboards support Thunderbolt 3 out of the box. Because Thunderbolt is an Intel standard, you won’t find it on any AMD-powered computer. Thunderbolt 3 is also more expensive to implement than standard USB, because it’s not an open standard and it requires an extra chip. So today, if you want an eGPU or a super-speedy Thunderbolt 3 storage drive, your choice of computer is very limited.
With USB 4, device and host manufacturers won’t have to pay Intel any royalties so there’s a much better chance of mass adoption. However, there’s a catch: Thunderbolt compatibility is not a required part of the USB 4 spec so manufacturers don’t have to implement it. You could end up buying a laptop with USB 4 and find that it doesn’t work with, say, your Razer Core X graphics dock. However, USB Promoter Group CEO Brad Saunders anticipates that most PCs with USB 4 will be made to work with Thunderbolt 3.
“We do expect PC vendors to broadly support Thunderbolt backward compatibility, because most of what they need is already built into the USB 4 design,” Saunders said. “It’s based on the same technology so we do anticipate a high rate of adoption there, but the phone guys will probably choose not to add the extra little bit they need to be backward compatible.”
Though Saunders is optimistic about PC manufacturers adding Thunderbolt 3 compatibility to their USB 4 ports, we should note that any device which wants to market itself as Thunderbolt 3 compatible will probably need to be certified by Intel. Today, any Thunderbolt 3 product has to undergo a rigorous validation process that costs money.
Three Speeds of USB 4
Though it can hit theoretical speeds of up to 40 Gbps, not all USB 4 devices or hosts will support that maximum. Saunders told us that there will be three speeds: 10 Gbps, 20 Gbps and 40 Gbps. Expect smaller, less-expensive devices such as phones and Chromebooks to use one of the lower speeds, and when you do buy a laptop, make sure to check the specs if you want the fastest USB 4 connection available.
Great at Sharing Bandwidth Between Video and Data
A big part of the USB 4 spec is the ability to dynamically adjust the resources that are available when you’re sending both video and data over the same connection. So, let’s say you have a USB 4 connection with a 40 Gbps maximum and you’re outputting to a 4K monitor while copying a ton of files from an external SSD. And let’s stipulate that the video feed needs about 12.5 Gbps. In that case, USB 4 would allocate the remaining 27.5 Gbps to your backup drive.
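Here’s a toy illustration of that allocation, just to make the arithmetic concrete; the function and the fixed 40 Gbps link are our own simplification, not anything defined by the USB 4 spec.

```python
# Illustrative only: split a USB 4 link's bandwidth between a video stream and bulk data.
LINK_GBPS = 40.0

def remaining_for_data(video_gbps: float, link_gbps: float = LINK_GBPS) -> float:
    if video_gbps > link_gbps:
        raise ValueError("video stream exceeds the link's capacity")
    return link_gbps - video_gbps

print(remaining_for_data(12.5))             # 27.5 Gbps left for the external SSD in the example above
print(remaining_for_data(0.2 * LINK_GBPS))  # 32.0 Gbps when video only needs 20 percent of the link
```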
USB-C introduced “alternative mode,” the ability to transmit DisplayPort / HDMI video from a Type-C port, but the current 3.x spec doesn’t provide a good way to split up resources. According to Saunders, DisplayPort alt mode can split the bandwidth between USB data and video data exactly 50/50, and HDMI alt mode doesn’t allow simultaneous USB data at all.
“With USB SuperSpeed, we didn’t have quite the flexibility in architecture to really manage those two distinct bandwidths [data and video] in a combined fashion over the connector,” Saunders said. “So this is really optimized for more scalability between the different application types.”
Rather than using alternative mode, many current docking stations use DisplayLink technology, which compresses the video signal and turns it into USB data. It will be interesting to see whether most USB 4 docks use alternative mode instead.
All USB 4 Hosts Support USB PD
While some current-day USB Type-C devices support the USB Power Delivery (USB PD) standard for carrying more electricity to high-powered devices, not all of them do. Every USB 4 device and host will have to comply with USB PD, which allows for higher wattages and better power management.
USB PD can theoretically provide up to 100 watts, but charging devices do not have to support that amount of power. So there’s no guarantee that a given USB 4 port would give or take the amount that a particular notebook requires to operate, but you can expect it to follow the spec.
Backward Compatible With Older Devices
The best thing about all generations of USB is how well they work together. USB 4 will work with USB 3 and USB 2 devices and ports. It should go without saying, though, that you’ll only get the speed and capabilities of the weakest part of your connection. A USB 4 device won’t be able to transfer at 40 Gbps when you hook it to a USB 3.2 port and an old-school USB 2 port won’t suddenly get faster just because you connect it to a brand new USB 4 backup drive.
Your Old Cables Will Work At Their Maximum Speeds
Your existing USB cables and adapters will work with USB 4, but as with everything else that’s backward compatible, they will only operate at their maximum rated speeds. So, if you have a USB 3.2 cable that can operate at 5 Gbps, you’ll only get up to 5 Gbps, even if you are using it to connect a USB 4 port to a USB 4 device. To get Thunderbolt 3 support, you’ll likely need a Thunderbolt 3 cable.
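The lowest-common-denominator rule is easy to picture as code. This is purely illustrative, since real link speeds are negotiated in hardware, but the effective speed is simply the minimum of what the host, the device, and the cable each support:

```python
# Illustrative only: the effective USB link speed is capped by the slowest element in the chain.
def effective_speed_gbps(host_gbps: float, device_gbps: float, cable_gbps: float) -> float:
    return min(host_gbps, device_gbps, cable_gbps)

# A USB 4 host and a USB 4 drive connected with an older 5 Gbps USB 3.2 Gen 1 cable:
print(effective_speed_gbps(40, 40, 5))  # 5 Gbps, because the cable is the bottleneck
```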
Coming in 2021
The USB Promoter Group released the spec for USB 4 in September 2019 (you can find it here), but don’t expect to see any products based on the standard until late 2020 or early 2021. Saunders told us that a typical development cycle for new products is 12 to 18 months.
When it comes to USB 4-enabled laptops and desktops, even 18 months seems optimistic. The spec for Type-C was announced in 2014, and it took a really long time for USB Type-C to go mainstream; many laptops still don’t have it.
Will Cost More to Manufacture Than USB 3.2
Another hurdle for mass adoption is the added cost of USB 4. While we don’t know exactly how much it will cost for PC and device vendors to add USB 4 connectivity, we know that it will require more expensive components than the latest current standard, USB 3.2.
“I think it’s going to be something less than Thunderbolt, but not as inexpensive as SuperSpeed in terms of the actual material cost to the product developer,” Saunders said. “It takes a lot of gates to do it and the product still does all the SuperSpeed stuff.”
Saunders added that he hopes the costs will come down quickly. However, we’d guess that the cost differential will push USB 4 onto higher end PCs, at least at first.
Why USB 4 is Officially Spelled as “USB4” (No Space)
Unlike every other version of USB, the new spec is officially spelled without a space before the version number. While we think that most people will probably write it as USB 4, the official name is USB4. USB Promoter Group CEO Brad Saunders explained that his goal in removing the space was to take the focus off of version numbers and onto a brand name.
“One of the things I’m trying to signal right now is that we don’t plan to get into a 4.0, 4.1, 4.2 kind of iterative path,” he told us. “And we don’t want it to be associated and used with products as a differentiator . . . we want to keep it as simple as possible.”
The USB 3.x spec has been filled with different version numbers, including USB 3.0, USB 3.1 Gen 1, USB 3.1 Gen 2 and four different versions of USB 3.2, in addition to the presence or absence of optional features such as USB PD and alternate mode. But Saunders told us that those numbers are really for developers and he wishes that OEMs would use simpler terms like “SuperSpeed USB” when marketing their products.
Perhaps because of his concern about marketers throwing too many digits at consumers, Saunders said the organization does not plan to use version numbers for spec updates. So, even if there’s a faster iteration in two years, it will likely still be called USB 4 but with the speed number after (we imagine something like USB 4 80 Gbps). He and his team still haven’t decided on a branding strategy, so there may also be a marketing name for USB 4. Much like USB 3.x is known as “SuperSpeed USB,” USB 4 could end up with its own moniker (we suggest “Super Duper Speed USB”).
“I want it to be a clear distinction. USB 4 is its own architecture with its own set of speeds and try not get trapped on these dot releases for every single speed,” he said. “When and if it goes faster, we’ll simply have the faster version of the certification and the brand.”
Bottom Line
There’s still a lot we don’t know about USB 4. Whatever happens, though, don’t expect to see devices that use it until late 2020 at the earliest, and more likely 2021 and beyond.
The MacBook Pro 13 was the only so-called Pro machine to get the new Apple-designed M1 chip, and it uses the new silicon to outperform its Intel-powered predecessor in more than a few tasks.
The Apple M1, with its 8-core CPU and 8-core GPU, helps the MacBook Pro 13 build code in Xcode up to 2.8x faster than the Intel 13-inch Pro, render a complex 3D title in Final Cut Pro up to 5.9x faster, perform ML tasks up to 11x faster, and play full-quality 8K ProRes video in DaVinci Resolve without dropping a frame. Apple hasn’t specified whether these comparisons are against the quad-core or the dual-core Intel 13-inch MacBook Pro – devices with vastly differing GPU capabilities.
But a much more interesting comparison is between the M1-equipped MacBook Pro 13 and the M1-equipped MacBook Air. Naturally, the chipset is the same, but the Pro has a cooling fan, whereas the Air is fanless. That alone could make a major difference in sustained performance. The Pro also boasts a Touch Bar, a brighter screen (500 vs 400 nits), better stereo speakers, higher (“studio”) quality microphones and better battery life.
Battery life on the MacBook Pro 13 is actually the best on any Apple laptop to date and twice that of the Intel-equipped predecessor. Apple promises 17 hours of web browsing (2 more than the new M1-powered MacBook Air) and 20 hours of Apple TV app movie playback (2 hours more than the Air).
The MacBook Pro 13 with an M1 chip is on pre-order today, starting at $1,299/€1,413 for an 8/256GB model and $1,499/€1,637 for the 8/512GB model. Shipments begin next week.