Last year, AMD released the Ryzen 3000 series desktop processors in one of the most monumental hardware launches of the modern era. This final step completed the Red brand’s ascent back into the forefront of the desktop processor market that began with the launch of the first generation of Ryzen CPUs.
With the Ryzen 3000 launch came the AMD X570 chipset. Featuring PCIe 4.0 support, X570 was an impressive leap from generations past. It was also hot, which led many motherboards to include chipset cooling fans, and, more significantly, expensive. The high cost of the chipset increased the average price of X570 motherboards considerably over previous generations.
While AMD has done a great job of maintaining motherboard compatibility with new generation processors, none of the previous-generation AM4 motherboards featured official PCIe 4.0 support, not even on the M.2 and PCIe slots wired directly to PCIe 4.0-capable CPUs. Enter B550, the more value-oriented little brother of X570. While the B550 chipset is PCIe 3.0 only, B550 motherboards support PCIe 4.0 from the CPU to the primary PCIe slot as well as the primary M.2 slot (dependent on a PCIe 4.0 ready CPU).
The BIOSTAR Racing B550GTQ features dual M.2 slots, each with their own full coverage heatsink, RGB lighting, a Micro ATX form factor, and even an integrated rear I/O cover. Perhaps most interesting, the B550GTQ utilizes 90 A power stages in an APU-optimized configuration. How will the VRM stand up to my Ryzen 9 3900X test CPU? Let’s find out!
Rear I/O:
1x PS/2 keyboard/mouse port
1x HDMI port
1x DisplayPort 1.4
1x DVI-D port
1x LAN (RJ45) port
1x USB 3.2 (Gen2) Type-C port
4x USB 3.2 (Gen1) ports
1x USB 3.2 (Gen2) Type-A port
2x USB 2.0 ports
3x 3.5 mm audio jacks
Audio:
1x Realtek ALC1150 Codec
Fan Headers:
4x 4-pin
Form Factor:
Micro ATX form factor: 9.6 x 9.6 in.; 24.4 x 24.4 cm
A big thank you to Deepcool for supplying the review sample.
Deepcool got its start in 1996 and has since grown into an industry heavyweight. The company’s focus on the enthusiast DIY market has paid off tremendously, with well-regarded cases, power supplies, heatsinks, fans, and all-in-one liquid coolers to its name. Overall, the company has shown an ability to think outside the box, along with a willingness to try new things. With that in mind, it’s no wonder Deepcool has become a popular manufacturer in the DIY PC market.
In today’s review, I look at the Deepcool AS500. The eagle-eyed among you may notice its appearance is similar to the Gamer Storm Assassin III with one fewer cooling tower, and on the surface, you would be correct. However, a closer inspection shows Deepcool has made some notable changes. First, the cooler uses five heatpipes and comes equipped with an ARGB-illuminated top plate. For those wanting to minimize noise from their system, the fan has a very quiet RPM profile. That said, a quiet cooler doesn’t have to mean poor cooling; I have reviewed many exceptional air coolers over the years that can be considered nearly silent. Therefore, considering how well the Assassin III performed, it will be quite interesting to see how the AS500 does. So without further ado, let’s take a closer look at what this latest offering from Deepcool can do.
Material: Aluminium (fins), Copper (heat pipes)
Dimensions: 142 x 75 x 164 mm (with fan)
Heat pipes: Ø6 mm, 5 pcs
Weight: 1030 g
Fan 1:
Model: TF140S (DFr1402512CL)
Dimensions: 140 x 140 x 25 mm
Fan Speed: 500–1200 RPM
Fan Airflow: 70.81 CFM (maximum)
Fan Noise: ≤29.2 dBA
Features:
Single tower heatsink with five heat pipes and high fin density
Slim profile for maximum RAM height compatibility
High-performance TF140S PWM fan included
ARGB lighting via motherboard sync or controller
AMD’s Radeon RX 6800 XT and RX 6800 officially launched one week ago, joining the best graphics cards and GPU benchmarks hierarchy lists. Today marks the arrival of third-party add-in board (AIB) partner cards. Sort of. Much like Nvidia’s RTX 3090, RTX 3080, and RTX 3070, as well as AMD’s Ryzen 5000 series CPUs, all of the RX 6800 series cards are sold out. But we received Sapphire’s Radeon RX 6800 XT Nitro+ just yesterday, and we’re working on running it through our test suite for a full review. Until then, here’s the quick unboxing and look at what Sapphire has to offer.
The card itself is quite large, basically matching the longest cards we normally see. It measures 310 x 134.3 x 55.3 mm (12.2 x 5.3 x 2.17 inches), so you’ll need plenty of clearance in your case. It’s a 2.7-slot design as well, blocking the two adjacent expansion slots on your motherboard.
What’s interesting is that the card actually isn’t all that heavy, relatively speaking. It checks in at 1237g, which is less than AMD’s reference card, as well as the RTX 3080 Founders Edition.
Sapphire changes up plenty of other aspects of the card design as well. It has three DisplayPort and one HDMI 2.1 outputs, with no USB-C connector. Most people will be happier with this configuration, we think, though it’s always good to have other options. The rear IO panel also has plenty of ventilation ports, though with the fins on the heatsink running parallel to the IO bracket, we’re not sure how much heat will actually exhaust out the back of the card.
Sapphire says the new design ends up running quieter while delivering better cooling compared to its previous designs. That’s probably thanks once again to the new fan design with an integrated rim, though there are notches in the Sapphire fan. The benefit of the improved cooling is that Sapphire can increase the boost clock on the RX 6800 XT to 2360 MHz, with a 350W TBP (Total Board Power). Interestingly, MSI Afterburner and Asus GPU Tweak II both report the boost clock as 2409 MHz on our sample, but then neither utility has been updated for the RX 6800 series.
Compared to the reference 6800 XT, you can easily see how much larger the Sapphire card is. The reference card measures 268 x 107 x 50 mm, so Sapphire’s model is about 4cm longer and 2.5cm taller, and just a bit thicker. The reference card also weighs 1505g, so the Nitro+ is 268g lighter. That means less stress on your PCIe slot, though we’ve seen GPUs in the 1.5kg range for several years at least (e.g., Zotac’s Amp! Extreme line).
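As a quick sanity check of the comparison above, the quoted dimensions and weights can be differenced directly (figures are taken from this article; nothing else is assumed):

```python
# Reference RX 6800 XT vs. Sapphire Nitro+ figures quoted above.
ref = {"length_mm": 268, "height_mm": 107, "thickness_mm": 50, "weight_g": 1505}
nitro = {"length_mm": 310, "height_mm": 134.3, "thickness_mm": 55.3, "weight_g": 1237}

length_delta_cm = (nitro["length_mm"] - ref["length_mm"]) / 10  # 4.2 cm longer
height_delta_cm = (nitro["height_mm"] - ref["height_mm"]) / 10  # ~2.7 cm taller
weight_delta_g = ref["weight_g"] - nitro["weight_g"]            # 268 g lighter
```

The Nitro+ ends up roughly 4 cm longer yet 268 g lighter than the reference design, which is the combination that makes it easier on the PCIe slot despite the larger cooler.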
But you’re probably most interested in how the card performs. Unfortunately, that’s several days worth of testing, which means we won’t have a full review until next week. Until then, we can offer this glimpse of performance courtesy of 3DMark.
That’s a bit faster than the reference 6800 XT, as you’d expect considering the difference in TDP and boost clocks. For example, in Port Royal, the reference card scored 9106 in graphics, while the Nitro+ got 9329. But the GPU clocks are more interesting than the score.
The minimum GPU clock during the test sequence on the Sapphire card was 2287 MHz, with a peak clock of 2415 MHz. The reference card ran at 2210-2349 MHz. In general, at ‘factory stock’ settings, you’ll get an extra 100 MHz or so of performance. We’ll be looking to see if we can push the card a bit further in our full review.
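Putting the Port Royal scores and observed clock ranges above side by side gives a rough sense of the factory-overclock uplift (a back-of-the-envelope comparison using only the numbers quoted in this article):

```python
# 3DMark Port Royal graphics scores: reference 6800 XT vs. Sapphire Nitro+.
ref_score, nitro_score = 9106, 9329
score_gain_pct = (nitro_score - ref_score) / ref_score * 100  # ~2.4%

# Observed (min, peak) GPU clocks in MHz during the test sequence.
ref_clocks = (2210, 2349)
nitro_clocks = (2287, 2415)
clock_gain_mhz = tuple(n - r for n, r in zip(nitro_clocks, ref_clocks))  # (77, 66)
```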
Of course, the big story remains the continuing GPU shortages. Many were hoping AMD would do better than Nvidia, but it sounds like stock of the AIB partner cards is even worse than we saw with Ampere. As we noted in our RX 6800 XT review, that’s not really surprising. Given a choice between producing more Ryzen 5000 CPU cores (80 mm² per compute die) and more Navi 21 GPU cores (519 mm²), AMD makes far more money off the CPUs and can produce more of them. Unless TSMC can start producing more wafers for all of its partners, the shortages could continue for many more months.
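The die-economics argument can be sketched as a back-of-the-envelope estimate. This is a deliberately simplified gross-die count per 300 mm wafer; it ignores edge loss, scribe lines, and yield, all of which lower the real numbers, so treat these as upper bounds:

```python
import math

WAFER_DIAMETER_MM = 300

def gross_dies(die_area_mm2: float) -> int:
    """Upper-bound die count: wafer area divided by die area."""
    wafer_area = math.pi * (WAFER_DIAMETER_MM / 2) ** 2  # ~70,686 mm^2
    return int(wafer_area // die_area_mm2)

zen3_ccd = gross_dies(80)   # ~883 compute dies per wafer
navi21 = gross_dies(519)    # ~136 GPU dies per wafer
```

Even at this crude level, one wafer yields more than six times as many CPU compute dies as Navi 21 dies, which is why CPU production wins when wafer supply is constrained.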
Asus’s new beta BIOS updates for B450 and X470 motherboards have arrived with AGESA 1.1.8.0, which enables full performance and support for AMD’s latest Ryzen 5000 series processors on the older chipsets, as well as the new Precision Boost Overdrive 2 functionality, including the Curve Optimizer.
The BIOS update applies to Asus’ entire B450 lineup and X470 lineup, which means if you have any Asus 400-series chipset board, you can grab the new beta BIOS now. But please only apply this BIOS if you have a Ryzen 5000-series CPU, as it is designed specifically for Zen 3 and should not benefit an older CPU like a Zen 2 or Zen+ processor. Plus, you cannot downgrade to an older BIOS like you could in the past, so it’s a one-way trip. If you install this BIOS and it doesn’t work with an older processor, you’ll have to purchase a new motherboard.
With the AGESA 1.1.8.0 update, you should gain access to AMD’s new Curve Optimizer, a per-core undervolting feature that can improve efficiency and boost behavior. Plus, there’s a chance you might have access to AMD’s Smart Access Memory technology, though we aren’t sure if it’s enabled on Asus’s beta BIOS.
It’s great to see this BIOS update in time for the holidays; if you have one of these Asus motherboards and were looking to purchase a Ryzen 5000 CPU, now is probably a good time to do so, provided you can find one at retail amidst the ongoing shortages.
AMD introduced Smart Access Memory (SAM) as an exclusive feature of its 500-series platforms, but ASRock seems to be going against the tide: the technology can be enabled in new firmware for B450 motherboards that adds Ryzen 5000 support.
by Manolo De Agostini, published 25 November 2020 at 14:41 in Motherboards and Chipsets
A few days ago, ASRock published new BIOSes for its B450 motherboards adding support for Ryzen 5000 processors. The choice to update B450 boards before X470 was already strange in itself, but apparently it was not the only surprise: as demonstrated by German YouTuber RawiioliExtras, the new firmware makes it possible to enable Smart Access Memory (SAM).
The enthusiast did it on an ASRock B450 Steel Legend, activating the “Above 4G Decoding” and “Re-size BAR Support” options, present just as they would be on a 500-series motherboard. That is notable because SAM, according to AMD, requires an X570 or B550 motherboard, a Ryzen 5000 CPU, and a Radeon RX 6800 series GPU. Smart Access Memory aims to increase gaming performance (results vary by title, resolution, and detail settings) by exploiting a PCI Express feature called Resizable BAR, which lets the CPU access the entire VRAM of the new Radeon RX cards.
It is unclear whether SAM support on ASRock’s B450 motherboards is an error or intentional; the Taiwanese company has often been willing to push past limits imposed by chip makers, especially when those limits seem artificial. We’ll see whether others follow, or whether AMD convinces ASRock to remove the feature. Finally, recall that AMD is working with Intel to enable the technology on motherboards with Core CPUs, and Nvidia is working with AMD and Intel on a version of SAM compatible with GeForce RTX 3000 GPUs.
Apparently, ASRock has released new firmware for its B450 motherboards to accommodate the latest Ryzen 5000 (Vermeer) processors. German YouTuber RawiioliExtras has discovered that the motherboard manufacturer has also enabled the Smart Access Memory feature. This is important news since AMD only officially supports Smart Access Memory on the latest 500-series motherboards.
AMD’s requirements for Smart Access Memory are pretty simple. You need to own a Ryzen 5000 (Vermeer) processor, a Radeon RX 6000 (Big Navi) graphics card and a 500-series motherboard. However, ASRock’s latest firmware appears to defy the chipmaker’s conditions as the ASRock B450 Steel Legend motherboard is proof that the setting is available outside of 500-series motherboards.
Smart Access Memory isn’t a proprietary technology. In fact, it’s built upon Resizable BAR (Base Address Register), a feature that’s part of the PCIe specification; Smart Access Memory is simply AMD’s branding for it.
Enabling Smart Access Memory is straightforward. According to AMD’s instructions, you just have to enable the “Above 4G Decoding” and “Re-size BAR Support” settings in the BIOS. Both options are present and can be enabled on the ASRock B450 Steel Legend.
ASRock doesn’t go into detail about the new firmware. The previous description said “Optimize system performance with AMD Ryzen 5000 Series Desktop Processors,” while the new description simply states “Optimize system compatibility.” In the case of the ASRock B450 Steel Legend, the latest firmware carries version 3.70 and is dated November 19.
Motherboard | Firmware | Date | Size
ASRock B450M/ac | 2.30 | 2020/11/19 | 9.99MB
ASRock B450 Pro4 | 4.50 | 2020/11/19 | 10.31MB
ASRock B450M Pro4 | 4.60 | 2020/11/19 | 10.31MB
ASRock B450M Pro4-F | 2.40 | 2020/11/19 | 10.00MB
ASRock B450M/ac R2.0 | 2.30 | 2020/11/19 | 9.99MB
ASRock B450M-HDV R4.0 | 4.10 | 2020/11/19 | 10.25MB
ASRock Fatal1ty B450 Gaming K4 | 4.50 | 2020/11/19 | 10.33MB
ASRock Fatal1ty B450 Gaming-ITX/ac | 4.20 | 2020/11/19 | 10.38MB
ASRock B450M Steel Legend | 3.60 | 2020/11/19 | 10.51MB
ASRock B450 Steel Legend | 3.70 | 2020/11/19 | 10.5MB
ASRock B450M Pro4 R2.0 | 4.60 | 2020/11/19 | 10.31MB
ASRock B450 Pro4 R2.0 | 4.50 | 2020/11/19 | 10.31MB
ASRock B450M-HDV | 4.20 | 2020/11/19 | 10.22MB
ASRock’s new firmware for its other B450 motherboards shares the same description and date, so we don’t think it’s a coincidence. It’s plausible the entire ASRock B450 product stack supports Smart Access Memory.
It’s odd that ASRock decided to prioritize its budget motherboards over the more expensive X470 offerings, though it shouldn’t be long until the X470 boards receive corresponding firmware. The question remains whether ASRock’s enablement of Smart Access Memory on the B450 motherboards was intentional or accidental.
The Razer Blade Stealth is confused about what it wants to be. It’s priced as a premium ultraportable and looks like one too. But as a gaming notebook, it’s quite underpowered for the price. While the OLED screen is beautiful, Razer needs to work on the keyboard.
For
Great build quality
OLED screen is gorgeous
Thunderbolt 4 on both sides
Against
Cheaper gaming laptops offer better graphics
Uncomfortable keyboard
There are a few things I can say with certainty about the Razer Blade Stealth ($1,799.99 to start, $1,999.99 as tested). For one thing, it’s built like a tank. Our configuration had a beautiful OLED display, and Razer doesn’t heap on bloatware.
What I can’t tell you, though, is who this laptop is for exactly. Razer calls it a “gaming ultraportable,” and prices it among the best ultrabooks, which are often expensive partially due to build quality. But the mix of an Intel Core i7-1165G7 and Nvidia GeForce GTX 1650 Ti, while capable of playing eSports and even AAA games, is low-end for a laptop of this price. Competing gaming notebooks with far superior graphics performance can be found for less money. To make it more confusing, Razer has announced the Razer Book 13, a non-gaming ultraportable that houses Intel’s Xe integrated graphics.
Among competing ultrabooks, the best gaming laptops, and Razer’s own stack, the Blade Stealth feels more niche than it used to. And yet, somehow, it still gets plenty right.
Design of the Razer Blade Stealth
We last reviewed the Razer Blade Stealth in September with a 10th Gen Intel Ice Lake processor, and the design hasn’t changed a bit in the intervening two months. But for those who aren’t familiar with it, the Stealth is a black aluminum notebook. The lid features the Razer tri-headed snake logo, but at least the company made it black on black, so you can mostly ignore it if it doesn’t fit your style.
The no-frills aluminum build continues when you unfold the laptop. It’s all-black aluminum on the deck, with speakers flanking both sides of the keyboard. That’s the one spot with some color, as the keys are lit with single-zone Chroma RGB. The display is surrounded by a moderate, but inoffensive, bezel.
The left side of the laptop has a Thunderbolt 4 port over USB Type-C, USB 3.1 Gen 1 Type-A and a headphone jack. The right side is the same, minus the headphone jack. I do appreciate that Razer has the ports evenly distributed across the laptop, and you can charge on either side via the Thunderbolt ports.
If you consider the Stealth to be a gaming notebook, it’s small at 3.1 pounds and 12 x 8.3 x 0.6 inches. The Asus ROG Zephyrus G14, a 14-inch gaming notebook, is 3.5 pounds and 12.8 x 8.7 x 0.7 inches, and even that’s petite for a gaming notebook.
But if the Blade Stealth is an ultraportable, then it’s big. The Dell XPS 13 9310 is 2.8 pounds and 11.6 x 7.8 x 0.6 inches, though it does have fewer ports.
Ports | 2x USB 3.1 Gen 1 Type-A, 2x Thunderbolt 4, 3.5 mm headphone/mic jack
Camera | 720p IR
Battery | 53.1 WHr
Power Adapter | 100 W
Operating System | Windows 10 Home
Dimensions (WxDxH) | 12 x 8.3 x 0.6 inches / 304.6 x 210 x 15.3 mm
Weight | 3.1 pounds / 1.4 kg
Price (as configured) | $1,999.99
Gaming and Graphics on the Razer Blade Stealth
The Razer Blade Stealth comes with an Nvidia GeForce GTX 1650 Ti for gaming. To put it flatly, that’s not going to get you strong performance outside of some eSports titles, unless you’re willing to bring down your settings.
I tested out the Stealth playing a few rounds of Rocket League, an eSports title that matches the type of game one should most expect to play on this laptop. In a round, I saw frames fluctuate between 177 and 202 frames per second (fps) on high quality mode at 1080p resolution. Since our review unit’s screen only has a 60 Hz refresh rate, it really would have made sense to limit the frames.
You may notice that our primary competitor in gaming, the Asus Zephyrus G14, has a much better GPU: an Nvidia GeForce RTX 2060 Max-Q. This isn’t an accident — you can get that machine for $1,449, which is cheaper than the Blade Stealth we’re reviewing. What the Stealth offers is better than the integrated graphics you get in most ultraportables, but other, cheaper gaming laptops do offer more power.
On the Shadow of the Tomb Raider benchmark, on the highest settings at 1080p, the Stealth ran the game at 26 fps, which is below our 30 fps playable threshold, while the Zephyrus G14 hit 49 fps. Red Dead Redemption 2, at medium settings and 1080p, was also unplayable at 22 fps, while the Zephyrus G14 hit 35 fps.
On Grand Theft Auto V, at very high settings at 1080p, the Stealth played at 35 fps, but the Zephyrus G14 hit 115 fps.
The Blade managed to play Far Cry New Dawn (1080p, ultra) at 47 fps, but the Zephyrus beat it again at 73 fps.
To stress-test the Blade Stealth, we ran the Metro Exodus benchmark 15 times on a loop, which simulates about half an hour of gameplay. On the high preset, the game ran at an average of 29.9 fps, suggesting that you really need to drop down to normal or lower for playable frame rates. It hit 30 fps the first two runs before dropping down to around 29.9 for the rest of the gauntlet.
During that test, the CPU measured an average clock speed of 3.5 GHz and an average temperature of 60.8 degrees Celsius (141.4 degrees Fahrenheit). The GPU ran at an average speed of 1,287.6 MHz and an average temperature of 57.2 degrees Celsius (135 degrees Fahrenheit).
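The Celsius-to-Fahrenheit figures quoted throughout these thermal results follow the standard conversion, which is easy to verify:

```python
# Standard Celsius-to-Fahrenheit conversion used for the thermal figures above.
def c_to_f(c: float) -> float:
    return c * 9 / 5 + 32

cpu_f = c_to_f(60.8)  # 141.44, reported as 141.4 F
gpu_f = c_to_f(57.2)  # 134.96, reported as 135 F
```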
Productivity Performance on the Razer Blade Stealth
We tested the Razer Blade Stealth with an Intel Core i7-1165G7 CPU, 16GB of LPDDR4X RAM and a 512GB PCIe NVMe SSD. The package is a formidable workhorse, though other machines in both the gaming and ultraportable space have some advantages.
On Geekbench 5.0, the Stealth earned a multi-core score of 4,992, falling behind both the XPS 13 (also with a Core i7-1165G7) and the Asus ROG Zephyrus G14, a gaming machine with an AMD Ryzen 9 4900HS.
The Blade Stealth copied 4.97GB of files at a rate of 946.6 MBps, beating the XPS 13 9310 but still slower than the Zephyrus.
On our Handbrake test, Razer’s laptop took 16 minutes and 19 seconds to transcode a 4K resolution video to 1080p. That’s faster than the XPS 13, but the Zephyrus smashed it in less than half that time.
Display on the Razer Blade Stealth
The 13.3-inch FHD OLED touchscreen on the Stealth sure looks nice. The trailer for Black Widow (is it ever going to come out?) looked excellent. When the titular heroine is surrounded by flames in a car chase with Taskmaster, the orange reflections really stood out on a dark road. The villain’s navy suit contrasted with Red Guardian’s, well, red, knockoff Captain America outfit. In Rocket League, the Blade Stealth’s screen made the orange and blue cars pop against green turf.
Razer’s panel covers 83.2% of the DCI-P3 color gamut, just a smidge higher than the Zephryus’ display. The XPS 13 covers 69.4%.
The Blade Stealth measured an average brightness of 343 nits, while the XPS 13 was the brightest of the bunch at 469 nits. The Zephyrus G14 was a tad behind at 323 nits.
Keyboard and Touchpad on the Razer Blade Stealth
Earlier this year, Razer fixed a long-maligned keyboard layout that put the shift key in an awkward place. That’s a major improvement, and the next step should be to focus on key travel. The keys have soft switches, and I had a tendency to bottom out on the aluminum frame, which tired my fingers. As I got used to the keyboard, I hit 112 words per minute with my usual error rate, which isn’t bad, but I could’ve felt a bit better doing it with more travel.
It wouldn’t be a Razer device without Chroma RGB lighting. The keyboard is single-zone backlit and can be controlled via the Synapse software.
The 4.3 x 3-inch touchpad is tall, making it more spacious than much of the Windows competition (though it’s still not as luxuriously large as what you see on Apple’s MacBooks). Windows Precision drivers ensure accurate scrolling and gestures. This is definitely one of the best touchpads on a Windows 10 laptop.
Audio on the Razer Blade Stealth
I’ll give the Blade Stealth’s audio this: It gets loud. The twin top-firing speakers immediately filled up my apartment with sound — in fact, I found it uncomfortable at maximum volume. When I listened to Blackway and Black Caviar’s “What’s Up Danger,” I got the best results with audio around 85%. The audio was clear, with vocals mixing well with sirens and synths in the background, as well as some drums. Bass, however, was lackluster. In Rocket League, car motors and bouncing balls were all clear.
Of the little software pre-installed on the system, one you might want to check out is THX Spatial Audio. Switching between stereo and spatial audio didn’t make a huge difference, but there are some presets, including games, music and voice to toggle between.
Upgradeability of the Razer Blade Stealth
Ten Torx screws hold the bottom of the Razer Blade Stealth’s chassis to the rest of the system. I used a Torx T5 screwdriver to remove them, and the bottom came off without much of a fight.
The SSD is immediately accessible, and the Wi-Fi card and battery are also easily available for upgrades. The RAM, on the other hand, is soldered onto the motherboard.
Battery Life on the Razer Blade Stealth
The Razer Blade Stealth’s history with battery life has been mixed, but this iteration with Intel’s 11th Gen Core processors is decent, especially considering it has a discrete GPU. The laptop ran for 9 hours and 11 minutes on our battery test, which continuously browses the web, runs OpenGL tests and streams video over a Wi-Fi connection, all at 150 nits of brightness.
But the Razer was outclassed by both the Dell XPS 13 (11:07) and the Zephyrus G14, the longest-lasting gaming notebook we’ve ever seen (11:32), so there’s still room for improvement on Razer’s part.
Heat on the Razer Blade Stealth
Since Razer classifies the Blade Stealth as a “gaming ultraportable,” we took our heat measurements by pushing it to the limits on our Metro Exodus test.
During the benchmark, the keyboard between the G and H keys measured 42.7 degrees Celsius (108.9 degrees Fahrenheit), while the touchpad was cooler at 30.9 degrees Celsius (87.6 degrees Fahrenheit).
The hottest point on the bottom of the laptop reached 47.3 degrees Celsius (117.1 degrees Fahrenheit).
Webcam on the Razer Blade Stealth
Above the display, the Razer Blade Stealth has a 720p resolution webcam with infrared (IR) sensors. The IR sensors let you use facial recognition to log in to Windows 10 with Windows Hello, which was quick and accurate.
The webcam is passable. It caught details about as fine as a 720p webcam can, like hairs on my head, but the picture was still a little grainy and could definitely be sharper. On a laptop that has so many of the little details right, I’m ready for an upgrade on the camera.
Software and Warranty on the Razer Blade Stealth
You won’t spend much time removing bloatware from the Stealth. The only major piece of software that Razer adds to Windows 10 is Synapse, its hub for Chroma RGB lighting, adjusting performance modes, registering products and syncing accessories.
Windows 10 comes with some bloatware of its own, including Roblox, Hulu, Hidden City: Hidden Object Adventure, Spotify and Dolby Access.
Razer sells the Blade Stealth with a one-year warranty.
Razer Blade Stealth Configurations
We reviewed the $1,999.99 top-end variant of the Stealth, with an Intel Core i7-1165G7, Nvidia GeForce GTX 1650 Ti, 16GB of RAM, a 512GB SSD and a 13.3-inch OLED FHD touchscreen.
I suspect that those who are using this system primarily for gaming will prefer the $1,799.99 base model, which has all of the same parts except for the display, which is a 120 Hz FHD screen.
Bottom Line
The Razer Blade Stealth does a lot right, with great build quality, a lovely OLED screen and symmetrical Thunderbolt 4 ports for convenient charging on either side of the system.
If you’re buying this as an ultraportable, it’s expensive at $1,999.99 (with the OLED screen, anyway). But if you’re buying it as a gaming notebook, you should look elsewhere to save money and get better performance. The Asus Zephyrus G14 gives you the best Ryzen mobile processor for gaming around and has an RTX 2060 Max-Q for $450 less at $1,449.99. It doesn’t have Razer’s build quality or an OLED option, but it’s a worthwhile tradeoff in performance and you still get a 120 Hz display.
Razer doesn’t offer a version of the Stealth without the GTX 1650 Ti. That’s saved for the new Razer Book 13, which we haven’t reviewed yet as of this writing. But with that notebook in the wings, and the excellent Razer Blade 15 Advanced on the other side of Razer’s lineup, it makes a lot of sense to either spend more for better parts for gaming or spend less for better parts for productivity.
Those who just want to mix in some casual eSports play with work will get what they need out of this laptop, but a high-priced eSports laptop is a bit of a niche. While the Stealth is still a decent laptop, it doesn’t make as much sense as it used to.
A new record for DDR4 RAM overclocking has been set. It was achieved on Crucial Ballistix Max modules, which were pushed to dizzying speeds exceeding the “modest” 7000 MHz mark! This is not Crucial’s first breakthrough memory record; professional overclockers very often use Ballistix modules. Suffice it to say that the previous world record, which belonged to a Taiwanese user with the nickname Bianbao XE, was also set on Crucial Ballistix Max and stood for three months. Chinese overclocker baby-J significantly surpassed that mark, which was a truly devilish 6666.6 MHz. In both cases, a Ryzen 4000G series APU was used.
A new record has been set in RAM overclocking, breaking the magic 7000 MHz barrier. It was achieved on a single 16 GB Crucial Ballistix Max DDR4-4000 module.
New record for DDR4 RAM overclocking on AMD Renoir
Overclocker baby-J set a new record for DDR4 overclocking at 7004.2 MHz with timings of 22-26-26-46-127 (1T). The screenshot on the HWBot website even shows 7006.4 MHz, but the lower figure, officially validated by CPU-Z, was the one recognized. The result was achieved on a single 16 GB Crucial Ballistix Max DDR4-4000 memory module based on Micron B-die chips. You didn’t have to wait for DDR5 memory to see a clock speed as absurdly high as 7000 MHz.
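Raw frequency isn’t the whole story; the first-word latency implied by these timings is worth computing. A quick sketch, treating the quoted 7004.2 MHz as the effective (DDR) transfer rate, as overclocking records typically do:

```python
# First-word latency in nanoseconds: CAS cycles at the real (half) clock.
def cas_latency_ns(cl: int, data_rate_mts: float) -> float:
    return cl * 2000 / data_rate_mts

record_ns = cas_latency_ns(22, 7004.2)     # ~6.3 ns for the record run
ddr4_3200_cl16 = cas_latency_ns(16, 3200)  # 10.0 ns for a typical daily kit
```

Despite the loose-looking CL22, the record run’s absolute latency is actually well below a common DDR4-3200 CL16 configuration, simply because the clock is so high.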
Once again the record was broken on a Ryzen 4000G series APU, famous for its good memory controller and based on the Zen 2 (CPU) and Vega (iGPU) architectures. Specifically, we’re talking about the Ryzen 5 Pro 4650G, which was, of course, overclocked (to almost 4.2 GHz). To break the record, the overclocker used an MSI MEG B550 Unify-X motherboard with a very powerful power delivery section, designed specifically for professional CPU and memory overclocking. As the result shows, it worked flawlessly in extreme conditions. The new record holder cooled the components with liquid nitrogen (LN2).
AMD officially stated that Ryzen 5000 support for the 400-series chipsets would come in 2021, but that no longer appears to be the case. Three companies have shared (so far) that they will support AMD’s new Ryzen 5000 series CPUs before 2021: Biostar, Asrock, and Asus, with Asrock already having beta BIOSes ready for the shiny new CPUs.
To ensure compatibility, you’ll need a BIOS for your specific B450 motherboard that supports AGESA 1.0.8.1 at the very minimum. This AGESA code enables Ryzen 5000 (Vermeer) compatibility. In order to get the full performance out of your Ryzen 5000 CPU, you’ll need to make sure your B450 motherboard has a BIOS supporting AGESA 1.1.0.0 or greater.
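The two-tier requirement above boils down to a simple version comparison. A minimal sketch (the function names here are illustrative, not from any vendor tool):

```python
# AGESA versions compare component-wise: 1.0.8.1 boots a Ryzen 5000 CPU,
# but 1.1.0.0 or newer is needed for full performance.
def parse_agesa(version: str) -> tuple:
    return tuple(int(part) for part in version.split("."))

def boots_ryzen_5000(bios_agesa: str) -> bool:
    return parse_agesa(bios_agesa) >= parse_agesa("1.0.8.1")

def full_performance(bios_agesa: str) -> bool:
    return parse_agesa(bios_agesa) >= parse_agesa("1.1.0.0")

full_performance("1.0.8.1")  # False: boots, but not at full speed
full_performance("1.1.8.0")  # True
```

Tuple comparison handles the multi-part version numbers correctly, so "1.1.8.0" sorts above "1.1.0.0" even though a plain string comparison would also happen to work here.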
Biostar has said via Twitter that it will support Ryzen 5000 CPUs soon, though whether that’s before or after 2021 we are not sure. Asus is more straightforward about the deadline, marking a December release for AGESA 1.1.8.0 BIOSes that will be available for all of its 400-series motherboards (not just B450).
Asrock is the strangest of the three: reportedly, Asrock released beta BIOSes for its entire B450 lineup of boards (no word on X470), but quickly withdrew them from the public eye. AnandTech covered the strange occurrence and has no idea why the BIOS updates were pulled.
As for the remaining motherboard manufacturers, such as MSI and Gigabyte, they have kept quiet on the matter. Presumably, these companies are waiting until Q1 2021, AMD’s official deadline, before releasing new BIOSes for the Ryzen 5000 CPUs.
So for now, there are still no official BIOS updates for B450 motherboards that support the new AGESA codes and thus the new Ryzen 5000 processors. But fortunately, it looks like Asrock, Asus, and presumably, Biostar will offer support before 2021.
SilentiumPC has now introduced a brand new range of PC cases to the public. The new Ventum VT2 line-up offers an entry-level price of just under 32 euros. One of the most striking design features is the large front panel with rounded edges, which is made almost entirely of mesh and is intended to allow an unobstructed flow of air. In addition, most of the top is covered with a magnetic mesh mat.
According to the manufacturer, all four versions of the Ventum VT2 PC case offer space for AIO water cooling with radiators from 120 up to 360 mm. CPU cooler height must not exceed 159 mm, and graphics cards up to 290 mm long can be installed. Up to two 2.5-inch drives and an additional 2.5-inch or 3.5-inch drive can be installed in the lower compartment using vibration-damping rubber buffers. The division of the interior into two chambers should have a positive influence on airflow inside the case and enable even easier cable routing.
With the larger Ventum VT2 models, such as the TG ARGB and EVO TG ARGB, the manufacturer expands the bundle with ARGB fans and a side panel made of tempered glass. All four Ventum VT2 models can accommodate up to three fans in the front, two in the top panel, and one more at the rear. The base model Ventum VT2 is delivered with a pre-installed Sigma HP 120 mm case fan in the back. There are removable dust filters both above and below the case.
The top model of the series also has a large side panel made of tempered glass and three pre-installed Stella HP ARGB 120 mm case fans. The nano-reset ARGB controller comes pre-installed in the case and offers various effects and color schemes. The case is also equipped with a splitter for a total of five ARGB devices and five fans. Supported ARGB ecosystems include ASRock Polychrome Sync, ASUS Aura Sync, and MSI Mystic Light.
With the Ventum VT2 TG ARGB, the manufacturer installs a Pulsar HP ARGB 120 mm fan and a Sigma HP 120 mm case fan at the front. Thanks to the auto-LED function, a rainbow effect can be used without a controller or a compatible motherboard. Supported ARGB ecosystems include ASRock Polychrome Sync, ASUS Aura Sync, MSI Mystic Light, and Gigabyte RGB Fusion 2.0.
Similar to the base model, the Ventum VT2 TG has a Sigma HP 120 mm case fan installed at the rear, plus another Sigma HP 120 mm fan at the front. In addition, a side panel made of hardened glass is used instead of the usual metal side panel.
All four PC cases of the Ventum VT2 series are available in stores from today.
The SilentiumPC Ventum VT2 is priced at around 32 euros, the Ventum VT2 TG at 40 euros, the Ventum VT2 TG ARGB at 43 euros, and the Ventum VT2 EVO TG ARGB at 53 euros.
A new world record for memory frequency has been set this week, with one overclocker pushing a 16GB module of Crucial Ballistix Max DDR4 RAM to over 7000MHz.
The new record was achieved by overclocker ‘baby-J’, using Crucial Ballistix Max DDR4-4000 memory. In order to achieve the massive 7000MHz memory frequency, LN2 cooling is used. Officially, the record was set at 7004.2MHz with 22-26-26-46 timings, but the screenshot uploaded by the overclocker shows a 7006.4MHz memory frequency. Nonetheless, when compared to the original frequency of this kit, this is an improvement of over 75%.
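As a quick sanity check on that percentage, the jump from the module's rated DDR4-4000 speed to the validated frequency works out as follows:

```python
# Quick check of the quoted gain: validated frequency vs. the
# module's rated DDR4-4000 speed.
rated_mhz = 4000.0
validated_mhz = 7004.2

gain_pct = (validated_mhz / rated_mhz - 1) * 100
print(f"{gain_pct:.1f}% over rated speed")  # just over 75%
```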
The previous record holder, bianbao XE, achieved an overclock of 6666MHz using the same memory modules, but the rest of the system differed quite a bit. Baby-J was able to cross the 7000MHz mark by using a Ryzen 5 Pro 4650G overclocked to 4.2GHz on an MSI B550 Unify-X motherboard. For those who don't know, the MSI B550 Unify-X is part of the company's enthusiast series and comes with only two DIMM slots, offering cleaner memory signalling.
Putting this speed into perspective, 7000MHz is within the realm of speeds expected from DDR5 memory. DDR5 speeds are expected to start at 3200MHz and go up to 8400MHz, but we are unlikely to see DDR5 memory hit the market until later next year.
KitGuru says: How many of you overclock your RAM? What’s the highest speed you’ve managed to achieve?
Chinese overclocker baby-j has set a new memory overclocking record, pushing a 16 GB Crucial Ballistix Max module beyond 7 GHz with liquid nitrogen cooling.
by Manolo De Agostini, published 24 November 2020 at 09:21 in the Memory channel (Crucial, Ballistix, Micron)
Chinese overclocker baby-j set a new world record for RAM frequency, touching 7004 MHz with Crucial Ballistix Max modules and, of course, liquid nitrogen cooling. The result was obtained with a single-sided 16 GB module rated for 4000 MHz, on an MSI MEG B550 Unify-X motherboard with a Ryzen 5 Pro 4650G processor (Zen 2, Renoir, not sold at retail).
With a frequency increase of 75%, baby-j became the first overclocker to push DDR4 memory beyond 7 GHz, as certified on HWBot and the CPU-Z Validator. The previous best result was 6666 MHz, also achieved with Crucial Ballistix Max, slightly ahead of a similar result obtained with G.Skill Trident Z Royal.
In September, Micron's consumer division announced the Ballistix Max 5100, modules rated at 5100 MHz (Micron E-die) with CL19-26-26-48 timings and an operating voltage of 1.5 V. Not long ago, such a frequency seemed utopian even with air cooling, but that is no longer the case, thanks to continuous research and development and to the overclockers who push companies to innovate constantly.
AMD is getting ready to launch a new version of Precision Boost Overdrive, also known as PBO. This tool is often used to overclock Ryzen processors, but with Precision Boost Overdrive 2, undervolting will also be possible.
Precision Boost Overdrive 2 behaviour will be based on three aspects: CPU temperature, socket power, and motherboard VRM current. Based on the data collected while monitoring these points, PBO can raise the power consumption limiters to match the VRM capabilities, increasing both the average frequency and boost duration.
With PBO 2, AMD claims that it’s possible to increase single-thread performance without affecting the multi-threaded capacities. Additionally, users will now be able to automatically undervolt their CPUs and optimise the voltage curve to their needs and workloads with Curve Optimizer. When voltage and frequency are low, the undervolting potential increases. At a higher voltage and frequency, the undervolting potential decreases.
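To make the "three monitored limits" idea concrete, here is a deliberately simplified sketch of that kind of limiter logic. The function name, limit values, and the reduction to a single "headroom" number are all hypothetical illustrations, not AMD firmware internals:

```python
# Illustrative sketch only: PBO-style boosting reduced to "respect the
# tightest of three limits". All names and limit values are hypothetical,
# not AMD's actual firmware logic.

def boost_headroom(temp_c, socket_power_w, vrm_current_a,
                   temp_limit=90.0, ppt_limit_w=142.0, edc_limit_a=140.0):
    """Return the remaining boost headroom as a fraction, 0.0..1.0.

    The effective limiter is whichever monitored quantity (temperature,
    socket power, VRM current) is closest to its ceiling; raising a
    ceiling, as PBO does on capable boards, frees sustained boost.
    """
    margins = (
        1.0 - temp_c / temp_limit,
        1.0 - socket_power_w / ppt_limit_w,
        1.0 - vrm_current_a / edc_limit_a,
    )
    return max(0.0, min(margins))

# With stock limits, VRM current is the binding constraint here.
print(boost_headroom(70.0, 120.0, 130.0))
# Raising the current ceiling frees headroom for higher average clocks.
print(boost_headroom(70.0, 120.0, 130.0, edc_limit_a=200.0))
```

The point of the sketch is only the shape of the behaviour: boost is governed by the tightest limit, so lifting any one ceiling (or lowering voltage via Curve Optimizer) changes which limit binds and for how long.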
Image credit: Anandtech
When using PBO 2, AMD expects single-thread improvements of up to 2.5% on the Ryzen 7 5800X and almost 2% on the Ryzen 9 5900X. In multi-thread workloads, the gains range from up to 2.1% on the Ryzen 7 5800X to up to 10% on the Ryzen 9 5900X.
System requirements for PBO 2 include a Ryzen 5000 series desktop CPU and an AMD 500- or 400-series motherboard with a BIOS update that includes AGESA 1.1.8.0. BIOS availability is expected in early December, but as per AnandTech, some BIOSes with AGESA 1.1.0.0 already feature Curve Optimizer.
KitGuru says: Do you own a Ryzen 5000 series desktop processor? Is it mounted on a motherboard with Curve Optimizer? If you already tried this new feature, how much extra performance were you able to squeeze from your CPU?
You can spend thousands of dollars on components when building a PC, but it won’t boot without an operating system (OS). Linux is a viable option, but most people prefer Windows because it runs all of their favorite software, including the latest games. And for those who were still holding on, Windows 7 has officially reached its end of life, meaning it won’t get any more support or security updates. Fortunately, you can get Windows 10 for free or cheap, if you know where to look.
Getting hold of the Windows installer is as easy as visiting support.microsoft.com. Whether you’ve paid for Windows 10 already or not, Microsoft lets anyone download a Windows 10 ISO file and burn it to a DVD, or create installation media on a USB drive for free. Once that’s done, you can boot from your installation media and load Windows 10 onto your PC. During the installation process, Microsoft asks for an activation key. You can skip it, but eventually, Windows will start alerting you that your install isn’t activated.
There are many ways to get a Windows 10 activation / product key, and they range in price from completely free to $309 (£339, AU$340), depending on which flavor of Windows 10 you want. You can of course buy a key from Microsoft online, but other websites sell Windows 10 keys for less. There's also the option of downloading Windows 10 without a key and never activating the OS. But what, if anything, are you missing out on if you don't activate Windows 10? And does your carefully crafted rig face any risks?
Below we outline the top ways you can get Windows 10 — from the cheapest to most expensive — and the downsides of each option.
Upgrade from Windows 7 or 8 (free). Pros: access to all personalization options; Microsoft support access. Cons: there's a small chance Microsoft will reject activation, and you'll have to contact support.

Don't activate Windows (free). Pros: free. Cons: desktop watermark; personalization options restricted; can't use Microsoft support.

Microsoft student discount (free). Pros: access to all personalization options; Microsoft support access; equivalent to Windows 10 Enterprise. Cons: you have to be enrolled in an eligible school.

Cheap third-party key (around $30). Pros: access to all personalization options; Microsoft support access. Cons: there's a small chance your key won't work, and you'll have to contact support to get it fixed; some third parties don't offer refunds.

Buy from Microsoft ($139+). Pros: access to all personalization options; Microsoft support access; refunds. Cons: expensive.
Upgrade From Windows 7 or 8 to Windows 10: Free
Nothing's cheaper than free. Whether you're looking for Windows 10 Home or even Windows 10 Pro, it's possible to get Windows 10 onto your PC for free. If you already have a Windows 7, 8, or 8.1 software/product key, you can upgrade to Windows 10 for free and activate it using the key from the older OS. But note that a key can only be used on one PC at a time, so if you use that key for a new PC build, any other PC running that key is out of luck.
To do this with a Windows 10 compatible PC (after backing up your important data, of course), download Windows 10. When asked, select "Upgrade this PC now." Note that if you've recently changed your PC's hardware, such as the motherboard, Windows may not find the license for your device, which means you'll have to reactivate the OS. Microsoft provides instructions for reactivating Windows 10 after changing PC hardware.
Downsides of Upgrading From Windows 7 or 8
When using an older Windows key to activate Windows 10, you may run into complications if Microsoft isn’t sure whether you’re eligible to update or not. In that case, you’d have to call a number and go through a process of entering your key and getting a code. But that seems to be happening less in recent months and years.
Don’t Activate Windows: Free
If you don't have a valid key, you can still use Windows 10 for free on your PC even if you don't activate the OS. I have colleagues who have used non-activated versions of Windows for years without Microsoft ever shutting them down. In this way, you can have Windows 10 Home or Pro running on your PC nearly flawlessly. Nearly…
Downsides of Not Activating Windows
“If the user [installs Windows 10] before activating Windows, they will see an ‘Activate Windows’ watermark on their desktop, as well an experience a limit on Windows 10 personalization options,” Microsoft told Tom’s Hardware in a statement.
Microsoft brands PCs running an unactivated version of Windows 10 with a watermark in the bottom-right corner of the screen. A Microsoft spokesperson told me that activating Windows 10 ensures you have a legitimate copy of Windows 10, and the watermark is an attempt to alert consumers that their version could be false. However, if you downloaded your ISO directly from Microsoft, there’s no way your copy can be a fake.
If you don’t activate Windows 10, you won’t be able to change Personalization options in the Settings menu. That means you can’t choose personal desktop wallpapers, slideshow backgrounds, Start, taskbar, Action Center or title bar colors, light or dark color schemes, font choices or lock screen options.
The lack of custom aesthetics can be a downer, especially if you like to liven things up by changing colors and images. However, we checked, and you can still change your wallpaper if you right-click an image from the web or a personal photo and set it as your wallpaper. And if you have a wallpaper tied to your Microsoft account, it will appear if you sign into Windows with that account.
Unsurprisingly, Microsoft won’t offer you any Windows 10 technical support if you don’t activate the OS. If you call or chat with their techs, they’ll start off by asking you for your key, and you’ll have no response.
Use the Microsoft Student Discount: Free
Microsoft offers students attending certain universities and high schools the ability to get Windows 10 for free by allowing them to activate Windows 10 Education for free. Meanwhile, teachers can get Windows 10 Education for $14.99. You can see if your school is eligible and download your free Windows 10 key here. The key is yours even after you graduate.
But is Windows 10 Education any different from Windows 10 Home? It's actually better. Windows 10 Education is the same as Windows 10 Enterprise, which Microsoft calls the most robust version of Windows 10. The OS has features targeting security, device control, management, and deployment that Windows 10 Home lacks. Unlike Windows 10 Home, Windows 10 Education has client and host remote desktop and remote app (instead of client only), Hyper-V (Microsoft's hypervisor), and extra apps like AppLocker and BitLocker. That said, it's likely you won't ever use any of those bonus features.
If you’re not currently a student but happen to have a .edu email, we don’t recommend scamming the system. In addition to ethical concerns, if you get caught, Microsoft can make you pay up anyway. “False representations of eligibility voids this offer, and Microsoft reserves the right to collect the full price of product(s) ordered,” Microsoft’s policy states.
Downsides of Using the Microsoft Student Discount
If your school is eligible for the discount, there isn’t really a downside to this method of procuring Windows 10 free. Not all colleges / high schools have it, and you may need to make a special user account to download it. But if you can score Windows 10 Education for free, we don’t see any reason not to.
Buy a Cheap Windows 10 Key From a Third-Party Seller: Around $30
If you can’t stand living with the scarlet letter of an eternal watermark or want the comfort of knowing Microsoft won’t disown your PC’s OS should you call for help, you’ll have to buy a Windows 10 key. And while some turn to Microsoft for this purchase, there are third-party websites selling keys for much cheaper than Microsoft. For example, at the time of writing, Kinguin sells Windows 10 Home for about $40, Amazon charges $129.99, Newegg’s pushing it for $89.99 and even Walmart has it for $99.95, as well as a Pro OEM version .
Now, let’s address the elephant in the room. While we can’t vouch for all of them, websites selling lower-priced Windows keys are likely selling legitimate codes. One popular site, Kinguin, has 37 merchants worldwide selling Windows keys. Mark Jordan, Kinguin’s VP of communications, told me that their merchants acquire the codes from wholesalers who have surplus copies of Windows they don’t need.
“It’s not a gray market. It would be like buying Adidas or Puma or Nike from a discounter, from TJ Maxx,” Jordan said. “There are no legal issues with buying it from us. It’s just another marketplace.”
According to Jordan, Kinguin’s merchants have sold “several hundred thousand” keys and are not one-time sellers posting listings for codes they don’t want. As part of its fraud protection, a Kinguin employee randomly buys a key “every now and then” to make sure they’re legitimate, he said. Jordan added that it’s rare for a customer to get a key that’s been resold, but if they did, customer support would help them get a new one for free.
“If there’s ever a problem with a key being already activated or something like that, our customer support team helps you get a new key… And that merchant would be in deep trouble, so they are very careful with it,” Jordan said.
It’s worth noting that we’ve encountered reports of customer dissatisfaction, including from users who wanted a specific type of key (like non-OEM only), ended up with something different (like an OEM version) and could only get a refund, rather than the type of key they originally tried to buy.
You’ll have to enter a key to activate Windows, but you won’t have a problem doing that if you bought your key from a place like Kinguin (or Amazon, Newegg, etc.). In fact, Microsoft still offers 24/7 technical support online and via phone even if you got your Windows 10 key from somewhere other than Microsoft.
If you do opt to get your key for less, make sure it’s from a legitimate site. A hint will be if that key is too cheap — i.e. free or close to free. And, as with anything else, if you haven’t heard of a seller, check their ratings or go elsewhere.
No matter where you get your product key, you shouldn’t download Windows 10 from anyone besides Microsoft. As noted on Microsoft’s website: “When buying Microsoft software as a digital download, we recommend that you avoid auction sites and peer-to-peer (P2P) file sharing sites. At the moment there are a limited number of sites where you can legally purchase digital downloads of Microsoft software.”
“Genuine Windows is published by Microsoft, properly licensed and supported by Microsoft or a trusted partner. Non-genuine software results in a higher risk of malware, fraud, public exposure of your personal information and a higher risk for poor performance or feature malfunctions,” Microsoft added in a statement to Tom’s Hardware.
Downsides of Cheap Keys
These non-Microsoft websites have varying return policies for software key purchases. While Kinguin seems to have a relatively open return policy, Amazon and Newegg both have no-refund policies for software keys. Amazon claims all keys sold on its site are genuine, and any gripes you have with your key must be handled by the individual vendors. If a key you bought from Newegg doesn't work, you'll have to contact Newegg's product support team to get a new key.
Still, most, if not all, sites seem willing to accommodate you should you get a key that’s already been used or doesn’t work. Again, just make sure you’re buying your key from a legitimate source. For that reason we don’t recommend buying Windows 10 keys from individual sellers (or illegally).
This final downside is only applicable if you want to equip your PC with Windows 10 Pro for Workstations. While I was able to find Windows 10 Home on a number of genuine key-selling websites and Windows 10 Pro on some (although fewer) websites, I couldn't find a place to buy a key for Windows 10 Pro for Workstations anywhere besides Microsoft (Amazon sells a shipped physical copy for $293.83). The most advanced and pricey ($309) member of the Windows 10 clan, Windows 10 Pro for Workstations offers "support for the next generation of PC hardware, up to four CPUs and 6TB of memory," according to Microsoft's website. But it's unlikely you'll need the juggernaut of Windows 10 for your personal machine.
Buy a Windows Key From Microsoft: $139+
Want a version of Windows 10 where you can enjoy dynamic slideshows on your home screen and vibrant red, green, pink, or purple taskbars? Do you enjoy the thrills of a watermark-free screen and the comfort of knowing you can call Microsoft support if you have any problems? Then you need a key, which, as discussed, you can get from various retailers. But if you want to avoid any chance of getting an unusable key or want the guaranteed ability to get a full refund even if there’s no problem with the key, your best bet is buying from Microsoft.
In addition to selling keys for Windows 10 Home and Pro, Microsoft is the only place you can get a key for Windows 10 Pro for Workstations. Additionally, Microsoft offers the Assure Software Support Plan for an extra $99 (£95/ AU$120). This plan is valid for a year after activating Windows 10. It’s applicable for up to five devices and entitles you to online and phone support and one-on-one in-store training. One caveat: Microsoft says the plan is “for purchase and activation only in the region in which it was acquired.”
Downsides of Buying from Microsoft
Microsoft charges the most for Windows 10 keys. Windows 10 Home goes for $139 (£119.99 / AU$225), while Pro is $199.99 (£219.99 /AU$339). Despite these high prices, you’re still getting the same OS as if you bought it from somewhere cheaper, and it’s still only usable for one PC.
Plus, the premium price doesn’t entitle you to any support perks. Microsoft’s 24/7 basic phone and online support is available to anyone with a Windows 10 key, even those who didn’t get it from Microsoft. After already investing time and money building a PC , it can be difficult to convince yourself to spend over $100 for an OS that you can get with the same specs and support for cheaper.
What’s the Best Way to Get Windows 10?
If you have an old Windows key, you can get Windows 10 free by carrying that key over from a previous build; that's your best option.
If you don’t have a key on hand, you need to decide whether you’re comfortable using an unactivated version of Windows 10, which limits your customization options, has an ugly watermark and leaves you ineligible for Microsoft support. Many would argue that downloading Windows without paying for or already owning a product key is ethically wrong. That said, Microsoft has made this process easier over various Windows iterations and lessened the limitations and nagging that happens when you don’t activate. The company isn’t trying to close this loophole, probably because it’s more interested in driving user numbers. I’ve even seen well-known vendors and Microsoft partners do press presentations with watermarks on their desktop.
If you must buy a Windows 10 key, you can save a lot with a low-cost seller such as Kinguin, although customer service may be lacking. Microsoft’s price is astronomically high and doesn’t offer any significant benefits. You can save $100 or more by buying a key from one of these third-party sites, which is money you can spend on one of the best graphics cards, a roomier SSD, or a few AAA games for your new PC.
The AMD Radeon RX 6800 XT and Radeon RX 6800 have arrived, joining the ranks of the best graphics cards and making some headway into the top positions in our GPU benchmarks hierarchy. Nvidia has had a virtual stranglehold on the GPU market for cards priced $500 or more, going back to at least the GTX 700-series in 2013. That left AMD to compete mostly in the mid-range and budget GPU markets. "No longer!" says Team Red.
Big Navi, aka Navi 21, aka RDNA2, has arrived, bringing some impressive performance gains. AMD also finally joins the ray tracing fray, both with its PC desktop graphics cards and the next-gen PlayStation 5 and Xbox Series X consoles. How do AMD’s latest GPUs stack up to the competition, and could this be AMD’s GPU equivalent of the Ryzen debut of 2017? That’s what we’re here to find out.
We’ve previously discussed many aspects of today’s launch, including details of the RDNA2 architecture, the GPU specifications, features, and more. Now, it’s time to take all the theoretical aspects and lay some rubber on the track. If you want to know more about the finer details of RDNA2, we’ll cover that as well. If you’re just here for the benchmarks, skip down a few screens because, hell yeah, do we have some benchmarks. We’ve got our standard testbed using an ‘ancient’ Core i9-9900K CPU, but we wanted something a bit more for the fastest graphics cards on the planet. We’ve added more benchmarks on both Core i9-10900K and Ryzen 9 5900X. With the arrival of Zen 3, running AMD GPUs with AMD CPUs finally means no compromises.
Update: We’ve added additional results to the CPU scaling charts. This review was originally published on November 18, 2020, but we’ll continue to update related details as needed.
AMD Radeon RX 6800 Series: Specifications and Architecture
Let’s start with a quick look at the specifications, which have been mostly known for at least a month. We’ve also included the previous generation RX 5700 XT as a reference point.
Graphics Card         | RX 6800 XT   | RX 6800      | RX 5700 XT
GPU                   | Navi 21 (XT) | Navi 21 (XL) | Navi 10 (XT)
Process (nm)          | 7            | 7            | 7
Transistors (billion) | 26.8         | 26.8         | 10.3
Die size (mm^2)       | 519          | 519          | 251
CUs                   | 72           | 60           | 40
GPU cores             | 4608         | 3840         | 2560
Ray Accelerators      | 72           | 60           | N/A
Game Clock (MHz)      | 2015         | 1815         | 1755
Boost Clock (MHz)     | 2250         | 2105         | 1905
VRAM Speed (MT/s)     | 16000        | 16000        | 14000
VRAM (GB)             | 16           | 16           | 8
Bus width             | 256          | 256          | 256
Infinity Cache (MB)   | 128          | 128          | N/A
ROPs                  | 128          | 96           | 64
TMUs                  | 288          | 240          | 160
TFLOPS (boost)        | 20.7         | 16.2         | 9.7
Bandwidth (GB/s)      | 512          | 512          | 448
TBP (watts)           | 300          | 250          | 225
Launch Date           | Nov. 2020    | Nov. 2020    | Jul. 2019
Launch Price          | $649         | $579         | $399
When AMD fans started talking about "Big Navi" as far back as last year, this is pretty much what they hoped to see. AMD has just about doubled every important aspect of its architecture, plus added a huge amount of L3 cache and Ray Accelerators to handle ray/triangle intersection calculations for ray tracing. Clock speeds are also higher, and (spoiler alert!) the 6800 series cards actually exceed the Game Clock and can even go past the Boost Clock in some cases. Memory capacity has doubled, ROPs have doubled, TFLOPS has more than doubled, and the die size is also more than double.
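The boost-TFLOPS row in the table above is simple arithmetic: each shader core retires two FP32 operations per clock (one fused multiply-add), so peak throughput is cores × 2 × boost clock:

```python
# Reproduce the FP32 boost-TFLOPS figures from the spec table:
# cores * 2 FP32 ops per clock (one FMA) * boost clock.
def tflops(cores, boost_mhz):
    return cores * 2 * boost_mhz / 1e6  # MHz -> TFLOPS

for name, cores, boost in [("RX 6800 XT", 4608, 2250),
                           ("RX 6800",    3840, 2105),
                           ("RX 5700 XT", 2560, 1905)]:
    print(f"{name}: {tflops(cores, boost):.1f} TFLOPS")
```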
Support for ray tracing is probably the most visible new feature, but RDNA2 also supports Variable Rate Shading (VRS), mesh shaders, and everything else that’s part of the DirectX 12 Ultimate spec. There are other tweaks to the architecture, like support for 8K AV1 decode and 8K HEVC encode. But a lot of the underlying changes don’t show up as an easily digestible number.
For example, AMD says it reworked much of the architecture to focus on a high speed design. That’s where the greater than 2GHz clocks come from, but those aren’t just fantasy numbers. Playing around with overclocking a bit — and the software to do this is still missing, so we had to stick with AMD’s built-in overclocking tools — we actually hit clocks of over 2.5GHz. Yeah. I saw the supposed leaks before the launch claiming 2.4GHz and 2.5GHz and thought, “There’s no way.” I was wrong.
AMD’s cache hierarchy is arguably one of the biggest changes. Besides a shared 1MB L1 cache for each cluster of 20 dual-CUs, there’s a 4MB L2 cache and a whopping 128MB L3 cache that AMD calls the Infinity Cache. It also ties into the Infinity Fabric, but fundamentally, it helps optimize memory access latency and improve the effective bandwidth. Thanks to the 128MB cache, the framebuffer mostly ends up being cached, which drastically cuts down memory access. AMD says the effective bandwidth of the GDDR6 memory ends up being 119 percent higher than what the raw bandwidth would suggest.
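Taking AMD's figure at face value, the claim is easy to put into numbers; note that applying a flat multiplier like this is our back-of-the-envelope reading of the marketing claim, not AMD's published methodology:

```python
# Raw GDDR6 bandwidth from the spec table, and the effective bandwidth
# implied by AMD's "119 percent higher" Infinity Cache claim.
raw_bw_gbps = 256 / 8 * 16000 / 1000   # 256-bit bus * 16 GT/s -> 512 GB/s
effective_bw_gbps = raw_bw_gbps * 2.19  # +119 percent, per AMD's claim

print(f"raw: {raw_bw_gbps:.0f} GB/s, effective: {effective_bw_gbps:.0f} GB/s")
```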
The large cache also helps to reduce power consumption, which all ties into AMD’s targeted 50 percent performance per Watt improvements. This doesn’t mean power requirements stayed the same — RX 6800 has a slightly higher TBP (Total Board Power) than the RX 5700 XT, and the 6800 XT and upcoming 6900 XT are back at 300W (like the Vega 64). However, AMD still comes in at a lower power level than Nvidia’s competing GPUs, which is a bit of a change of pace from previous generation architectures.
It’s not entirely clear how AMD’s Ray Accelerators stack up against Nvidia’s RT cores. Much like Nvidia, AMD is putting one Ray Accelerator into each CU. (It seems we’re missing an acronym. Should we call the ray accelerators RA? The sun god, casting down rays! Sorry, been up all night, getting a bit loopy here…) The thing is, Nvidia is on its second-gen RT cores that are supposed to be around 1.7X as fast as its first-gen RT cores. AMD’s Ray Accelerators are supposedly 10 times as fast as doing the RT calculations via shader hardware, which is similar to what Nvidia said with its Turing RT cores. In practice, it looks as though Nvidia will maintain a lead in ray tracing performance.
That doesn’t even get into the whole DLSS and Tensor core discussion. AMD’s RDNA2 chips can do FP16 via shaders, but they’re still a far cry from the computational throughput of Tensor cores. That may or may not matter, as perhaps the FP16 throughput is enough for real-time inference to do something akin to DLSS. AMD has talked about FidelityFX Super Resolution, which it’s working on with Microsoft, but it’s not available yet, and of course, no games are shipping with it yet either. Meanwhile, DLSS is in a couple of dozen games now, and it’s also in Unreal Engine, which means uptake of DLSS could explode over the coming year.
Anyway, that’s enough of the architectural talk for now. Let’s meet the actual cards.
Meet the Radeon RX 6800 XT and RX 6800 Reference Cards
We’ve already posted an unboxing of the RX 6800 cards, which you can see in the above video. The design is pretty traditional, building on previous cards like the Radeon VII. There’s no blower this round, which is probably for the best if you’re worried about noise levels. Otherwise, you get a similar industrial design and aesthetic with both the reference 6800 and 6800 XT. The only real change is that the 6800 XT has a fatter heatsink and weighs 115g more, which helps it cope with the higher TBP.
Both cards are triple fan designs, using custom 77mm fans that have an integrated rim. We saw the same style of fan on many of the RTX 30-series GPUs, and it looks like the engineers have discovered a better way to direct airflow. Both cards have a Radeon logo that lights up in red, but it looks like the 6800 XT might have an RGB logo — it’s not exposed in software yet, but maybe that will come.
Otherwise, you get dual 8-pin PEG power connections, which might seem a bit overkill on the 6800 — it’s a 250W card, after all, why should it need the potential for up to 375W of power? But we’ll get into the power stuff later. If you’re into collecting hardware boxes, the 6800 XT box is also larger and a bit nicer, but there’s no real benefit otherwise.
The one potential concern with AMD’s reference design is the video ports. There are two DisplayPort outputs, a single HDMI 2.1 connector, and a USB Type-C port. It’s possible to use four displays with the cards, but the most popular gaming displays still use DisplayPort, and very few options exist for the Type-C connector. There also aren’t any HDMI 2.1 monitors that I’m aware of, unless you want to use a TV for your monitor. But those will eventually come. Anyway, if you want a different port selection, keep an eye on the third party cards, as I’m sure they’ll cover other configurations.
And now, on to the benchmarks.
Radeon RX 6800 Series Test Systems
It seems AMD is having a microprocessor renaissance of sorts right now. First, it has Zen 3 coming out and basically demolishing Intel in every meaningful way in the CPU realm. Sure, Intel can compete on a per-core basis … but only up to 10-core chips without moving into HEDT territory. The new RX 6800 cards might just be the equivalent of AMD’s Ryzen CPU launch. This time, AMD isn’t making any apologies. It intends to go up against Nvidia’s best. And of course, if we’re going to test the best GPUs, maybe we ought to look at the best CPUs as well?
For this launch, we have three test systems. First is our old and reliable Core i9-9900K setup, which we still use as the baseline and for power testing. We’re adding both AMD Ryzen 9 5900X and Intel Core i9-10900K builds to flesh things out. In retrospect, trying to do two new testbeds may have been a bit too ambitious, as we have to test each GPU on each testbed. We had to cut a bunch of previous-gen cards from our testing, and the hardware varies a bit among the PCs.
For the AMD build, we've got an MSI X570 Godlike motherboard, one of only a handful that support AMD's new Smart Access Memory technology. Patriot supplied us with two kits of single-rank DDR4-4000 memory, which means we have 4x8GB instead of our usual 2x16GB configuration. We also have a Patriot Viper VP4100 2TB SSD holding all of our games. Remember when 1TB felt like a huge amount of SSD storage? Then Call of Duty: Modern Warfare (2019) happened, sucking down over 200GB, which is why we need 2TB drives.
Meanwhile, the Intel LGA1200 PC has an Asus Maximus XII Extreme motherboard, 2x16GB of DDR4-3600 HyperX memory, and a 2TB XPG SX8200 Pro SSD. (I'm not sure if it's the original 'fast' version or the revised 'slow' variant, but it shouldn't matter for these GPU tests.) Full specs are in the table below.
Anyway, the slightly slower RAM might be a bit of a handicap on the Intel PCs, but this isn’t a CPU review — we just wanted to use the two fastest CPUs, and time constraints and lack of duplicate hardware prevented us from going full apples-to-apples. The internal comparisons among GPUs on each testbed will still be consistent. Frankly, there’s not a huge difference between the CPUs when it comes to gaming performance, especially at 1440p and 4K.
Besides the testbeds, I’ve also got a bunch of additional gaming tests. First is the suite of nine games we’ve used on recent GPU reviews like the RTX 30-series launch. We’ve done some ‘bonus’ tests on each of the Founders Edition reviews, but we’re shifting gears this round. We’re adding four new/recent games that will be tested on each of the CPU testbeds: Assassin’s Creed Valhalla, Dirt 5, Horizon Zero Dawn, and Watch Dogs Legion — and we’ve enabled DirectX Raytracing (DXR) on Dirt 5 and Watch Dogs Legion.
There are some definite caveats, however. First, the beta DXR support in Dirt 5 doesn't look all that different from the regular mode, and it's an AMD-promoted game. Coincidence? Maybe, but it's more likely that AMD is working with Codemasters to ensure it runs suitably on the RX 6800 cards. The other problem is probably just a bug, but AMD's RX 6800 cards seem to render the reflections in Watch Dogs Legion with a bit less fidelity.
Besides the above, we have a third suite of ray tracing tests: nine games (or benchmarks of upcoming games) plus 3DMark Port Royal. Of note, Wolfenstein Youngblood's ray tracing (which uses Nvidia's pre-standard Vulkan ray tracing extensions) wouldn't work on the AMD cards, and neither would the Bright Memory Infinite benchmark. Also, Crysis Remastered had some rendering errors with ray tracing enabled (on the nanosuits). Again, that's a known bug.
Radeon RX 6800 Gaming Performance
We’ve retested all of the RTX 30-series cards on our Core i9-9900K testbed … but we didn’t have time to retest the RTX 20-series or RX 5700 series GPUs. The system has been updated with the latest 457.30 Nvidia drivers and AMD’s pre-launch RX 6800 drivers, as well as Windows 10 20H2 (the October 2020 update to Windows). It looks like the combination of drivers and/or Windows updates may have dropped performance by about 1-2 percent overall, though there are other variables in play. Anyway, the older GPUs are included mostly as a point of reference.
We have 1080p, 1440p, and 4K ultra results for each of the games, as well as the combined average of the nine titles. We’re going to dispense with the commentary for individual games right now (because of a time crunch), but we’ll discuss the overall trends below.
9 Game Average
Borderlands 3
The Division 2
Far Cry 5
Final Fantasy XIV
Forza Horizon 4
Metro Exodus
Red Dead Redemption 2
Shadow of the Tomb Raider
Strange Brigade
AMD's new GPUs definitely make a good showing in traditional rasterization games. At 4K, Nvidia's RTX 3080 leads the 6800 XT by three percent, but it's not a clean sweep: AMD comes out on top in Borderlands 3, Far Cry 5, and Forza Horizon 4, while Nvidia gets modest wins in The Division 2, Final Fantasy XIV, Metro Exodus, Red Dead Redemption 2, and Shadow of the Tomb Raider, with its largest lead in Strange Brigade. But that's only at the highest resolution, where AMD's Infinity Cache may not be quite as effective.
Dropping to 1440p, the RTX 3080 and 6800 XT are effectively tied — again, AMD wins several games, Nvidia wins others, but the average performance is the same. At 1080p, AMD even pulls ahead by two percent overall. Not that we really expect most gamers forking over $650 or $700 or more on a graphics card to stick with a 1080p display, unless it’s a 240Hz or 360Hz model.
Flipping over to the vanilla RX 6800 and the RTX 3070, AMD does even better. On average, the RX 6800 leads by 11 percent at 4K ultra, nine percent at 1440p ultra, and seven percent at 1080p ultra. Here the 8GB of GDDR6 memory on the RTX 3070 simply can’t keep pace with the 16GB of higher clocked memory — and the Infinity Cache — that AMD brings to the party. The best Nvidia can do is one or two minor wins (e.g., Far Cry 5 at 1080p, where the GPUs are more CPU limited) and slightly higher minimum fps in FFXIV and Strange Brigade.
But as good as the RX 6800 looks against the RTX 3070, we prefer the RX 6800 XT from AMD. It only costs $70 more, which is basically the cost of one game and a fast food lunch. Or put another way, it’s 12 percent more money, for 12 percent more performance at 1080p, 14 percent more performance at 1440p, and 16 percent better 4K performance. You also get AMD’s Rage Mode pseudo-overclocking (really just increased power limits).
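To put that value argument in concrete terms, here's a quick perf-per-dollar sketch using the launch MSRPs ($579 for the RX 6800, $649 for the RX 6800 XT) and the 4K scaling figure from our results. The normalization is illustrative only:

```python
# Rough perf-per-dollar sketch using launch MSRPs and the relative
# performance delta measured above (4K ultra: 6800 XT is 16% faster).
msrp_6800, msrp_6800xt = 579, 649

price_delta = (msrp_6800xt - msrp_6800) / msrp_6800  # ~12% more money
perf_delta_4k = 0.16                                 # 16% faster at 4K ultra

# Normalized performance per dollar (RX 6800 = 1.0 baseline performance)
value_6800 = 1.0 / msrp_6800
value_6800xt = (1.0 + perf_delta_4k) / msrp_6800xt

print(f"price delta: {price_delta:.1%}")
print(f"6800 XT perf-per-dollar vs 6800: {value_6800xt / value_6800:.3f}x")
```

At 4K, the XT actually delivers slightly *more* performance per dollar than its little brother, which is unusual for the faster card in a product stack.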
Radeon RX 6800 CPU Scaling and Overclocking
Our traditional gaming suite is due for retirement, but we didn’t want to toss it out at the same time as a major GPU launch — it might look suspicious. We didn’t have time to do a full suite of CPU scaling tests, but we did run 13 games on the five most recent high-end/extreme GPUs on our three test PCs. Here’s the next series of charts, again with commentary below.
13-Game Average
Assassin’s Creed Valhalla
Borderlands 3
The Division 2
Dirt 5
Far Cry 5
Final Fantasy XIV
Forza Horizon 4
Horizon Zero Dawn
Metro Exodus
Red Dead Redemption 2
Shadow of the Tomb Raider
Strange Brigade
Watch Dogs Legion
These charts are a bit busy, perhaps, with five GPUs and three CPUs each, plus overclocking. Take your time. We won’t judge. Nine of the games are from the existing suite, and the trends noted earlier basically continue.
Looking just at the four new games, AMD gets a big win in Assassin’s Creed Valhalla (it’s an AMD promotional title, so future updates may change the standings). Dirt 5 is also a bit of an odd duck for Nvidia, with the RTX 3090 actually doing quite badly on the Ryzen 9 5900X and Core i9-10900K for some reason. Horizon Zero Dawn ends up favoring Nvidia quite a bit (but not the 3070), and lastly, we have Watch Dogs Legion, which favors Nvidia a bit (more at 4K), but it might have some bugs that are currently helping AMD’s performance.
Overall, the 3090 still maintains its (gold-plated) crown, which you'd sort of expect from a $1,500 graphics card that you can't even buy right now. The RX 6800 XT mixes it up with the RTX 3080, coming out slightly ahead overall at 1080p and 1440p but barely trailing at 4K. Meanwhile, the RX 6800 easily outperforms the RTX 3070 across the suite, though a few games and/or lower resolutions do go the other way.
Oddly, my test systems ended up with the Core i9-10900K and even the Core i9-9900K often leading the Ryzen 9 5900X. The 3090 did best with the 5900X at 1080p, but shifted to the 10900K at 1440p and to both the 9900K and 10900K at 4K. The other GPUs also swap places, though usually the difference between CPUs is pretty negligible (and a few results just look a bit buggy).
It may be that the beta BIOS for the MSI X570 board (which enables Smart Access Memory) still needs more tuning, or that the differences in memory came into play. I didn't have time to check performance without the large PCIe BAR feature enabled either. But these are mostly very small differences, and any of the three CPUs tested here is sufficient for gaming.
As for overclocking, it's pretty much what you'd expect: increase the power limit, GPU core clocks, and GDDR6 clocks, and you get more performance. It's not a huge improvement, though. Overall, the RX 6800 XT was 4-6 percent faster when overclocked (the higher results were at 4K). The RX 6800 did slightly better, improving by 6 percent at 1080p and 1440p, and 8 percent at 4K. GPU clocks were also above 2.5GHz for most of the RX 6800 testing, and its lower default boost clock gave it a bit more room for improvement.
Radeon RX 6800 Series Ray Tracing Performance
So far, most of the games haven’t had ray tracing enabled. But that’s the big new feature for RDNA2 and the Radeon RX 6000 series, so we definitely wanted to look into ray tracing performance more. Here’s where things take a turn for the worse because ray tracing is very demanding, and Nvidia has DLSS to help overcome some of the difficulty by doing AI-enhanced upscaling. AMD can’t do DLSS since it’s Nvidia proprietary tech, which means to do apples-to-apples comparisons, we have to turn off DLSS on the Nvidia cards.
That’s not really fair because DLSS 2.0 and later actually look quite nice, particularly when using the Balanced or Quality modes. What’s more, native 4K gaming with ray tracing enabled is going to be a stretch for just about any current GPU, including the RTX 3090 — unless you’re playing a lighter game like Pumpkin Jack. Anyway, we’ve looked at ray tracing performance with DLSS in a bunch of these games, and performance improves by anywhere from 20 percent to as much as 80 percent (or more) in some cases. DLSS may not always look better, but a slight drop in visual fidelity for a big boost in framerates is usually hard to pass up.
We’ll have to see if AMD’s FidelityFX Super Resolution can match DLSS in the future, and how many developers make use of it. Considering AMD’s RDNA2 GPUs are also in the PlayStation 5 and Xbox Series S/X, we wouldn’t count AMD out, but for now, Nvidia has the technology lead. Which brings us to native ray tracing performance.
10-game DXR Average
3DMark Port Royal
Boundary Benchmark
Call of Duty Black Ops Cold War
Control
Crysis Remastered
Dirt 5
Fortnite
Metro Exodus
Shadow of the Tomb Raider
Watch Dogs Legion
Well. So much for AMD’s comparable performance. AMD’s RX 6800 series can definitely hold its own against Nvidia’s RTX 30-series GPUs in traditional rasterization modes. Turn on ray tracing, even without DLSS, and things can get ugly. AMD’s RX 6800 XT does tend to come out ahead of the RTX 3070, but then it should — it costs more, and it has twice the VRAM. But again, DLSS (which is supported in seven of the ten games/tests we used) would turn the tables, and even the DLSS quality mode usually improves performance by 20-40 percent (provided the game isn’t bottlenecked elsewhere).
Ignoring the often-too-low framerates, overall, the RTX 3080 is nearly 25 percent faster than the RX 6800 XT at 1080p, and that lead only grows at 1440p (26 percent) and 4K (30 percent). The RTX 3090 is another 10-15 percent ahead of the 3080, which is very much out of AMD’s reach if you care at all about ray tracing performance — ignoring price, of course.
The RTX 3070 comes out with a 10-15 percent lead over the RX 6800, but individual games can behave quite differently. Take the new Call of Duty: Black Ops Cold War. It supports multiple ray tracing effects, and even the RTX 3070 holds a significant 30 percent lead over the 6800 XT at 1080p and 1440p. Boundary, Control, Crysis Remastered, and (to a lesser extent) Fortnite also have the 3070 leading the AMD cards. But Dirt 5, Metro Exodus, Shadow of the Tomb Raider, and Watch Dogs Legion have the 3070 falling behind the 6800 XT at least, and sometimes the RX 6800 as well.
There is a real question about whether the GPUs are doing the same work, though. We haven’t had time to really dig into the image quality, but Watch Dogs Legion for sure doesn’t look the same on AMD compared to Nvidia with ray tracing enabled. Check out these comparisons:
Apparently Ubisoft knows about the problem. In a statement to us, it said, “We are aware of the issue and are working to address it in a patch in December.” But right now, there’s a good chance that AMD’s performance in Watch Dogs Legion at least is higher than it should be with ray tracing enabled.
Overall, AMD’s ray tracing performance looks more like Nvidia’s RTX 20-series GPUs than the new Ampere GPUs, which was sort of what we expected. This is first gen ray tracing for AMD, after all, while Nvidia is on round two. Frankly, looking at games like Fortnite, where ray traced shadows, reflections, global illumination, and ambient occlusion are available, we probably need fourth gen ray tracing hardware before we’ll be hitting playable framerates with all the bells and whistles. And we’ll likely still need DLSS, or AMD’s Super Resolution, to hit acceptable frame rates at 4K.
Radeon RX 6800 Series: Power, Temps, Clocks, and Fans
We’ve got our usual collection of power, temperature, clock speed, and fan speed testing using Metro Exodus running at 1440p, and FurMark running at 1600×900 in stress test mode. While Metro is generally indicative of how other games behave, we loop the benchmark five times, and there are dips where the test restarts and the GPU gets to rest for a few seconds. FurMark, on the other hand, is basically a worst-case scenario for power and thermals. We collect the power data using Powenetics software and hardware, which uses GPU-Z to monitor GPU temperatures, clocks, and fan speeds.
GPU Total Power
AMD basically sticks to the advertised 300W TBP on the 6800 XT with Metro Exodus, and even comes in slightly below the 250W TBP on the RX 6800. Enabling Rage Mode on the 6800 XT obviously changes things, and you can also see our power figures for the manual overclocks. Basically, Big Navi can match the RTX 3080 when it comes to power if you relax the power limits.
FurMark pushes power on both cards a bit higher, which is pretty typical. If you check the line graphs, you can see our 6800 XT OC starts off at nearly 360W in FurMark before it throttles down a bit and ends up at closer to 350W. There are some transient power spikes that can go a bit higher as well, which we’ll discuss more later.
GPU Core Clocks
Looking at the GPU clocks, AMD is pushing some serious MHz for a change. This is now easily the highest clocked GPU we’ve ever seen, and when we manually overclocked the RX 6800, we were able to hit a relatively stable 2550 MHz. That’s pretty damn impressive, especially considering power use isn’t higher than Nvidia’s GPUs. Both cards also clear their respective Game Clocks and Boost Clocks, which is a nice change of pace.
GPU Core Temp
GPU Fan Speed
Temperatures and fan speeds are directly related: ramp up fan speed, which we did for the overclocked 6800 cards, and you get lower temperatures at the cost of higher noise levels. We're still investigating overclocking as well, as there's a bit of odd behavior so far. The cards will run fine for a while, and then suddenly drop into a weak state where performance might be half the normal level, or even worse. That's probably due to the lack of overclocking support in MSI Afterburner for the time being. By default, though, the cards strike a good balance between cooling and noise. We'll get exact SPL readings later (we're still benchmarking a few other bits), but it's interesting that all of the new GPUs (RTX 30-series and RX 6000) run lower fan speeds than the previous generation.
We observed some larger-than-expected transient power spikes with the RX 6800 XT, but to be absolutely clear, these transient power spikes shouldn’t be an issue — particularly if you don’t plan on overclocking. However, it is important to keep these peak power measurements in mind when you spec out your power supply.
Transient power spikes are common but are usually of such short duration (in the millisecond range) that our power measurement gear, which records measurements at roughly a 100ms granularity, can’t catch them. Typically you’d need a quality oscilloscope to measure transient power spikes accurately, but we did record several spikes even with our comparatively relaxed polling.
The charts above show total power consumption of the RX 6800 XT at stock settings, overclocked, and with Rage Mode enabled. In terms of transient power spikes, we don't see any issues at all with Metro Exodus, but we see brief peaks during FurMark of 425W with the manually overclocked config, 373W with Rage Mode, and 366W with the stock setup. Again, these peaks were measured within one 100ms polling cycle, which means they could certainly trip a PSU's over-power protection if you're running close to maximum power delivery.
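A quick illustration of why coarse polling understates transients: suppose (hypothetically, these are not our capture data) a single 5 ms, 425 W spike rides on a 300 W baseline. Averaged across one 100 ms polling window, it almost vanishes:

```python
# Why 100 ms polling understates millisecond transients: a 5 ms, 425 W
# spike on a 300 W baseline barely moves the per-window average.
# Illustrative numbers only, not actual Powenetics capture data.
baseline_w, spike_w, spike_ms, window_ms = 300.0, 425.0, 5, 100

# Build a 1 ms resolution trace for one window containing the spike.
trace = [spike_w if t < spike_ms else baseline_w for t in range(window_ms)]

window_avg = sum(trace) / len(trace)
print(f"true peak: {max(trace):.0f} W, 100 ms average: {window_avg:.2f} W")
```

The 125 W excursion shows up as barely 6 W in the windowed average, which is why a single 100 ms sample reading 425 W implies the instantaneous peak was likely even higher.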
To drill down on the topic, we split out our power measurements from each power source, which you’ll see above. The RX 6800 XT draws power from the PCIe slot and two eight-pin PCIe connectors (PEG1/PEG2).
Power consumption over the PCIe slot is well managed during all the tests (as a general rule of thumb, this value shouldn’t exceed 71W, and the 6800 XT is well below that mark). We also didn’t catch any notable transient spikes during our real-world Metro Exodus gaming test at either stock or overclocked settings.
However, during our FurMark test at stock settings, we see a power consumption spike to 206W on one of the PCIe cables for a very brief period (we picked up a single measurement of the spike during each run). After overclocking, we measured a simultaneous spike of 231W on one cable and 206W on the other for a period of one measurement taken at a 100ms polling rate. Naturally, those same spikes are much less pronounced with Rage Mode overclocking, measuring only 210W and 173W. A PCIe cable can easily deliver ~225W safely (even with 18AWG), so these transient power spikes aren’t going to melt connectors, wires, or harm the GPU in any way — they would need to be of much longer duration to have that type of impact.
But the transient spikes are noteworthy because some CPUs, like the Intel Core i9-9900K and i9-10900K, can consume more than 300W, adding to the total system power draw. If you plan on overclocking, it would be best to factor the RX 6800 XT's transient power consumption into the total system power.
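As a rough worked example, here's that budget using our measured 425 W overclocked GPU transient plus assumed (hypothetical) figures for the CPU and the rest of the platform:

```python
# Hypothetical worst-case system power budget (illustrative numbers):
# the 425 W overclocked RX 6800 XT transient we measured, a power-hungry
# CPU, and an assumed ~75 W for board, RAM, drives, and fans.
gpu_transient_w = 425   # peak measured in FurMark, manually overclocked
cpu_peak_w = 300        # e.g. a Core i9-10900K under heavy load
platform_w = 75         # motherboard, memory, storage, fans (assumed)

total_peak_w = gpu_transient_w + cpu_peak_w + platform_w

# A common rule of thumb: keep worst-case draw under ~80% of the PSU's
# continuous rating so brief spikes don't trip over-power protection.
recommended_psu_w = total_peak_w / 0.8
print(f"peak draw ~{total_peak_w} W -> PSU of ~{recommended_psu_w:.0f} W+")
```

Under those assumptions, an overclocked 6800 XT paired with a hot-running CPU lands in four-figure PSU territory, which squares with the single-rail advice below.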
Power spikes of 5-10ms can trip the overcurrent protection (OCP) on some multi-rail power supplies because they tend to have relatively low OCP thresholds. As usual, a PSU with a single 12V rail tends to be the best solution because they have much better OCP mechanisms, and you’re also better off using dedicated PCIe cables for each 8-pin connector.
Radeon RX 6800 Series: Prioritizing Rasterization Over Ray Tracing
It’s been a long time since AMD had a legitimate contender for the GPU throne. The last time AMD was this close … well, maybe Hawaii (Radeon R9 290X) was competitive in performance at least, while using quite a bit more power. That’s sort of been the standard disclaimer for AMD GPUs for quite a few years. Yes, AMD has some fast GPUs, but they tend to use a lot of power. The other alternative was best illustrated by one of the best budget GPUs of the past couple of years: AMD isn’t the fastest, but dang, look how cheap the RX 570 is! With the Radeon RX 6800 series, AMD is mostly able to put questions of power and performance behind it. Mostly.
The RX 6800 XT ends up just a bit slower than the RTX 3080 overall in traditional rendering, but it costs less, and it uses a bit less power (unless you kick on Rage Mode, in which case it’s a tie). There are enough games where AMD comes out ahead that you can make a legitimate case for AMD having the better card. Plus, 16GB of VRAM is definitely helpful in a few of the games we tested — or at least, 8GB isn’t enough in some cases. The RX 6800 does even better against the RTX 3070, generally winning most benchmarks by a decent margin. Of course, it costs more, but if you have to pick between the 6800 and 3070, we’d spend the extra $80.
The problem is, that’s a slippery slope. At that point, we’d also spend an extra $70 to go to the RX 6800 XT … and $50 more for the RTX 3080, with its superior ray tracing and support for DLSS, is easy enough to justify. Now we’re looking at a $700 graphics card instead of a $500 graphics card, but at least it’s a decent jump in performance.
Of course, you can’t buy any of the Nvidia RTX 30-series GPUs right now. Well, you can, if you get lucky. It’s not that Nvidia isn’t producing cards; it’s just not producing enough cards to satisfy the demand. And, let’s be real for a moment: There’s not a chance in hell AMD’s RX 6800 series are going to do any better. Sorry to be the bearer of bad news, but these cards are going to sell out. You know, just like every other high-end GPU and CPU launched in the past couple of months. (Update: Yup, every RX 6800 series GPU sold out within minutes.)
What's more, AMD is better off producing more Ryzen 5000 series CPUs than Radeon RX 6000 GPUs. Just look at the chip sizes and other components. A Ryzen 9 5900X has two roughly 80mm² compute dies plus a 12nm IO die in a relatively compact package, and AMD is currently selling every single one of those CPUs for $550, or $800 for the 5950X. The Navi 21 GPU, by comparison, is made on the same TSMC N7 wafers but measures 519mm², plus it needs GDDR6 memory, a beefy cooler and fan, and all sorts of other components. And it still only sells for roughly the same price as the 5900X.
Which isn’t to say you shouldn’t want to buy an RX 6800 card. It’s really going to come down to personal opinions on how important ray tracing will become in the coming years. The consoles now support the technology, but even the Xbox Series X can’t keep up with an RX 6800, never mind an RTX 3080. Plus, while some games like Control make great use of ray tracing effects, in many other games, the ray tracing could be disabled, and most people wouldn’t really miss it. We’re still quite a ways off from anything approaching Hollywood levels of fidelity rendered in real time.
In terms of features, Nvidia still comes out ahead. Faster ray tracing, plus DLSS — and whatever else those Tensor cores might be used for in the future — seems like the safer bet long term. But there are still a lot of games forgoing ray tracing effects, or games where ray tracing doesn’t make a lot of sense considering how it causes frame rates to plummet. Fortnite in creative mode might be fine for ray tracing, but I can’t imagine many competitive players being willing to tank performance just for some eye candy. The same goes for Call of Duty. But then there’s Cyberpunk 2077 looming, which could be the killer game that ray tracing hardware needs.
We asked earlier if Big Navi, aka RDNA2, was AMD’s Ryzen moment for its GPUs. In a lot of ways, it’s exactly that. The first generation Ryzen CPUs brought 8-core CPUs to mainstream platforms, with aggressive prices that Intel had avoided. But the first generation Zen CPUs and motherboards were raw and had some issues, and it wasn’t until Zen 2 that AMD really started winning key matchups, and Zen 3 finally has AMD in the lead. Perhaps it’s better to say that Navi, in general, is AMD trying to repeat what it did on the CPU side of things.
RX 6800 (Navi 21) is literally a bigger, enhanced version of last year’s Navi 10 GPUs. It’s up to twice the CUs, twice the memory, and is at least a big step closer to feature parity with Nvidia now. If you can find a Radeon RX 6800 or RX 6800 XT in stock any time before 2021, it’s definitely worth considering. RX 6800 and Big Navi aren’t priced particularly aggressively, but they do slot in nicely just above and below Nvidia’s competing RTX 3070 and 3080.