With its Radeon RX 6000 series, AMD introduces a feature called Smart Access Memory. The promise is that in specific use-cases where the CPU needs to access a lot of the video memory, it can improve frame rates by up to 6%. Announced alongside the RX 6800 series, Smart Access Memory (SAM) is AMD’s branding for the Resizable BAR (Base Address Register) feature the PCI-SIG standardized years ago. AMD realized that this feature can be useful in improving gaming performance.
How it Works
Your processor can typically only access up to 256 MB of your graphics card’s dedicated video memory at any given time. This arbitrary limit dates back to the 32-bit era, when address space was at a premium, and interestingly it carried over into the 64-bit era. By then, newer APIs, such as DirectX 11, relied less on mirroring data between system and video memory. Since the CPU still needs a way to reach all of the GPU’s memory, a windowing mechanism is used: the GPU exposes 256 MB of its dedicated memory as a scratchpad, and any CPU-bound data is juggled in and out of that window.
Another reason why nobody even saw this as a “problem” was the enormous memory bandwidth at the disposal of GPUs (relative to system memory), which makes this juggling essentially “free.” When it came to the Radeon RX 6800 series, which is up against RTX 30-series “Ampere” GPUs with wider memory buses and faster memory devices, the company finally bit the bullet and implemented the Resizable BAR feature as Smart Access Memory. Since this is a PCI-SIG feature that can be enabled at the driver level, NVIDIA announced that it intends to implement it as well, via a driver update.
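If you are curious what this window looks like in practice, Linux exposes each PCI device’s BAR ranges through sysfs. The snippet below is a minimal, Linux-only sketch (the PCI address is a placeholder, not a device from this review): with Resizable BAR disabled, the GPU’s VRAM aperture typically reports 256 MiB, while with it enabled one BAR spans the card’s entire VRAM.

```python
# Hypothetical illustration: print the BAR sizes of a GPU on Linux.
# Each line of the "resource" file holds "start end flags" in hex.
from pathlib import Path

GPU = "0000:03:00.0"  # placeholder PCI address; list /sys/bus/pci/devices to find yours
lines = Path(f"/sys/bus/pci/devices/{GPU}/resource").read_text().splitlines()

for i, line in enumerate(lines[:6]):                   # the first six entries are BAR0-BAR5
    start, end, _flags = (int(x, 16) for x in line.split())
    if end:                                            # unused BARs read as all zeros
        print(f"BAR{i}: {(end - start + 1) / 2**20:,.0f} MiB")
```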
Resizable BAR requires UEFI firmware support, and AMD has artificially limited its support to the Ryzen 5000 “Zen 3” processor + 500-series chipset combination, possibly as a means to promote the two. It’s likely that NVIDIA’s implementation will be broader, since it doesn’t have a CPU + chipset platform of its own to promote, and that AMD will eventually follow suit.
Once enabled, the CPU sees the entire 16 GB of video memory on the RX 6800 series as one addressable block. AMD calculates that this helps with certain game engines which leverage the CPU in their 3D rendering stages (think certain kinds of post-processing, etc.). One possible explanation as to why AMD restricted SAM to its 500-series chipset platform is PCI-Express Gen 4. After all, PCI-Express 3.0 x16 bottlenecks next-gen GPUs by only a single-digit percentage, as shown in our RTX 3080 PCIe scaling article, so AMD figured all that untapped PCIe Gen 4 bandwidth could be used by SAM without affecting the GPU’s performance during normal 3D rendering. But this doesn’t explain why you need a Ryzen 5000 processor, and why a Ryzen 3000 “Matisse” won’t do.
To enable SAM, you need a 500-series chipset motherboard with the latest UEFI firmware supplied by your motherboard vendor, a Ryzen 5000 processor, and a Radeon RX 6800 series graphics card. Simply enable the “Resizable BAR Support” toggle in the “Advanced” PCIe settings of your UEFI setup program. For these toggles to be available, CSM has to be disabled. This also means that if you’ve been booting from an MBR partition, using CSM, you’ll have to reinstall Windows on a GPT partition. There’s also a conversion mechanism between MBR and GPT, but I haven’t tested that.
In this review, we’re testing using a 500-series chipset motherboard and a Ryzen 9 5900X processor to tell you if Radeon Smart Access Memory is worth the hype and whether it helps the RX 6800 XT gain more against the RTX 3080.
Test Setup
Test System
Processor:
AMD Ryzen 9 5900X
Motherboard:
ASRock X570 Taichi AMD X570, BIOS v3.59
Memory:
2x 8 GB DDR4-3900 CL16 Infinity Fabric at 1900 MHz
Apple’s new in-house M1 chip is officially on the market. The first reviews and benchmarks are starting to pop up, so we’re gathering everything we know about it into one handy place, which we’ll update as we learn more.
Apple M1 Cheat Sheet: Key details at a glance
Release Date:
Ships Week of 11/16
Found in:
MacBook Air, MacBook Pro, Mac Mini
Architecture:
Arm-based
CPU Cores:
8-core CPU
Nm Process:
5nm
Graphics:
Integrated 8-core GPU with 2.6 teraflops of throughput
Memory:
8GB or 16GB of LPDDR4X-4266 MHz SDRAM
Apple M1 Release Date
The first computers with Apple’s M1 chip are already up for purchase. To try it, you’re going to have to choose between one of the three new products that feature the chip: the new MacBook Air, the 13-inch MacBook Pro or the Mac Mini. Each comes with two configurations using the M1. The MacBook Pro also still has two Intel configurations on offer, and the Mac Mini has one Intel processor offering.
Apple started shipping out M1 device purchases this week.
Apple M1 Price
The M1 is a mobile chip, so you have to get it built into one of Apple’s machines.
The Mac Mini starts at $699 with 256GB of storage, making it the cheapest way to get an M1 processor. The price range stretches all the way to $2,099, which will net you the 13-inch MacBook Pro with 2TB of storage.
Pricing is largely down to the specifics of your purchase. But so far, it doesn’t seem like M1 Macs will be significantly more expensive than their Intel-based counterparts. The M1 MacBook Air configuration that is most similar to the Intel MacBook Air we reviewed earlier this year is $1,249, for instance, which is $50 cheaper than last year’s version. The $999 starting price remains unchanged.
Apple M1 Specs
Here’s the M1’s bread-and-butter. What does Apple’s new Arm-based chip have that Intel’s x86 architecture doesn’t? Well, it uses a 5nm process, for one. By comparison, even Intel’s 7nm process isn’t expected to start hitting its products until at least 2022. Apple’s CPU has 8 cores, which you would typically need to step up to Intel’s H-series product stack to get on mobile chips.
Four of the M1’s cores are dedicated to high-power performance, while the other 4 are for low-power efficiency. That evens out to a 10W thermal envelope overall, with the low power cores supposedly taking up a tenth of the power needed for the high-power cores. The chip also has a total of 16 billion transistors.
The M1 is also a system on a chip (SoC) with integrated graphics and onboard memory. The included GPU has 8 cores as well, with 128 total execution units and 2.6 teraflops of throughput (there is one exception here: the entry-level MacBook Air uses a version of the M1 with a 7-core GPU). The “unified memory” replaces the need for separate RAM, meaning that the chip comes with either 8GB or 16GB of LPDDR4X-4266 MHz SDRAM, depending on your device.
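For a rough sense of what that unified memory provides, here is a back-of-the-envelope bandwidth estimate. It assumes the widely reported 128-bit LPDDR4X bus width, which Apple has not officially published, so treat it as a sketch rather than a spec.

```python
# Hedged estimate of the M1's unified-memory bandwidth (128-bit bus width is an assumption).
transfers_per_s = 4266e6            # LPDDR4X-4266: 4,266 MT/s
bus_bytes = 128 / 8                 # assumed 128-bit bus = 16 bytes per transfer
print(f"{transfers_per_s * bus_bytes / 1e9:.1f} GB/s")   # ~68.3 GB/s, shared by CPU and GPU
```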
The M1 also has a separate 16-core neural engine for machine learning tasks.
Apple M1 Native Performance
The core drawback to the M1 chip right now is that, because it uses a different architecture and instruction set from Intel or AMD parts, it won’t be able to run x86 apps without emulating them. Developers are already on the case, with Microsoft saying it’s working on a version of Microsoft Office that will run natively on M1 machines and Adobe saying that it’s working on an M1-native creative suite. But early adopters might have to wait a bit to get the most performance they can out of their new chips.
When the M1 does get to run natively, though, it seems to pack some serious power.
Engadget reports that the M1 MacBook Air had Geekbench 5 results of 1,619/6,292. That’s well above their results for the 2020 i7 MacBook Air, which were 1,130/3,053. Meanwhile, the Tiger Lake Dell XPS 13 9310 scored 1,496/5,254 on our own Geekbench 5.0 benchmarks, while the ThinkPad X1 Extreme Gen 3 with an Intel Core i7-10850H chip scored 1,221/6,116.
The M1’s single-core score also beats the 27-inch 2020 Core i9 iMac’s single-core score, which only hit 1,246. It loses out to the iMac’s 9,046 multi-core score, but that gives the M1 a higher single-core result than any Intel Mac, desktops included.
Outlets like The Verge also tested the M1, but under different conditions. Using a MacBook Pro and testing with Geekbench 5.3, The Verge found its review unit scored 1,730/7,510 points.
We’re curious to see how the M1 stacks against a potential 8-core Tiger Lake chip down the line, as well as AMD’s new Ryzen 5000 processors, which are also looking to take Intel’s CPU crown. For now, though, the M1 is looking to be the fastest mobile chip you can buy.
Apple M1 Emulated Performance
Finally, we reach the biggest potential drawback for the M1: Since the Apple M1 uses a completely new architecture (at least new for Macs), it can’t natively run apps designed for x86 chips. Instead, it has to emulate them. Apple’s built a tool to let users easily do this, called Rosetta 2, but running apps through Rosetta 2 means they’re going to take a performance hit.
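As an aside, there is a simple way to check whether a given process is running natively or under translation. The sketch below is macOS-specific and relies on Apple’s documented sysctl.proc_translated key; consider it an illustration rather than part of any review methodology.

```python
# Check whether this Python process runs natively on Apple silicon or under Rosetta 2.
import platform
import subprocess

def rosetta_status() -> str:
    try:
        out = subprocess.run(["sysctl", "-n", "sysctl.proc_translated"],
                             capture_output=True, text=True, check=True).stdout.strip()
    except (OSError, subprocess.CalledProcessError):
        return "unknown (key absent: Intel Mac or non-macOS system)"
    return "translated by Rosetta 2" if out == "1" else "running natively"

print(platform.machine(), "-", rosetta_status())
```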
Official reviews are reporting on emulation more anecdotally than with official numbers, but user Geekbench results show that, even when emulating apps, the M1 chip is still faster than Intel counterparts. On November 14th, a user posted test results for an M1-equipped MacBook Air running the x86 version of Geekbench. The machine earned a single-core score of 1,313 and a multi-core score of 5,888. That’s about 79% as fast as the native scores for the same machine, which were 1,687 on single-core and 7,433 on multi-core. Still, even the emulated scores are higher than any other Intel Mac on single-core, including the 2020 27-inch iMac with a Core i9 processor. As for the multi-core score, it’s still much higher than the 3,067 score of the Core i7 2020 MacBook Air.
Keep in mind that performance varies from program to program, however. When The Verge tested the x86 version of Adobe Creative Cloud on its MacBook Pro review unit, the publication came across a bug that consistently halved its export bitrate. The publication said that export times stayed flat even when running multiple 4K exports in a row, suggesting strong performance, but it’s a good reminder that emulation still has drawbacks even if benchmark results look strong.
Again, this is a place where we’re looking forward to seeing how the M1 fares against the newest Intel and AMD chips. Because the M1 isn’t going to be running at its best here, other chipmakers might be able to make up the current performance gap more easily in upcoming mobile chip releases.
Apple M1 Graphics Performance
With Apple M1-equipped machines already starting to hit the public, preliminary benchmark results are starting to show up on the GFXBench browser. And while the 8-core, 128 CU, 2.6-teraflop chip’s obviously not going to compete with recent behemoths like the RTX 3000 series or even with older yet higher-end discrete GPUs like the GTX 1080, it does beat old standards like the Radeon RX 560 and the GTX 1050 Ti.
For instance, on high-level GFXBench tests like 1440p Manhattan 3.1.1, the Apple M1 hit 130.9 frames per second, while the 1050 Ti only hit 127.4 fps and the Radeon RX 560 was capped out at 101.4 fps. Meanwhile, on the more intensive Aztec Ruins High Tier test, the M1 hit 77.4 fps while the GTX 1050 Ti maxed out at 61.4 fps. The Radeon RX 560 did perform best in this test, with a score of 82.5 fps, but generally has lower frame rates across most tests.
Meanwhile, Ars Technica found that the M1 scored 11,476 points in 3DMark’s Slingshot Extreme Unlimited GPU test, as compared to the iPad Pro 2020’s score of 9,978 and the iPhone 12 Pro’s score of 6,226.
While it’s tricky to try to judge overall chip performance off of a few online and mobile benchmarks, these tests are the best official benchmark results we have right now. Still, reviews are making strong anecdotal claims as well. Engadget said that The Pathless runs at a solid 60 fps on its review MacBook Air, as does Fortnite at 1,400 x 900.
Apple M1 Battery Life
Despite packing more processing power overall, the M1 chip comes with 4 low-power cores that help it conserve battery life. Apple’s saying that this gives M1-equipped machines “the best battery life ever on a Mac,” which it tested by wirelessly browsing the web with brightness set to “8 clicks from the bottom” and by playing FHD videos under the same brightness settings. These tests are far from comprehensive, but reviews generally tend to place M1 Macs either around or above current Intel counterparts.
According to Engadget’s battery benchmarks, which “involved looping an HD video,” the M1 MacBook Air can stay powered on for up to 16 hours and 20 minutes, which is about 5 hours more than the publication’s numbers for the latest Intel MacBook Air. That’s also about 7 hours more than we got on our own battery benchmark for the latest Intel MacBook Air.
The Verge found that the M1 MacBook Pro’s numbers are a little less impressive, which is to be expected with more power. The publication claimed to “easily get 10 hours on a charge” and said it had to resort to running 4K YouTube videos on Chrome in the background to drop that down to 8 hours.
The Verge is less optimistic about the MacBook Air, though, saying it’s getting “between 8 and 10 hours of real, sustained work.”
macOS Big Sur, iPhone and iPad Apps
One of the coolest new features of the M1 chip is that, because it uses the same processor architecture as the iPhone and iPad, it can now run apps designed for those devices natively. However, reviewers are skeptical of this feature’s current implementation.
First, you’ll have to download these programs through the Mac app store using a filter, since developers still aren’t allowed to directly distribute iOS apps even on more traditional systems. Second, you’ll find that many of your favorites won’t be available, like Gmail, Slack and Instagram. That’s because developers are allowed to opt out of making their apps available on Mac, which plenty seem to be opting for. Third, apps that require touch input direct you to a series of unintuitive “touch alternatives,” like pressing space to tap in the center of a window or using the arrow keys to swipe.
The Verge called using iOS apps on Mac a “messy, weird experience,” in part because the apps that are available are “from developers that haven’t been updated to be aware of newer devices.” While Overcast, a podcast app, worked great for The Verge, HBO Max was stuck to a small window that couldn’t be resized and couldn’t play fullscreen videos.
Playing iOS games also proved to be a chore for some reviewers, as TechCrunch noted. The publication tried the iOS version of Among Us on an M1 MacBook Air and found that, while it ran smoothly, using the trackpad to emulate a touchscreen was awkward. There’s also an option to operate a virtual touchscreen with your mouse, but as the reviewer also ran across a fixed window size with no full-screen functionality, it’s clear that gaming on M1 still has a way to go.
The elephant in the room here across all experiences seems to be the lack of a touchscreen. We were hoping Apple would announce touchscreen Macs during its ‘One More Thing’ event earlier this month. But with no word on those yet, it’s hard for iOS apps on M1 to feel like more than an afterthought. There’s also the lack of support from big developers, who are probably waiting for these kinks, like no touchscreen support, to work themselves out.
ASRock and PowerColor joined the roster of AMD’s graphics card partners in revealing all-new custom-designed coolers for AMD’s latest RX 6800 and RX 6800 XT GPUs. ASRock has four brand new custom SKUs, while PowerColor is still keeping most of its designs to itself, but the company wasn’t afraid to show a sneak peek of its new Red Devil GPU on Twitter.
The new PowerColor Red Devil looks absolutely monstrous for a graphics card – from the rear/front angle of the image, the cooler looks to be nearly 4 slots wide. Hopefully the card will have monstrous cooling potential to go with that size.
Aesthetically, the card is very different from the previous Red Devil design on the RX 5700 XT. The new Red Devil lacks any sticks and instead opts to go with six red LED strips that start in the middle of the card and go all the way to the back. Unfortunately, that is all we know about this card, but hopefully we’ll get the detailed specs soon.
ASRock has four new custom designs. The Radeon RX 6800 XT Taichi X 16G OC and the Radeon RX 6800 XT Phantom Gaming D 16G OC cover the RX 6800 XT side of the equation, while the Radeon RX 6800 Phantom Gaming D 16G OC and the Radeon RX 6800 Challenger Pro 16G OC fill in the RX 6800 side.
ASRock’s flagship RX 6800 XT Taichi offers a triple-fan cooler design and measures 2.8 slots thick. Keep in mind it’s also quite tall at 140mm, so case compatibility will be something to look out for. The card features a metal shroud and metal backplate, with RGB lighting up the card in the center fan, sides, and a small area on the backplate. The color theme is very neutral and the design language is simple and striking at the same time. That will make it look good in most cases, especially those with metal finishes.
ASRock’s RX 6800 XT Phantom Gaming really does deserve the name, with very bright colors and a shrouded design that screams “gamer.”
The shroud looks to be all metal, similar to the Taichi, but this time you get four brushed aluminum accents flanking all four sides of the card, and in those accents, you get even more accents with several red lines pointing towards the central RGB fan. The backplate also features a black brushed aluminum design accompanied by red accents with the Phantom Gaming and ASRock logos present. This card really stands out – the dark parts of the shroud are highly contrasted by bright red and bright brushed aluminum accents. If you’re looking for something bright and blingy, this is the card for you.
The RX 6800 variant of this card is very similar to its bigger brother, with only slight differences to the design that are barely noticeable. The biggest change you’ll find with the RX 6800 Phantom Gaming is the PCB and the rear backplate that don’t extend all the way to the end of the card. Compare that to the RX 6800 XT Phantom Gaming, which does. Ironically the RX 6800’s cooler could possibly do its job better than the RX 6800 XT model, simply because the heatsink has all that extra space open in the rear for air to escape.
Finally, we have ASRock’s more budget-oriented option, the RX 6800 Challenger Pro. Compared to the other three designs, this one is the most humble, which allows it to blend in well with other computer components.
The shroud features a metal finish along with silver accents, but the silver is rather dark, so it blends well with the dark nature of the metal shroud. For RGB, all you get is a single dash of it at the top, where the ASRock and Challenger Pro logos are located. For cooling, you get a triple-fan design and a 2.7-slot-thick cooler. Similar to the Phantom Gaming, the PCB and backplate don’t extend all the way to the edge of the cooler. While it does look a little plain (like the Phantom Gaming), you will benefit from having a more efficient cooler that can push air out the rear of the card.
ASRock’s options for the RX 6800/RX 6800 XT look very pleasing. The metal shrouds are a nice touch, giving the cards a more premium look than other designs. ASRock recently posted the cards to its website but hasn’t revealed pricing or availability.
Just as the CPU is the brains of your computer, the controller is the brains of your SSD. Though many companies produce SSDs, most don’t make their own controllers. Phison is a leader in the SSD controller space and one of only a few companies that produce the hardware that manages your precious data on the latest flash.
Phison has spearheaded the PCIe Gen4 NVMe SSD market with its PS5016-E16 NVMe SSD controller and has enjoyed staying on top for quite a while. Samsung’s 980 PRO recently took the top-ranking title away from Phison, but Phison’s next-gen PS5018-E18 NVMe SSD controller may lead the company back to victory once again, assuming the final firmware quirks get worked out.
The Prototype with a Speed Governor
Phison was gracious enough to send over an early engineering sample of the PS5018-E18 to play with. However, as exciting as early sampling is, ES units aren’t without drawbacks. The unfortunate part here is that the device is roughly 1-2 firmware revisions away from production and paired with slower than optimal flash. The company officially rates the PS5018-E18 to deliver throughput of up to 7.4 / 7.0 GBps read/write as well as sustain upwards of 1 million random read and write IOPS with next-gen flash.
Our prototype comes with 2TB of Micron’s 512Gb B27B 96L TLC flash operating at 1,200 MTps rather than Micron’s recently announced 176L replacement gate TLC flash, capable of saturating the controller’s max interface speed. While this prototype won’t be nearly as fast as the final production units, it is interesting to see how it compares in testing at this point with the current generation flash. A recent news post shows that it is even capable of sustaining a hefty 1.2 million random write IOPS in the configuration we have in our hands today.
Architecture of PS5018-E18 SSD Controller
Built from the ground up and produced on TSMC’s 12nm technology node, Phison’s PS5018-E18 is quite the capable PCIe 4.0 x4 SSD controller in terms of features and performance. Phison crammed five Arm Cortex-R5 CPU cores into this thing, with three acting as primary cores for the heavy lifting while the other two are clocked lower and run the company’s Dual CoXProcessor 2.0 code to help offload some of the strain from the main cores.
The controller interfaces with the NAND over eight NAND flash channels at up to 1,600 MTps and supports capacities of up to 8TB with 32 chip enables. There are eight packages on our sample, four on each side thanks to the small size of the controller that measures just 12 x 12mm. The design leverages a DRAM-based architecture, too, with our sample containing two SK hynix DDR4 chips, one on each side of the PCB.
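Some rough math shows why that 1,600 MTps flash interface matters. The figures below are an upper-bound sketch of raw channel bandwidth, before ECC, metadata, and protocol overhead eat into it, and they assume one byte transferred per channel per cycle.

```python
# Raw NAND-interface bandwidth for an 8-channel controller at two bus speeds.
channels = 8
for mtps in (1200, 1600):                       # current B27B flash vs. the E18's 1,600 MTps ceiling
    gb_s = channels * mtps * 1e6 / 1e9          # one byte per transfer per channel
    print(f"{mtps} MTps x {channels} channels = {gb_s:.1f} GB/s raw")
# 1,200 MTps -> 9.6 GB/s; 1,600 MTps -> 12.8 GB/s, comfortably above the
# ~7.9 GB/s usable limit of a PCIe 4.0 x4 link.
```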
Features of Phison PS5018-E18 SSD Controller
Phison’s PS5018-E18 meets the NVMe 1.4 spec and comes with a bunch of features. As per usual, it comes with support for both Trim and S.M.A.R.T. data reporting. Like other controllers, it supports Active State Power Management (ASPM), Autonomous Power State Transition (APST), and the L1.2 ultra-low power state. Thermal throttling is implemented, but isn’t of much concern as the new controller doesn’t get too hot in most use cases, and mind you, that is without a nickel integrated heat sink.
It also leverages the company’s fourth-generation LDPC ECC engine, SmartECC (RAID ECC), and End-to-End Data Path Protection for robust error correction and enhanced data reliability. It even supports hardware-accelerated AES 128/256-bit encryption that is TCG, Opal 2.0, and Pyrite compliant and comes with crypto erase capability.
Phison’s E18 supports fully dynamic write caching like the E12S and E16 before it. The SLC cache can therefore span up to one-third of the drive’s available capacity when using TLC flash. The company also implemented SmartFlush, which helps to quickly recover the cache for predictable and consistent performance.
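To make that one-third figure concrete, here is a small illustrative calculation. It assumes the cache is carved out of free TLC capacity, with each cached bit occupying a cell that would otherwise store three bits, so the cache shrinks as the drive fills; the exact behavior depends on Phison’s firmware.

```python
# Illustrative dynamic SLC cache sizing for a 2 TB TLC drive (assumption: cache = free space / 3).
def slc_cache_gb(capacity_gb: float, used_gb: float) -> float:
    free_gb = capacity_gb - used_gb
    return free_gb / 3

for used in (0, 1000, 1500):
    print(f"{used} GB used -> ~{slc_cache_gb(2000, used):.0f} GB SLC cache")
# Empty drive: ~667 GB of cache; 50% full (as tested here): ~333 GB; 75% full: ~167 GB.
```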
Test Bench and Methodology
Asus X570 ROG Crosshair VIII Hero (Wi-Fi)
AMD Ryzen 5 3600X @ 4.3 GHz (all cores)
2x8GB Crucial Ballistix RGB DDR4 3600 MHz CL16
Sapphire Pulse Radeon RX570 4GB
Corsair RM850x
The initial results you see in this article are with the SSDs tested at 50% full capacity and with the operating system drive using Windows 10 Pro 1909. Also, note that while some of the new PCIe Gen4 SSDs are capable of 1 million IOPS, our lowly 6C/12T Ryzen 5 3600X can only sustain 650-700K IOPS at most. We will soon upgrade our test system’s CPU to a 12C/24T Zen 3 5900X to push next-gen storage to the max.
2TB Performance of Phison PS5018-E18 SSD Controller
We threw a few of the best SSDs into the mix to gauge the Phison PS5018-E18’s performance. We included two of the top dogs, the WD Black SN850 and Samsung’s 980 PRO, as well as Adata’s XPG Gammix S50 Lite, an entry-level Gen4 performer based on SMI’s newest SM2267 NVMe controller and 1,200 MTps flash.
We included the Sabrent Rocket NVMe 4.0, which has Phison’s E16 controller and Kioxia’s 96L TLC operating at up to 800MTps, and we added in the Sabrent Rocket Q4, which features Micron’s cheaper 96L QLC flash. Additionally, we threw in Crucial’s P5, Samsung’s 970 EVO Plus, WD’s Black SN750, and AN1500 as some PCIe Gen3 competition.
Game Scene Loading – Final Fantasy XIV
Final Fantasy XIV: Shadowbringers is a free real-world game benchmark that easily and accurately compares game load times without the inaccuracy of using a stopwatch.
When it comes to game loading, the Phison PS5018-E18 proves more competitive than the E16 before it, but with the current flash, even Samsung’s 970 EVO Plus takes the lead over it in this test. The E18 is not quite as responsive as Samsung’s 980 PRO nor WD’s Black SN850, at least not yet.
Transfer Rates – DiskBench
We use the DiskBench storage benchmarking tool to test file transfer performance with our own custom blocks of data. Our 50GB dataset includes 31,227 files of various types, like pictures, PDFs, and videos. Our 100GB dataset consists of 22,579 files, with 50GB of them being large movies. We copy the data sets to new folders and then follow up with a read test of a newly-written 6.5GB zip file and 15GB movie file.
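DiskBench itself is a Windows tool, but the idea behind the numbers is simple: bytes moved divided by wall-clock time. The sketch below is a generic stand-in, not our actual methodology, and its file paths are placeholders.

```python
# Minimal sketch of a file-copy throughput measurement (MB/s = bytes / seconds / 1e6).
import shutil
import time
from pathlib import Path

def copy_rate_mbps(src: str, dst: str) -> float:
    size = Path(src).stat().st_size
    start = time.perf_counter()
    shutil.copyfile(src, dst)
    return size / (time.perf_counter() - start) / 1e6

# Example (placeholder paths):
# print(f"{copy_rate_mbps('movie_15GB.mkv', 'copy.mkv'):.0f} MB/s")
```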
When copying around datasets and reading large files, the Phison PS5018-E18 prototype delivered responsive performance, especially strong read performance, but it isn’t quite on par with the 1TB WD Black SN850 and Samsung 980 PRO. When copying our 50GB and 100GB datasets, the Phison PS5018-E18 ranked fourth place, outperforming most of the Gen3 competitors, but trailing WD’s mighty RAID 0 configured Black AN1500.
Trace Testing – PCMark 10 Storage Tests
PCMark 10 is a trace-based benchmark that uses a wide-ranging set of real-world traces from popular applications and everyday tasks to measure the performance of storage devices. The quick benchmark is more relatable to those who use their PCs for leisure or basic office work, while the full benchmark relates more to power users.
While previous tests show minor gains over the E16, PCMark 10 quick results look to have degraded compared to the E16 and are a little on the low side. That’s a little strange considering there is now an additional core in its architecture. PCMark 10’s Full System Drive benchmark shows improvement, but the Phison PS5018-E18 is still ranking behind both the new Samsung and WD.
Trace Testing – SPECworkstation 3
Like PCMark 10, SPECworkstation 3 is a trace-based benchmark, but it is designed to push the system harder by measuring workstation performance in professional applications.
When hit with some harder workloads in SPECworkstation 3, Phison’s E18 delivered fast performance but didn’t eclipse its competition the way Samsung’s 980 PRO did. The company will need to work a bit harder to reach Samsung-like levels here.
Synthetic Testing – ATTO / iometer
Iometer is an advanced and highly configurable storage benchmarking tool, while ATTO is a simple and free application that SSD vendors commonly use to assign sequential performance specifications to their products. Both of these tools give us insight into how the device handles different file sizes.
In ATTO, we tested Phison’s PS5018-E18 at a QD of 1, representing most day-to-day file access at various block sizes. Based on ATTO’s results, the E18 shows the fastest peak sequential results, but once we bumped up the QD, both the Samsung and WD inched ahead in reads.
The E18 came back and demonstrated very responsive write performance, however, peaking at 6.6 GBps. When it comes to random performance, the E18 ranks fourth in reads and first in writes; it is fairly competitive with the current flash, but not as finely tuned as its competitors.
First Thoughts on the PS5018-E18 Prototype
Phison’s PS5018-E18 NVMe SSD controller is impressive on paper and has some fast specs. With five CPU cores, it is just one shy of Crucial’s P5, but isn’t shackled down by a Gen3 PHY and runs much cooler thanks to TSMC’s 12nm technology node.
With our prototype using Micron’s 96L B27B TLC and operating at 1,200 MTps, the controller shows noticeable improvements over the company’s E16 in some workloads, but there are still some kinks to be worked out. Samsung’s 980 PRO and WD’s Black SN850 both have the upper hand for now.
The Phison PS5018-E18’s performance will be a lot more interesting to analyze once we have finalized firmware and NAND configurations. With support for up to 1,600 MTps NAND flash, higher speeds are just around the corner and a lot of the performance gap will shrink.
In fact, while Micron only announced the supporting NAND days ago, Phison already has new prototypes with Micron’s faster 176L (B47R) flash in hand, and development is well underway. Retail products are just around the corner, roughly a month or two away.
AMD is about to launch its new graphics cards tomorrow, and with it, bring ray tracing to PCs with AMD hardware. Its implementation relies on Microsoft DXR Ray Tracing tech, and to showcase this, AMD is coming out with a tech demo that you’ll be able to run on your own systems.
For now, all we have is the above teaser for the demo, so we don’t know exactly what’s in store for us yet — we’ll have to see whether it’s just a scripted, on-the-fly rendered visual demo or a tech demo that actually lets you take control and play.
Either way, AMD isn’t making a secret out of the focus: reflections, shadows, lighting — all key elements in demonstrating Ray Tracing. The demo utilizes DirectX 12 Ultimate and AMD FidelityFX.
For now, that’s all we can share. For our unboxing of the RX 6800 and RX 6800 XT, click here, or you can read our everything-we-know summary of the new AMD Radeon GPUs. The AMD Radeon RX 6800 and RX 6800 XT are set to launch tomorrow for MSRPs of $579 and $649, respectively, although whether those price points will be met remains to be seen.
The AMD Instinct MI100 is the first compute accelerator card based on the new CDNA architecture and is produced on TSMC’s 7-nanometer node. AMD pits the PCIe 4.0 card, with its 32 GB of HBM2 memory, against Nvidia’s A100, and has not only revised the compute units known from the GCN architecture but also built significantly more of them into the chip. The card should be available to system integrators from 6,400 US dollars, clearly undercutting Nvidia’s A100 PCIe version, which starts at around 10,700 euros.
A lot of flops, a lot of honor
When it comes to technical specifications, AMD goes all out – probably also to beat competitor Nvidia’s A100 accelerator in some areas.
On the MI100, the CDNA chip “Arcturus” has 120 active compute units (CUs), and even though AMD confirmed on request that this is the full configuration, die shots of the chip suggest that there are 8 more CUs present. We asked AMD again for clarification and are currently waiting for an answer.
Block diagram of the AMD Instinct MI100 (Image: AMD)
As with the closely related graphics chips based on the GCN (Graphics Core Next) architecture, the 120 CUs each have 64 shader cores, which results in 7,680 ALUs for the entire chip. Together with a maximum boost clock of 1,502 MHz, that yields a throughput of 23.07 TFlops at single precision (FP32). As befits an HPC accelerator, the FP64 rate is half that, i.e. around 11.5 TFlops – not only above the 10-TFlops mark but also around 19 percent above the comparable value of Nvidia’s A100 accelerator in the SXM4 format.
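A quick sanity check of those headline numbers, using only the figures quoted above (7,680 FP32 ALUs, two FLOPs per cycle from fused multiply-add, 1,502 MHz boost):

```python
# Peak throughput from ALU count x 2 FLOPs/cycle (FMA) x boost clock; FP64 runs at half rate.
alus, boost_ghz = 7680, 1.502
fp32_tflops = alus * 2 * boost_ghz / 1000
print(f"FP32: {fp32_tflops:.2f} TFlops, FP64: {fp32_tflops / 2:.2f} TFlops")
# FP32: 23.07 TFlops, FP64: 11.54 TFlops
```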
Arcturus die shot: the CDNA accelerator with its 120 active compute units and four HBM2 stacks (Image: AMD)
As with the MI50/MI60, the number-crunching beast is fed by four HBM2 stacks. These hold 8 GB each and are clocked at 1,200 MHz, good for a transfer rate of 1.228 TB/s. An 8 MB level-2 cache (6 TB/s) is meant to cushion memory accesses. From the registers to the HBM2, everything is protected by ECC (SECDED).
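That 1.228 TB/s figure follows directly from the bus width and clock. A short check, assuming the usual 1,024-bit interface per HBM2 stack and double-data-rate signalling:

```python
# HBM2 bandwidth: four 1,024-bit stacks, data transferred on both clock edges at 1,200 MHz.
bus_bits = 4 * 1024
transfers_per_s = 2 * 1200e6
print(f"{bus_bits / 8 * transfers_per_s / 1e12:.3f} TB/s")   # ~1.229 TB/s
```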
In addition to PCI Express 4.0, every MI100 card offers three Infinity Fabric links at 92 GB/s each – 276 GB/s in total. This makes directly networked groups of four MI100 cards possible, which can form a coherent memory space.
Matrix Core Engines: a bit of Tensor
The compute units of the MI100 are similar to those of the previous Graphics Core Next generation, but have been further upgraded by AMD for compute use. To achieve higher throughput with matrix-matrix multiplications, AMD has expanded the circuits and register ports and calls the result the Matrix Core Engine.
AMD takes a different approach here than Nvidia does with its Tensor Cores. The Matrix Core Engines work consistently at full FP32 accuracy. However, their maximum throughput is lower, and they are not suited for FP64 calculations. That makes it difficult to compare the maximum throughput of the two approaches: anyone who depends on full FP32 accuracy throughout is better served by AMD, while anyone who can also use the alternative TF32 format or lower precision is promised more performance by Nvidia’s accelerators.
What both approaches have in common is support for the BFloat16 (BF16) format, which combines the value range of FP32 (8-bit exponent) with the precision of FP16 (7-bit mantissa plus 1 sign bit) and has established itself as a de facto alternative to full FP32 in AI training. In the CDNA white paper, however, AMD lists BFloat16 with a 10-bit mantissa and 5-bit exponent, which actually corresponds to FP16.
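For readers unfamiliar with BF16, the format is easiest to understand as a truncation of FP32: keep the sign and the 8-bit exponent, chop the mantissa down to 7 bits. The sketch below demonstrates the idea in plain Python; production libraries handle rounding more carefully than this simple truncation.

```python
# BF16 as "the top 16 bits of an FP32 value": same range, reduced precision.
import struct

def fp32_to_bf16_bits(x: float) -> int:
    bits = struct.unpack("<I", struct.pack("<f", x))[0]
    return bits >> 16                      # sign + 8-bit exponent + 7-bit mantissa

def bf16_bits_to_fp32(b: int) -> float:
    return struct.unpack("<f", struct.pack("<I", b << 16))[0]

print(bf16_bits_to_fp32(fp32_to_bf16_bits(3.1415927)))   # 3.140625: FP32 range, coarser precision
```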
Spec | Instinct MI100 (PCIe) | A100 (SXM) | Tesla V100 | Tesla P100
Manufacturer | AMD | Nvidia | Nvidia | Nvidia
GPU | CDNA “Arcturus” | A100 (Ampere) | GV100 (Volta) | GP100 (Pascal)
CUs / SMs | 120 | 108 | 80 | 56
FP32 cores per CU/SM | 64 | 64 | 64 | 64
FP32 cores per GPU | 7,680 | 6,912 | 5,120 | 3,584
FP64 cores per CU/SM | 32 | 32 | 32 | 32
FP64 cores per GPU | 3,840 | 3,456 | 2,560 | 1,792
Matrix Core Engines / Tensor Cores per GPU | 480 | 432 | 640 | –
GPU boost clock | 1,502 MHz | N/A | 1,455 MHz | 1,480 MHz
Peak FP32 / FP64 TFlops | 23.07 / 11.5 | 19.5 / 9.7 | 15 / 7.5 | 10.6 / 5.3
Peak Tensor Core TFlops | – | 156 (TF32), 312 (TF32 structural sparsity) | 120 (mixed precision) | –
Peak Matrix Core Engine TFlops | 46.1 (FP32) | – | – | –
Peak FP16 / BF16 TFlops | 184.6 / 92.3 | 312 / 312 (624 / 624 structural sparsity) | 125 / 125 | 21.1 / –
Peak INT8 / INT4 TOps | 184.6 / 156.6 | 624 / 1,248 (1,248 / 2,496 structural sparsity) | 62 / – | 21.1 / –
Memory interface | 4,096-bit HBM2 | 5,120-bit HBM2 | 4,096-bit HBM2 | 4,096-bit HBM2
Memory size | 32 GB | 40 GB | 16 GB | 16 GB
Memory transfer rate | 1.2 TB/s | 1.55 TB/s | 0.9 TB/s | 0.73 TB/s
TDP | 300 W | 400 W (SXM) | 300 W | 300 W
Transistors | N/A | 54 billion | 21.1 billion | 15.3 billion
GPU die size | n/a | 826 mm² | 815 mm² | 610 mm²
Manufacturing | 7 nm | 7 nm | 12 nm FFN | 16 nm FinFET+
AMD Instinct MI100 with complex Infinity Fabric connector and solder points for up to three eight-pin power connectors.
(Image: AMD)
Without Radeon, without displays
After Nvidia dropped the Tesla and Quadro brands, AMD is now also changing the branding of its accelerator cards and removing “Radeon” from the product name. The card is simply called AMD Instinct MI100 – and the number, unlike on earlier Instinct cards, no longer stands for the FP16 compute performance.
In order to fit a lot of computing power into the 300-watt TDP envelope, AMD says it omitted from the first CDNA chip “Arcturus” many hard-wired functions that a graphics card would need. These include the rasterization units, tessellation hardware, special graphics buffers, the blending units in the raster output stages, and the display engine. The MI100 has no use for them – and no, Crysis doesn’t run on it either.
What AMD has not removed, however, are the video engines, i.e. the specialized decoders and encoders. The reason: machine learning is often used to analyze video streams or for image recognition.
One of the first rack servers comes from Supermicro (Dell, HPE, and Gigabyte also have similar products in their ranges). Looking at the real cards, it is noticeable that only a single eight-pin connector is needed.
In the run up to our comprehensive Radeon RX 6800 XT and RX 6800 “Big Navi” RDNA2 reviews, we have a picture unboxing article for you. It is becoming a bit of a tradition for NVIDIA and AMD to allow the press to do unboxing articles in the days leading up to the main review, not that we don’t welcome it. It’s just that unlike NVIDIA, which directly sells its Founders Edition cards on the GeForce website, AMD’s reference design cards (pictured in this article) are sold through its add-in-board partners, alongside their custom-design graphics cards. The reference design (made by AMD) package you see in this review will be extremely rare in the retail channel even though the exact same card will be sold under various labels. Since this is going to be a double launch from AMD, with the Radeon RX 6800 XT and Radeon RX 6800 coming out on the same day, we have unboxings of both cards in this article.
Both the Radeon RX 6800 XT and RX 6800 come in premium-looking paperboard boxes that have a matte finish. The logos are finished off in chrome, and you’ll see a large glamour shot of the card on its face. Chrome is a recurring theme with not just the box but also the card underneath, as it denotes accurate reflections from the real-time raytracing technology the card ships with. Unlike the NVIDIA Founders Edition boxes, which ship in a fancy clam-shell design, the RX 6800 XT and RX 6800 ship in two different box designs, as the cards are of different dimensions. The RX 6800 box is conventional thin paperboard and opens sideways, while the RX 6800 XT features a blow-out box made of thicker cardboard, essentially similar to the boxes some motherboards ship in.
Over the following pages, we will unbox each of the two cards and give you our first visual impressions of them. At this point, we cannot show you pictures of the internals, nor can we share any performance numbers. You’ll have to wait for our comprehensive reviews, which go live in the coming days.
It seems that Nvidia is developing an alternative technology similar to AMD’s Smart Access Memory to work on RTX 30 series GPUs. Nvidia’s version will also include support for both Intel and AMD CPUs.
During AMD’s Radeon RX 6000 announcement, the company introduced a new technology named Smart Access Memory. For this technology to work, AMD stated that customers would have to pair a Ryzen 5000 series processor and a Radeon RX 6000 graphics card. Additionally, the motherboard needs to have the AGESA 1.1.0.0 firmware update installed.
Usually, Windows-based PCs can only access a fraction of the graphics card’s total VRAM at any one time, which limits performance. With Smart Access Memory, the data channel is expanded, allowing the system to make full use of the PCIe bus bandwidth and increasing performance.
As per Nvidia’s statement to Gamers Nexus, Smart Access Memory/resizable BAR is “part of the PCI Express spec”, meaning that Nvidia graphics cards can also support such technology. Nvidia has been working on this feature internally, and it will roll out support for Ampere GPUs in “future software updates”. AMD’s own performance projections indicate that it is possible to gain up to 11% performance with Smart Access Memory enabled. According to Nvidia, its internal tests are also “seeing similar performance results”.
It’s not clear if AMD will extend its technology to older AMD processors or Intel CPUs.
KitGuru says: Are you still torn between a new Nvidia or AMD GPU for your next upgrade? Do features like Smart Access Memory factor into your buying decision?
It seems that January will be a month full of graphics card launches. Besides the RTX 3080Ti, new rumours suggest that Nvidia is also planning to launch the RTX 3060, the RTX 3050Ti, and the RTX 3050 in the same month. There is also the possibility that AMD launches the Radeon RX 6700 series in January, making it a very crowded month.
The RTX 3050Ti was initially planned for a February release, but as reported by VideoCardz, it’s now scheduled for a release in January, alongside the other upcoming RTX 30 series SKUs.
Regarding the board used on the RTX 3060, 3050Ti, and 3050, it will be the same as the one used on the RTX 3060Ti, the PG190. This board will be compatible with both the GA104 GPU (RTX 3060Ti) and the GA106 GPU, which will be used on the RTX 3060, 3050Ti, and 3050.
One interesting thing to note is that the RTX 3060 was apparently planned to feature the PG190 SKU 30 with [email protected], but it was then updated to the PG190 SKU 40 featuring [email protected] The last update made to the SKU now includes the PG190 SKU 50, which is equipped with [email protected] If this information proves to be true, we will see the mid-range RTX 3060 featuring more VRAM than the RTX 3060Ti (8GB), the RTX 3070 (8GB) and the RTX 3080 (10GB).
On another note, this same report also claims that the RTX 3050Ti is expected to feature 6GB across a 192-bit bus, complementing what we already knew from this SKU, which should be using the GA106GPU with 3584 CUDA cores.
KitGuru says: Are you looking for a new graphics card for your system? Will you be waiting until early 2021 to see new GPUs launch?
If the Apple M1’s processing power didn’t leave you impressed, maybe the 5nm chip’s graphical prowess will. A new GFXBench 5.0 submission for the M1 exhibits its dominance over oldies, such as the GeForce GTX 1050 Ti and Radeon RX 560.
The Apple M1 marks an important phase in the multinational giant’s history. It’s the start of an era where Apple no longer has to depend on a third-party chipmaker to power its products. The M1 might be one of the most intriguing processor launches in the last couple of years. Built on the 5nm process node, the unified, Arm-based SoC (system-on-a-chip) brings together four Firestorm performance cores, four Icestorm efficiency cores, and an octa-core GPU in a single package.
Much of the M1’s GPU design continues to remain a mystery to us. So far, we know it features eight cores, which amounts to 128 execution units (EUs). Apple didn’t reveal the clock speeds, but it wasn’t shy to boast about its performance numbers.
According to Apple, M1 can simultaneously tackle close to 25,000 threads and deliver up to 2.6 TFLOPS of throughput. Apple is probably quoting the M1’s single-precision (FP32) performance. If you’re looking for a point of reference, the M1 ties the Radeon RX 560 (2.6 TFLOPS), and it’s only about 0.3 TFLOPS short of the GeForce GTX 1650 (2.9 TFLOPS).
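As a rough guide to where such TFLOPS figures come from, the usual formula is shader ALUs × 2 FLOPs per clock (one fused multiply-add) × boost clock. The RX 560 and GTX 1650 numbers below use their public specifications; Apple has not published the M1 GPU’s clock speed, so the same arithmetic cannot be shown for it.

```python
# Peak FP32 throughput = ALUs x 2 FLOPs/cycle (FMA) x boost clock in GHz, divided by 1,000 for TFLOPS.
def tflops(alus: int, boost_ghz: float) -> float:
    return alus * 2 * boost_ghz / 1000

print(f"Radeon RX 560:    {tflops(1024, 1.275):.1f} TFLOPS")   # ~2.6
print(f"GeForce GTX 1650: {tflops(896, 1.665):.1f} TFLOPS")    # ~3.0, in line with the ~2.9 quoted above
```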
Apple M1 Benchmarks
Benchmark | Apple M1 | GeForce GTX 1050 Ti | Radeon RX 560
Aztec Ruins Normal Tier | 203.6 FPS | 159.0 FPS | 146.2 FPS
Aztec Ruins High Tier | 77.4 FPS | 61.4 FPS | 82.5 FPS
Car Chase | 178.2 FPS | 143.8 FPS | 115.1 FPS
1440p Manhattan 3.1.1 | 130.9 FPS | 127.4 FPS | 101.4 FPS
Manhattan 3.1 | 274.5 FPS | 218.3 FPS | 174.9 FPS
Manhattan | 407.7 FPS | 288.3 FPS | 221.0 FPS
T-Rex | 660.1 FPS | 508.1 FPS | 482.9 FPS
ALU 2 | 298.1 FPS | 512.6 FPS | 6275.4 FPS
Driver Overhead 2 | 245.2 FPS | 218.2 FPS | 95.5 FPS
Texturing | 71,149 MTexels/s | 59,293 MTexels/s | 22,890 MTexels/s
Generic benchmarks only tell one part of the story. Furthermore, GFXBench 5.0 isn’t exactly the best tool for testing graphics cards either, given that it’s aimed at smartphone benchmarking. As always, we recommend treating the benchmark results with caution until we see a thorough review of the M1.
The anonymous user tested the M1 under Apple’s Metal API, making it hard to find apples-to-apples non-Apple comparisons. At the time of writing, no one has submitted a Metal run with the GeForce GTX 1650. Luckily, there is a submission for the GeForce GTX 1050 Ti, so the Pascal-powered graphics card will have to serve as the baseline for now.
In a clear victory, the Apple M1 bested the GeForce GTX 1050 Ti by a good margin. The Radeon RX 560 didn’t stand a chance, either. Admittedly, the two discrete gaming graphics cards are pretty old by today’s standards, but that shouldn’t overshadow the fact that the M1’s integrated graphics outperformed both 75W desktop graphics cards while staying within a much tighter power envelope of its own.
The M1 will debut in three new Apple products: the 13-inch MacBook Pro starting at $1,299, the MacBook Air at $999, and the Mac Mini at $699. Nobody really buys an Apple device to game. However, if the in-house SoC lives up to the hype, casual gaming could be a reality on the upcoming M1-powered devices.
Apple’s exciting new M1 chip was just announced and promises significant performance gains over the company’s previous systems equipped with Intel Core processors. Unfortunately, in Apple’s presentation, the performance numbers shown were at best vague and inconclusive. That leaves us in the dark as to where exactly the M1 will actually land in terms of CPU performance. That changes today with a tweet from @andysomerfield, who has real performance numbers for the Apple M1 chip in the Affinity Photo benchmark. The M1 is put up against one of Apple’s older 2019 iMacs with an 8th-generation Intel Core i5 6-core CPU that features a boost frequency of up to 4.1GHz.
Affinity Photo is a photo editing application with a built-in benchmark that measures vector and rasterization performance. Apple’s M1 processor scored the following:
504 points in Single-Core Vector
2032 points in Multi-Core Vector
538 points in Multi-Core Raster
6966 points in GPU Raster
532 points in Combined (Single Core)
7907 points in Combined (Single GPU)
For the Core i5 and Radeon Pro 580X combo in the 2019 iMac, here are the results:
310 points in Single-Core Vector
1515 points in Multi-Core Vector
393 points in Multi-Core Raster
8133 points in GPU Raster
407 points in Combined (Single Core)
5568 points in Combined (Single GPU)
In the CPU tests, the M1 chip wins hands down, beating the Core i5 by roughly 30 to 60 percent depending on the test. The M1 is also not much slower than the Radeon Pro 580X in the GPU scores. This is exciting for Apple’s M1 chip, which clearly demonstrates its Arm-based architecture can go toe to toe with previous-gen Intel x86 chips in performance. In the past, Arm was great for attaining power efficiency and long battery life. Now Apple is demonstrating we can have the best of both worlds: high performance and long battery life.
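For transparency, here are the ratios computed directly from the scores listed above; nothing beyond those published numbers goes into this.

```python
# Relative scores, M1 vs. the 2019 iMac (Core i5 + Radeon Pro 580X); above 1.0 means the M1 wins.
m1   = {"vector_1c": 504, "vector_mc": 2032, "raster_mc": 538,
        "gpu_raster": 6966, "combined_1c": 532, "combined_gpu": 7907}
imac = {"vector_1c": 310, "vector_mc": 1515, "raster_mc": 393,
        "gpu_raster": 8133, "combined_1c": 407, "combined_gpu": 5568}

for test in m1:
    print(f"{test}: {m1[test] / imac[test]:.2f}x")
# CPU tests land between roughly 1.3x and 1.6x; only the discrete GPU's raster score stays ahead of the M1.
```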
There are of course caveats. This is only one set of benchmarks, which could be specifically optimized for the M1 chip. This is also against a base model 8th Gen Core i5. We need to see a lot more data to get a handle on how the M1 truly compares against other processors across a variety of tasks. Either way, it’ll be very interesting to see how the M1 stacks up against Intel’s and AMD’s latest offerings. Those have more cores and higher frequencies, which the initial M1 isn’t likely to match, but it should still prove great for laptop battery life.
If Nvidia has its way, AMD’s latest performance-boosting technology for RX 6000 “Big Navi” graphics cards might not be a huge advantage, after all.
According to a statement Nvidia gave to Gamer’s Nexus, the company says it will soon enable a feature similar to AMD’s Smart Access Memory (SAM) tech on its Ampere graphics cards. In fact, Nvidia already has the feature working in its labs.
Additionally, Nvidia claims its feature will work equally well with Intel and AMD processors and can use the PCIe 3.0 bus, while AMD has already said that its solution requires an AMD Ryzen 5000 series processor, X570 motherboard, and Radeon RX 6000 GPU to work.
Nvidia also suggests that AMD’s feature, which it hasn’t fully detailed yet, merely consists of adjusting PCIe’s resizable BAR feature, which can be done on almost any modern motherboard if the manufacturer exposes the option.
From NVIDIA, re: SAM: “The capability for resizable BAR is part of the PCI Express spec. NVIDIA hardware supports this functionality and will enable it on Ampere GPUs through future software updates. We have it working internally and are seeing similar performance results.” November 12, 2020
AMD says that Smart Access Memory allows the CPU and GPU to share information across a broader PCIe pipe, but the company hasn’t divulged the details of the tech fully yet. AMD merely says that the CPU and GPU are usually constrained to a 256MB ‘aperture’ for data transfers. That limits game developers and requires frequent trips between the CPU and main memory if the data set exceeds that size, causing inefficiencies and capping performance. Smart Access Memory removes that limitation, thus boosting performance due to faster data transfer speeds between the CPU and GPU.
However, the feature looks akin to resizable BAR, a standard part of the PCIe spec, and Nvidia’s statement surely suggests that the company feels likewise. If the GPU supports it, adjusting this setting in the motherboard BIOS essentially allows mapping of the full frame buffer, thus improving performance.
Nvidia says its hardware already supports the feature, though it will need to be enabled. Any PCIe-compliant CPU, be it either Intel or AMD, should also be able to use the tech with Nvidia’s graphics cards.
That certainly takes the shine off of AMD’s requirement of an AMD GPU, CPU, and high-end X570 motherboard, especially given that Nvidia plans to enable its competing (yet similar) functionality on all platforms – Intel, AMD, and PCIe 3.0 motherboards included.
Nvidia says that its early testing shows similar performance gains to AMD’s SAM and that it will enable the feature through future firmware updates. However, the company hasn’t announced a timeline for the updates.
It certainly feels like Nvidia is trying to steal AMD’s thunder. If Nvidia’s Ampere silicon experiences similar gains from the Smart Access Memory-like tech, it will definitely complicate matters for AMD’s push to create a walled all-AMD PC gaming garden.
ASRock got their start in 2002 with a sole focus on producing motherboards. With their 3C design concept of “Creativity, Consideration, Cost-effectiveness,” they have gone from their humble origins to an enthusiast favorite and industry juggernaut. They have expanded their product portfolio to include not only highly regarded motherboards but also graphics cards, networking components, mini-PCs, and industrial systems.
Today, I look at the ASRock 4X4 Box-4800U barebones system with an AMD Ryzen 7 4800U at its core. To test the system, I was provided with two 32 GB Patriot DDR4 SODIMMs (64 GB total) rated at CL22 and 3200 MHz, along with a Patriot P300 512 GB M.2 SSD. Considering the system’s small size and mobile CPU, you won’t be seeing insane graphics performance; however, the AMD Radeon Vega 8 IGP should prove more than adequate for daily tasks and light gaming. That said, before heaping on any praise, let’s take a closer look at what ASRock is offering with this barebones system.
Storage:
512 GB M.2 SSD (not included; test SSD provided by ASRock; supports 1x M.2 SSD and 1x 2.5 in. SATA drive)
Optical Drive:
None
Audio:
Realtek ALC233 high definition audio controller
Connectivity:
1x HDMI 2.0a 1x DisplayPort 1.2a 1x 1 GbE LAN w/DASH 1x 2.5 GbE LAN 2x USB 2.0 Ports 2x USB 3.2 Gen2 Type-C w/DP1.2a support 1x USB 3.2 Gen2 Type-A 1x Audio combo jack
Communications:
Intel Wi-Fi 6 AX200 2×2 802.11ax Bluetooth 5.1 1x Realtek RTL8125BG 2.5 GbE LAN 1x Realtek R8111FPV 1 GbE LAN w/DASH
Following last week’s teaser posted on its Twitter page, Sapphire Technology this week officially introduced its Nitro+ Radeon RX 6800-series graphics cards. As expected, the boards come with rather fancy triple-fan triple-wide cooling systems that promise higher-than-reference clocks out of the box as well as some additional overclocking potential.
The new family of Sapphire’s custom-designed Nitro+ Radeon RX 6800-series graphics boards consists of three models: the Nitro+ Radeon RX 6800, the Nitro+ Radeon RX 6800 XT, and the Nitro+ Radeon RX 6800 XT SE. All the graphics cards share the same impressive cooling system as well as a custom-designed PCB with an eight-phase VRM (according to a picture published by the manufacturer) that relies on two 8-pin auxiliary PCIe power connectors, but the SE version also has addressable RGB lighting and a USB-C display output.
Since Sapphire’s Nitro+ Radeon RX 6800-series printed circuit boards are covered by the company’s cooling system, it is hard to tell how they differ from AMD’s reference designs. At this point, we can only say that the PCBs appear to be about the same length as AMD’s own.
Meanwhile, Sapphire’s new cooling systems look like they will be remarkable. They are longer, taller and thicker than those designed by AMD. They are also equipped with multiple heat pipes as well as a high-tech-looking backplate. Furthermore, they feature three fans of different sizes: the smallest one is located in the middle, the largest one is located on the rear side, whereas the medium one is on the front side of the board near the exhaust openings.
The oddest part about Sapphire’s Nitro+ Radeon RX 6800-series announcement is that the company does not disclose actual GPU and memory frequencies of its new graphics cards. We have asked Sapphire about specifications of the new products and are awaiting their response.
All three graphics cards are already listed at Sapphire’s website, so expect them to ship in the foreseeable future, but given the fact that the company does not disclose clocks of the upcoming products, it is hard to say when exactly these boards will ship.
The reference models of the AMD Radeon RX 6800 XT and RX 6800, the company’s new graphics cards based on the RDNA2 architecture, have not yet reached the market, but we have already seen some custom designs from manufacturers such as ASUS and MSI, and now it’s XFX’s turn.
The company has posted a short teaser on its Twitter account in which you can see glimpses of the design its custom RX 6800 and RX 6800 XT models will have. It doesn’t show as much detail as the ASUS and MSI teasers, of which we could see practically the entire design, but it does let us see a triple-fan cooler, with the heatsink fins at the rear end visible through the backplate. That indicates that the cooling module is longer than the PCB.
It has design touches reminiscent of the company’s own RX 5700 XT THICC III, although the rear with that opening is different.
On 18 November, the new RX 6800 and RX 6800 XT arrive in AMD’s reference design, but for custom models we may have to wait a few more weeks.