by Jordi Bercial
Announced briefly, almost hidden in a slide from the company's earnings presentation for the last quarter, Intel has started shipping its first DG1 GPUs in volume, which means we are very close to seeing dedicated graphics cards built around this GPU, certainly something very interesting.
However, if we keep reading the slide, we notice that not only is the Intel DG1 on its way to becoming a product we will soon see in stores, but the company's next GPU, the Intel DG2, based on the Xe-HPG architecture, has already been powered on in the labs, a big step that suggests the DG2 is at an advanced stage of its development.
The fact that Intel is shipping its DG1 GPUs in volume also indicates that the rumors claiming Intel's graphics cards would go the same way as the cancelled Larrabee were entirely false, and that development of the new Intel graphics continues to go from strength to strength.
Finally, it should be noted that we recently saw Amnesia: Rebirth list an Intel Xe-HPG part in its recommended requirements, the very architecture on which the Intel DG2 will be based.
Intel this week announced that it had taped out and powered on its first graphics processor based on the Xe-HPG architecture. The company also reaffirmed that it is working on a stack of discrete Xe-HPG GPUs that will be used for mid-range and enthusiast-class gaming PCs sometime next year. In the meantime, Intel started to ship its DG1 discrete GPU based on the Xe-LP architecture for entry-level gaming PCs. (via SeekingAlpha / Intel Earnings Presentation).
“We powered on our next-generation GPU for client DG2,” said Bob Swan, CEO of Intel, during the company’s earnings call with analysts and investors. “Based on our Xe high-performance gaming architecture, this product will take our discrete graphics capability up the stack into the enthusiast segment.”
Intel’s first discrete GPU in two decades — the DG1 — relies on the same Xe-LP architecture that is used for the company’s latest built-in GPUs found in codenamed Tiger Lake processors. Intel is currently shipping its DG1 GPUs for revenue and expects the first PCs with its discrete graphics inside to hit the market later this quarter.
“Our first discrete GPU DG1 is shipping now and will be in systems from multiple OEMs later in Q4,” said Swan.
Intel's Xe-LP GPUs offer significantly higher performance than Intel's previous-generation graphics solutions, but since Xe-LP GPUs have to be integrated into CPUs, they are tailored primarily for low power consumption and efficiency in terms of transistor count. This is not the case for Intel's other Xe architectures, namely Xe-HP for datacenters, Xe-HPC for supercomputers, and Xe-HPG for gaming PCs. In fact, Xe-HPG combines features of the other three architectures.
“We have been working since 2018 on another optimization of Xe-HP targeted gaming,” said Raja Koduri, chief architect of Intel, at the company’s Architecture Day in August. “That microarchitecture variant is called Xe-HPG. […] We had to leverage the best aspects of the three designs we’ve had in progress to build a gaming optimized GPU. We had a good perf-per-watt building block to start with, Xe-LP. We leveraged the scale from Xe-HP to get a much bigger config and we leveraged the compute frequency optimizations from Xe-HPC.”
Intel’s Xe-HPG GPUs will support hardware-accelerated raytracing along with other features, which will have an impact on the architecture of its execution units and/or sub-slices. Furthermore, since Xe-HPG GPUs will be made at an external foundry (i.e., at TSMC), Intel used a lot of third-party IP (e.g., memory controller and interface, display interfaces, etc.) to optimize design costs.
Intel got its first Xe-HPG silicon from its foundry partner back in mid-August and has tested it internally since then. So far, the company has only confirmed that the GPU could be powered on, but this is a good sign in general.
Intel's family of Xe-HPG graphics processors will consist of multiple GPUs targeting market segments that span from mid-range all the way to enthusiast level. So far, Intel has not disclosed how many discrete Xe-HPG graphics chips it plans to launch next year. Intel also did not reveal whether it had taped out and powered on its big flagship Xe-HPG GPU, or something smaller and cheaper. Typically, GPU companies like AMD and Nvidia tend to introduce their big GPUs first and then follow up with smaller processors. However, every rule has an exception. For example, Nvidia started to roll out its highly successful Maxwell architecture with mainstream offerings.
In any case, right now Intel is bringing up its Xe-HP graphics processors for datacenters, which are made using its 10nm Enhanced SuperFin process technology and demonstrate performance of around 40 FP32 TFLOPS, as well as an as-yet-unknown Xe-HPG graphics processor.
This week, Acer introduced a notebook with Intel's first discrete GPU, the DG1. ASUS anticipated this announcement to some degree and presented such a notebook with the Intel GPU a few days earlier.
As part of the announcement of its third-quarter 2020 figures, Intel CEO Bob Swan spoke about the GPU dubbed DG1 and stated that it is currently being delivered to OEM customers. Further details about the DG1 GPU's technical specifications were not revealed, however. So the previous assumption of 96 EUs stands, the same configuration as in the fastest Tiger Lake processors. A dedicated GPU with the same configuration should clock much higher, though, since it can sustain higher power consumption and heat output. For the Tiger Lake processors, we measured a maximum power consumption of 14 W for a Core i7-1185G7 at a clock rate of 1.35 GHz, for the integrated GPU alone. The dedicated variant should be allowed to draw significantly more without compromising the CPU package.
There is also uncertainty about the memory interface and the memory used. While one would actually expect GDDR6, Acer at least lists LPDDR4X as the memory solution for the DG1. The graphics memory is 4 GB; that much seems certain. Since the first notebooks with the DG1 GPU will not reach the market until the fourth quarter, we will probably have to wait a little longer for the first performance data.
Somewhat surprisingly, Intel also spoke about DG2, the second generation of its dedicated GPU. The first tape-out is complete, and alpha silicon is now in the labs for testing.
DG2 is not intended to be a simple successor to DG1, but a significantly more powerful part based on the Xe-HPG architecture, as Intel announced at Architecture Day 2020. Among other things, the Xe-HPG architecture provides hardware acceleration for ray tracing calculations and is Intel's first approach for the gaming GPU generations that will follow, ranging from the entry-level to the high-end segment. DG2 should "take our discrete graphics capability up the stack into the enthusiast segment."
As with DG1, it is still completely unclear what DG2's technical specifications look like. DG2 will not be manufactured by Intel itself, but entirely externally; the DG2 GPU is set to be one of the most important upcoming projects to rely on external production.
Yesterday, Acer held its latest press conference, where several new products were presented. One of the most interesting laptops, which however was not shown at all during the online broadcast, is the Acer Swift 3X, which will be the next device after the ASUS VivoBook Flip 14 to be equipped with the dedicated Intel Iris Xe MAX Graphics. This chip was previously known as DG1 (Discrete Graphics 1). The notebook will go on sale in December, with prices starting from 899 euros. We expect Intel to finally reveal details of its first dedicated Xe-based graphics card in the coming days.
The Acer Swift 3X will be based on 11th-generation Intel Tiger Lake-U processors. There is a choice of 4-core, 8-thread units: the Intel Core i5-1135G7 and the Intel Core i7-1165G7. As for the dedicated Intel Iris Xe MAX Graphics, we only know that it has 4 GB of VRAM and 96 Execution Units. Unfortunately, the manufacturer did not disclose more information, which suggests that Intel plans to discuss all the details of its GPU itself. According to unofficial reports, the Intel Iris Xe MAX Graphics may be presented later this week.
The Acer Swift 3X will receive a 14-inch IPS screen with a resolution of 1920 x 1080 pixels, a brightness of 300 nits and 100% sRGB color coverage. Among the ports we find Thunderbolt 4, 2x USB 3.2 Gen 2 Type-A, HDMI 2.0 and a 3.5 mm audio jack. There will also be fast wireless connectivity in the form of Wi-Fi 6 and Bluetooth 5.0. More details about the notebook have yet to be revealed. The actual premiere will take place in December, and then we will hear more about the Acer Swift 3X.
Source: Notebookcheck
The Iris Xe Max is based on the same Xe-LP architecture as the integrated Xe graphics and has been seen before in Intel's DG1 development graphics card.
Acer has introduced the new Swift 3X laptop, which at the same time reveals the name Iris Xe Max for Intel's separate Xe-LP graphics chip in notebooks. In addition to integrated Xe-LP solutions, Intel was known to have a separate chip used in the DG1 development graphics cards. The standalone chip will not be introduced to the consumer market as a standalone graphics card at all, but will be used in some Tiger Lake laptops to improve graphics performance. In addition, the Xe-LP chip will be offered for media servers that do not require much graphics performance but benefit from modern media features.
Unfortunately, Acer did not disclose more details about the Iris Xe Max graphics in the Swift 3X, but from previous leaks it is known to be equipped with a maximum of 96 Execution Units, like the integrated version. The memory bus has reportedly been 96-bit wide, at least on DG1 cards, paired with 3 GB of GDDR6 memory.
Acer has given the Spin 3 a completely new design. The 2020 generation now offers a 16:10 aspect ratio, providing even more screen space and an even slimmer screen bezel. As a member of the Spin family, it has a multi-touch display that can be folded all the way back using the 360-degree hinge. The 13.3-inch screen has a resolution of 2560 x 1600 pixels and makes sketching or writing possible with the Acer Active Stylus, which offers 4,096 pressure levels, uses AES technology and is included in the box. The stylus can be stowed and charged directly in the housing; after only about 30 seconds of charging, thanks to the quick-charge function, it should be able to support its user for a full 90 minutes.
On the hardware side, the Acer Spin 3 relies on Intel's Tiger Lake processors up to the fast Core i7 model and offers the integrated Xe graphics. Up to two SSDs can be installed, and two Type-C ports with Thunderbolt 4 technology as well as Killer AX1650 Wi-Fi are also available.
The Acer Spin 5 is the big brother; at 14.9 mm thick and with a total weight of around 1.2 kg it is still very portable. The screen area is slightly larger with a 3:2 format, but otherwise it offers the same technology as the Acer Spin 3. The Acer Spin 5 should follow in time for the Christmas business from December, at a starting price north of 1,000 euros.
In addition, the Acer Swift 3x, extended by the suffix "X", is due to hit stores this year. The X indicates the new Iris Xe graphics, which under the codename DG1 (Discrete Graphics 1) will be Intel's first dedicated graphics chip in a notebook and should thus significantly increase graphics performance. Acer is still silent on further details, however, as the Intel graphics have not yet been officially presented; the chip is expected to run under the name Iris Xe Max. A current 11th-generation Core processor serves as the basis, along with fast PCIe SSD storage.
For the screen, Acer uses a 14-inch IPS panel with Full HD resolution and high color fidelity, covering 72 percent of the NTSC color space.
Acer has introduced the Swift 3x to accompany the 2020 version of the Swift 3. Both notebooks use Intel's 11th-generation Core i processors, alias Tiger Lake-U; in addition, the manufacturer equips the Swift 3x with Intel's DG1 graphics chip. It is similar to the processor's integrated graphics unit, but comes with fast GDDR6 RAM. Brand name: Iris Xe Max.
The additional GPU is noticeable in a direct comparison of the cases: according to Acer, the Swift 3x is almost 2 mm thicker than the previous generation and, at around 1.3 kg, weighs an additional 170 grams. Since the GPU has its own power budget and its own cooling, the CPU cores should achieve a higher turbo speed under load.
The Iris Xe Max replaces Nvidia's entry-level graphics chip, the GeForce MX350, which Acer built into the more expensive Swift 3 models last year. Nvidia announced the successor, the GeForce MX450, in August 2020 but has not mentioned any specifications so far; notebooks with the MX450 do not yet exist.
As usual for upper-class notebooks with Tiger Lake processors, a Thunderbolt 4 connection is on board in the form of a USB-C port. The remaining connections are manageable: two USB 3.2 Gen 2 Type-A ports (10 Gbit/s), one HDMI output and an audio jack. The Swift 3x connects wirelessly to Wi-Fi 6 networks (WLAN 802.11ax) and transmits via Bluetooth 5.0.
The 14-inch IPS display has a resolution of 1920 × 1080 pixels (Full HD) and covers 72 percent of the NTSC color space. The battery should last for runtimes of up to 17.5 hours.
Acer wants to bring the Swift 3x to market at the end of November, so Intel would have to present the Iris Xe Max beforehand. The basic configuration combines the four-core Core i5-1135G7, the Iris Xe Max, 8 GByte of LPDDR4X RAM and a 512 GByte SSD.
Acer today announced its first laptop with Intel’s discrete graphics. The Acer Swift 3X will come with 11th Gen Intel Core processors, as well as Intel Iris Xe Max discrete graphics. It will launch in December starting at $899.99.
Acer didn’t say much about the Iris Xe Max, but we suspect that it is the official name for DG1, an internal name for Intel’s first discrete graphics GPU. The name has been spotted in official Intel marketing, and leaks showed performance information and named the Swift 3X earlier this month.
CPU | Intel Core i5-1135G7 or Intel Core i7-1165G7 |
GPU | Intel Iris Xe Max discrete graphics |
RAM | Up to 16GB LPDDR4X |
Storage | Up to 1TB SSD |
Display | 14-inch, FHD IPS |
Battery Life | Up to 17.5 hours |
Wireless | Intel Wi-Fi 6 AX201 |
Dimensions | 12.7 x 8.4 x 0.7 inches / 322.8 x 212.2 x 18mm |
The laptop will come with either an Intel Core i5-1135G7 or Intel Core i7-1165G7, so no new chips will debut alongside the Xe Max. Acer’s Swift 3X will have up to 16GB of onboard LPDDR4X RAM. Storage options include a 512GB hybrid SSD with Intel Optane memory and QLC storage, or 256GB, 512GB or 1TB PCIe Gen 3 NVMe SSDs.
The Swift 3X will have a 14-inch, IPS display with 1080p resolution, an 84% screen-to-body ratio and offer a variety of ports, including USB Type-C, USB 3.2 Gen 2 and Thunderbolt 4. Acer claims that the laptop weighs 3 pounds (1.4 kg) and will last for up to 17.5 hours on a charge.
The laptop was revealed for the first time at Acer’s second online press conference of the year, which was entitled “Connected Together” and also included consumer 2-in-1s, gaming monitors and new business notebooks.
AMD Big Navi, RX 6000, Navi 2x, RDNA 2. Whatever the name, AMD’s next-generation GPUs are promising big performance and efficiency gains, along with feature parity with Nvidia in terms of ray tracing support. Will Team Red finally take the pole position in our GPU hierarchy and lay claim to the crown for the best graphics card, or will the Nvidia Ampere architecture cards keep the top spots? It’s too soon to say, but here’s everything we know about AMD Big Navi, including the RDNA 2 architecture, potential specifications, performance, release date and pricing.
With Nvidia’s GeForce RTX 3090, GeForce RTX 3080, and GeForce RTX 3070 now revealed, and the first two officially launched, the ball is in AMD’s court. There are various ways of looking at the Nvidia Ampere launch. It’s Nvidia doing its best to bury AMD before Big Navi even steps out the door, or Nvidia is scared of what AMD is doing with RDNA 2, or Nvidia rushed the launch to get ahead of the holiday shopping spree, or … you get the point. The RTX 3080 and 3070 appear to be priced reasonably (relative to the Turing launch at least), and demand right now is very high. Frankly, AMD would have likely benefitted if it could have launched Big Navi already, but it has a lot of other balls it’s juggling (like Zen 3).
We’ve done our best to sort fact from fiction, but even without hard numbers from AMD, we have a good idea of what to expect. The Xbox Series X and PlayStation 5 hardware are basically a marriage of Big Navi with a Zen 2 CPU, giving us clues as to where Big Navi is likely to land in the PC world. If AMD plays its cards right, perhaps Big Navi will finally put AMD’s high graphics card power consumption behind it. Nvidia’s RTX 30-series cards leave plenty of room for AMD to catch up, considering the 3080 and 3090 have the highest Nvidia TDPs for single GPUs ever. Let’s start at the top, with the new RDNA 2 architecture that powers RX 6000 / Big Navi / Navi 2x. Here’s what we know, expect, and occasionally guess for AMD’s upcoming GPUs.
Every generation of GPUs is built from a core architecture, and each architecture offers improvements over the previous generation. It’s an iterative and additive process that never really ends. AMD’s GCN architecture went from first generation for its HD 7000 cards in 2012 up through fifth gen in the Vega and Radeon VII cards in 2017-2019. The RDNA architecture that powers the RX 5000 series of AMD GPUs arrived in mid 2019, bringing major improvements to efficiency and overall performance. RDNA 2 looks to double down on those improvements in late 2020.
First, a quick recap of RDNA 1 is in order. The biggest changes with RDNA 1 over GCN involve a redistribution of resources and a change in how instructions are handled. In some ways, RDNA doesn’t appear to be all that different from GCN. The instruction set is the same, but how those instructions are dispatched and executed has been improved. RDNA also adds working support for primitive shaders, something present in the Vega GCN architecture that never got turned on due to complications.
Perhaps the most noteworthy update is that the wavefronts—the core unit of work that gets executed—have been changed from being 64 threads wide with four SIMD16 execution units, to being 32 threads wide with a single SIMD32 execution unit. SIMD stands for Single Instruction, Multiple Data; it’s a vector processing element that optimizes workloads where the same instruction needs to be run on large chunks of data, which is common in graphics workloads.
This matching of the wavefront size to the SIMD size helps improve efficiency. GCN issued one instruction per wave every four cycles; RDNA issues an instruction every cycle. GCN used a wavefront of 64 threads (work items); RDNA supports 32- and 64-thread wavefronts. GCN has a Compute Unit (CU) with 64 GPU cores, 4 TMUs (Texture Mapping Units) and memory access logic. RDNA implements a new Workgroup Processor (WGP) that consists of two CUs, with each CU still providing the same 64 GPU cores and 4 TMUs plus memory access logic.
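To make that issue-rate difference concrete, here is a minimal back-of-the-envelope sketch (our own illustration, not AMD code): a 64-thread GCN wavefront needs four passes through a SIMD16 unit, while a 32-thread RDNA wavefront fits through a SIMD32 unit in a single pass.

```python
# Rough illustration of wavefront issue latency on GCN vs. RDNA.
# The widths are the architectural parameters quoted above, not measurements.

def cycles_per_instruction(wavefront_width: int, simd_width: int) -> int:
    """Cycles needed to push one wavefront instruction through one SIMD unit."""
    return -(-wavefront_width // simd_width)  # ceiling division

gcn_cycles = cycles_per_instruction(wavefront_width=64, simd_width=16)   # 4 cycles
rdna_cycles = cycles_per_instruction(wavefront_width=32, simd_width=32)  # 1 cycle

print(f"GCN:  {gcn_cycles} cycles per wavefront instruction")
print(f"RDNA: {rdna_cycles} cycle per wavefront instruction")
```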
How much do these changes matter when it comes to actual performance and efficiency? It’s perhaps best illustrated by looking at the Radeon VII, AMD’s last GCN GPU, and comparing it with the RX 5700 XT. Radeon VII has 60 CUs, 3840 GPU cores, 16GB of HBM2 memory with 1 TBps of bandwidth, a GPU clock speed of up to 1750 MHz, and a theoretical peak performance rating of 13.8 TFLOPS. The RX 5700 XT has 40 CUs, 2560 GPU cores, 8GB of GDDR6 memory with 448 GBps of bandwidth, and clocks at up to 1905 MHz with peak performance of 9.75 TFLOPS.
On paper, Radeon VII looks like it should come out with an easy victory. In practice, across a dozen games that we’ve tested, the RX 5700 XT is slightly faster at 1080p gaming and slightly slower at 1440p. Only at 4K is the Radeon VII able to manage a 7% lead, helped no doubt by its memory bandwidth. Overall, the Radeon VII only has a 1-2% performance advantage, but it uses 300W compared to the RX 5700 XT’s 225W.
In short, AMD is able to deliver roughly the same performance as the previous generation, with a third fewer cores, less than half the memory bandwidth and using 25% less power. That’s a very impressive showing, and while TSMC’s 7nm FinFET manufacturing process certainly warrants some of the credit (especially in regards to power), the performance uplift is mostly thanks to the RDNA architecture.
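For reference, the theoretical throughput figures quoted above follow from the standard formula FP32 TFLOPS = shader cores × 2 ops per clock (FMA) × boost clock; a quick sketch using the specs listed for both cards:

```python
# Theoretical FP32 throughput = cores * 2 ops per clock (fused multiply-add) * clock.
def fp32_tflops(cores: int, clock_mhz: float) -> float:
    return cores * 2 * clock_mhz * 1e6 / 1e12

print(f"Radeon VII: {fp32_tflops(3840, 1750):.2f} TFLOPS")  # ~13.4 at 1750 MHz;
# AMD's official 13.8 TFLOPS rating uses the slightly higher 1800 MHz peak boost.
print(f"RX 5700 XT: {fp32_tflops(2560, 1905):.2f} TFLOPS")  # ~9.75 TFLOPS
```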
That’s a lot of RDNA discussion, but it’s important because RDNA 2 appears to carry over all of that, with one major new addition: Support for ray tracing. It also supports Variable Rate Shading (VRS), which is part of the DirectX 12 Ultimate spec. There will certainly be other tweaks to the architecture, as AMD is making some big claims about Big Navi / RDNA 2 / Navi 2x when it comes to performance per watt. Specifically, AMD says RDNA 2 will offer 50% more performance per watt than RDNA 1, which is frankly a huge jump—the same large jump RDNA 1 saw relative to GCN.
It means AMD claims RDNA 2 will deliver either the same performance while using 33% less power, or 50% higher performance with the same power, or most likely some in between solution with higher performance and lower power requirements. Of course, there’s another way to read things. RDNA 2 could be up to 1.5X performance per watt, if you restrict it to the same performance level as RDNA 1. That’s pretty much what Nvidia is saying with its 1.9X efficiency increase on Ampere. Again, #salt.
The one thing we know for certain is that RDNA 2 / Big Navi / RX 6000 GPUs will all support ray tracing. That will bring AMD up to feature parity with Nvidia. There was some question as to whether AMD would use the same BVH approach to ray tracing calculations as Nvidia, and with the PlayStation 5 and Xbox Series X announcements out of the way, the answer appears to be yes.
If you’re not familiar with the term BVH, it stands for Bounding Volume Hierarchy and is used to efficiently find ray and triangle intersections; you can read more about it in our discussion of Nvidia’s Turing architecture and its ray tracing algorithm. While AMD didn’t provide much detail on its BVH hardware, BVH as a core aspect of ray tracing was definitely mentioned, and we heard similar talk about ray tracing and BVH with the VulkanRT and DirectX 12 Ultimate announcements.
We don’t know how much ray tracing hardware is present, or how fast it will be. Even if AMD takes the same approach as Nvidia and puts one RT core (or whatever AMD wants to call it) into each CU, the comparison between AMD and Nvidia isn’t clear cut. Nvidia for example says it roughly doubled the performance of its RT cores in Ampere. Will AMD’s RT cores be like Nvidia’s RT Gen1, RT Gen2, or something else? There are at least a few rumors or hints that Big Navi might not even have RT cores as such, but will instead use some optimized shader logic and large caches to boost RT shader calculations. The fact is, we don’t know yet and won’t know until AMD says more.
Note that Nvidia also has Tensor cores in its Turing architecture, which are used for deep learning and AI computations, as well as DLSS (Deep Learning Super Sampling), which has now been generalized with DLSS 2.0 (and DLSS 2.1) to improve performance and image quality and make it easier for games to implement DLSS. So far, AMD has said nothing about RDNA 2 / Navi 2x including Tensor cores or an equivalent to DLSS, though AMD’s CAS (Contrast Aware Sharpening) and RIS (Radeon Image Sharpening) do overlap with DLSS in some ways. Recently, Sony patents detailed a DLSS-like technique for image reconstruction, presumably for the PlayStation 5. It may be possible to do that without any Tensor cores, using just the FP16 or INT8 capabilities of Navi 2x.
We also know that AMD is planning multiple Navi 2x products, and we expect to see extreme, high-end and mainstream options—though budget Navi 2x seems unlikely in the near term, given RX 5500 XT launched in early 2020. AMD could launch multiple GPUs in a relatively short period of time, but more likely we’ll see the highest performance options first, followed by high-end and eventually mid-range solutions. Some of those may not happen until 2021, however.
What does all of this mean for RX 6000 / Big Navi / RDNA 2 desktop GPUs? Based on the Xbox Series X, AMD is fully capable of building an RDNA 2 / Big Navi GPU with at least 52 CUs, and very likely can and will go much higher. AMD is also using two completely different GPU configurations for the Xbox Series X and PlayStation 5, and a third configuration for Xbox Series S, though likely none of those precise configurations will actually end up in a PC graphics card. Regardless, the upcoming consoles give us a minimum baseline for what AMD can do with Big Navi.
AMD has a lot of options available. The PC Navi 2x GPUs are focused purely on graphics, unlike the consoles. AMD also doesn’t benefit from the console sales or subsidies from Sony and Microsoft—each of the new consoles will likely ship close to 100 million units over the coming years, and Sony and MS can take a loss on the hardware because they make it back on software sales. There’s a balancing act between chip size, clock speed, and power, and every processor can prioritize things differently. Larger chips use more power and cost more to manufacture, and they typically run at lower clock speeds to compensate. Smaller chips have better yields, cost less, and use less power, but for GPUs there’s a lot of base functionality that has to be present, so a chip that’s half the performance usually isn’t half the size.
Looking at Navi 10 and RDNA 1, it’s not a stretch to imagine AMD shoving twice the number of GPU cores into a Navi 2x GPU. Navi 10 is relatively small at just 251mm square, and AMD has used much larger die sizes in the past. Anyway, let’s cut to the chase. There have been lots of rumors floating around, but with only weeks separating us from the official Big Navi launch, we’re relatively confident in many of the core specs. A GPU’s maximum CU count can’t be exceeded, but disabling parts of each GPU is common practice and has been for years.
The following table lists potential specs, based on our best information. The question marks indicate our own best guesses based on rumors, previous GPU launches, and the current graphics card market. We’ve run some numbers to help fill in the remaining data, though there’s still plenty of wiggle room for AMD. It’s unlikely AMD will go significantly higher or lower than these estimates, but anywhere within about 10% is feasible.
Graphics Card | RX 6900 XT | RX 6800 XT | RX 6700 XT | RX 6500 XT? |
---|---|---|---|---|
GPU | Navi 21 XT | Navi 21 XL | Navi 22? | Navi 23? |
Process (nm) | 7 | 7 | 7 | 7 |
Transistors (billion) | 23? | 23? | ? | ? |
Die size (mm^2) | 536? (#salt) | 536? | ? | 236? |
CUs | 80 | 64 | 40 | 32? |
GPU cores | 5120 | 4096 | 2560 | 2048? |
Max Clock (MHz) | 2100? | 2100? | 2000? | 2000? |
VRAM Speed (MT/s) | 16000 | 14000 | 16000? | 14000? |
VRAM (GB) | 16 | 16 | 12 | 8 |
Bus width | 256 | 256 | 192 | 128 |
ROPs | 64 | 64 | 64 | 32? |
TMUs | 320 | 256 | 160 | 128 |
TFLOPS (boost) | 21.5 | 17.2 | 10.2 | 8.2 |
Bandwidth (GB/s) | 512 | 448 | 384? | 224? |
TBP (watts) | 320? | 275? | 225? | 150? |
Launch Date | Nov 2020 | Nov 2020 | Nov 2020? | Jan 2021? |
Launch Price | $599? | $499? | $399? | $299? |
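The TFLOPS and bandwidth columns above are derived rather than leaked: FP32 TFLOPS = CUs × 64 cores × 2 ops × clock, and bandwidth = (bus width ÷ 8) × data rate. A small sketch using the table's own RX 6900 XT estimates shows how those figures fall out:

```python
# Reproduce the derived columns of the spec table from the estimated inputs.
def fp32_tflops(cus: int, clock_mhz: float) -> float:
    cores = cus * 64                    # 64 shader cores per CU
    return cores * 2 * clock_mhz / 1e6  # 2 FP32 ops per core per clock (FMA)

def bandwidth_gbs(bus_bits: int, data_rate_mtps: int) -> float:
    return bus_bits / 8 * data_rate_mtps / 1000  # bytes per transfer * transfers per second

# RX 6900 XT estimate: 80 CUs at 2100 MHz, 256-bit bus at 16000 MT/s
print(f"{fp32_tflops(80, 2100):.1f} TFLOPS")     # ~21.5 TFLOPS
print(f"{bandwidth_gbs(256, 16000):.0f} GB/s")   # 512 GB/s
```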
The highest spec rumors point to a Navi 21 GPU with 80 CUs and 5120 GPU cores, and more than double the size (536mm square) of the current Navi 10 used in the RX 5700 XT. While there are very good sources on the CU and core counts, we’d take the die size with a scoop of salt. It’s entirely possible AMD has gone with a huge die for Navi 21, but if that figure is correct, it’s the biggest AMD GPU since 2015’s Fiji (R9 Fury X).
That also means it’s likely very power hungry, and while some leaks on TBP (Total Board Power) have come out, the way AMD calculates TBP vs. TGP (Total Graphics Power) is a bit muddy. Based on the TBP figures, it looks like AMD will likely have a chip that’s close to GeForce RTX 3080 in terms of power (give or take).
Big Navi / RDNA 2 has to add support for ray tracing and some other tech, which should require quite a few transistors. AMD may also go with very large caches, which would help overcome potential bandwidth limitations caused by the somewhat narrow 256-bit and 192-bit bus widths. Note that Nvidia has opted for a 320-bit bus on the 3080 and 384-bit on the 3090, plus faster GDDR6X memory.
The real question is whether AMD has tuned the shader cores similar to what Nvidia did with Turing, adding concurrent FP32 and INT32 pipelines. If so, performance on the biggest of the Big Navi chips could definitely give the RTX 3080 some needed competition. The ray tracing hardware may still not be up to Turing levels, however, never mind Ampere. Based on some of the information surrounding the Xbox Series X, it seems like the RT support will end up with lower ray/triangle intersection performance than Nvidia’s hardware.
Not surprisingly, clock speeds are all still unknown. So many ‘leaks’ have happened, with maximum boost clocks going as high as 2.4GHz or as low as 1.7GHz. It’s impossible to know for certain where AMD will land, but we’ve aimed at a medium/high value that will deliver the promised performance per watt gains. TSMC’s N7 process is generally better than the Samsung 8N that Nvidia’s using for Ampere, but then AMD has generally lagged behind Nvidia when it comes to architecture designs (at least for the past seven years).
Getting back to the memory side of things, AMD’s configurations are interesting but leave us with a lot of questions. Most rumors and leaks point to 16GB of GDDR6 for the top RX 6900 XT and RX 6800 XT, but only 12GB for the RX 6700 XT. The memory capacities look good, with all of the GPUs at least matching the RTX 3080, but bus widths and speeds could be a big problem.
If AMD uses 16GB and 256-bit as expected, even with the fastest 16Gbps GDDR6 that’s still only 512GBps of bandwidth. The 6900 XT potentially doubles compute performance of the current RX 5700 XT, with twice the VRAM capacity, and yet it would only have 14% more bandwidth. The 16GB 6800 XT with 14 Gbps GDDR6 would end up with the same bandwidth as the current RX 5700 series, while the RX 6500 would have 8GB and an even narrower bus. Depending on architecture, it could still come close to RX 5700 levels of performance, but we’ll have to wait and see.
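A rough bytes-per-FLOP comparison makes the concern explicit; the figures below come from the estimates above and from the current RX 5700 XT, so treat them as illustrative only:

```python
# Memory bandwidth available per unit of compute, current card vs. estimate.
cards = {
    "RX 5700 XT":        {"tflops": 9.75, "bandwidth_gbs": 448},
    "RX 6900 XT (est.)": {"tflops": 21.5, "bandwidth_gbs": 512},
}
for name, c in cards.items():
    print(f"{name}: {c['bandwidth_gbs'] / c['tflops']:.0f} GB/s per TFLOP")
# ~46 GB/s per TFLOP today vs. ~24 GB/s per TFLOP for the estimate,
# roughly half as much bandwidth per unit of compute.
```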
This is why many expect Big Navi / RDNA 2 to come with massive L2 caches. Double the cache size and you can avoid hitting memory as hard, which might be sufficient to get around the GDDR6 bandwidth limitations. We’ve also heard ray tracing shader calculations can get a hefty performance boost by adding more cache, and as noted above there are hints this is what AMD is doing. We’ll know more by the end of the month.
What will AMD call the retail products using Big Navi / Navi 2x GPUs? AMD has at least revealed that the Navi 2x family will be sold under the RX 6000 series, which is what most of us expected. Beyond that, there are still a few remaining questions.
AMD has said it will launch a whole series of Navi 2x GPUs. The Navi 1x family consists of RX 5700 XT, RX 5700, RX 5600 XT, and RX 5500 XT (in 4GB and 8GB models), along with RX 5600/5500/5300 models for the OEM market that lack the XT suffix. AMD could simply add 1000 points to the current models, but we expect there will be a few more options this round.
The top model will almost certainly be called RX 6900 XT, with the various performance tiers as RX 6800 XT, 6700 XT, etc. It also looks like AMD is moving into higher performance segments, going by the 6900 and 6800 model numbers, with the 5600 XT replacement ending up as the 6700 XT. Our assumption is that the 5500 XT will eventually be replaced by a 6500 XT, leaving a 200-point gap between the high-end 6700 and the mainstream/budget 6500.
All of this could of course change, as model names are relatively easy to update (though packaging has to be complete as well). So, the above is what leaks and rumors currently indicate. We expect the consumer models will keep the XT suffix, and AMD can continue to do non-XT models for the OEM market. We think it would be great to have more consistent branding, but we’ll have to see what AMD decides to do.
AMD has reiterated many times this year that RDNA 2, aka Big Navi—which AMD is even using now in homage to the enthusiast community’s adoption of that moniker—will arrive before the end of 2020. AMD has now announced a Future of Radeon PC Gaming event that will take place on October 28.
AMD could potentially launch the RX 6000 GPUs at that time, but more likely is that it will first reveal the architecture, specs, and other details, similar to what Nvidia did with its Ampere announcement. That means actual GPUs will probably arrive in November, just in time for the holiday shoppers.
While the impact of COVID-19 around the globe is immense, AMD still plans on launching at least some Navi 2x parts in 2020. However, given the late date of the event, it’s possible we will only see the top two products from RDNA 2 in 2020. It might be more than that, but most new GPU families roll out over a period of several months.
We provided our own estimated pricing based on the potential performance and graphics card market in the table near the top. We’ve changed those estimates quite a bit since the Nvidia Ampere announcement, as AMD can’t hope to sell slower cards at equal or higher pricing. On the other hand, some rumors suggest RX 6900 XT won’t be far from RTX 3080 performance, so higher prices are certainly possible.
Officially, AMD hasn’t said anything in regards to pricing yet, and that will likely remain the case until the actual launch. Other factors, like the price of competing Nvidia (and maybe even Intel DG1) GPUs, will be considered as well. We can look back at the Navi 10 / RX 5700 XT launch for context.
Rumors came out more than six months before launch listing various prices. We saw everything from RTX 2080 performance for $250 to $500, or RTX 2060 performance for under $200. AMD officially revealed prices of $449 for the RX 5700 XT and $379 for the RX 5700 about a month before launch.
After the initial RX 5700 XT reveal, Nvidia (to the surprise of pretty much no one) launched its RTX 2070 Super and RTX 2060 Super, providing improved performance at lower prices. (The RTX 2080 Super was also announced, but it didn’t launch until two weeks after the RX 5700 series.) Just a few days before launch, AMD then dropped the prices of its RX 5700 XT to $399, and the RX 5700 to $349, making them far more appealing. (The RX 5600 XT arrived about six months later priced at $299.) AMD would later go on to state that this was all premeditated—gamesmanship to get Nvidia to reveal its hand early.
The bottom line is that no one, including AMD itself, knows what the final pricing will be on a new graphics card months before launch. There are plans with multiple contingencies, and ultimately the market will help determine the price. We now have Nvidia’s Ampere pricing of $1,499, $699, and $499 for the 3090, 3080, and 3070, respectively. Only AMD knows for sure how RX 6000 stacks up to RTX 30-series in performance, and it will tweak prices accordingly.
There are also multiple reports of a 500mm square or larger die size, and if that’s correct we have to assume Big Navi / Navi 2x graphics cards will go after the enthusiast segment—meaning, $600 or more. TSMC’s 7nm FinFET lithography is more expensive than its 12nm, and larger chips mean yields and dies per wafer are both going to be lower. Plus, 16GB and 12GB of GDDR6 will increase both the memory and board price. Big chips lead to big prices, in other words.
The only real advice we can give right now is to wait and see. AMD will do its best to deliver RDNA 2 and Navi 2x GPUs at compelling prices. That doesn’t mean we’ll get RTX 3080 performance for $500, sadly, but if Big Navi can give Nvidia some much-needed competition in the enthusiast graphics card segment, we should see bang-for-the-buck improvements across the entire spectrum of GPUs. And if AMD really does have an 80-CU monster Navi 21 GPU coming that will match the RTX 3080 in performance, we expect it will charge accordingly — just like it’s doing with Zen 3 CPUs now that it appears to have a clear lead over Intel.
AMD has a lot riding on Big Navi, RDNA 2, and the Radeon RX 6000 series. Just like Nvidia’s Ampere, AMD has a lot to prove. This is the GPU architecture that powers the next generation of consoles, which tend to have much longer shelf lives than PC graphics cards. Look at the PS4 and Xbox One: both launched in late 2013 and are still in use today. There are also still PC gamers with GTX 700-series or R9 200-series graphics cards, but if you’re running such a GPU, we feel for you.
We’re very interested in finding out how Big Navi performs, with and without ray tracing. AMD’s RX 6000 performance teaser only serves to whet our appetites. 50% better performance per watt can mean a lot of different things, and AMD hasn’t shied away from 300W GPUs for the past several generations of hardware. A 300W part with 50% better performance per watt would basically be double the performance of the current RX 5700 XT, and that’s enough to potentially compete with whatever Nvidia has to offer.
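The "double the performance" arithmetic is straightforward; a minimal sketch using the RX 5700 XT (225W) as the baseline and AMD's claimed efficiency gain:

```python
# Scale performance by power budget and the claimed perf-per-watt improvement.
baseline_power_w = 225   # RX 5700 XT board power
ppw_gain = 1.5           # AMD's claimed RDNA 2 perf-per-watt uplift

def projected_perf(power_w: float) -> float:
    """Performance relative to the RX 5700 XT (= 1.0)."""
    return (power_w / baseline_power_w) * ppw_gain

print(f"{projected_perf(300):.2f}x RX 5700 XT at 300 W")  # 2.00x
```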
Realistically, AMD’s 50% PPW improvements probably only occur in specific scenarios, just like Nvidia’s 90% PPW improvements on Ampere. Particularly for the higher performance parts, we’re skeptical of claims of 50% improvements, but we’ll withhold any final judgement for now. About all we can say is that Nvidia has left the door open for AMD to walk through.
We also hope AMD will manage to avoid the shortages that have plagued Nvidia’s RTX 3080 and 3090 cards. Part of that comes from demand for new levels of performance, so AMD will need to keep pace with Ampere if it hopes to see similar demand. Also, every Ampere GPU purchase prior to Big Navi’s launch means one less buyer for AMD’s GPUs. Still, TSMC can only produce so many N7 wafers per month, and AMD has Navi 10 chips still in production, along with Zen 2 and Zen 3 CPUs, and now Navi 2x. Add in wafers from other companies (Apple, Nvidia, and Intel are all using TSMC N7) and we could see Big Navi shortages until 2021.
Without actual hardware in hand, running actual gaming benchmarks, we can’t declare a victor. Give it another month and we should have all the final details and data in place. The last months of 2020 are shaping up to be very exciting in the GPU world, which is good as the first part of 2020 sucked. Considering it’s been more than a year since AMD’s Navi architecture launched, we’re definitely ready for the next-gen GPUs.
So far, Intel has only used the first Xe GPU, based on the economical Xe-LP architecture, in the mobile Tiger Lake processors. GPU benchmarks show a doubling of performance compared to the predecessor. As with AMD's Renoir processors, such a powerful integrated graphics unit makes slow entry-level discrete GPU pairings superfluous.
It is already known that Intel wants to bring the DG1 (Discrete Graphics 1) into an end-customer product. So far, it has only existed as a Software Development Vehicle (SDV) or in integrated form in the Tiger Lake processors. Now there are the first concrete indications: ASUS has put the product page of the VivoBook Flip 14 online, which speaks of "First Intel Discrete Graphics". The exact product name is Iris Xe Max.
In the ASUS VivoBook Flip 14, the Iris Xe Max is combined with the Tiger Lake models Core i5-1135G7 or Core i7-1165G7. There are also 8 or 16 GB of LPDDR4X. Normally the integrated graphics unit also uses this memory, which is connected to the Xe-LP graphics at around 68 GB/s. This bandwidth is one of the reasons why the Iris Xe graphics are about twice as fast as their predecessor.
For the Iris-Xe-Max variant, the GPU has its own memory. ASUS specifically speaks of 4 GB of video memory. This is likely to be GDDR6, which Intel will also use for the larger Xe-HPG variant in the coming year. This means that the memory bandwidth should also be higher, although we have no information about the width of the memory interface.
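Since the interface width is not known, any bandwidth figure for the Iris Xe Max is speculative; a small sketch shows how much the choice of memory type would matter, with the bus widths and data rates below being illustrative assumptions rather than confirmed specs:

```python
# Hypothetical Iris Xe Max memory bandwidth under different memory assumptions.
def bandwidth_gbs(bus_bits: int, data_rate_mtps: int) -> float:
    return bus_bits / 8 * data_rate_mtps / 1000  # bytes per transfer * transfers per second

print(f"LPDDR4X-4266, 128-bit: {bandwidth_gbs(128, 4266):.0f} GB/s")    # ~68 GB/s
print(f"GDDR6 14 Gbps, 128-bit: {bandwidth_gbs(128, 14000):.0f} GB/s")  # 224 GB/s
```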
There are no more precise technical details about the Iris Xe Max GPU. According to previous information, the same configuration with 96 EUs is used as in the fastest Tiger Lake processors. However, a dedicated GPU with the same configuration should clock much higher, since it can sustain higher power consumption and heat output. For the Tiger Lake processors, we measured a maximum power consumption of 14 W for a Core i7-1185G7 at 1.35 GHz, for the integrated GPU alone. The dedicated variant should be allowed to draw significantly more without compromising the CPU package.
Whether and when Intel will officially present the Iris Xe Max is currently not known. It is also an open question whether ASUS has simply jumped the gun with the VivoBook Flip 14, and whether or when other models will follow.
With the assumption that the dedicated Iris Xe Max GPU would use GDDR6 memory, we were probably not entirely correct. With the Swift 3X, Acer now also has a notebook with dedicated Intel graphics and lists 4 GB of LPDDR4X as graphics memory in the technical details. LPDDR4X is also used here for the Tiger Lake processor's RAM, and Intel likewise uses an LPDDR4X memory interface for the Xe-LP GPU.
In this article, which our team will regularly update, we will maintain a growing list of information pertaining to upcoming hardware releases, based on leaks and official announcements as we spot them. There will obviously be a ton of rumors on unreleased hardware, and it is our goal, based on our years of industry experience, to exclude the crazy ones. In addition to this upcoming hardware release news, we will regularly adjust the structure of this article to better organize information. Each time an important change is made to this article, it will re-appear on our front page with a "new" banner, and the additions will be documented in the forum comments thread. This article will not leak information we signed an NDA for.
Feel free to share your opinions and tips in the forum comments thread and subscribe to the same thread for updates.
Last Update (Oct 19th):
Asus is rushing ahead and showing the first notebook with Intel's standalone graphics chip DG1, even before the chip manufacturer has presented the GPU itself. The VivoBook Flip 14 (TP470EZ) is a 14-inch 2-in-1 device with an IPS touchscreen (1920 × 1080 pixels) and pen support.
Asus combines a Tiger Lake-U processor with Intel's DG1 GPU. On the product page for the upcoming VivoBook Flip 14, the manufacturer now only speaks of "Intel's first stand-alone GPU"; in an earlier version the brand name Iris Xe Max was noted.
This name can also be found in the SiSoftware Sandra benchmark database, which attributes 768 shader cores to the graphics chip, as many as the integrated graphics unit of Tiger Lake-U. The separate GPU, with its own 4 GB of memory, probably faster GDDR6 RAM, could achieve a speed advantage. In its previous VivoBook Flip notebooks, Asus dispensed with standalone graphics chips and relied solely on Intel processors.
The upcoming VivoBook Flip 14 uses either a Core i7-1165G7 or a Core i5-1135G7. Both use four CPU cores with Hyper-Threading (eight threads) and differ in clock frequencies and the size of the Level 3 cache. The CPU accesses either 8 or 16 GByte of LPDDR4X-4266 RAM. Asus installs a PCI Express SSD of up to 1 TByte as an M.2 card.
As usual with the Tiger Lake generation, Thunderbolt 4 is included as a USB-C connection. Two Type-A ports are connected as USB 3.2 Gen 2 (10 GBit/s) or USB 2.0. HDMI, an audio jack and a microSD card reader round off the connections. The VivoBook Flip 14 transmits via Wi-Fi 6 (WLAN 802.11ax) and Bluetooth 5.0. It is 18.7 mm thick.
Important news in the ASUS convertible notebook: alongside an 11th-generation Intel Core processor sits the American company's first discrete video card.
by Paolo Corsini, published on 19 October 2020 at 09:44 in the Portable channel
On the American ASUS website, at this address, a dedicated page is available for the new convertible notebook VivoBook Flip 14. This is a classic 2-in-1 with a screen connected to the base through two hinges that rotate 360°, allowing the keyboard to be folded under the screen and the device to be transformed into a tablet.
The novelty is under the body, for now at least in the technical specifications listed by the Taiwanese company. Next to the 11th-generation Intel Core processor from the Tiger Lake family, with Core i5-1135G7 or Core i7-1165G7 models depending on the configuration, ASUS has chosen to integrate a discrete video card. No GeForce MX, a classic for configurations of this type: instead we find a "First Intel Discrete Graphics" listed, which in fact corresponds to the card known as DG1.
It is therefore the first discrete GPU developed by Intel in many years, based on the Intel Xe graphics architecture and expected to debut by the end of 2020. ASUS is the first Intel notebook partner to let slip the use of this GPU in one of its products, but it is conceivable that others may follow shortly.
The presence of a discrete GPU alongside the GPU integrated into the 11th-generation Intel Core processor, both of the Intel Xe type, suggests that the two chips could operate in parallel in a multi-GPU configuration. In that case, even with the same clock frequency and number of Execution Units, the lion's share would belong to the discrete GPU thanks to its dedicated video memory; the GPU integrated into the processor instead has to share system memory.
It will be interesting to evaluate whether this configuration allows the two GPUs to be used in parallel in 3D applications, and with which approach. A lot of work will undoubtedly be required at the driver level to ensure full support across different applications.
João Silva
Asus has revealed its first laptop featuring an Intel Tiger Lake CPU and the “first Intel Discrete Graphics”. On the VivoBook 14 Flip product page, Asus hasn’t named the Intel discrete graphics card specifically, but the webpage description referred to it as “Iris Xe Max”.
As found by @momomo_us, the Asus VivoBook Flip 14 TP470EZ is the first laptop officially revealed to be equipped with an Intel discrete graphics card. The Vivobook Flip 14 official product page only states that it will be “powered by up to an Intel Core i7 processor with First Intel Discrete Graphics”, but the webpage description, which is accessible through any browser’s developer tools, replaced the “First Intel Discrete Graphics” by “Iris Xe Max”.
We don't know whether the Iris Xe Max is the Intel DG1 graphics or the iGPU of the Tiger Lake processor (Core i5-1135G7 and Core i7-1165G7 options). Both the Intel DG1 and the G7 iGPU of the Tiger Lake processors are set to feature 96 EUs (768 shaders), and as per this SiSoftware entry (via HotHardware), the Iris Xe Max features the same configuration.
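The 96 EU / 768 shader correspondence follows from Xe-LP's layout of eight FP32 ALUs per Execution Unit; a quick check, with the clock below being purely illustrative:

```python
# Xe-LP exposes 8 FP32 ALUs ("shaders") per Execution Unit.
eus = 96
shaders = eus * 8                        # 768
clock_ghz = 1.35                         # illustrative; the discrete part may clock higher
tflops = shaders * 2 * clock_ghz / 1000  # 2 FP32 ops per shader per clock (FMA)
print(f"{shaders} shaders, ~{tflops:.1f} FP32 TFLOPS at {clock_ghz} GHz")
```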
With Intel introducing a discrete graphics solution for laptops, AMD and Nvidia will surely lose some of their market share in this segment. Just like AMD, Intel will also be able to power a laptop entirely with its own CPU and discrete GPU.
Regarding the laptop itself, the 14-inch NanoEdge Full HD display covers 100% of the sRGB colour gamut. With up to 16GB of RAM and 1TB of SSD storage, the VivoBook 14 Flip is an all-rounder laptop suitable for work and entertainment, offering a convertible and durable solution that is also easy to transport thanks to its reduced size (18.7mm thick) and weight (1.5kg). It comes with built-in speakers powered by Harman Kardon, Wi-Fi 6 connectivity, and NumberPad 2.0 (a numpad on the touchpad).
No details about the pricing or availability of the Asus VivoBook Flip 14 were disclosed yet.
KitGuru says: Are you curious about Intel discrete graphics card performance? Will it be enough to battle against AMD and Nvidia on the entry-level mobile GPU segment?