A new review of Intel’s Iris Xe DG1 graphics card has popped up, putting Intel’s new discrete GPU through its paces and showing that it is surprisingly capable. While the Xe DG1 is far from being one of the best graphics cards on the market, the review shows that the entry-level card holds some value at a time when the graphics card shortage is still going strong and pricing for Nvidia and AMD GPUs has skyrocketed.
Based on cut-down Iris Xe Max silicon, the DG1 arrives with just 80 execution units (EUs), or 640 shading units, depending on which metric you prefer. Intel’s discrete graphics card sports a 1.2 GHz base clock and a boost clock that climbs to 1.5 GHz. The DG1 also wields 4GB of LPDDR4X-4266 memory across a 128-bit interface. It conforms to a 30W TDP, so the graphics card doesn’t require active cooling or PCIe power connectors. The DG1 provides one DisplayPort output, one HDMI port, and one DVI-D port for connecting your displays.
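For a quick sanity check on those numbers, here is a back-of-the-envelope sketch (assuming the usual eight shading units per EU and the quoted LPDDR4X-4266 data rate; the bandwidth figure is derived, not something Intel publishes on the box):

```python
# Back-of-the-envelope math for the DG1 specs quoted above.
# Assumptions: 8 shading units per EU, LPDDR4X-4266 transferring 4266 MT/s per pin.
eus = 80
shading_units = eus * 8                          # 640 shading units
bus_width_bits = 128
data_rate_mtps = 4266                            # LPDDR4X-4266
bandwidth_gb_s = bus_width_bits / 8 * data_rate_mtps / 1000
print(shading_units)                             # 640
print(round(bandwidth_gb_s, 1))                  # ~68.3 GB/s peak memory bandwidth
```

That works out to roughly 68 GB/s of memory bandwidth, noticeably less than what GDDR-equipped budget cards offer.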
A previous generic benchmark revealed that the DG1 was slower than the Radeon RX 550, a four-year-old graphics card. However, a single benchmark wasn’t sufficient to really determine a winner, and as we all know, there’s nothing like real-world gaming results. YouTuber ETA PRIME recently acquired a $749.99 CyberPowerPC gaming system that leverages the DG1 (more specifically, the Asus DG1-4G) and put the graphics card through its paces so we can see what kind of performance it brings to the table. We’ve got a quick breakdown of the results in the table below, and the full video at the end of the article.
Intel Iris Xe DG1 Benchmarks
Game | Resolution | Graphics Preset | Frame Rate (FPS)
Forza Horizon 4 | 1080p | Low | 60 – 70
Injustice 2 | 1080p | Low | 59 – 60
Overwatch | 1080p | Medium | 65 – 99
Fortnite | 1080p | Performance Mode | 106 – 262
Genshin Impact | 1080p | Medium | 57 – 60
Rocket League | 1080p | High | 82 – 120
Grand Theft Auto V | 1080p | Normal | 79 – 92
Cyberpunk 2077 | 720p | Low | 25 – 33
Red Dead Redemption 2 | 900p | Low | 32 – 47
The CyberPowerPC system features a Core i5-11400F processor, which lacks integrated graphics and thus explains the DG1’s presence. The curious part here is that Intel had previously stated that the DG1 is only compatible with its 9th-Gen Coffee Lake and 10th-Gen Comet Lake processors, while the Core i5-11400F is an 11th-Gen Rocket Lake chip. It would appear that the chipmaker quietly added Rocket Lake support to the DG1.
Do bear in mind that the YouTuber swapped out the single 8GB stick of DDR4-3000 memory for a dual-channel 16GB (2x8GB) DDR4-3600 kit. The upgrade likely improves the gaming PC’s performance over the stock configuration.
The results showed that the DG1 could deliver more than 60 FPS at 1080p (1920 x 1080) with a low graphics preset. Only a few titles, like Cyberpunk 2077 and Red Dead Redemption 2, gave the DG1 a hard time. However, the graphics card still pushed more than 30 FPS most of the time.
As we knew from Intel’s DG1 announcement, the entry-level market was DG1’s objective all along. The graphics card’s 1080p performance is more than reasonable if you can live without all the fancy eye candy in your life. If not, you should probably pass on the DG1. It would be interesting to see whether the DG1 can hold its own against one of AMD’s latest Ryzen APUs. Unfortunately, that’s a fight for another day.
Intel introduced the Iris Xe discrete graphics processor months ago, but so far, only a handful of OEMs and a couple of graphics card makers have adopted it for their products. This week, VideoCardz discovered another vendor, Gunnir, that offers a desktop system and a standalone Intel DG1 graphics card with a rare D-Sub (VGA) output, making it an interesting board design.
It’s particularly noteworthy that the graphics card has an HDMI 2.0 output and a D-Sub output that can be used to connect outdated LCD or even CRT monitors. In 2021, this output (sometimes called the VGA connector, though the 15-pin D-Sub is not exclusive to monitors) is not particularly widespread, as it does not properly support resolutions beyond 2048×1536. Image quality at resolutions above 1600×1200 depends heavily on the quality of the output circuitry and the cable, which is typically low. Adding a D-Sub output to a low-end PC makes some sense, though, because some old LCD screens are still in use and retro gaming on CRT monitors has become a fad.
As far as formal specifications are concerned, the Gunnir Lanji DG1 card is powered by Intel’s cut-down Iris Xe Max GPU with 80 EUs clocked at 1.20 GHz – 1.50 GHz, paired with 4GB of LPDDR4-4266 memory connected to the chip using a 128-bit interface. The card uses a PCIe 3.0 x4 interface to connect to the host. It can be used for casual games and for multimedia playback (a workload where Intel’s Xe beats the competition). Meanwhile, the DG1 is only compatible with systems based on Intel’s 9th- and 10th-Gen Core processors and motherboards with the B460, H410, B365, and H310C chipsets.
It is unclear where these products are available (presumably from select Chinese retailers or to select Chinese PC makers), and at what price.
Intel lists Gunnir on its website, but the card it shows is not actually a custom Gunnir design; it is a typical reference design for an entry-level add-in board from Colorful, a company that officially denies producing Intel DG1 products, as it exclusively makes Nvidia-powered graphics cards.
Google has designed its own new processors, the Argos video (trans)coding units (VCU), that have one solitary purpose: processing video. The highly efficient new chips have allowed the technology giant to replace tens of millions of Intel CPUs with its own silicon.
For many years Intel’s video decoding/encoding engines that come built into its CPUs have dominated the market both because they offered leading-edge performance and capabilities and because they were easy to use. But custom-built application-specific integrated circuits (ASICs) tend to outperform general-purpose hardware because they are designed for one workload only. As such, Google turned to developing its own specialized hardware for video processing tasks for YouTube, and to great effect.
However, Intel may have a trick up its sleeve with its latest tech that could win back Google’s specialized video processing business.
Loads of Videos Require New Hardware
Users upload more than 500 hours of video content in various formats every minute to YouTube. Google needs to quickly transcode that content to multiple resolutions (including 144p, 240p, 360p, 480p, 720p, 1080p, 1440p, 2160p, and 4320p) and data-efficient formats (e.g., H.264, VP9 or AV1), which requires formidable encoding horsepower.
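Google’s production pipeline is proprietary and now runs on its own silicon, but the shape of the workload can be illustrated with a hypothetical fan-out transcode driven from Python via ffmpeg. The file names and encoder settings below are purely illustrative assumptions, not anything Google uses:

```python
# Illustrative only: fan one hypothetical upload out to two resolution/codec targets,
# similar in spirit to (but far simpler than) YouTube's real transcoding pipeline.
import subprocess

SOURCE = "upload.mp4"  # hypothetical source file
jobs = [
    ("out_1080p_h264.mp4", "libx264", 1080),     # H.264 at 1080p
    ("out_720p_vp9.webm", "libvpx-vp9", 720),    # VP9 at 720p
]

for out_file, codec, height in jobs:
    subprocess.run(
        ["ffmpeg", "-y", "-i", SOURCE,
         "-c:v", codec,
         "-vf", f"scale=-2:{height}",            # scale to target height, keep aspect ratio
         out_file],
        check=True,
    )
```

Multiply that by nine output resolutions, several codecs, and 500 hours of uploads per minute, and the scale of the compute problem becomes clear.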
Historically, Google had two options for transcoding/encoding content. The first option was Intel’s Visual Computing Accelerator (VCA) that packed three Xeon E3 CPUs with built-in Iris Pro P6300/P580 GT4e integrated graphics cores with leading-edge hardware encoders. The second option was to use software encoding and general-purpose Intel Xeon processors.
Google decided that neither option was power-efficient enough for emerging YouTube workloads: the Visual Computing Accelerator was rather power hungry, whereas scaling the number of Xeon CPUs essentially meant increasing the number of servers, which meant additional power and datacenter footprint. As a result, Google decided to go with custom in-house hardware.
Google’s first-generation Argos VCU does not replace Intel’s central processors completely as the servers still need to run the OS and manage storage drives and network connectivity. To a large degree, Google’s Argos VCU resembles a GPU that always needs an accompanying CPU.
Instead of stream processors like we see in GPUs, Google’s VCU integrates ten H.264/VP9 encoder engines, several decoder cores, four LPDDR4-3200 memory channels (featuring 4×32-bit interfaces), a PCIe interface, a DMA engine, and a small general-purpose core for scheduling purposes. Most of the IP, except the in-house designed encoders/transcoders, was licensed from third parties to cut down on development costs. Each VCU is also equipped with 8GB of usable ECC LPDDR4 memory.
The main idea behind Google’s VCU is to put as many high-performance encoders/transcoders into a single piece of silicon as possible (while remaining power efficient) and then scale the number of VCUs separately from the number of servers needed. Google places two VCUs on a board and then installs 10 cards per dual-socket Intel Xeon server, greatly increasing the company’s decoding/transcoding performance per rack.
Increasing Efficiency Leads to Migration from Xeon
Google says that its VCU-based machines have seen up to 7x (H.264) and up to 33x (VP9) improvements in performance/TCO compute efficiency compared to Intel Skylake-powered server systems. This improvement accounts for the cost of the VCUs (vs. Intel’s CPUs) and three years of operational expenses, which makes VCUs an easy choice for video behemoth YouTube.
Offline Two-Pass Single Output Transcoding (SOT) Throughput in CPU, GPU, and VCU-Equipped Systems
System | H.264 Throughput (MPix/s) | VP9 Throughput (MPix/s) | H.264 Perf/TCO | VP9 Perf/TCO
2-way Skylake | 714 | 154 | 1x | 1x
4x Nvidia T4 | 2,484 | – | 1.5x | –
8x Google Argos VCUs | 5,973 | 6,122 | 4.4x | 20.8x
20x Google Argos VCUs | 14,932 | 15,306 | 7x | 33.3x
From performance numbers shared by Google, it is evident that a single Argos VCU is barely faster than a 2-way Intel Skylake server in H.264. However, since 20 VCUs can be installed into such a server, VCU wins from an efficiency perspective. But when it comes to the more demanding VP9 codec, Google’s VCU appears to be five times faster than Intel’s dual-socket Xeon and therefore offers impressive efficiency advantages.
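The “barely faster” and “five times faster” observations fall straight out of the table above once you divide the 20-VCU figures by the number of VCUs:

```python
# Per-VCU throughput derived from the table above.
h264_20vcu, vp9_20vcu = 14_932, 15_306      # MPix/s for the 20-VCU system
h264_skylake, vp9_skylake = 714, 154        # MPix/s for the 2-way Skylake server

h264_per_vcu = h264_20vcu / 20              # ~747 MPix/s, barely ahead of Skylake's 714
vp9_per_vcu = vp9_20vcu / 20                # ~765 MPix/s
print(round(h264_per_vcu), round(vp9_per_vcu))
print(round(vp9_per_vcu / vp9_skylake, 1))  # ~5.0x the Skylake pair in VP9
```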
Since Google has been using its Argos VCUs for several years now, it has clearly replaced many of its Xeon-based YouTube servers with machines running its own silicon. It is extremely hard to estimate how many Xeon systems Google actually replaced, but some analysts believe the technology giant could have swapped between four million and 33 million Intel CPUs for its own VCUs. Even if the second number is an overestimate, we are still talking about millions of units.
Since Google needs loads of processors for its other services, the number of CPUs that the company buys from AMD or Intel is likely still very high and is not going to decrease any time soon, as it will be years before Google’s own datacenter-grade system-on-chips (SoCs) are ready.
It is also noteworthy that in an attempt to use innovative encoding technologies (e.g., AV1) right now, Google needs to use general-purpose CPUs even for YouTube as the Argos does not support the codec. Furthermore, as more efficient codecs emerge (and these tend to be more demanding in terms of compute horsepower), Google will have to continue to use CPUs for initial deployments. Ironically, the advantage of dedicated hardware will only grow in the future.
Google is already working on its second-gen VCU that supports AV1, H.264, and VP9 codecs, as it needs to further increase the efficiency of its encoding technologies. It is unclear when the new VCUs will be deployed, but it is clear that the company wants to use its own SoCs instead of general-purpose processors where possible.
Intel Isn’t Standing Still
Intel isn’t standing still, though. The company’s DG1 Xe-LP-based quad-chip SG1 server card can decode up to 28 4Kp60 streams as well as transcode up to 12 simultaneous streams. Essentially, Intel’s SG1 does exactly what Google’s Argos VCU does: scale video decoding and transcoding performance separately from the server count and thus reduce the number of general-purpose processors required in a data center used for video applications.
With its upcoming single-tile Xe-HP GPU, Intel will offer transcoding of 10 high-quality 4Kp60 streams simultaneously. Keeping in mind that some of the Xe-HP GPUs will scale to four tiles, and that more than one GPU can be installed per system, Intel’s market-leading media decoding and encoding capabilities will only become more formidable.
Summary
Google has managed to build a remarkable H.264 and VP9-supporting video (trans)coding unit (VCU) that can offer significantly higher efficiency in video encoding/transcoding workloads than Intel’s existing CPUs. Furthermore, VCUs enable Google to scale its video encoding/transcoding performance independently from the number of servers.
Yet, Intel already has its Xe-LP GPUs and SG1 cards that offer some serious video decoding and encoding capabilities, too, so Intel will still be successful in datacenters with heavy video streaming workloads. Furthermore, with the emergence of Intel’s Xe-HP GPUs, the company promises to solidify its position in this market.
The Intel Iris Xe DG1 graphics card has made a surprising appearance. A US retailer began listing a CyberPowerPC system, which appears to be the very first to feature Intel’s desktop graphics card.
The system (via VideoCardz) is an entry-level gaming PC, priced at $750 and bundled with a keyboard and mouse. The main components include an Intel DG1 graphics card, an Intel Core i5-11400F processor, 8GB of RAM, and a 500GB NVMe SSD drive.
The Intel DG1 graphics card inside the system features 80 EUs (640 shading units) and 4GB of LPDDR4X memory on a 128-bit memory bus. For the GPU to work, an Intel B460, H410, B365, or H310C motherboard with a “special BIOS” is needed.
Despite looking like a rather basic gaming system, this desktop marks the entrance of a third competitor into the desktop graphics card market. Now, with the DG1 heading into the hands of consumers, we can look ahead to the release of DG2, which should provide decent competition against AMD and Nvidia.
KitGuru says: Intel is beginning to break into the desktop graphics market – did you ever think this day would come?
Best Buy have listed a $750 Intel Iris Xe powered “CyberPowerPC Gamer Xtreme Gaming Desktop” on their site, and it has already sold out. While the Intel Iris Xe may not be a graphical powerhouse, this budget gaming system comes with an 11th Gen Rocket Lake CPU.
The Intel Iris Xe GPU is interesting, despite its lack of pixel prowess. The Xe DG1, just visible in the top PCIe slot of the machine pictured on Best Buy’s site, is a passively cooled card with 4GB of LPDDR4X VRAM and 640 shading units spread across 80 execution units. Best Buy doesn’t supply a pic of the back of the PC, but we’d expect DVI-D, HDMI, and DisplayPort outputs, in line with the Asus card already revealed.
The PC in question is a Gamer Xtreme Gaming Desktop from CyberPower going for $750, with an 11th-gen Intel Core i5-11400F (six cores, 12 threads, boost up to 4.4GHz), the ‘F’ designation meaning it doesn’t pack integrated graphics. There’s 8GB of RAM and a 500GB NVMe SSD, so you know you’re not looking at a particularly highly powered model here. Similar money gets you PCs with GTX 1650 GPUs, and the benchmarks that leaked a few months ago don’t look brilliant.
Still, while this is perhaps not the kind of PC that would have us dancing in the streets, it’s an important moment in the history of GPUs: Xe is here, there are now three players in the market, and with the launch of the DG2 cards (built on the more powerful Xe-HPG architecture with hardware-accelerated ray tracing) later this year, things are about to get really interesting.
Intel introduced its long-awaited eight-core Tiger Lake-H chips for laptops today, vying for a spot on our best gaming laptop list and marking Intel’s first shipping eight-core 10nm chips for the consumer market. These new 11th-generation chips, which Intel touts as the ‘World’s best gaming laptop processors,’ come as the company faces unprecedented challenges in the laptop market — not only is it contending with AMD’s increasingly popular 7nm Ryzen “Renoir” chips, but perhaps more importantly, Intel is also now playing defense against Apple’s innovative new Arm-based M1 that powers its new MacBooks.
The halo eight-core, 16-thread Core i9-11980HK peaks at 5.0 GHz on two cores, fully supports overclocking, and, despite its official 65W TDP, can consume up to 110W under heavy load. Intel has also added limited overclocking support in the form of a speed optimizer and unlocked memory settings for three of the ‘standard’ eight-core models.
As with Intel’s lower-power Tiger Lake chips, the eight-core models come fabbed on the company’s 10nm SuperFin process and feature Willow Cove execution cores paired with the UHD Graphics 750 engine with the Xe Architecture. These chips will most often be paired with a discrete graphics solution, from Nvidia or AMD. We have coverage of a broad selection of new systems, including from Alienware, Lenovo, MSI, Dell, Acer, HP, and Razer.
All told, Intel claims that the combination of the new CPU microarchitecture and process node offers up to 19% higher IPC, which naturally results in higher performance potential in both gaming and applications. That comes with a bit of a caveat, though — while Intel’s previous-gen eight-core 14nm laptop chips topped out at 5.3 GHz, Tiger Lake-H maxes out at 5.0 GHz. Intel says the higher IPC tips the balance toward higher overall performance despite 10nm’s lower clock speeds.
The new Tiger Lake-H models arrive in the wake of Intel’s quad-core H35 models that operate at 35W for a new ‘Ultraportable’ laptop segment that caters to gamers on the go. However, Intel isn’t using H45 branding for its eight-core Tiger Lake chips, largely because it isn’t marking down 45W on the spec sheet. We’ll cover what that confusing bit of information means below. The key takeaway is that these chips can operate anywhere from 35W to 65W. As usual, Intel’s partners aren’t required to (and don’t) specify the actual power consumption on the laptop or packaging.
Aside from the addition of more cores, a new system agent (more on that shortly), and more confusing branding, the eight-core Tiger Lake-H chips come with a well-known feature set that includes the same amenities, like PCIe 4.0, Thunderbolt 4, and support for Resizable Bar, as their quad-core Tiger Lake predecessors. These chips also mark the debut of the first eight-core laptop lineup that supports PCIe 4.0, as AMD’s competing platforms remain on the PCIe 3.0 connection. Intel also announced five new vPro H-series models with the same specifications as the consumer models but with features designed for the professional market.
Intel says the new Tiger Lake-H chips will come to market in 80 new designs (15 of these are for the vPro equivalents), with the leading devices available for preorder on May 11 and shipping on May 17. Surprisingly, Intel says that it has shipped over 1 million eight-core Tiger Lake chips to its partners before the first devices have even shipped to customers, showing that the company fully intends to leverage its production heft while its competitors, like AMD, continue to grapple with shortages. Intel also plans to keep its current fleet of 10th-Gen Comet Lake processors on the market for the foreseeable future to address the lower rungs of the market, so its 14nm chips will still ship in volume.
Intel Tiger Lake-H Specifications
Processor Number | Base / Boost (GHz) | Cores / Threads | L3 Cache | Memory
Core i9-11980HK | 2.6 / 5.0 | 8 / 16 | 24 MB | DDR4-2933 (Gear 1) / DDR4-3200 (Gear 2)
AMD Ryzen 9 5900HX | 3.3 / 4.6 | 8 / 16 | 16 MB | DDR4-3200 / LPDDR4x-4266
Core i9-10980HK | 2.4 / 5.3 | 8 / 16 | 16 MB | DDR4-2933
Core i7-11375H Special Edition (H35) | 3.3 / 5.0 | 4 / 8 | 12 MB | DDR4-3200 / LPDDR4x-4266
Core i9-11900H | 2.5 / 4.9 | 8 / 16 | 24 MB | DDR4-2933 (Gear 1) / DDR4-3200 (Gear 2)
Core i7-10875H | 2.3 / 5.1 | 8 / 16 | 16 MB | DDR4-2933
Core i7-11800H | 2.3 / 4.6 | 8 / 16 | 24 MB | DDR4-2933 (Gear 1) / DDR4-3200 (Gear 2)
Core i5-11400H | 2.7 / 4.5 | 6 / 12 | 12 MB | DDR4-2933 (Gear 1) / DDR4-3200 (Gear 2)
Ryzen 9 5900HS | 3.0 / 4.6 | 8 / 16 | 16 MB | DDR4-3200 / LPDDR4x-4266
Core i5-10400H | 2.6 / 4.6 | 4 / 8 | 8 MB | DDR4-2933
Intel’s eight-core Tiger Lake-H takes plenty of steps forward — it’s the only eight-core laptop platform with PCIe 4.0 connectivity and hardware support for AVX-512, but it also takes steps back in a few areas.
Although Intel just released 40-core 10nm Ice Lake server chips, we’ve never seen the 10nm process ship with more than four cores for the consumer market, largely due to poor yields and 10nm’s inability to match the high clock rates of Intel’s mature 14nm chips. We expected the 10nm SuperFin process to change that paradigm, but as we see in the chart above, the flagship Core i9-11980HK tops out at 5.0 GHz on two cores, just like the quad-core Tiger Lake Core i7-11375H Special Edition. Intel uses its Turbo Boost Max 3.0, which steers threads to the fastest cores, to hit the 5.0 GHz threshold.
However, both chips pale in comparison to the previous-gen 14nm Core i9-10980HK that delivers a beastly 5.3 GHz on two cores courtesy of the Thermal Velocity Boost (TVB) tech that allows the chip to boost higher if it is under a certain temperature threshold. Curiously, Intel doesn’t offer TVB on the new Tiger Lake processors.
Intel says that it tuned 10nm Tiger Lake’s frequency for the best spot on the voltage/frequency curve to maximize both performance and battery life, but it’s obvious that process maturity also weighs in here. Intel offsets Tiger Lake’s incrementally lower clock speeds with the higher IPC borne of the Willow Cove microarchitecture, which delivers up to 12% higher IPC in single-threaded and 19% higher IPC in multi-threaded applications. After those advances, Intel says the Tiger Lake chips end up faster than their prior-gen counterparts, not to mention AMD’s competing Renoir processors.
Intel’s Core i9-11980HK peaks at 110W (PL2) and is a fully overclockable chip — you can adjust the core, graphics, and memory frequency at will. We’ll cover the power consumption, base clock, and TDP confusion in the following section.
Intel has also now added support for limited overclocking on the Core i7-11800H, Core i9-11900H, and Core i9-11950H. The memory settings on these three chips are fully unlocked, albeit with a few caveats we’ll list below, so you can overclock the memory at will. Intel also added support for its auto-tuning Speed Optimizer software. When enabled, this software boosts performance in multi-threaded work, but single-core frequencies are unaffected.
Intel made some compromises on the memory front, too. First, the memory controllers no longer support LPDDR4X. Instead, they top out at DDR4-3200, and even that rating comes with a catch for most of the 11th-Gen lineup, at least if you want the chip to run in its fastest configuration.
The eight-core Tiger Lake die comes with the System Agent Geyserville, just like the Rocket Lake desktop chips. That means the company has brought Gear 1 and Gear 2 memory modes to laptops. The optimal setting is called ‘Gear 1,’ and it signifies that the memory controller and memory operate at the same frequency (1:1), thus providing the lowest latency and best performance in lightly-threaded work, like gaming. All of the Tiger Lake chips reach up to DDR4-2933 in this mode.
Tiger Lake-H does officially support DDR4-3200, but only with the ‘Gear 2’ setting that allows the memory to operate at twice the frequency of the memory controller (2:1), resulting in higher data transfer rates. This can benefit some threaded workloads but also results in higher latency that can lead to reduced performance in some applications — particularly gaming. We have yet to see a situation where Gear 2 makes much sense for enthusiasts/gamers.
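To put rough numbers on the gearing described above (a simplified sketch; real memory controllers juggle more clock domains than this), DDR memory transfers data twice per memory clock, and the gear ratio sits between the memory clock and the controller clock:

```python
# Simplified illustration of Gear 1 vs. Gear 2 controller clocks.
def controller_clock_mhz(data_rate_mtps, gear):
    memory_clock = data_rate_mtps / 2   # DDR: two transfers per memory clock
    return memory_clock / gear          # Gear 1 = 1:1, Gear 2 = 2:1

print(controller_clock_mhz(2933, gear=1))  # ~1466.5 MHz controller clock at DDR4-2933, Gear 1
print(controller_clock_mhz(3200, gear=2))  # 800.0 MHz controller clock at DDR4-3200, Gear 2
```

In other words, moving from DDR4-2933 in Gear 1 to DDR4-3200 in Gear 2 gains you bandwidth but roughly halves the memory controller clock, which is where the extra latency comes from.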
Intel also dialed back the UHD Graphics engine with Xe Architecture for the eight-core H-Series models to 32 execution units (EUs), which makes sense given that this class of chip will often be paired with discrete graphics from either AMD or Nvidia (and possibly Intel’s own fledgling DG1, though we have yet to see any such configurations). For comparison, the quad-core H35 Core i9 and i7 models come equipped with 96 EUs, while the Core i5 variant comes with 80 EUs.
This is Not The Tiger Lake H45 You’re Looking for – More TDP Confusion
As per usual with Intel’s recent laptop chip launches, there’s a bit of branding confusion. The company’s highest-end eight-core laptop chips previously came with an “H45” moniker to denote that the chips carried a recommended 45W TDP. But you won’t find that designation with Intel’s new H-Series chips, even though the quad-core 35W laptop chips that Intel introduced at CES this year come with the H35 designation. In fact, Intel also won’t list a specific TDP on the spec sheet for the eight-core Tiger Lake-H chips. Instead, it will label the H-series models as ’35W to 65W’ for the official TDP.
That’s problematic because Intel measures its TDP at the base frequency, so a lack of a clear TDP rating means there’s no concrete base frequency specification. We know that the PL2, or power consumed during boost, tops out at 110W, but due to the TDP wonkiness, there’s no official PL1 rating (base clock).
That’s because Intel, like AMD, gives OEMs the flexibility to configure the TDP (cTDP) to higher or lower ranges to accommodate the specific power delivery, thermal dissipation, and battery life accommodations of their respective designs. For instance, Intel’s previous-gen 45W parts have a cTDP range that spans from 35W to 65W.
This practice provides OEMs with wide latitude for customization, which is a positive. After all, we all want thinner and faster devices. However, Intel doesn’t compel manufacturers to clearly label their products with the actual TDP they use for the processor, or even list it in the product specifications. That can be very misleading — there’s a 30W delta between the lowest- and highest-performance configurations of the same chip with no clear method of telling what you’re purchasing at the checkout lane. There really is no way to know which Intel is inside.
Intel measures its TDP rating at the chip’s base clock (PL1), so the Tiger Lake-H chips will have varying base clocks that reflect their individual TDP… that isn’t defined. Intel’s spec table shows base clocks at both 45W and 35W, but be aware that this can be a sliding scale. For instance, you might purchase a 40W laptop that lands in the middle range.
As per usual, Intel’s branding practice leaves a lot to be desired. Eliminating the H45 branding and going with merely the ‘H-Series’ for the 35W to 65W eight cores simply adds more confusion because the quad-core H35 chips are also H-Series chips, and there’s no clear way to delineate the two families other than specifying the core count.
Intel is arguably taking the correct path here: It is better to specify that the chips can come in any range of TDPs rather than publish blatantly misleading numbers. However, the only true fix for the misleading mess created by configurable TDPs is to require OEMs to list the power rating directly on the device, or at least on the spec sheet.
Intel Tiger Lake-H Die
The eight-core H-series chip package comes with a 10nm die paired with a 14nm PCH. The first slide in the above album shows the Tiger Lake die (more deep-dive info here) that Intel says measures 190mm2, which is much larger than the estimated 146.1mm2 die found on the quad-core models (second image). We also included a die shot of the eight-core Comet Lake-H chip (third image).
We’ll have to wait for a proper die annotation of the Tiger Lake-H chip, but we do know that it features a vastly cut-down UHD Graphics 750 engine compared to the quad-core Tiger Lake models (32 vs 96 EUs) and a much larger L3 cache (24 vs 16MB).
The Tiger Lake die supports 20 lanes of PCIe 4.0 connectivity, with 16 lanes exposed for graphics, though those can also be carved into 2x8, 1x8, or 2x4 connections to accommodate more PCIe 4.0 devices, like additional M.2 SSDs. Speaking of which, the chip also supports a direct x4 PCIe 4.0 connection for a single M.2 SSD.
Intel touts that you can RAID several M.2 SSDs together through its Intel Rapid Storage Technology (IRST) and use them to boot the machine. This feature has been present on prior-gen laptop platforms, but Tiger Lake-H marks the debut for this feature with a PCIe 4.0 connection on a laptop.
The PCH provides all of the basic connectivity features (last slide). The Tiger Lake die and PCH communicate over a DMI x8 bus, and the chipset supports an additional 24 PCIe 3.0 lanes that can be carved up for additional features. For more fine-grained details of the Tiger Lake architecture, head to our Intel’s Tiger Lake Roars to Life: Willow Cove Cores, Xe Graphics, Support for LPDDR5 and Intel’s Path Forward: 10nm SuperFin Technology, Advanced Packaging Roadmap articles.
Intel Tiger Lake-H Gaming Benchmarks
Intel provided the benchmarks above to show the gen-on-gen performance improvements in gaming, and the performance improvement relative to competing AMD processors. As always, approach vendor-provided benchmarks with caution, as they typically paint the vendors’ devices in the best light possible. We’ve included detailed test notes at the end of the article, and Intel says it will provide comparative data against Apple M1 systems soon.
As expected, Intel shows that the Core i9-11980HK provides solid generational leads over the prior-gen Core i9-10980HK, with the deltas spanning from 15% to 21% in favor of the newer chip.
Then there are the comparisons to the AMD Ryzen 9 5900HX, with Intel claiming leads in titles like War Thunder, Total War: Three Kingdoms, and Hitman 3, along with every other hand-picked title in the chart.
Intel tested the 11980HK in an undisclosed OEM pre-production system with an RTX 3080 set at a 155W threshold, while the AMD Ryzen 9 5900HX resided in a Lenovo Legion R9000K with an RTX 3080 dialed in at 165W. Given that we don’t know anything about the OEM system used for Intel’s benchmarks, like cooling capabilities, and that the company didn’t list the TDP for either chip, take these benchmarks with a shovelful of salt.
Intel also provided benchmarks with the Core i5-11400H against the Ryzen 9 5900HS, again claiming that its eight-core chips for thin-and-lights offer the best performance. However, here we can see that the Intel chip loses in three of the four benchmarks, but Intel touts that its “Intel Sample System” is a mere 16.5mm thick, while the 5900HS rides in an ASUS ROG Zephyrus G14 that measures 18mm thick at the front and 20mm thick at the rear.
Intel’s message here is that it can provide comparable gaming performance in thinner systems, but there’s not enough information, like battery life or other considerations, to make any real type of decision off this data.
Intel Tiger Lake-H Application Benchmarks
Here we can see Intel’s benchmarks for applications, too, but the same rules apply — we’ll need to see these benchmarks in our own test suite before we’re ready to claim any victors. Also, be sure to read the test configs in the slides below for more details.
Intel’s 11th-Gen Tiger Lake brings support for AVX-512 and the DL Boost deep learning suite, so Intel hand-picks benchmarks that leverage those features. As such, the previous-gen Comet Lake-H comparable is hopelessly hamstrung in the Video Creation Workflow and Photo Processing benchmarks.
We can say much the same about the comparison benchmarks with the Ryzen 9 5900HX. As a result of Intel’s insistence on using AI-enhanced benchmarks, these benchmarks are largely useless for real-world comparisons: The overwhelming majority of software doesn’t leverage either AI or AVX-512, and it will be several years before we see broad uptake.
As noted, Intel says the new Tiger Lake-H chips will come to market in 80 new designs (15 of these are for the vPro equivalents), with the leading devices available for preorder on May 11 and shipping on May 17. As you can imagine, we’ll also have reviews coming soon. Stay tuned.
Intel’s Iris Xe DG1 may be shaping up to be a disappointment, but the chipmaker’s approaching Xe-HPG DG2 GPU could be a solid performer. German publication Igor’s Lab has shared the alleged specifications for the DG2 in its desktop and mobile format.
The Xe-HPG DG2 block diagram seemingly suggests that Intel had originally planned to pair the GPU with its Tiger Lake-H chips, which are rumored to launch next week. It would seem that Intel didn’t make the window for Tiger Lake-H, however, as Igor Wallossek claims that the chipmaker will use the DG2 for Alder Lake-P instead. The DG2 reportedly features the BGA2660 package.
Apparently, the DG2 was supposed to communicate with Tiger Lake-H through a high-speed PCIe Gen 4.0 x12 interface. The 12-lane connection is a bit unorthodox, so it’s uncertain if that was a typo. The DG2 would be the first GPU to offer DisplayPort 2.0 support. Oddly, the GPU only supports HDMI 2.0 and not HDMI 2.1. However, Wallossek did mention that this was an outdated diagram and DG2 could perhaps come with HDMI 2.1.
Wallossek shared a drawing of the board layout for a Tiger Lake-H chip that’s accompanied by the DG2. We spotted a total of six memory chips. Evidently, only two of them are actually attached to the DG2, which would mean that the remaining four are probably soldered system memory.
Nevertheless, we can’t discard the possibility that all six memory chips are for the DG2. The leaked specifications suggest that the DG2 can leverage up to 16GB of GDDR6 memory.
Intel Xe-HPG DG2 GPU Specifications
Specification | SKU 1 | SKU 2 | SKU 3 | SKU 4 | SKU 5
Package Type | BGA2660 | BGA2660 | BGA2660 | TBC | TBC
Supported Memory Technology | GDDR6 | GDDR6 | GDDR6 | GDDR6 | GDDR6
Memory Speed | 16 Gbps | 16 Gbps | 16 Gbps | 16 Gbps | 16 Gbps
Interface / Bus | 256-bit | 192-bit | 128-bit | 64-bit | 64-bit
Memory Size (Max) | 16 GB | 12 GB | 8 GB | 4 GB | 4 GB
Smart Cache Size | 16 MB | 16 MB | 8 MB | TBC | TBC
Graphics Execution Units (EUs) | 512 | 384 | 256 | 196 | 128
Graphics Frequency (High) Mobile | 1.1 GHz | 600 MHz | 450 MHz | TBC | TBC
Graphics Frequency (Turbo) Mobile | 1.8 GHz | 1.8 GHz | 1.4 GHz | TBC | TBC
TDP Mobile (Chip Only) | 100W | 100W | 100W | TBC | TBC
TDP Desktop | TBC | TBC | TBC | TBC | TBC
Wallossek listed a total of five potential DG2 GPUs. SKU 1, SKU 2, and SKU 3 could be considered the high-performance versions, while SKU 4 and SKU 5 are likely the entry-level models. They have one common denominator, though: regardless of the model, the DG2 allegedly utilizes 16 Gbps GDDR6 memory chips. The GPU alone should consume up to 100W, maybe around 125W if we factor in the GDDR6 memory chips. The desktop variants of the DG2 might arrive with a TDP over 200W.
The flagship DG2 GPU seemingly has 512 EUs that can clock up to 1.8 GHz. This particular model is equipped with 16GB of 16 Gbps GDDR6 memory across a 256-bit memory interface. This works out to 512 GBps of memory bandwidth.
The budget DG2 SKUs are limited to 192 and 128 EUs. The boost clock speeds are unknown for the moment. The memory configuration consists of 4GB of 16 Gbps GDDR6 memory that communicates through a 64-bit memory bus. The maximum memory bandwidth on these models is 128 GBps.
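The bandwidth figures quoted for these SKUs follow directly from the leaked memory specs: per-pin data rate times bus width, divided by eight to convert bits to bytes.

```python
# Peak memory bandwidth implied by the leaked DG2 memory configurations.
def peak_bandwidth_gb_s(gbps_per_pin, bus_width_bits):
    return gbps_per_pin * bus_width_bits / 8

print(peak_bandwidth_gb_s(16, 256))  # flagship SKU 1: 512.0 GB/s
print(peak_bandwidth_gb_s(16, 192))  # SKU 2: 384.0 GB/s
print(peak_bandwidth_gb_s(16, 64))   # entry-level SKUs: 128.0 GB/s
```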
Assuming that Wallossek’s time frame is accurate, production of the SKU 4 and SKU 5 models should start between late October and early December. He thinks that they may be ready just in time for the Christmas holidays. Production of the SKU 1 through SKU 3 models should start between December and early March 2022.
According to Moore’s Law Is Dead, Intel’s successor to the DG1, the DG2, could be arriving sometime later this year with significantly more firepower than Intel’s current DG1 graphics card. Of course it will be faster — that much is a given — but the latest rumors have it that the DG2 could perform similarly to an RTX 3070 from Nvidia. Could it end up as one of the best graphics cards? Never say never, but yeah, big scoops of salt are in order. Let’s get to the details.
Supposedly, this new Xe graphics card will be built using TSMC’s N6 6nm node, and will be manufactured purely on TSMC silicon. This isn’t surprising as Intel is planning to use TSMC silicon in some of its Meteor Lake CPUs in the future. But we do wonder if a DG2 successor based on Intel silicon could arrive later down the road.
According to MLID and previous leaks, Intel’s DG2 is specced out to have up to 512 execution units (EUs), each with the equivalent of eight shader cores. The latest rumor is that it will clock at up to 2.2GHz, a significant upgrade over current Xe LP, likely helped by the use of TSMC’s N6 process. It will also have a proper VRAM configuration with 16GB of GDDR6 over a 256-bit bus. (DG1 uses LPDDR4 for comparison.)
Earlier rumors suggested power use of 225W–250W, but now the estimated power consumption is around 275W. That puts the GPU somewhere between the RTX 3080 (320W) and RTX 3070 (250W), but with RTX 3070 levels of performance. But again, lots of grains of salt should be applied, as none of this information has been confirmed by Intel. TSMC N6 uses the same design rules as the N7 node, but with some EUV layers, which should reduce power requirements. Then again, we’re looking at a completely different chip architecture.
Regardless, Moore’s Law Is Dead quotes one of its ‘sources’ as saying the DG2 will perform like an RTX 3070 Ti. This is quite strange since the RTX 3070 Ti isn’t even an official SKU from Nvidia (at least not right now). Put more simply, this means the DG2 should be slightly faster than an RTX 3070. Maybe.
That’s not entirely out of the question, either. Assuming the 512 EUs and 2.2GHz figures end up being correct, that would yield a theoretical 18 TFLOPS of FP32 performance. That’s a bit less than the 3070, but the Ampere GPUs share resources between the FP32 and INT32 pipelines, meaning the actual throughput of an RTX 3070 tends to be lower than the pure TFLOPS figure would suggest. Alternatively, 18 TFLOPS lands half-way between AMD’s RX 6800 and RX 6800 XT, which again would match up quite reasonably with a hypothetical RTX 3070 Ti.
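For reference, here is how the rumored figures produce the roughly 18 TFLOPS estimate, under the usual assumption that each EU packs eight FP32 ALUs and each ALU retires one fused multiply-add (two FLOPs) per clock:

```python
# Theoretical FP32 throughput from the rumored DG2 specs.
eus = 512
alus_per_eu = 8       # eight shader cores per EU, as noted above
flops_per_clock = 2   # one FMA per ALU per clock = 2 FP32 operations
clock_ghz = 2.2

tflops = eus * alus_per_eu * flops_per_clock * clock_ghz / 1000
print(round(tflops, 1))  # ~18.0 TFLOPS
```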
There are plenty of other rumors and ‘leaks’ in the video as well. For example, at one point MLID discusses a potential DLSS alternative called, not-so-creatively, XeSS — and the Internet echo chamber has already begun to propagate that name around. Our take: Intel doesn’t need a DLSS alternative. Assuming AMD can get FidelityFX Super Resolution (FSR) to work well, it’s open source and GPU vendor agnostic, meaning it should work just fine with Intel and Nvidia GPUs as well as AMD’s offerings. We’d go so far as to say Intel should put its support behind FSR, just because an open standard that developers can support and that works on all GPUs is ultimately better than a proprietary one. Plus, there’s not a snowball’s chance in hell that Intel can do XeSS as a proprietary feature and then get widespread developer support for it.
Other rumors are more believable. The encoding performance of DG1 is already impressive, building off Intel’s existing QuickSync technology, and DG2 could up the ante significantly. That’s less of a requirement for gaming use, but it would certainly enable live streaming of content without significantly impacting frame rates. Dedicated AV1 encoding would also prove useful.
The DG2 should hopefully be available to consumers by Q4 of 2021, but with the current shortages plaguing chip fabs, it’s anyone’s guess as to when these cards will actually launch. Prosumer and professional variants of the DG2 are rumored to ship in 2022.
We don’t know the pricing of this 512EU SKU, but there is a 128EU model planned down the road, with an estimated price of around $200. More importantly, we don’t know how the DG2 or its variants will actually perform. Theoretical TFLOPS doesn’t always match up to real-world performance, and architecture, cache, and above all drivers play a critical role for gaming performance. We’ve encountered issues testing Intel’s Xe LP equipped Tiger Lake CPUs with some recent games, for example, and Xe HPG would presumably build off the same driver set.
Again, this info is very much unconfirmed rumors, and things are bound to change by the time DG2 actually launches. But if this data is even close to true, Intel’s first proper dip into the dedicated GPU market (DG1 doesn’t really count) in over 10 years could make them decently competitive with Ampere’s mid-range and high-end offerings, and by that token they’d also compete with AMD’s RDNA2 GPUs.
The first benchmark (via Tum_Apisak) of Intel’s Iris Xe DG1 is out. The graphics card’s performance is in the same ballpark as AMD’s four-year-old Radeon RX 550 – at least in the Basemark GPU benchmark.
If we compare manufacturing processes, the DG1 is obviously the more advanced offering. The DG1 is based on Intel’s latest 10nm SuperFin process node, and the Radeon RX 550 utilizes the Lexa die, which was built with GlobalFoundries’ 14nm process. Both the DG1 and Radeon RX 550 hail from Asus’ camp. The Asus DG1-4G features a passive heatsink, while the Asus Radeon RX 550 4G does require active cooling in the form of a single fan. The Radeon RX 550 is rated for 50W and the DG1 for 30W, which is why the latter can get away with a passive cooler.
The Asus DG1-4G features a cut-down variant of the Iris Xe Max GPU, meaning the graphics card only has 80 execution units (EUs) at its disposal. This configuration amounts to 640 shading units with a peak clock of 1,500 MHz. On the memory side, the Asus DG1-4G features 4GB of LPDDR4X-4266 memory across a 128-bit memory interface.
On the other side of the ring, the Asus Radeon RX 550 4G comes equipped with 512 shading units with a 1,100 MHz base clock and 1,183 MHz boost clock. The graphics card’s 4GB of 7 Gbps GDDR5 memory communicates through a 128-bit memory bus, pumping out up to 112 GBps of memory bandwidth.
In terms of FP32 performance, the DG1 delivers up to 2.11 TFLOPs whereas the Radeon RX 550 offers up to 1.21 TFLOPs. On paper, the DG1 should be superior, but we know that FP32 performance isn’t the most important metric.
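Both TFLOPS figures come from the standard shaders-times-two-times-clock formula. Note that the 2.11 TFLOPS number for the DG1 implies a clock of roughly 1.65 GHz (the Iris Xe Max’s rated boost) rather than the 1,500 MHz peak quoted above for the Asus card; at 1.5 GHz the math lands closer to 1.92 TFLOPS.

```python
# FP32 throughput: shading units * 2 FLOPs per clock (FMA) * clock speed.
def fp32_tflops(shading_units, clock_ghz):
    return shading_units * 2 * clock_ghz / 1000

print(round(fp32_tflops(512, 1.183), 2))  # Radeon RX 550 at its 1,183 MHz boost: ~1.21 TFLOPS
print(round(fp32_tflops(640, 1.65), 2))   # DG1 at ~1.65 GHz: ~2.11 TFLOPS
print(round(fp32_tflops(640, 1.50), 2))   # DG1 at the Asus card's 1,500 MHz peak: ~1.92 TFLOPS
```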
Both systems from the Basemark GPU submissions were based on the same processor, the Intel Core i3-10100F. Therefore, the DG1 and Radeon RX 550 were on equal grounds as far as the processor is concerned. Let’s not forget that the DG1 is picky when it comes to platforms. The graphics card is only compatible with the 9th and 10th Generation Core processors and B460, H410, B365 and H310C motherboards. Even then, a special firmware is necessary to get the DG1 working.
The DG1 puts up a Vulkan score of 17,289 points, while the Radeon RX 550 scored 17,619 points, making the Radeon RX 550 roughly 1.9% faster than the DG1. Of course, this is just one benchmark, so it’s too soon to declare a definitive winner without more thorough tests.
Intel never intended for the DG1 to be a strong performer, but rather an entry-level graphics card that can hang with the competition. Thus far, the DG1 seems to trade blows with the Radeon RX 550.
Intel has started to publish documents related to its upcoming Discrete Graphics 2 (DG2) GPUs based on the Xe-HPG microarchitecture, inadvertently revealing some of their specifications. As expected, Intel’s Xe-HPG family looks like it’ll include multiple models and compete across desktops and laptops and different levels of performance.
In order to prepare for a new product launch, Intel not only has to send various samples to its partners, but it also has to publish extensive documentation about the parts. Usually, such documents are hidden in password-protected sections of Intel’s website, but a simple search of the term “discrete graphics2” revealed dozens of documents about Intel’s DG2 family, as well as some of its specifications, as spotted on Friday by Twitter leakers @momomo_us and @Komachi_Ensaka.
According to the newfound documents, Intel’s DG2 lineup will include at least five different models for notebooks and at least two models for desktops. For some reason, notebook GPUs are referred to as SKU1 through SKU5, whereas desktop graphics processors are called SoC1 and SoC2.
The new GPUs will support a PCIe 5.0 interface, GDDR6 memory running at 14 GT/s or 16 GT/s, HDMI 2.1 and DisplayPort Alt Mode over USB Type-C, according to Intel’s documents. However, it is unclear whether all the capabilities will be enabled on all SKUs.
Based on the names of the documents that Intel has made available to its partners, the company and its allies are currently testing five mobile DG2 graphics processors with 96 EUs, 128 EUs, 256 EUs, 384 EUs, and 512 EUs.
Meanwhile, the desktop-oriented SoC1 is believed to feature 512 EUs.
Since Intel’s DG2 products are based on the yet-to-be-revealed Xe-HPG architecture tailored for gaming graphics processors, it’s hard to estimate how the new graphics solutions from Intel will stack up against the existing DG1 product or the best graphics cards from AMD and Nvidia.
Assuming this is the full lineup, it’s a bit odd to see Intel prepare a relatively broad lineup of GPUs for notebooks and only two graphics processors for desktops. It’s possible that Intel plans on revealing more desktop GPU models (with more EUs, perhaps). It’s also possible that the company will try to address a sweet spot niche of the gaming graphics cards market with a limited number of offerings.
Ever since Intel hired Raja Koduri and announced plans to create new discrete graphics cards, we’ve been waiting to see what Intel can do in the gaming segment. So far, we’ve seen the Xe graphics architecture introduced in low-power DG1 discrete graphics cards, which are primarily sold to OEMs. Later this month, however, we may get our first look at Intel’s first high-performance gaming GPU.
This week, Intel teased the “Xe HPG Scavenger Hunt” with a web page telling us to come back on March 26th at 9AM PST with a secret code to see what the website reveals. The website itself was hidden in a Xe HPG teaser video, which contained a hidden message:
The hidden message was decoded by Wccftech, revealing an IP address that leads straight to the Xe HPG website, which teases an announcement coming next Friday. The timing here is a little odd, as on March 23rd, Intel CEO Pat Gelsinger will be hosting a webcast focused on ‘the next era of innovation’ at Intel, three days before the Xe HPG reveal.
Intel’s upcoming DG2 graphics card is rumoured to feature up to 512 EUs, 4096 cores, 12GB of GDDR6 VRAM and an 1800MHz clock speed. Hopefully we’ll have some official confirmation on specifications by the end of next week.
KitGuru Says: It is going to be very interesting to see Intel competing in the discrete gaming graphics card space. Are any of you looking forward to Intel’s Xe HPG reveal? Would you consider an Intel GPU over a competing Nvidia GeForce or AMD Radeon graphics card?
Intel formally launched its discrete Iris Xe DG1 graphics card for entry-level desktops late in January. The company confirmed back then that the card featured a cut-down version of the Iris Xe Max GPU with 80 execution units (EUs), and that its compatibility was limited to inexpensive PCs. However, neither Intel nor its partners revealed all of the product’s specifications. Now Asus has finally published the specifications of its DG1 board, revealing some surprises.
The Asus DG1-4G graphics card carries exactly what Intel announced: a cut-down Iris Xe Max graphics processor with 80 EUs. But the manufacturer does not mention its maximum frequency, which is not that surprising as we are talking about a product that is supposed to be available only to PC makers. The GPU is accompanied by 4GB of LPDDR4-4266 memory connected to the chip using a 128-bit interface and has a PCIe 3.0 x4 interface (x16 mechanical) to connect to the host. As for display outputs, it has one DisplayPort, one HDMI, and one DVI-D connector to maintain compatibility with legacy monitors.
Intel mentioned earlier this year that its DG1 graphics cards would only be compatible with systems running its 9th- and 10th-Gen Core processors, on motherboards powered by its B460, H410, B365, and H310C chipsets–sorry AMD. One surprising part is that Asus only lists its own Prime H410M-A/CSM and Pro B460M-C/CSM motherboards, and for some reason omits platforms featuring other chipsets.
Another surprise: Asus supplies a quick start guide with its DG1-4G graphics card, which is not quite common for a product aimed solely at PC makers. Perhaps Asus just wants to ensure that if the part actually ends up in an end-user’s hands, they will install it into a compatible PC properly.
The DG1-4G board from Asus is very small by today’s GPU standards, measuring 4.3 × 6.8 inches (11 × 17.3 cm), so it will fit into almost any desktop (based on the aforementioned Intel platforms), except low-profile machines.
Intel’s upcoming Xe HPG “DG2” discrete gaming GPUs might be close to a reveal. The company has teased a “Xe HPG Scavenger Hunt” for March 26th at 12PM ET / 9AM PT.
The website was hidden in a binary sequence segment Intel stuck in its GDC 2021 presentation that briefly teased the upcoming graphics card. Wccftech has managed to decode the message: the IP address for the aforementioned scavenger hunt website, which promises more information on the 26th.
It’s not clear whether Intel will be launching the DG2 cards on the 26th or if the official announcement will come at new Intel CEO Pat Gelsinger’s mysterious announcement on “the new era of innovation and technology leadership at Intel” that’s set to take place on March 23rd. If that’s the case, then it’s possible that the scavenger hunt could award fans more information on the upcoming GPUs (or perhaps even with actual graphics cards if Intel is feeling generous).
While Intel released its first Iris Xe graphics cards earlier this year, based on its “DG1” prototypes that used the company’s Xe LP architecture, the lower-power cards weren’t really meant for gaming. In fact, they were primarily sold directly to partners to include in prebuilt machines.
The upcoming “DG2” cards — based on the higher-performance Xe HPG architecture — promise to offer real gaming competition to long-established players like AMD and Nvidia. Wccftech also recently spotted one of the rumored DG2 cards that made an appearance on Geekbench, too, featuring 512 EUs, 4096 cores, 1800MHz clock speeds, and 12GB of GDDR6 VRAM. Other rumors have indicated that Intel is working on an entire lineup of GPUs using the new architecture, spanning cheaper entry-level models to pricier options for power-hungry gaming enthusiasts.
German publication Igor’s Lab has nailed a world-exclusive look at Intel’s DG1 discrete graphics card. The chipmaker showcased the DG1 last year at CES 2020, running Warframe, but Intel’s entry-level Iris Xe development graphics cards are exclusively available to system integrators and OEMs. In fact, Intel has put some barriers in place to make sure that the DG1 only works on a handful of selected systems, so you really can’t just rip the DG1 out of an OEM system and test the graphics card in another PC. The teardown images help explain why.
Wallossek managed to get his hands on a complete OEM system with the original DG1 SDV (Software Development Vehicle). In order to protect his sources, Igor only shared the basic specifications of the system, which includes a Core i7 non-K processor and a Z390 mini-ITX motherboard.
First up, let’s look at why the card won’t work on most motherboards.
Intel DG1
Intel has limited support for the card to a handful of OEM systems and motherboard chipsets, sparking speculation about why the company isn’t selling the cards on the broader retail market. It turns out there’s a plausible technical explanation.
Hardware-hacker Cybercat 2077 (@0xCats) recently tweeted out (below) that the DG1 cards lack the EEPROM chip that holds the firmware, largely because they were originally designed for laptops and thus don’t have the SPI lines required for connection. These EEPROM chips are present on the quad-GPU XG310 cards for data centers that use the same graphics engines, but as we can see in the naked PCB shot from Igor’s Lab above, those same chips aren’t present on the DG1 board.
According to Cybercat 2077, that means the card’s firmware has to be stored on the motherboard, hence the limited compatibility. Intel hasn’t confirmed this hypothesis, but it makes perfect sense.
“While it’s technically possible to shoehorn SPI eeproms on via some tricks, Intel has chosen not to do so on the DG1 OEM/Consumer cards. Image here of a 4chip Xe XG310 for hyperscalers where you can see an eeprom (red dot on them denotes pin 1) located next to each GPU chip.” pic.twitter.com/Dq8HG4GLsr (January 28, 2021)
The DG1 SDV reportedly features a DirectX 12 chip produced with Intel’s 10nm SuperFin process node and checks in with 96 Execution Units (EUs), which amounts to 768 shaders. That’s 20% more shaders than the cut-down version that Asus and other partners will offer. The DG1 features 8GB of LPDDR4 memory with a 2,133 MHz clock speed. The memory is reportedly connected to a 128-bit memory interface and supports PCIe 4.0, although it’s limited to x8 speeds.
At idle, the graphics card runs at 600 MHz with a power consumption of 4W. The fans spin up to 850 RPM and keep the graphics card relatively cool at 30 degrees Celsius. With a full load, the clock speed jumps up to 1,550 MHz, and the power consumption scales to 20W. In terms of thermals, the graphics card’s operating temperature got to 50 degrees Celsius with the fan spinning at 1,800 RPM. Wallossek thinks that the DG1’s total power draw should be between 27W to 30W.
The DG1 is equipped with a light alloy cover with a single 80mm PWM cooling fan and an aluminum heatsink underneath. Design-wise, the DG1 leverages a two-phase power delivery subsystem that consists of a buck controller and one PowerStage for each phase. The Xe GPU is surrounded by four 2GB Micron LPDDR4 memory chips.
Given the low power consumption, the DG1 draws what it needs from the PCIe slot alone and doesn’t depend on any PCIe power connectors. Display outputs include one HDMI 2.1 port and three DisplayPort outputs.
However, Wallossek noted that while you can get an image from the HDMI port, it causes system instability. He thinks that the firmware and driver prevent you from establishing a direct connection with the DG1, which explains why Intel recommends using the motherboard display outputs instead. The DG1 in Wallossek’s hands is a test sample. Despite the many driver updates, the graphics card is still finicky, and its display outputs are unusable.
The DG1’s performance should be right in the ballpark of Nvidia’s GeForce GT 1030, but there are no benchmarks or tests to support this claim, and Wallossek couldn’t provide any, either. Apparently, benchmarks simply crash the system or end up in an infinite loop. Wallossek could only get AIDA64’s GPGPU benchmark to run, but that doesn’t really tell us anything meaningful about graphics performance.
As reported yesterday, Intel’s first discrete graphics cards for desktops in more than two decades will not be available at retail and will only be sold as parts of pre-built mainstream systems. Apparently, they will also only be compatible with select Intel platforms and will not work with AMD’s CPUs at all.
Intel says that only systems running its 9th- and 10th-Gen Core processors on motherboards powered by its B460, H410, B365, and H310C chipsets are compatible with its graphics card. According to the chip giant, platforms need a special BIOS to work with its DG1 solution. As a result, Intel’s Iris Xe graphics boards will not work with AMD-based systems, nor with Intel’s more advanced machines featuring its Z-series chipsets.
Intel’s Iris Xe standalone graphics board for desktop PCs and the Iris Xe Max discrete GPU for notebooks are based on the company’s Xe-LP architecture that is also used for Tiger Lake’s integrated GPUs. Since the Xe-LP architecture was designed primarily for iGPUs (and to get the ecosystem ready for Xe-HP and Xe-HPG graphics processors), even its standalone DG1 versions don’t really offer decent performance in demanding modern games, but could still serve well inside entry-level PCs used for work and media.
Since entry-level graphics cards are not particularly popular at retail, but there are a bunch of CPUs in the channel that do not feature Intel’s latest Xe graphics, the company apparently decided to reserve its Iris Xe discrete graphics cards for OEMs and only sell them as parts of pre-built PCs. As a matter of fact, there are a bunch of entry-level desktops with low-end graphics cards. These boards don’t consume a lot of power, yet are still noticeably better than many integrated solutions — especially previous generation UHD 630 graphics.
Intel’s statement reads as follows:
“Please note that the Iris Xe add-in card will be paired with 9th Gen (Coffee Lake-S) and 10th Gen (Comet Lake-S) Intel Core desktop processors and Intel B460, H410, B365, and H310C chipset-based motherboards and sold as part of pre-built systems. These motherboards require a special BIOS that supports Intel Iris Xe, so the cards won’t be compatible in other systems.”
One of the reasons why Intel might limit the compatibility of its DG1 graphics board to select systems is to ensure that it will not have to support and ensure compatibility with many PC configurations, which will lower its costs. As an added bonus, it will not provide a low-end graphics option to cheap platforms running entry-level AMD processors. Whatever the reasoning behind restricting the dedicated Xe DG1 cards to specific motherboards, it doesn’t suggest a lot of confidence behind the product. Why buy a DG1 when plenty of previous generation budget GPUs are still around?