Capcom has published the official system requirements for the PC version of Resident Evil Village on Steam. The sequel follows the saga of Ethan Winters, this time with some apparently very large vampire ladies. Based on what we’ve seen, you’ll benefit from having one of the best graphics cards along with something from our list of the best CPUs for gaming when the game arrives on May 7.
The eighth entry in the series (the “VIII” is worked into the Village logo), this will be the first Resident Evil to feature ray tracing technology. The developers have tapped AMD to help with the ray tracing implementation, however, so it’s not clear whether it will run on Nvidia’s RTX cards at launch, or if it will require a patch — and it’s unlikely to get DLSS support, though it could make for a stunning showcase for AMD’s FidelityFX Super Resolution if AMD can pull some strings.
We’ve got about a month to wait before the official launch. In the meantime, here are the official system requirements.
Minimum System Requirements for Resident Evil Village
Capcom notes that in either case, the game targets 1080p at 60 fps, though the framerate “might drop in graphics-intensive scenes.” While the minimum requirements specify using the “Prioritize Performance” setting, it’s not clear what settings are used for the recommended system.
The Resident Evil Village minimum system requirements also assume the game is running without ray tracing. Enabling ray tracing calls for at least an RTX 2060 (and likely future AMD GPUs like Navi 23), with an RTX 2070 or RX 6700 XT recommended. There’s no mention of installation size yet, so we’ll have to wait and see just how much of our SSD the game wants to soak up.
The CPU specs are pretty tame, and it’s very likely you can use lower spec processors. For example, the Ryzen 3 1200 is the absolute bottom of the entire Ryzen family stack, with a 4-core/4-thread configuration running at up to 3.4GHz. The Core i5-7500 also has a 4-core/4-thread configuration, but runs at up to 3.8GHz, and it’s generally higher in IPC than first generation Ryzen.
You should be able to run the game on even older/slower CPUs, though perhaps not at 60 fps. The recommended settings are a decent step up in performance potential, moving to 6-core/12-thread CPUs for both AMD and Intel, which are fairly comparable processors.
The graphics card will almost certainly play a bigger role in performance than the CPU, and while the baseline GTX 1050 Ti and RX 560 4GB are relatively attainable (the game apparently requires 4GB or more VRAM), we wouldn’t be surprised if that’s with some form of dynamic resolution scaling enabled. Crank up the settings and the GTX 1070 and RX 5700 are still pretty modest cards, though the AMD card is significantly faster — not that you can find either in stock at acceptable prices these days, as we show in our GPU pricing index. But if you want to run the full-fat version of Resident Evil Village, with all the DXR bells and whistles at 1440p or 4K, you’re almost certainly going to need something far more potent.
Full size images: RE Village RT On / RE Village RT Off
AMD showed a preview of the game running with and without ray tracing during its Where Gaming Begins, Episode 3 presentation in early March. The pertinent section of the video starts at the 9:43 mark, though we’ve snipped the comparison images above for reference. The improved lighting and reflections are clearly visible in the RT enabled version, but critically we don’t know how well the game runs with RT enabled.
We’re looking forward to testing Resident Evil Village on a variety of GPUs and CPUs next month when it launches on PC, Xbox, and PlayStation. Based on what we’ve seen from other RT-enabled games promoted by AMD (e.g. Dirt 5), we expect frame rates will take a significant hit.
But like we said, this may also be the debut title for FidelityFX Super Resolution, and if so, that’s certainly something we’re eager to test. What we’d really like to see is a game that supports both FidelityFX Super Resolution and DLSS, just so we could do some apples-to-apples comparisons, but it may be a while before such a game appears.
Apple’s computers have been notorious for their lack of upgradeability, particularly since the introduction of Apple’s M1 chip, which integrates memory directly into the package. But as spotted via Twitter, if you want to boost the power of your Mac, it may be possible, given money, skill, time and some real desire, by removing the DRAM and NAND chips and replacing them with more capacious versions, much like we’ve seen enthusiasts solder extra VRAM onto graphics cards.
With the ongoing transition to custom Apple system-on-chips (SoCs), it will get even harder to upgrade Apple PCs. But one Twitter user points to “maintenance engineers” who did just that.
By any definition, such modifications void the warranty, so we strongly recommend against attempting them on your own: it obviously takes a certain level of skill, and patience, to pull off this type of modification.
Armed with a soldering station (a consumer variant isn’t that expensive at around $60), DRAM memory chips and NAND flash memory chips (which are close to impossible to buy at the consumer level), the engineers reportedly upgraded an Apple M1-based Mac Mini from 8GB of RAM and 256GB of storage to 16GB and 1TB, respectively, by de-soldering the existing components and adding more capacious chips. According to the post, no firmware modifications were necessary.
Chinese maintenance engineers can already expand the capacity of the Apple M1. The 8GB memory has been expanded to 16GB, and the 256GB hard drive has been expanded to 1TB. pic.twitter.com/2Fyf8AZfJR (April 4, 2021)
Using their soldering station, the engineers removed the 8GB of LPDDR4X memory and installed chips with a 16GB capacity. Removing the NAND chips from the motherboard using the same method was not a problem, and they were then replaced with higher-capacity devices.
The details behind the effort are slight, though the (very) roughly translated Chinese text in one of the images reads, “The new Mac M1 whole series the first time 256 and upgrade to 1TB, memory is 8L 16G, perfect! This is a revolutionary period the companies are being reshuffled. In the past, if you persevered, there was hope, but today, if you keep on the original way, a lot of them will disappear unless we change our way of thinking. We have to evolve, update it, and start again. Victory belongs to those who adapt; we have to learn to make ourselves more valuable.”
Of course, Apple is not the only PC maker to opt for SoCs and soldered components. Both Intel and AMD offer PC makers SoCs, and Intel even offers reference designs for building soldered down PC platforms.
How much power does your graphics card use? It’s an important question, and while the performance we show in our GPU benchmarks hierarchy is useful, one of the true measures of a GPU is how efficient it is. To determine GPU power efficiency, we need to know both performance and power use. Measuring performance is relatively easy, but measuring power can be complex. We’re here to press the reset button on GPU power measurements and do things the right way.
There are various ways to determine power use, with varying levels of difficulty and accuracy. The easiest approach is via software like GPU-Z, which will tell you what the hardware reports. Alternatively, you can measure power at the outlet using something like a Kill-A-Watt power meter, but that only captures total system power, including PSU inefficiencies. The best and most accurate means of measuring the power use of a graphics card is to measure power draw in between the power supply (PSU) and the card, but it requires a lot more work.
We’ve used GPU-Z in the past, but it had some clear inaccuracies. Depending on the GPU, it can be off by anywhere from a few watts to potentially 50W or more. Thankfully, the latest generation AMD Big Navi and Nvidia Ampere GPUs tend to report relatively accurate data, but we’re doing things the right way. And by “right way,” we mean measuring in-line power consumption using hardware devices. Specifically, we’re using Powenetics software in combination with various monitors from TinkerForge. You can read our Powenetics project overview for additional details.
Tom’s Hardware GPU Testbed
After assembling the necessary bits and pieces — some soldering required — the testing process is relatively straightforward. Plug in a graphics card and the power leads, boot the PC, and run some tests that put a load on the GPU while logging power use.
We’ve done that with all the legacy GPUs we have from the past six years or so, and we do the same for every new GPU launch. We’ve updated this article with the latest data from the GeForce RTX 3090, RTX 3080, RTX 3070, RTX 3060 Ti, and RTX 3060 12GB from Nvidia; and the Radeon RX 6900 XT, RX 6800 XT, RX 6800, and RX 6700 XT from AMD. We use the reference models whenever possible, which means only the EVGA RTX 3060 is a custom card.
If you want to see power use and other metrics for custom cards, all of our graphics card reviews include power testing. So for example, the RX 6800 XT roundup shows that many custom cards use about 40W more power than the reference designs, thanks to factory overclocks.
Test Setup
We’re using our standard graphics card testbed for these power measurements, and it’s what we’ll use on graphics card reviews. It consists of an MSI MEG Z390 Ace motherboard, Intel Core i9-9900K CPU, NZXT Z73 cooler, 32GB Corsair DDR4-3200 RAM, a fast M.2 SSD, and the other various bits and pieces you see to the right. This is an open test bed, because the Powenetics equipment essentially requires one.
There’s a PCIe x16 riser card (which is where the soldering came into play) that slots into the motherboard, and then the graphics cards slot into that. This is how we accurately capture actual PCIe slot power draw, from both the 12V and 3.3V rails. There are also 12V kits measuring power draw for each of the PCIe Graphics (PEG) power connectors — we cut the PEG power harnesses in half and run the cables through the power blocks. RIP, PSU cable.
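Put another way, the card power we report is simply the sum of what those sensing points measure, with the number of PEG connectors varying by card:

$$P_{\text{card}} = P_{\text{slot,12V}} + P_{\text{slot,3.3V}} + \sum_{i} P_{\text{PEG}_i,\text{12V}}$$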
Powenetics equipment in hand, we set about testing and retesting all of the current and previous generation GPUs we could get our hands on. You can see the full list of everything we’ve tested in the list to the right.
From AMD, all of the latest generation Big Navi / RDNA2 GPUs use reference designs, as do the previous gen RX 5700 XT, RX 5700, Radeon VII, Vega 64 and Vega 56 cards. AMD doesn’t do ‘reference’ models on most other GPUs, so we’ve used third party designs to fill in the blanks.
For Nvidia, all of the Ampere GPUs are Founders Edition models, except for the EVGA RTX 3060 card. With Turing, everything from the RTX 2060 and above is a Founders Edition card — which includes the 90 MHz overclock and slightly higher TDP on the non-Super models — while the other Turing cards are all AIB partner cards. Older GTX 10-series and GTX 900-series cards use reference designs as well, except where indicated.
Note that all of the cards are running ‘factory stock,’ meaning there’s no manual overclocking or undervolting involved. Yes, the various cards might run better with some tuning and tweaking, but this is the way the cards will behave if you just pull them out of their box and install them in your PC. (RX Vega cards in particular benefit from tuning, in our experience.)
Our testing uses the Metro Exodus benchmark looped five times at 1440p ultra (except on cards with 4GB or less VRAM, where we loop 1080p ultra — that uses a bit more power). We also run Furmark for ten minutes. These are both demanding tests, and Furmark can push some GPUs beyond their normal limits, though the latest models from AMD and Nvidia both tend to cope with it just fine. We’re only focusing on power draw for this article, as the temperature, fan speed, and GPU clock results continue to use GPU-Z to gather that data.
GPU Power Use While Gaming: Metro Exodus
Due to the number of cards being tested, we have multiple charts. The average power use charts show average power consumption during the approximately 10 minute long test. These charts do not include the time in between test runs, where power use dips for about 9 seconds, so it’s a realistic view of the sort of power use you’ll see when playing a game for hours on end.
Besides the bar chart, we have separate line charts segregated into groups of up to 12 GPUs, and we’ve grouped cards from similar generations into each chart. These show real-time power draw over the course of the benchmark using data from Powenetics. The 12 GPUs per chart limit is to try and keep the charts mostly legible, and the division of what GPU goes on which chart is somewhat arbitrary.
Kicking things off with the latest generation GPUs, the overall power use is relatively similar. The 3090 and 3080 use the most power (for the reference models), followed by the three Navi 21 cards. The RTX 3070, RTX 3060 Ti, and RX 6700 XT are all pretty close, with the RTX 3060 dropping power use by around 35W. AMD does lead Nvidia in pure power use when looking at the RX 6800 XT and RX 6900 XT compared to the RTX 3080 and RTX 3090, but then Nvidia’s GPUs are a bit faster so it mostly equals out.
Step back one generation to the Turing GPUs and Navi 1x, and Nvidia had far more GPU models available than AMD. There were 15 Turing variants — six GTX 16-series and nine RTX 20-series — while AMD only had five RX 5000-series GPUs. Comparing similar performance levels, Nvidia Turing generally comes in ahead of AMD, despite using a 12nm process compared to 7nm. That’s particularly true when looking at the GTX 1660 Super and below versus the RX 5500 XT cards, though the RTX models are closer to their AMD counterparts (while offering extra features).
It’s pretty obvious how far AMD fell behind Nvidia prior to the Navi generation GPUs. The various Vega and Polaris AMD cards use significantly more power than their Nvidia counterparts. RX Vega 64 was particularly egregious, with the reference card using nearly 300W. If you’re still running an older generation AMD card, this is one good reason to upgrade. The same is true of the legacy cards, though we’re missing many models from these generations of GPU. Perhaps the less said, the better, so let’s move on.
GPU Power with FurMark
FurMark, as we’ve frequently pointed out, is basically a worst-case scenario for power use. Some of the GPUs tend to be more aggressive about throttling with FurMark, while others go hog wild and dramatically exceed official TDPs. Few if any games can tax a GPU quite like FurMark, though things like cryptocurrency mining can come close with some algorithms (but not Ethereum’s Ethash, which tends to be limited by memory bandwidth). The chart setup is the same as above, with average power use charts followed by detailed line charts.
The latest Ampere and RDNA2 GPUs are relatively evenly matched, with all of the cards using a bit more power in FurMark than in Metro Exodus. One thing we’re not showing here is average GPU clocks, which tend to be far lower than in gaming scenarios — you can see that data, along with fan speeds and temperatures, in our graphics card reviews.
The Navi / RDNA1 and Turing GPUs start to separate a bit more, particularly in the budget and midrange segments. AMD didn’t really have anything to compete against Nvidia’s top GPUs, as the RX 5700 XT only matched the RTX 2070 Super at best. Note the gap in power use between the RTX 2060 and RX 5600 XT, though. In gaming, the two GPUs were pretty similar, but in FurMark the AMD chip uses nearly 30W more power. Actually, the 5600 XT used more power than the RX 5700, but that’s probably because the Sapphire Pulse we used for testing has a modest factory overclock. The RX 5500 XT cards also draw more power than any of the GTX 16-series cards.
With the Pascal, Polaris, and Vega GPUs, AMD’s GPUs fall toward the bottom. The Vega 64 and Radeon VII both use nearly 300W, and considering the Vega 64 competes with the GTX 1080 in performance, that’s pretty awful. The RX 570 4GB (an MSI Gaming X model) actually exceeds the official power spec for an 8-pin PEG connector with FurMark, pulling nearly 180W. That’s thankfully the only GPU to go above spec, for the PEG connector(s) or the PCIe slot, but it does illustrate just how bad things can get in a worst-case workload.
The legacy charts are even worse for AMD. The R9 Fury X and R9 390 go well over 300W with FurMark, though perhaps that’s more of an issue with the hardware not throttling to stay within spec. Anyway, it’s great to see that AMD no longer trails Nvidia as badly as it did five or six years ago!
Analyzing GPU Power Use and Efficiency
It’s worth noting that we’re not showing or discussing GPU clocks, fan speeds or GPU temperatures in this article. Power, performance, temperature and fan speed are all interrelated, so a higher fan speed can drop temperatures and allow for higher performance and power consumption. Alternatively, a card can drop GPU clocks in order to reduce power consumption and temperature. We dig into this in our individual GPU and graphics card reviews, but we just wanted to focus on the power charts here. If you see discrepancies between previous and future GPU reviews, this is why.
The good news is that, using these testing procedures, we can properly measure the real graphics card power use and not be left to the whims of the various companies when it comes to power information. It’s not that power is the most important metric when looking at graphics cards, but if other aspects like performance, features and price are the same, getting the card that uses less power is a good idea. Now bring on the new GPUs!
Here’s the final high-level overview of our GPU power testing, showing relative efficiency in terms of performance per watt. The power data listed is a weighted geometric mean of the Metro Exodus and FurMark power consumption, while the FPS comes from our GPU benchmarks hierarchy and uses the geometric mean of nine games tested at six different settings and resolution combinations (so 54 results, summarized into a single fps score).
This table combines the performance data for all of the tested GPUs with the power use data discussed above, sorts by performance per watt, and then scales all of the scores relative to the most efficient GPU (currently the RX 6800). It’s a telling look at how far behind AMD was, and how far it’s come with the latest Big Navi architecture.
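As a rough sketch of how each row is computed (w stands in for the weighting between the two power tests, treated here as an unspecified placeholder rather than a published constant):

$$P = P_{\text{Metro}}^{\,w} \cdot P_{\text{FurMark}}^{\,1-w}, \qquad \text{Perf/W} = \frac{\text{FPS}}{P}, \qquad \text{Relative score} = \frac{\text{Perf/W}}{\text{Perf/W}_{\text{RX 6800}}}$$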
Efficiency isn’t the only important metric for a GPU, and performance definitely matters. Also of note: none of the performance data includes newer technologies like ray tracing and DLSS.
The most efficient GPUs are a mix of AMD’s Big Navi GPUs and Nvidia’s Ampere cards, along with some first generation Navi and Nvidia Turing chips. AMD claims the top spot with the Navi 21-based RX 6800, and Nvidia takes second place with the RTX 3070. Seven of the top ten spots are occupied by either RDNA2 or Ampere cards. However, Nvidia’s GDDR6X-equipped GPUs, the RTX 3080 and 3090, rank 17th and 20th, respectively.
Given the current GPU shortages, finding a new graphics card in stock is difficult at best. By the time things settle down, we might even have RDNA3 and Hopper GPUs on the shelves. If you’re still hanging on to an older generation GPU, upgrading might be problematic, but at some point it will be the smart move, considering the added performance and efficiency available from more recent offerings.
Outriders, the online shooting, looting, and superpower-slinging game from People Can Fly, finally has a way to pause, but to do it you’ll need to be using an Nvidia graphics card (via Kotaku). Despite working as a single-player game, Outriders requires an internet connection to play, which means pausing in the middle of a battle was impossible until this workaround. Even with your menu open, enemies could still attack you.
Using Ansel, a feature of Nvidia GeForce graphics cards that enables a kind of photo mode even in games without one built in, you can “pause” Outriders by pressing “Alt + F2” on the fly, then get up and take care of business. Because Ansel is part of Nvidia’s GeForce Experience software, pausing is limited to PC players, which means anyone playing on console or with a different brand of graphics card is out of luck.
The handling of pausing and single-player content in Outriders is similar to Destiny 2, with which it shares some aesthetic and mechanical similarities. Destiny 2 sells a battle pass and yearly expansions with new story content, and it justifies — at least in part — its online-only requirement with the promise of new weekly and monthly changes in the form of live events and other features.
The difference is that Outriders is very explicitly sold as a more traditional single-player game, with the game’s publisher Square Enix addressing the issue on its site: “Outriders is a complete experience out of the box,” it writes. For some reason, an internet connection is still required, which, beyond hindering a basic feature like pausing the game, also seemed to contribute to Outriders’ launch on April 2nd being kind of a mess. Players had issues connecting to the game’s servers in both single-player and multiplayer, which developer People Can Fly publicly acknowledged and apologized for.
The game seems to be working fine now, and this weird Nvidia loophole means the experience of playing single-player could be a little bit more comfortable, but Outriders definitely illustrates the ongoing problems of making a game online-only.
Epic is further stitching together its various platforms with a new Fortnite integration for its social video app Houseparty that lets you stream your gameplay to friends. The integration builds on an existing one that uses Houseparty’s video chatting capabilities to bring live video chat into Fortnite, and now this essentially does the reverse.
That way, your friends can see you live both through your mobile phone camera and also the feed of your active Fortnite game. Think of it a bit like Twitch streaming without all the fuss and just for your friends instead of the broader public. Epic says the feature supports streaming from a PlayStation 4, PlayStation 5, or PC right now. “We will let everyone know if we’re able to support more platforms in the future,” Epic says in its blog post.
Epic has owned Houseparty, an app built around fast and easy group video chat, since acquiring it in 2019, and the game maker has used the app to boost the social feature sets of its various gaming platforms. A few months after the acquisition, Epic began using Houseparty for improved Fortnite cross-platform audio chat, and now a full bridge between the game and the app exists.
For a breakdown of how to enable Houseparty gameplay streaming and video chat, check out Epic’s FAQ here.
Nintendo’s official Pro Controller for the Switch is generally a pretty useful accessory, but it has its problems: the D-pad is unreliable, and it doesn’t really offer any “pro-level” functionality. 8BitDo’s latest controller improves on both of those issues while coming in at a lower price.
The 8BitDo Pro 2 is an upgraded version of the SN30 Pro+, already a well-regarded Switch controller. It uses Bluetooth and also works with PCs and mobile devices; there’s a physical switch for flipping between Switch, X-input, D-input, and Mac modes. You can use it as a wired controller with a USB-C cable, too. I did try using it with my PC, but I feel like it makes more sense on the Switch due to the Japanese-style button layout with B on the bottom and A on the right. Or maybe I’m just too used to using Xbox controllers on the PC.
Aesthetically, it looks kind of like a cross between a SNES pad and a PlayStation controller, with a lozenge-shaped body, two handles, and symmetrically aligned analog sticks. The unit I have is decked out in a PlayStation-inspired gray colorway, though there’s also an all-black option and a beige model that evokes the original Game Boy.
It’s not a huge controller, but it feels comfortable in my large hands, with easy access to all of the buttons and triggers. Just as importantly for me, the D-pad is good. It feels more or less like a SNES pad, and its placement above the left analog stick makes it more appropriate for games where it’s a primary input option. I’d much rather use the Pro 2 than Nintendo’s Pro Controller for just about any 2D game on the Switch.
The Pro 2’s key feature over its predecessor is the customizable back buttons that you can press with your middle finger. These are a common element of enthusiast-focused controllers today, from Microsoft’s Elite controllers to third-party offerings like the Astro C40 for the PS4. Sony also released an attachment that brings similar functionality to the DualShock 4.
These buttons are useful because they allow you to enter commands without taking your thumbs off the sticks. Most first-person shooters, for example, assign jumping to a face button, which means it can be awkward to activate while aiming at the same time. With controllers like the Pro 2, you can set a back button to work the same way as a given face button, freeing you up to design more flexible control schemes. The Pro 2 makes it much easier to manipulate the camera in the middle of a Monster Hunter Rise battle, which might be worth the asking price alone.
The back buttons on the Pro 2 are responsive and clicky, activating with a slight squeeze. You can assign them through 8BitDo’s Ultimate Software app, which is now available for the Pro 2 on iOS and Android as well as PCs. It’s not quite as simple as some pro controller setups that let you remap the buttons directly on the controller itself, but it does support multiple profiles and works well enough. Besides button assignments, the app can also be used to modify the controller’s vibration strength and stick sensitivity.
You do miss out on some of the Switch Pro Controller’s features with the 8BitDo Pro 2. While the rumble is solid, it doesn’t feel as precise as Nintendo’s HD Rumble in supported games. The Pro 2 also lacks an NFC reader, so it won’t work with Amiibo figurines. And it can’t be used to power the Switch on, a limitation common to most third-party controllers across various platforms.
For $49.99, though, those omissions are understandable. That’s $20 less than Nintendo’s equivalent option, let alone the pro controllers you’d find for the Xbox or PlayStation in the $180–$200 range. And all things considered, I’d take the 8BitDo Pro 2 over the official Nintendo controller most days of the week.
The 8BitDo Pro 2 will start shipping on April 12th.
(Pocket-lint) – The Asus TUF Dash F15 is another of the company’s ultra-thin gaming laptops, which sports some serious specs in a compact, lightweight and portable frame.
Available in two colours with a small mix of specs options and some nifty design accents, the TUF Dash F15 is interesting enough on paper, but is it worth a buy? We’ve been gaming and working with it for a couple of weeks to find out.
A compact frame that packs power
Up to Nvidia GeForce RTX 3070 GPU with 8GB GDDR6 VRAM
Up to Intel Core i7-11370H processor
Up to 32GB DDR4-3200 RAM
Up to 1TB M.2 NVMe SSD
In classic Asus fashion, the TUF Dash F15 features some nifty tech packed into a compact frame. That chassis has been put through the usual military standard durability tests, which in reality results in a solid frame that feels robust and well built. It doesn’t bend or flex easily during use and yet is light enough to carry around with you, or position on your lap when gaming.
Outwardly the TUF Dash F15 is also easy on the eyes. It’s available in two different colours – Moonlight White or Eclipse Gray – with understated accents on the shell and an equally subtle backlit keyboard.
Super-narrow bezels also ensure maximum screen real-estate and “minimal distraction” – though this does come at the expense of a webcam (ugh!).
Hidden within that frame is some powerful tech with options that include some of the best from Nvidia and Intel. This means the TUF Dash F15 is a capable gaming machine that can take advantage of ray tracing and DLSS, while also maximising performance with Dynamic Boost and keeping things running quietly with Whisper Mode.
Naturally, the specs of this gaming laptop mean you can push the visuals up to maximum, but still get frame rates high enough to make the most of the 240Hz screen. The Nvidia GeForce RTX 3070 is more than capable of driving this 15.6-inch display at Full HD resolution and delivering smooth gameplay experiences with satisfying visuals.
With this spec, you can also manage streaming to Twitch and the like if you want, while the addition of a RJ45 connection means you’ll have a solid connection when doing so.
The keyboard on this laptop is fairly basic compared to other Asus laptops we’ve tested, though, at least in terms of RGB lighting. There are very basic settings here, with just a few effects and no per-key illumination. It does, however, have some nicely accented WASD keys which help those keys stand out.
Pro grade gaming screen options
15.6-inch Full HD (1920 x 1080) anti-glare IPS display
Adaptive-sync panel – up to 240Hz refresh rate
Colour gamut: 100% sRGB, 75.35% Adobe RGB
Benchmarks: Time Spy, Time Spy Extreme, Port Royal, Fire Strike Ultra, Fire Strike Extreme, PCMark
Despite only being 15.6 inches, the panel on this gaming laptop gives the impression of something larger. The thin bezels mean the screen stands out nicely, and we never felt like we were straining to see our targets in Rainbow Six Siege or struggling to fight skeletons in Valheim.
The viewing angles on this screen are also satisfying, as are the colours. The Adaptive-Sync tech means the panel is also synchronised nicely with the GPU which results in ultra-smooth gaming visuals.
As with other Asus gaming laptops, the TUF Dash F15 lets you use Armoury Crate to tweak the visuals. There are various settings that adjust the colours of the screen to suit your mood or need. This includes settings for Vivid, Cinema, RTS, FPS, and Eye Care. You can tweak what you’re seeing to maximise the look and feel of a game or eliminate eye-taxing blue light if you’re simply using the laptop for work.
Armoury Crate also lets you do things like monitor system performance, frequencies and temperatures, and switch between the various performance modes to increase power or reduce fan noise.
Performance-wise, the TUF Dash F15 does a good job. It wasn’t quite as impressive as the ROG Strix G15 we tested recently, but it still manages some decent frame rates.
Where that laptop managed 64fps in Assassin’s Creed Odyssey, the TUF Dash F15 averaged 50fps. Similarly, the G15 pushed 200fps in Rainbow Six Siege while the TUF Dash F15 got around the 150fps mark. Those aren’t performance levels to be sniffed at on maximum settings, but it shows that the slender frame has an impact overall.
Convenient connectivity?
1x Thunderbolt 4 (USB 4, supports DisplayPort)
3x USB 3.2 Gen 1 Type-A, 1x HDMI 2.0b
1x 3.5mm jack, noise-cancelling mic
1x RJ45 LAN port
Wi-Fi 6 (802.11ax)
Bluetooth 5.1
Continuing a trend of usefulness, the TUF Dash F15 sports a decent number of ports and connectivity options. For those serious gamers looking to stream or game with a solid connection, there’s an Ethernet port, but the machine is also Wi-Fi 6 capable – which means a solid and satisfying connection whatever you’re doing.
There’s also no shortage of USB ports, though we will note Asus has chosen to place two of them on the right-hand side, which is a pain when you’re trying to use a dedicated gaming mouse rather than the lacklustre trackpad for your gaming sessions.
Yes, we didn’t get on with the trackpad on this laptop. It’s finicky and frustrating, and the fact that two of the three USB Type-A ports are on the right means you need a decent amount of desk space to comfortably game without wires getting in the way – unless you have a wireless mouse.
That’s not the only connection niggle either. Once again, if you want to use DisplayPort to output to an external monitor you’ll need to buy an adapter as it’s only available via USB-C. There’s no standard DisplayPort or Mini DisplayPort connection – which is a pain if you’re planning on gaming in VR.
As with other recent thin and light gaming laptops from Asus, there’s also the distinct lack of a webcam. This is an odd choice in our mind considering how many Teams, Skype and Zoom video calls we’ve all been having in the last year. If you’re purely using it for gaming, though, it’s not a bother – you’ll want a better separate accessory anyway.
The TUF Dash F15 has speakers that are capable enough to overpower its cooling fans, plus a two-way noise-cancelling mic setup, which means you can be heard if you’re using the built-in mic to chat to friends. It’s still worth investing in a decent gaming headset if you really want to get lost in your games, as on max settings the fans are far from quiet and you will eventually get fed up with their white-noise whirr.
Battery life
76WHr li-ion battery
200W AC charger
One area where the TUF Dash F15 impresses is battery life. We could get through most of the day working and browsing, and we also managed hours and hours of Netflix watching before the machine ran out of juice. For a gaming-specific laptop away from the plug, that’s an unusual accomplishment.
We did note a performance hit when playing on battery alone – and that was a more significant one than we encountered with the Strix G15. But then if you want to make the most of the display you’ll be using it plugged in for gaming anyway.
But for general day-to-day use, this laptop won’t disappoint and you certainly won’t find yourself running for the plug every five minutes.
Verdict
All told, the Asus TUF Dash F15 manages to live up to expectations. It’s a decent performer with some nice specification options – at a price tag that isn’t going to make you cry.
With the right games you’ll get some seriously impressive frame rates to make the most of the fast-refresh screen. When maxing out those games this laptop doesn’t get too hot or loud either, all while lasting for a decent innings on battery alone.
What more could you want? Well, there are other options that can squeeze out yet more performance – but it’ll depend on just how much more you’re willing or able to spend for that performance bump.
Also consider
Asus ROG Strix G15
A more premium device with a heftier price tag to match, but it’s really a magnificent gaming laptop. There’s more RGB for a start, better performance overall, and a lot more style.
Read our review
Razer Blade 15 Advanced
If understated externals are your thing, then this Razer might be another alternative. It’s another powerhouse, and this laptop is a pleaser in multiple areas – apart from the massive price, of course.
Thermaltake’s Divider 300TG is attractive, but lacks the quality and performance needed to stand out in today’s market. It didn’t perform well thermally or acoustically in our testing, making it tough to recommend.
For
+ Unusual, but fresh design
+ Complete front IO
Against
– Thermally disappointing
– Intake fans have little effect on temps, are noisy, and their speed cannot be controlled
– Material quality lacking
– Glass frame is closer to turquoise than white
– 5.7-inch max CPU cooler height
– Difficult to remove sticker on glass side panel
– Frustrating side panel installation
– No support for top-mounted radiators
Features and Specifications
The vast majority of new ATX cases these days come with large slabs of tempered glass as side panels. The alternative seems to be a solid steel panel, but what if you want something in the middle?
That’s the idea behind Thermaltake’s Divider 300TG. Specifically, today on our test bench is the Divider 300TG ARGB Snow Edition. This chassis has both tempered glass and steel for its side panel, creatively slicing both in half for a fresh look. Pricing is set at $115 for this Snow Edition (or about $5 less for the black model) with all the bells and whistles, which sets expectations high.
So without further ado, let’s dig in to find out whether it’s worthy of a spot on our Best PC Cases list.
Thermaltake Divider 300TG Specifications
Type
Mid-Tower ATX
Motherboard Support
Mini-ITX, Micro-ATX, ATX
Dimensions (HxWxD)
18.7 x 8.7 x 18.1 inches (475 x 220 x 461 mm)
Max GPU Length
15.4 inches (390 mm); 14.2 inches (360 mm) with front radiator
CPU Cooler Height
5.7 inches (145 mm)
Max PSU Length
7.1 inches (180 mm); 8.7 inches (220 mm) without HDD cage
External Bays
✗
Internal Bays
2x 3.5-inch
5x 2.5-inch
Expansion Slots
7x
Front I/O
2x USB 3.0, USB-C, 3.5 mm Audio + Mic
Other
2x Tempered Glass Panel, Fan/RGB Controller
Front Fans
3x 120 mm (Up to 3x 120mm)
Rear Fans
1x 120mm (Up to 1x 120mm)
Top Fans
None (Up to 1x 120mm)
Bottom Fans
None
Side Fans
Up to 2x 120mm
RGB
Yes
Damping
No
Warranty
3 Years (2 years for fans)
Thermaltake Divider 300TG Features
Touring around the outside of the chassis, two things immediately stand out: One is of course the slashed side panel, but on the other side you’ll spot an air intake. As we’ll see later, you can mount two extra 120mm fans here or mount an all-in-one liquid cooler.
However, while all may look okay in the photos, the quality of the materials is quite disappointing. The sheet metal is thin, and the glass’s frame isn’t actually white – it’s closer to turquoise, which is a bit odd given that the chassis is named ‘snow edition,’ and it’s not a great look contrasting with the actual white of the rest of the chassis.
The case’s I/O resides at the top, cut through the steel panel. Here you’ll spot two USB 3.0 ports, a USB Type-C port, and discrete microphone and headphone jacks – a complete set that’s much appreciated. You’ll also spot the power and reset switches. But as we’ll find out later, the reset switch doesn’t actually serve as a reset button.
To remove the case’s paneling, you first remove the steel part of the slashed side panel, and then the glass. The steel part comes off by undoing two thumbscrews at the back, after which it awkwardly falls out of place. The same goes for the side panel on the other side: undo two screws and it falls out of the chassis – and re-installation is just as clunky, as the screws don’t line up nicely with the threads. The glass panels are clamped in place by a handful of push-pins, so removal and re-installation are as easy as pulling the panels off or pushing them back into place.
Thermaltake Divider 300TG Internal Layout
With the chassis stripped down, you’ll spot a fairly standard layout with room for up to an ATX-size motherboard. The only unusual thing about the main compartment is the cover on the right, which either houses three 2.5-inch drives or can be removed to make space for two extra intake fans and an AIO.
Switch to the other side of the chassis, and you’ll spot the fan bracket we spoke of, along with two 2.5-inch SSD mounts behind the motherboard tray. In the PSU area there is also room for two 3.5-inch drives.
Thermaltake Divider 300TG Cooling
While there wasn’t much to talk about regarding the case’s general features, there is plenty to discuss when it comes to cooling. From the factory, the chassis comes with a total of four fans installed, which seems quite lavish. The front intake fans are three 120mm RGB spinners, while the rear exhaust fan is a simple 3-pin spinner without any lighting features.
But behind the motherboard tray there is also a fan controller hub, where you can spot the reset switch header plugging in at the bottom. All four fans can be plugged into this hub, though the front trio come plugged in from the factory with very unusual connectors. As we’ll detail later, the RGB is controlled through the reset switch, and the fans offer no speed control.
The hub is powered by SATA power. There is an LED-out header on the hub, and an M/B-in header for connecting the RGB up to your motherboard with the included cable. The RGB effects included with the chassis’ controller are quite jumpy with infrequent changes, so it’s nice to see it tie into your motherboard’s control system.
The exhaust fan can be plugged into the motherboard, as it’s a 3-pin spinner, but other than that, it’s safe to say that the chassis’ intake fan speeds cannot be controlled, which is a real let-down as they’re quite noisy.
Graphics cards can be up to 14.2 inches (360 mm) long with a front radiator in place, or 15.4 inches (390 mm) without one. This is plenty, but the space isn’t very wide: CPU coolers can only be up to 5.7 inches (145 mm) tall due to the side panel design, which isn’t much. Our Noctua cooler barely fit, so you’ll want to be careful with wide GPUs and tall CPU coolers.
For liquid cooling, it’s tight, but there is space for a front-mounted 360 mm radiator or a side-mounted 240 mm radiator, but you’ll have to pick one or the other. Also, be careful with side-mounted radiators, as they’ll likely bump into long GPUs. Most standard-length GPUs shouldn’t have an issue here, but if you’re using a bigger GPU, you’re probably better off using the front mount, as counterintuitive as that might seem.
Genesis Noir is set at the start of existence, reimagining the Big Bang and the natural expansion of the universe as the violent end result of a love triangle gone wrong. It’s a marriage of jazz and film noir in a point-and-click adventure game that sometimes works better as an audiovisual plaything than it does a series of puzzles. But the game’s visual style is a clear standout.
The monochromatic, abstract, line-drawn look that makes Genesis Noir so unique comes from a host of influences: classic film, Italian literature, other indie games. The real mystery to solve was combining those ideas in a way that makes sense — and it actually ended up as a playable game.
Genesis Noir creative lead Evan Anthony’s jumping-off point was Cosmicomics, a collection of science-inspired short stories by Italian author Italo Calvino. “I think just initially I was trying to translate what I was envisioning when I was reading that book,” Anthony tells The Verge. Calvino’s writing, even in translation to English, is “very visually rich storytelling,” Anthony says.
Those ideas started solidifying into a style when Anthony and technical lead Jeremy Abel saw Umbro Blackout. Made by Buck, a creative studio specializing in animation, Umbro Blackout tells the story of a soccer player visiting New York during the blackout of 1977. “It’s just very simple black-and-white graphics. I wanted to bring that kind of clever transitions and juxtaposition of settings and scenes into an interactive space,” Anthony remembers.
The actual genesis of Genesis Noir started on newsprint paper. Anthony and Abel sketched the early character designs that would become Genesis Noir’s initial trio of film noir archetypes — turned “gods” in the language of the game — in charcoal. They’d go on to become No-Man, the player character and a manifestation of Time; Miss Mass, a jazz singer femme fatale and representation of gravity; and Golden Boy, a star saxophonist and sort of creator god whose attempt to murder Miss Mass sets off the events of the game.
Indie games served as guidance for Anthony and Abel as well, neither of whom had a background in games or had designed a level before. You can find some Kentucky Route Zero in Genesis Noir’s interactions, fixed perspectives, and transitions between scenes. Kids, the short film / video game hybrid from 2019, was also a big north star for doing a lot with a little. “[Kids] was definitely a great reference in like what you could do with one asset,” Anthony says. “Like how we can be economical, very economical with the assets that we have,” Abel continues.
Even though both Abel and Anthony had embarked on some ambitious creative projects before — notably, a Google Maps pinball machine for Google — the visuals of Genesis Noir were a different beast. They realized that game engines aren’t really designed with hyper-stylized visuals in mind. “There’s a reason why games don’t really look like this,” Anthony says.
Some of that difficulty admittedly came from the pair’s inexperience and the abstract direction they decided to take the game. One early level called “Seeding,” set in the first microseconds of the Big Bang, had to be reworked multiple times from Anthony’s initial idea of a level made from “vibrating lines and strings.” Creating a level of wobbling lines that could not only run, but players would actually want to explore, was a hurdle.
According to Abel, the biggest technical considerations they had to manage were performance and translating the art Anthony was creating in Flash to the game’s engine. On some platforms like the Switch, that meant adapting and faking some visual effects. For the 2D animations themselves, Abel also had to create more bespoke solutions. “I worked with a friend of mine to develop a solution to actually export all the animations from Flash and draw them as vector art in the engine,” he says.
Anthony and Abel’s multiyear journey to create Genesis Noir provided them with some valuable lessons in design and a delightful finished product. Even if you don’t relate to the variety of sources Genesis Noir is pulling from, playing the game really captures the feeling of an interactive animated film. And the goal of expressing big, abstract concepts with a familiar genre and setting pays dividends, too. I won’t spoil where Genesis Noir goes in its final levels, but there’s more here than science facts and noir story beats. “I think my goal was to find some poetry between comparing two very disparate things,” Anthony says. “I’m very happy with how everyone’s perceived it.”
Genesis Noir is available now on PC, Xbox Game Pass, and Nintendo Switch.
Now that the street price of an Nvidia RTX 3070 is over $1,200, it’s tempting to think scalpers, bots and cryptocurrency miners got them all — but the latest Steam Hardware Survey suggests at least some of them are making their way into gaming PCs, too. As TechSpot reports, the 3070 in particular appears to have become Steam’s fastest growing GPU, and the 17th-most popular graphics accelerator on Steam with a 1.29 percent share overall.
The RTX 3080 also commands 0.87 percent, the RTX 3060 Ti has 0.39 percent, and the pricey RTX 3090 claims 0.34 percent of the market — for a total of 2.89 percent Ampere cards.
That’s not bad, especially considering AMD’s rival RX 6000 series doesn’t have a single spot on the entire 95-GPU leaderboard. Apparently, more gamers are using the 2011 Nvidia GeForce GTX 550 Ti (0.16 percent) than any of AMD’s latest cards.
More than a quarter of Steam PC gamers are now using AMD CPUs instead of Intel, though, so that’s probably some consolation.
If secondhand sales numbers are anything to go by, I would expect the newer RTX 3060 to contribute to Nvidia’s tally next month.
If you’ve been eagerly waiting for Lego Star Wars: The Skywalker Saga to fulfill the incredible promise of its stirring trailer from last August (see above), I have some bittersweet news: the game’s been delayed again. Developer TT Games tweeted on Friday that “we won’t be able to make our intended Spring release date,” which itself was a delay from the game’s original 2020 launch window.
Bittersweet, because the delay does come with the promise that it’ll be the company’s “biggest and best-ever LEGO game,” which is exactly what that trailer looked like last fall.
It’s been sixteen years since the first Lego Star Wars, and I can certainly wait a bit longer if there’s a chance of achieving that goal. We’ve seen far too many games shoved out before they were ready, and I can think of at least one notable reason any developer might need a bit more time now.
It may mean one fewer game for new PS5 and Xbox Series X / S buyers to take advantage of their new consoles in the short term, though. (It’s coming to Nintendo Switch, PS4, Xbox One and PC as well.)
There’s no new release date for Lego Star Wars: The Skywalker Saga yet.
The Apple II, released in June 1977, was one of the first successful mass-produced computers and Apple’s first personal computer aimed squarely at the consumer market. The hardware was designed primarily by Steve Wozniak and the case by Steve Jobs, both of whom we know as the founders of Apple. In 1977, there were three machines vying for attention and inclusion in our lives: the Apple II, the Commodore PET and the Tandy TRS-80. In the USA, the Apple II was adopted and loved by a generation of coders who took their first steps with this great machine.
Emulating an Apple II
The cost of original Apple II hardware has skyrocketed, so to take our first steps with this great machine we look to emulation via microM8.
1. Download and install microM8 for your operating system.
2. Open the microM8 executable. On first boot it will need to update, so be patient.
Once it has finished updating, microM8 will restart and present a rather retro 3D menu (see above).
3. Select Applesoft BASIC to open the BASIC interpreter.
BASIC is a general purpose, high-level language; the original version was designed by John G. Kemeny and Thomas E. Kurtz and released at Dartmouth College in 1964. In basic terms (no pun intended), BASIC is a human-readable language that uses words common in the English language.
BASIC is the Python of the 1970s and 80s. Machines such as the Apple II were designed to boot straight into BASIC, and from there we can write code and perform basic file operations.
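For example, once the emulator drops you at the Applesoft prompt, a two-line program is enough to see the language in action. Type it in, then type RUN to execute it and NEW to clear it before starting the project below.
10 PRINT "HELLO FROM THE APPLE II"
20 END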
Creating a Number Guessing Game
For our BASIC project we shall create a number guessing game. We have ten chances to guess the correct number before the game ends. If we guess too high, the game will tell us so; the same is true if we guess too low.
1. Line 10, create a variable, N, and store in it a random integer between 0 and 49.
10 N = INT(50*RND(1))
2. Lines 20 and 30, write two instructions to the player. The first informs them that they have ten attempts to guess the number, and then it asks for their guess.
20 PRINT "YOU HAVE 10 TRIES TO GUESS THE CORRECT NUMBER"
30 PRINT "WHAT IS YOUR GUESS?"
3. Line 40. Create a for loop that will iterate ten times.
40 FOR C = 1 TO 10
4. Line 50. Capture the user’s answer into a variable, G.
50 INPUT G
5. Lines 60 to 80. Create three conditional tests. Each test will check the value of the user’s answer, G, with the randomly generated answer. If the answer is too high or low then a message is printed to the user. If their guess matches the random value, the code jumps out of the for loop.
60 IF G > N THEN PRINT "TOO HIGH";
70 IF G < N THEN PRINT "TOO LOW";
80 IF G = N THEN GOTO 130
6. Line 90, check the value of the variable C, which counts from 1 to 10. If the value of C is not 10, print “TRY AGAIN”. Line 100 then advances the for loop by one, and the code loops back to line 50.
90 IF C<>10 THEN PRINT " TRY AGAIN"
100 NEXT C
7. Line 110, create a “bad ending” for the game. If the player doesn’t guess the number then lines 110 and 120 will be our game over screen.
110 PRINT " I'M SORRY YOU FAILED...GAME OVER!"
120 END
8. Lines 130 and 140 are the “good ending” of the game, activated when the player wins the game.
130 PRINT "YOU GUESSED CORRECTLY"
140 END
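Putting all of the steps together, the complete listing looks like this:
10 N = INT(50*RND(1))
20 PRINT "YOU HAVE 10 TRIES TO GUESS THE CORRECT NUMBER"
30 PRINT "WHAT IS YOUR GUESS?"
40 FOR C = 1 TO 10
50 INPUT G
60 IF G > N THEN PRINT "TOO HIGH";
70 IF G < N THEN PRINT "TOO LOW";
80 IF G = N THEN GOTO 130
90 IF C<>10 THEN PRINT " TRY AGAIN"
100 NEXT C
110 PRINT " I'M SORRY YOU FAILED...GAME OVER!"
120 END
130 PRINT "YOU GUESSED CORRECTLY"
140 END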
To run the game, type RUN and try to guess the correct number.
Playing a Game on the Apple II
The Apple II had many great games, some of which were ports from arcades and other consoles, while others were made directly for the Apple II. Luckily for us, microM8 comes with an extensive catalogue of games built in.
1. From the microM8 menu screen press B to open the disk catalog.
2. In the Disk catalog, look for “appleii” and double-click with the left mouse button to open it.
3. Click on the “disk images” folder.
4. Click on Games and then select a letter from the list to show games starting with that letter. Select your game and enjoy!
This story originally appeared in an issue of Linux Format Magazine
Today saw new test results being published for an as-yet-unannounced Intel CPU, and the results are low enough compared to what’s already on the market to make us wonder what Team Blue’s plan is here.
Puget Systems, a well-known maker of PCs and workstations that also publishes benchmark results for exotic hardware in real-world applications, has revealed some early test results of Intel’s unannounced Core i7-1195G7 processor. The CPU might be Intel’s ‘off-roadmap’ semi-custom offering available to select clients, or a member of its yet-to-be-unveiled Tiger Lake Refresh family.
The 11th Gen Core i7-1195G7 processor is a quad-core CPU based on the Willow Cove architecture that is equipped with Intel’s Xe-LP GPU with 80 or 96 execution units. The chip has the same capabilities as all the other Core i7-11x5Gx ‘Tiger Lake’ products with an up to 28W TDP, but since it sits above the current flagship (the model i7-1185G7), it likely has higher base and boost clock frequencies.
Puget Systems tested a PC based on the Intel Core i7-1195G7 clocked at 2.90 GHz, which is a bit of an odd frequency given that the current flagship Core i7-1185G7 has a higher TDP-up frequency of 3.0 GHz. It is therefore not really surprising that the i7-1195G7-powered system notched a slightly lower score (859 overall, 94 active, 77.8 passive) than Puget’s i7-1185G7-based PC (868 overall, 93.4 active, 80.2 passive) in PugetBench for Lightroom Classic 0.92, running Lightroom Classic 10.2. Both systems were equipped with 16GB of LPDDR4X-4266 memory.
At this point, we don’t know whether Intel’s planning a full-blown Tiger Lake Refresh lineup with higher clocks and some additional features, or just plans to fill some gaps in the Tiger Lake family it has today. Last year, Intel planned to release versions of its Tiger Lake processors with LPDDR5 support, which would be beneficial for integrated graphics. But cutting clock speeds on such CPUs would be a strange choice.
From a manufacturing perspective, Intel can probably launch speedier versions of its TGL CPUs. Like other chipmakers, Intel performs continuous process improvements (CPI) through the means of statistical process control (SPC) to increase yields and reduce performance variations. With tens of millions of Tiger Lake processors sold, Intel has gathered enough information on how it can improve yields and reduce performance variability, which opens doors to frequency boosts. Furthermore, Intel has quite a few model numbers left unused in the 11th Gen lineup, so introducing new parts might be just what the company planned originally.
Since the Core i7-1195G7 has not yet been launched, Intel declined to comment on this part, even though it clearly exists in the labs of at least some PC makers.
Sourced from Twitter; the day has finally come when higher core count six-core and eight-core CPUs are ready to overtake aging dual-core and quad-core solutions in market share. Thanks to Steam’s hardware survey, we are able to get precise details on how popular both AMD and Intel’s hexa-core and octa-core processors have become over the past several years.
This huge uptick in six and eight-core popularity is in part thanks to AMD’s strategy of bringing as many cores as possible to both desktop and mobile platforms over the past few years with the Zen architecture, with Intel following suit.
As of this moment, Steam’s chart reveals that quad cores are still in the lead, by roughly 10%. However, since the end of 2017, the popularity of six-core processors has been growing consistently, by a whopping 10% of market share per year, easily surpassing dual-core popularity in mid-2019.
It makes a lot of sense. Over the past few years in the desktop market, AMD and Intel’s six-core processors have become some of the best CPUs on the market, with excellent price-to-performance ratios and excellent gaming performance.
In fact, the gaming performance of modern six-core chips like AMD’s Ryzen 5 5600X and Intel’s Core i5-10600K is so good that each chip is just a couple of percentage points behind Intel and AMD’s flagship 10-core and 16-core parts.
Plus, the recent rise of mobile hexa-core CPUs from both AMD and Intel has boosted six-core adoption rates significantly, as laptops have a much larger market share overall compared to desktops.
At this rate, hexa-core and quad-core processors should attain equal market share by the end of this year, with hexa-core CPUs continuing to gain popularity well into 2022.
8 Core Popularity
Octo-core CPUs are definitely less popular than six-core parts, but they are still on a consistent and very aggressive uptrend.
As of right now, 8-core chips are neck and neck with dual-core CPUs in popularity, and should surpass dual-core market share very soon. However, octo-core chips are still well away from competing against quad-core CPUs, which still maintain the lead in market share.
The popularity of 8-core chips skyrocketed in late 2018, which coincides with Coffee Lake Refresh and the arrival of Intel’s first-ever mainstream 8-core CPU, the Core i9-9900K.
Plus, at this time AMD was also releasing a new eight-core chip, the Ryzen 7 2700X, built on the then-new Zen+ architecture.
In 2018, the mobile market saw a major change as well, with Intel pushing out mobile 8-core chips for the first time in history and AMD following suit less than a year later.
From late 2018 to 2020, octo-core chips gained about 5-6% of market share in Steam’s hardware survey. But during 2020-2021, that changed from 5-6% to almost 15%.
Be aware that these results are for all types of CPUs, including both desktop and mobile chips. While we DIY PC builders like to think we own the show, in terms of market share we really don’t; it’s a small slice at best.
About 46% of the total market share belongs to desktops, and this includes both OEM and DIY markets, while around 50% of traffic belongs to laptops.
Are We Headed Towards A Six-Core vs. Eight-Core Popularity Contest?
Overall, it’s good to see quad cores and dual cores dying out, as their capabilities have become less and less useful in a world pushing towards more and more multi-threaded workloads.
Now, it remains to be seen whether six-core market share and eight-core market share will start competing against each other. Not to mention additional competitors like 10, 12, and 16 core processors, which will undoubtedly gain mainstream popularity at some point in the future.
For some very awkward reason, Intel has not posted a version of its Xe-LP graphics driver for its Rocket Lake processors on its website. The drivers are also not available from Intel’s motherboard partners, which causes quite a bit of confusion, as it essentially renders Rocket Lake’s new Xe-LP-based Intel UHD Graphics useless. However, there is a workaround for those who need it.
Intel’s main angle with its Rocket Lake processors for desktops is gaming, which is why it promotes its Core i9-11900K and Core i7-11700K CPUs with a 125W TDP. Such systems rarely use integrated graphics, so enthusiasts do not really care about the availability of Xe-LP drivers for their chips. But the Rocket Lake family also includes multiple 65W and 35W processors that are designed for mainstream or low-power machines that use integrated GPUs.
For some reason, there are no drivers for Rocket Lake’s Intel UHD Graphics integrated GPU based on the Xe-LP architecture on Intel’s website, as noticed by AdoredTV. Intel’s motherboard partners do offer different versions of Intel’s Graphics Driver (which adds to the confusion) released in 2021, but none of them officially supports Rocket Lake’s integrated graphics, according to VideoCardz.
The absence of the Xe-LP drivers for Rocket Lake processors from official websites is hardly a big problem as there is an easy workaround for the problem. Instead of using an automatic driver installation wizard that comes in an .exe format, you can download a .zip file with the same driver (version 27.20.100.9316 at press time), then install it using Windows 10’s Update Driver feature with a Have Disk option, then hand pick the Intel Iris Xe Graphics.
Since Rocket Lake’s integrated GPU is based on the same architecture as Tiger Lake’s GPU, the graphics core will work just as it should. This option will work for experienced DIYers, but it might be tricky for an average user.
Unlike do-it-yourselfers, OEMs and PC makers will not use a workaround as the latest official driver has never been validated for the Rocket Lake family. Fortunately, at least some OEMs have access to ‘official’ Rocket Lake graphics drivers.
“We have drivers flowing to OEMs for a while now,” said Lisa Pierce, Intel vice president and director of graphics software engineering, in a Twitter post on April 2. “The delay was in a public posting with our unified graphics driver flow and we will work it post ASAP.”
She did not elaborate on when exactly the driver will be posted to Intel.com and whether it needs to pass WHQL validation before that. Meanwhile, on April 1 Pierce said that Rocket Lake drivers were several weeks away.