We already know from an inadvertent leak by Intel itself that the company is preparing five notebook GPU models based on its Xe-HPG architecture. Still, the company’s plans for desktop PCs were not clear at all. This week, Igor’sLab attempted to fill in some gaps with information obtained from a slide that’s purportedly from an Intel DG2 presentation.
Up to 512 EUs
If the unofficial information is to be believed, Intel is readying five discrete desktop GPU SKUs with configurations similar to those of their notebook counterparts. The top-of-the-range gaming SKU1 is said to feature 512 execution units (EUs), 16GB of GDDR6 memory with a 256-bit interface, and a 275W TDP. Other gaming parts are the SKU2, with 384 EUs and 12GB of GDDR6 memory on a 192-bit bus, and the SKU3, with 256 EUs and 8GB of memory on a 128-bit interface. All gaming GPUs are expected to come in a 43×37.5 mm BGA2660 package.
Intel is also preparing low-end discrete parts (SKU4 and SKU5) with 128 or 96 EUs, 4GB of RAM, and a 64-bit memory bus. These lower-end models will come in a 29×29 mm BGA1379 package and will be aimed mostly at notebooks. Yet, some low-end desktops can use these GPUs, too.
Intel’s Xe-HPG architecture is expected to inherit energy-efficient blocks from the Xe-LP architecture, clock speed optimizations designed for Xe-HP/Xe-HPC GPUs for data centers and supercomputers, high-speed internal interfaces, hardware-accelerated ray-tracing support, and a GDDR6-powered memory subsystem. Overall, the Xe-HPG will resemble Intel’s existing GPUs to some degree but will run faster and support additional capabilities. Intel’s Xe-HPG GPUs are set to be produced by TSMC.
Launching in 2022?
Given that Xe-HPG is set to inherit the Xe-LP architecture's relatively small EUs, it is surprising that Intel's top-of-the-range desktop DG2 configuration features only 512 EUs. The reported schedule is also unexpected: Intel will launch its lower-end DG2 GPUs later in 2021, with the higher-end gaming SKUs only becoming available in early 2022. That timing looks odd, as the company recently started an Xe-HPG marketing campaign.
Intel of course does not comment on unreleased products, so everything is unofficial and should be taken with a grain of salt.
HP has recently announced the first Radeon RX 6000M-powered laptop, the Omen 16 2021. Aside from being the first laptop with a Radeon RX 6000M GPU, it is also the first 16-inch Omen laptop.
The HP Omen 16 laptop will come with up to an Intel Core i7-11800H processor or an 8-core AMD Ryzen 9 5900HX. As for memory and storage, it can be configured with up to 32GB of DDR4-3200 and either a 1TB PCIe Gen 4 x4 SSD or two 1TB PCIe SSDs in RAID 0.
The other GPU options are from Nvidia and can go up to an RTX 3070. Taking the RTX 3070 laptop GPU performance into account, we expect to see a similar option from AMD, which would most likely be the RX 6700M.
Alongside the Omen 16, HP also announced a few other laptops. One of them is the new Omen 17, which has been revamped to feature up to an RTX 3080 16GB laptop GPU and up to an Intel Core i9-11900H CPU. The other belongs to HP's new sub-brand, Victus. Designed as an entry-level gaming laptop, it will come with up to an RTX 3060 6GB or Radeon RX 5500M and up to a Core i7-11800H or Ryzen 7 5800H.
The announcement wasn't limited to laptops. HP also introduced the new Omen 25i gaming monitor, featuring an 8-bit FHD IPS panel with a 165Hz refresh rate, AMD FreeSync Premium Pro and Nvidia G-Sync compatibility, VESA DisplayHDR 400, and 90% DCI-P3 coverage. Moreover, it revealed a new add-on for the Omen Gaming Hub called Omen Oasis, which allows up to 16-person calls and streams.
KitGuru says: If you’re looking for a new gaming laptop, it might be worth waiting a little bit longer as Radeon RX 6000M laptop GPUs are finally starting to roll out.
The Intel Iris Xe DG1 graphics card has made a surprising appearance. A US retailer began listing a CyberPowerPC system that appears to be the very first to feature Intel's desktop graphics card.
The system (via VideoCardz) is an entry-level gaming PC, priced at $750 and bundled with a keyboard and mouse. The main components include an Intel DG1 graphics card, an Intel Core i5-11400F processor, 8GB of RAM, and a 500GB NVMe SSD drive.
The Intel DG1 graphics card inside the system features 80 EUs (640 shading units) and 4GB of LPDDR4X memory on a 128-bit memory bus. For the GPU to work, an Intel B460, H410, B365, or H310C motherboard with a “special BIOS” is needed.
Despite looking like a rather basic gaming system, this desktop marks the entrance of the third competitor in the desktop graphics card market. Now with the DG1 heading into the hands of consumers, we can look ahead to the release of DG2, which should provide decent competition up against AMD and Nvidia.
KitGuru says: Intel is beginning to break into the desktop graphics market – did you ever think this day would come?
Today we are back with another extensive performance analysis, as we check out the recently-released Days Gone. As the latest formerly PlayStation-exclusive title to come to the PC, we test thirty graphics cards in this game to find out exactly what sort of GPU you need to play at maximum image quality settings. Has this game launched in a better state than when Horizon Zero Dawn first came to PC? Let’s find out.
Watch via our Vimeo channel (below) or over on YouTube at 2160p HERE
The first thing to know about Days Gone is that it is developed by Sony’s Bend Studio, and is built on Unreal Engine 4. Interestingly though, it uses DirectX 11, and there’s no option for DX12. That means there’s no ray tracing or DLSS features in Days Gone, something which is becoming more unusual these days.
In terms of visual settings, there are a number of options in the display menu. Textures, lighting, shadows and more can all be adjusted, while it’s great to see a field of view (FOV) slider as well as a render scale setting. There’s also a selection of quick presets – Low, Medium, High and Very High – and for our benchmarking today we opted for the Very High preset, with V-Sync of course disabled.
Driver Notes
AMD GPUs were benchmarked with the 21.5.2 driver.
Nvidia GPUs were benchmarked with the 466.47 driver.
Test System
We test using a custom-built system from PCSpecialist, based on Intel's Comet Lake-S platform. You can read more about it over HERE, and configure your own system from PCSpecialist HERE.
CPU
Intel Core i9-10900K
Overclocked to 5.1GHz on all cores
Motherboard
ASUS ROG Maximus XII Hero Wi-Fi
Memory
Corsair Vengeance DDR4 3600MHz (4 X 8GB)
CL 18-22-22-42
Graphics Card
Varies
System Drive
500GB Samsung 970 Evo Plus M.2
Games Drive
2TB Samsung 860 QVO 2.5″ SSD
Chassis
Fractal Meshify S2 Blackout Tempered Glass
CPU Cooler
Corsair H115i RGB Platinum Hydro Series
Power Supply
Corsair 1200W HX Series Modular 80 Plus Platinum
Operating System
Windows 10 2004
Our 1-minute benchmark pass came from quite early on in the game, as Deacon is riding on the back of Boozer’s motorbike, headed to Crazy Willie’s. This represents a reasonably demanding section of the game based on the first hour or so that I played through, and it is also highly repeatable which makes it great for benchmarking multiple GPUs.
1080p Benchmarks
1440p Benchmarks
2160p (4K) Benchmarks
Closing Thoughts
By and large, Days Gone is an impressive PC port that almost everyone will be happy with. I say almost everyone, as currently my main issue with the game is related to visible stuttering when using an RDNA 2 GPU. This didn’t happen for other AMD cards though, or Nvidia GPUs, so hopefully it is a quick fix for AMD’s driver team or the game’s developers.
As a DX11 title built on Unreal Engine 4, if we had to guess before testing the game, we would've predicted that Nvidia GPUs would perform best, and that certainly proved true. The RTX 2070 Super is significantly faster than the RX 5700 XT, for example, while the RTX 3070 also beats the RX 6800 across the board, which isn't something we usually see.
Even then, the game does run well across a wide variety of hardware. GTX 1060 and RX 580, for instance, aren’t far off from hitting 60FPS at 1080p with maximum image quality settings, with just a few small tweaks to the IQ needed to hit that figure. VRAM doesn’t appear to be in high demand either, with both the 4GB and 8GB versions of the RX 5500 XT performing almost identically.
If you do want to drop down some image quality settings, the game’s options scale well. We found that the High preset offered 35% more performance than Very High (which is more than enough to get a GTX 1060 averaging over 60FPS at 1080p), while you can almost double frame rates using the Low preset when compared to Very High.
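For a rough sense of what that preset scaling means in practice, the quoted multipliers can be turned into projected averages. This is a minimal sketch: the 47 FPS GTX 1060 baseline is an assumed placeholder for illustration, not a measured figure from this review.

```python
# Hypothetical sketch of the preset scaling described above. The
# multipliers come from the article; the baseline FPS is assumed.
PRESET_SCALING = {
    "Very High": 1.00,  # baseline preset used for benchmarking
    "High": 1.35,       # "35% more performance than Very High"
    "Low": 1.95,        # "almost double frame rates" vs Very High
}

def projected_fps(baseline_fps: float, preset: str) -> float:
    """Scale a Very High average by the measured preset multiplier."""
    return round(baseline_fps * PRESET_SCALING[preset], 1)

baseline = 47.0  # assumed GTX 1060 average at Very High, 1080p
for preset in PRESET_SCALING:
    print(f"{preset}: ~{projected_fps(baseline, preset)} FPS")
```

With that assumed baseline, the High preset lands just above the 60FPS mark, which matches the review's observation about the GTX 1060.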
The only other issue I noticed is what appears to be an animation hitching problem in the game, which is particularly noticeable when riding a motorbike – the game feels like it is slowing down but then correcting itself by speeding up again. This wasn’t a game breaker for me but it was most noticeable when frame rates were below 60FPS – the higher the frame rate, the less I noticed the issue.
Discuss on our Facebook page HERE.
KitGuru says: Days Gone is definitely in a better state at launch than what we saw when Horizon Zero Dawn hit PCs in 2020. There’s a couple of issues to be fixed, but by and large this game performs well across a good range of graphics cards.
We’ve been hearing a lot about DDR5 memory in recent weeks, with several companies giving sneak peeks at what’s to come. GeIL may have the fastest DDR5 memory out of the gate, as the company has already planned 7200MHz kits.
GeIL announced its next-generation DDR5 memory this week. Polaris RGB Gaming Memory will be shipping in Q4 2021, which lines up nicely with rumours that Intel will release Alder Lake later this year with DDR5 memory support. GeIL’s DDR5 memory kits will be available in capacities ranging from 16GB all the way up to 128GB.
The speeds are the headline maker here. The DDR5 memory specification starts at 4800MHz at CL40, but overclocked memory is being worked on, including 6000MHz CL32, 6400MHz CL32, 6800MHz CL36 and 7200MHz CL36 kits, all available with and without RGB depending on preference.
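For context on what those speed/CL pairs mean in absolute terms, first-word latency can be computed from the data rate and CAS latency. A quick sketch using the quoted kits (the data rates are in MT/s, which marketing copy labels as MHz):

```python
# Convert the quoted DDR5 speed/CL pairs into first-word latency.
def first_word_latency_ns(data_rate_mts: int, cas_latency: int) -> float:
    """CAS latency in clock cycles converted to nanoseconds."""
    clock_mhz = data_rate_mts / 2  # DDR: two transfers per clock
    return round(cas_latency / clock_mhz * 1000, 2)

# (data rate, CAS latency) pairs quoted in the article
kits = [(4800, 40), (6000, 32), (6400, 32), (6800, 36), (7200, 36)]
for rate, cl in kits:
    print(f"DDR5-{rate} CL{cl}: {first_word_latency_ns(rate, cl)} ns")
```

The takeaway is that the overclocked kits do not just add bandwidth: DDR5-7200 CL36 works out to 10 ns of CAS latency versus roughly 16.7 ns for the baseline DDR5-4800 CL40 spec.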
Just recently, the DDR4 memory overclocking world record was set at 7200MHz, but this was achieved through extreme cooling and tuning methods. When we shift to DDR5, it looks like speeds beyond 7GHz will be achievable by the masses, rather than being limited to professional overclocking scenarios.
KitGuru Says: Not only does this confirm that the first DDR5 memory platform will launch this year, but it gives us a really good idea of what speeds to expect from this generation shift.
Intel is about to release Optane H20 memory for laptop systems. Announced late last year, Intel’s Optane H20 memory aims to deliver the best of RAM and SSD storage in a single solution, accelerating loading times and data transfers.
Available with 512GB and 1TB capacities, Intel Optane H20 memory packs both QLC 3D NAND and Optane technologies into a module featuring an M.2 2280 form factor. Whether you choose a 512GB or a 1TB H20 memory module, both feature 32GB of Optane memory. Those interested in the Intel Optane H20 should know that this memory is only compatible with systems equipped with 11th Gen Intel Core processors.
Rated sequential read and write speeds are similar to a PCIe 3.0 SSD, featuring up to 3400MB/s read speeds and up to 2100MB/s write speeds. Random 4K speeds vary between 65-390K IOPS for reads and 40-280K IOPS for writes. The drive's endurance isn't on par with the best PCIe 3.0/4.0 SSDs, but 185/370TBW should still be enough for most users. The rated MTBF is set at about 1.6M hours, and all drives come with a 5-year warranty.
Intel has scheduled the release date of Optane H20 memory for June 20th, but pricing is still unknown.
KitGuru says: Are you thinking about acquiring a new laptop with Intel Optane H20 memory? What type of workloads would you use it for?
HP, aka Hewlett Packard, is one of the most well-known tech companies in the world. They produce nearly every product you can think of: laptops, desktops, printers, enterprise hardware, and solid-state drives.
We've previously reviewed the HP P700 Portable SSD, which impressed with outstanding performance and high transfer rates. Today's review is for the HP P500 Portable SSD, a much more affordable design for people who aren't as focused on performance.
The HP P500 is actually produced by HP business partner BIWIN Storage, a large Chinese OEM for SSD solutions with 25 years of experience in the storage and microelectronics business. BIWIN was granted authorization by HP to produce SSDs under the HP name. Internally, the HP P500 is built around a UFS flash chip paired with the appropriate glue logic and USB interface. UFS is a highly popular storage standard in cell phones, tablets, and digital cameras, invented as a high-performance alternative to SD memory cards capable of multi-gigabit transfer rates.
The HP P500 uses a UFS 2.1 compatible storage chip from Samsung, which means it’s not the latest revision 3.1, so slower speeds are expected. For external connectivity, HP opted for the fast USB 3.1 Gen 2 interface, which is handled by a Silicon Motion SM3350 controller acting as a USB-to-UFS bridge. In a move typical for most portable SSDs, the P500 does not include a DRAM cache chip.
We review the HP P500 in the 1 TB variant, which retails for $115, but it is also available in capacities of 250 GB (price unknown) and 500 GB ($75). Warranty is set to three years for all these models. The HP P500 is available in four colors.
High-performance memory kits have evolved over the last few years, both in styling and technology. Styling has shifted to heavier heatsinks, LED light bars, and fancy RGB control software. The technology has done what it inevitably will by producing greater speeds and densities at generally lower cost as DDR4 has matured. The latest processors and graphics cards have been almost impossible to get over the last six months, but memory pricing and availability has remained steady, which makes now the perfect time for Acer to launch a brand-new line of DDR4 memory under their Predator brand. You may recognize the Predator brand from their highly successful gaming monitors or range of gaming laptops and desktops. You may even know the brand because of the Thronos all-in-one gaming chair.
Acer has branched out into a wide variety of gaming products and peripherals. Now, Acer is taking the plunge into core hardware with the aid of business partner BIWIN Storage, a large Chinese OEM with 25 years of experience in the storage and microelectronics business. Acer has granted them permission to produce memory kits under the Predator brand.
The pair of Predator Talos kits I have for testing today each feature 16 GB (2x 8 GB) at 3600 MHz, 18-20-20-42 timings, and 1.35 V. 3600 MHz has become the new gold standard for Ryzen builds, driving new focus into memory kits targeting a previously obscure specification. Let’s see how the Predator Talos holds up in this ultra-competitive segment!
Solid-state drives have a number of advantages over hard drives, including performance, dimensions, and reliability. Yet, for quite a while, HDDs offered a better balance between capacity, performance, and cost, which is why they outsold SSDs in terms of unit sales. Things have certainly changed for client PCs, as 60% of new computers sold in Q1 2021 used SSDs instead of HDDs. With that in mind, it's not surprising that SSDs outsold HDDs almost 3:2 in the first quarter in unit terms; in 2020, SSDs already outsold hard drives (by units, not GBs) by 28 percent.
Unit Sales: SSDs Win 3:2
Three makers of hard drives shipped a combined 64.17 million HDDs in Q1 2021, according to Trendfocus. Meanwhile, fewer than a dozen SSD suppliers, including those featured in our list of best SSDs, shipped 99.438 million solid-state drives in the first quarter, the same firm claims (via StorageNewsletter).
Keeping in mind that many modern notebooks cannot accommodate a hard drive (and many desktops ship with an SSD by default), it is not particularly surprising that sales of SSDs are high. Furthermore, nowadays users want their PCs to be very responsive, and that more or less requires an SSD. All in all, the majority of new PCs use SSDs as boot drives, some are also equipped with hard drives, and far fewer use HDDs as boot drives.
Exabyte Sales: HDDs Win 4.5:1
But while many modern PCs do not host a lot of data, NAS, on-prem servers, and cloud datacenters do, and this is where high-capacity NAS and nearline HDDs come into play. These hard drives can store up to 18TB of data, and the average capacity of a 3.5-inch enterprise/nearline HDD is about 12TB these days. Thus, HDD sales in terms of exabytes vastly exceed those of SSDs (288.3EB vs 61.5EB).
Meanwhile, it should be noted that the vast majority of datacenters use SSDs for caching and HDDs for bulk storage; few, if any, are built purely on solid-state storage (3D NAND) or purely on hard drives.
Anyhow, as far as exabyte shipments are concerned, HDDs win. The total capacity of hard drives shipped in the first quarter of 2021 was 288.28 EB, whereas the SSDs sold in Q1 could store 'only' 61.5 EB of data.
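The headline ratios above can be sanity-checked with quick arithmetic. A minimal sketch using the unit and exabyte figures quoted in this article; the rounded 3:2 and 4.5:1 ratios, plus the average per-drive capacities, fall out directly:

```python
# Back-of-the-envelope check of the Q1 2021 Trendfocus figures quoted
# in this article (drive counts in millions, capacities in exabytes).
ssd_units_m, hdd_units_m = 99.438, 64.17   # million drives shipped
ssd_eb, hdd_eb = 61.5, 288.28              # exabytes shipped

unit_ratio = ssd_units_m / hdd_units_m     # SSDs per HDD, ~1.55 (~3:2)
eb_ratio = hdd_eb / ssd_eb                 # HDD EB per SSD EB, ~4.7

# 1 EB = 1,000,000 TB, and unit counts are in millions, so the
# average capacity per drive in TB is simply EB / million units.
avg_ssd_tb = ssd_eb / ssd_units_m          # ~0.62 TB per SSD
avg_hdd_tb = hdd_eb / hdd_units_m          # ~4.5 TB per HDD

print(f"unit ratio (SSD:HDD): {unit_ratio:.2f}")
print(f"capacity ratio (HDD:SSD): {eb_ratio:.2f}")
print(f"avg SSD: {avg_ssd_tb:.2f} TB, avg HDD: {avg_hdd_tb:.2f} TB")
```

The average-capacity gap (roughly 0.6 TB per SSD versus 4.5 TB per HDD) is what lets hard drives keep winning on exabytes while losing on units.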
Since adoption of SSDs by both clients and servers is increasing, dollar sales of solid-state drives are strong too. Research and Markets values the SSD market at $34.86 billion in 2020 and forecasts that it will total $80.34 billion by 2026. To put the numbers into context, Gartner estimated sales of HDDs at $20.7 billion in 2020 and expected them to grow to $22.6 billion in 2022.
Samsung Leads the Pack
When it comes to SSD market frontrunners, Samsung is an indisputable champion both in terms of unit and exabytes shipments. Samsung sold its HDD division to Seagate in 2011, a rather surprising move then. Yet, the rationale behind the move has always been there for the company that is the No. 1 supplier of NAND flash memory. Today, the move looks obvious.
Right now, Samsung leads all other SSD makers both in unit (a 25.3% market share) and exabyte (a 34.3% chunk of the market) shipments. Such results are to be expected, as the company sells loads of drives to PC OEMs and high-capacity drives to server makers and cloud giants.
Still, not everything is rosy for the SSD market in general and Samsung in particular due to a shortage of SSD controllers. The company had to shut down its chip manufacturing facility in Austin, Texas, which produces its SSD and NAND controllers, earlier this year, forcing it to consider outsourcing such components. Potentially, the shortage may affect sales of SSDs by Samsung and other companies.
“Shortages of controllers and other NAND sub-components are causing supply chain uncertainty, putting upwards pressure on ASPs,” said Walt Coon, VP of NAND and Memory Research at Yole Développement. “The recent shutdown of Samsung’s manufacturing facility in Austin, Texas, USA, which manufactures NAND controllers for its SSDs, further amplifies this situation and will likely accelerate the NAND pricing recovery, particularly in the PC SSD and mobile markets, where impacts from the controller shortages are most pronounced.”
Storage Bosses Still Lead the Game
Western Digital follows Samsung in SSD unit (18.2%) and capacity (15.8%) share, to a large degree because it sells loads of drives for applications previously served by HDDs, including (perhaps, we are speculating here) mission-critical hard drives once supplied by Western Digital and HGST (as well as Hitachi and IBM before that).
The number three SSD supplier is Kioxia (formerly Toshiba Memory), with a 13.3% unit market share and a 9.4% exabyte market share, according to TrendFocus. Kioxia inherited many shipment contracts (particularly in the business/mission-critical space) from Toshiba. Still, its unit shipments are well below those of its partner Western Digital, to some degree because the company is more focused on the spot 3D NAND and retail SSD markets.
Aimed primarily at high-capacity server and workstation applications, Intel is the number three SSD supplier in terms of capacity with an 11.5% market share, but when it comes to unit sales, Intel controls only 5% of the market. This is not particularly unexpected, as Intel has always positioned its storage business as part of its datacenter platform division, which is why it has focused on high-capacity NAND ICs (unlike its former partner Micron) for advanced server-grade SSDs.
Speaking of Micron, its SSD unit market share is 8.4%, whereas its exabyte share is 7.9%, an indicator that the company balances between the client and enterprise markets. SK Hynix also ships quite a lot of consumer drives (an 11.8% unit share) alongside a fair number of higher-end enterprise-grade SSDs (its exabyte share is 9.1%).
Seagate is perhaps the one exception among the historical storage bosses: it controls 0.7% of the exabyte SSD market and only 0.3% of unit shipments. The company serves its loyal clientele but has yet to gain significant share in the SSD market.
Branded Client SSDs
One interesting thing about the SSD market is that while there are loads of consumer-oriented brands selling flash-powered drives, they do not control a significant part of the market in terms of either units or exabytes, according to Trendfocus.
Companies like Kingston, Lite-On, and a number of others make the headlines, yet in terms of volume they control about 18% of the market: a significant, but not a definitive, chunk. In terms of exabytes, their share is about 11.3%, which is quite high considering that most of their drives are aimed at client PCs.
Summary
Client storage is going solid state in terms of unit shipments due to performance, dimensions, and power reasons. Datacenters continue to adopt SSDs for caching as well as business and mission-critical applications.
Being the largest supplier of 3D NAND (V-NAND in Samsung’s nomenclature), Samsung continues to be the leading supplier of SSDs both in terms of volumes and in terms of capacity shipments. Meanwhile, shortage of SSD controllers may have an impact on the company’s SSD sales.
Based on current trends, SSDs are set to continue taking unit market share from HDDs. Yet hard drives are not set to give up bulk storage.
It’s so, so, so much better. But the moment Apple showed off the second-generation Siri Remote, it was obvious that this would be a huge improvement over its detested predecessor. It’s easy to tell which way is right side up when you reach for it. The clickable touchpad area that dominated the upper third of the prior remote has been replaced by a more intuitive D-pad. The Siri button has been pushed to the remote’s right side, almost guaranteeing that you’ll never unintentionally trigger Apple’s voice assistant. And now there’s a proper power button for your TV.
Listing all of these “upgrades” on the new $59 Siri Remote really illustrates just how disappointing the old one that somehow lasted six years on the market was. Before this big redesign, the most Apple did in that time was to try to cure the “which side is up?” confusion by adding a white rim around one of the buttons. “Can’t innovate anymore, my ass.”
But this? This new Siri Remote is a very good remote. There’s nothing exceptional about it, but it’s functional, accessible, and painless to use. If you used those words to describe the original Siri Remote, you’d be in the minority.
It feels really nice, too. Apple makes the remote from a unibody aluminum shell that’s taller, heavier, and considerably thicker than the old Siri clicker. It’s slightly narrower than the black remote but still feels larger on the whole — and that’s a positive. The previous Siri remote was so thin that it was easily lost to the deepest reaches of the couch. I don’t see that being as much of a problem with the new, chunkier hardware.
The Siri Remote is in keeping with Apple’s renewed fondness for hard edges. With the remote gripped in hand, you never really feel the edges on the front, but you do at the back. The back metal is curved, but there’s still a hard edge at both sides. As long as you don’t squeeze the remote too tight, it should prove reasonably comfortable.
Instead of putting what basically amounted to a trackpad on the top section of the remote, Apple has switched to a much more traditional directional pad. Within that circular D-pad is a touch-sensitive center button that still lets you swipe around content or move in any direction just like you could before. (And yes, you can still play with the subtle movement of app icons on the home screen by gently nudging your thumb around.) But some streaming apps didn’t work perfectly with that input method, so Apple is now including the far more precise D-pad.
This choose-your-preferred-navigation method — Apple calls it the “clickpad with touch surface” — has a very short learning curve. Initially, I would inadvertently activate the touchpad when I just meant to move my finger from down to up or vice versa on the D-pad. That didn’t last long, but if it winds up a bigger hassle for you, there’s an option in the remote’s settings menu to assign the center button to “click only,” which gives the D-pad all navigation duties.
Apple has also come up with a clever jogwheel function that lets you circle a finger around the outer ring to scrub through videos at faster or slower speeds depending on how quickly you’re thumbing around the circle. It’s a direct callback to the days of the iPod clickwheel and does a great job helping you land on an exact moment in a video.
But I must confess something: I had an embarrassing few hours where I couldn’t figure out how to make this work. Eventually, I learned the trick: after pausing a video, you’ve got to rest your finger on the D-pad momentarily before you start circling around it. An animation will pop up in the progress bar (with a little dot that indicates where your finger is) to let you know you’re in jogwheel mode. If you just pause the video and immediately start the circular movement, it doesn’t do the right thing. Don’t be like me and unnecessarily factory reset your Apple TV 4K because of this.
The buttons themselves all have a satisfying click and don’t feel the least bit mushy. The clickpad is quieter when pressed than the buttons below it, which are each significantly noisier than any other remotes I had to compare against, be it for a Roku, Chromecast, or otherwise. Again, it’s not a problem unless you’re sensitive to that sort of thing, but you’ll absolutely hear the volume rocker when you’re turning up a certain scene in a movie or show. The Siri button on the side is whisper quiet; you still have to press and hold it down whenever you’re doing a voice command.
You might also have to overcome some muscle memory challenges since the mute button is now where play / pause was situated on the old remote. The "menu" button has been rebadged as "back" but performs the same functions as before, which means, in most cases, the new icon makes a lot more sense. The buttons aren't backlit, but it's easy enough to memorize them by feel once you've used the remote for a while.
But as good as the new Siri Remote is, it feels like Apple missed some opportunities that frankly seem like low-hanging fruit. The most glaring is that there’s no way to locate the remote if you’re unable to find it. As I said earlier, the bigger dimensions should make for fewer instances where the remote gets misplaced, but some way of having it alert you to its location would’ve been nice. “Hey Siri, where’s my remote?” seems like such an easy thing to make happen, but that voice query won’t do you any good or make the remote beep. And unlike Apple’s recently introduced AirTags, there’s no ultra-wideband chip in the remote to help pinpoint its position in a room. If you’re finding that the remote goes MIA constantly, you might just have to settle for a case that combines an AirTag with the Siri Remote. But having a simple, straightforward remote locator feature is one area where Roku objectively beats out Apple.
A less impactful gripe is the lack of an input button for switching between HDMI sources; the Apple TV automatically becomes the active input when you power it on or wake it from sleep. But an input button would’ve at least made life easier for people switching between an Apple TV and an Xbox or PlayStation. As a result, I just can’t quit my LG TV’s remote, much as I wish I could. Most of my devices automatically grab the TV’s attention when they’re switched on, but a button is foolproof.
I can complain about buttons being absent, but I can also praise Apple for the same reason: there are no branded shortcut buttons whatsoever on the Siri Remote. Not even Netflix can lock down its own button, whereas you’d be hard-pressed to find another streaming box remote without that logo somewhere.
The Siri Remote still charges with Apple’s Lightning connector — despite now being thick enough to house a USB-C jack. USB seems more natural for this type of scenario, but what do I know? I’m just one man who’s elated to have a reliable, sensibly designed remote control again. Apple is going to keep doing Apple things. I was not able to test the new remote with third-party charging stands designed for the old one, but I wouldn’t be surprised if that industry catches up with the new design in the near future.
The gyroscope and accelerometer from the previous Siri Remote are history, so you won’t be able to use this one for Apple Arcade games that rely on those sensors. But it’s unlikely many people were gaming with it to begin with; tvOS now supports many third-party gamepads, including the latest Xbox and PlayStation controllers, if you hadn’t heard.
Any way you slice it, the new Siri Remote is a win on every level. It’s inconceivable that we put up with the last one for so many years, but its time has come. And the remote control taking its place is extremely good at doing remote control things. Much as how Apple’s M1 MacBooks would have earned perfect scores if they’d had competent webcams, the Siri Remote would be flirting with perfection if it just had some way of letting you easily find the thing. Or if the buttons were backlit. My review of the new Apple TV 4K is coming soon, but if you’ve already got the last model, this is the only real must-have upgrade to go for.
For those looking for the best motherboard for a compact Rocket Lake build, we’ll be diving deep here to examine and test three Mini-ITX motherboards based on Intel’s latest mainstream chipset, Z590. We’ll take a close look at the ASRock Z590 Phantom Gaming-ITX/TB4 (~$350), Asus ROG Strix Z590-I Gaming WiFi ($369.99), and the Gigabyte Z590I Aorus Ultra ($321.49). We couldn’t get our hands on the MSI Z590I Unify Gaming ($369.99) in time for this article, but we expect that board to arrive in the coming weeks and post a review when we can.
When you're shopping for a Mini-ITX motherboard (see more on motherboard form factors here), chances are the case you use is going to be compact as well. This means limited CPU cooling options and, due to the size of the board, fewer memory and PCIe slots and M.2 sockets. That said, these tiny builds can be portable powerhouses when done right. But you need to have solid power delivery and cooling and pick the right ITX board for your needs, as using an add-in card (beyond the GPU) to supplement any missing ports isn't possible. We'll take a detailed look at the three boards we have and see which is the best option overall.
In our testing, all boards performed well, easily mixing in with our other test results, including full-size and more-expensive options. Out of the box, the ASRock board is the most hamstrung by Intel’s power limits, while the other boards run a bit more freely in comparison. But all you need to do to get the ASRock up to par with the other boards is raise its power limits. The performance difference was negligible outside of the long-running tests, where the turbo time/limits come into play. Gaming performance was similar among all the boards, as was memory bandwidth and latency. Outside of a couple of outliers, all boards performed similarly, especially when the playing field was leveled by removing the stock limits.
All three of our Mini-ITX boards include two DRAM slots, a single PCIe slot, two M.2 sockets, 2.5 GbE and Wi-Fi. The difference between these boards boils down to appearance, Wi-Fi speeds, audio codec, SATA port count, power delivery capability, rear IO port type/count and price. We’ll dig into the features and other details on each board below, starting with the ASRock Z590 Phantom Gaming-ITX/TB4. Below are the specifications from ASRock.
Along with the motherboard, the ASRock Z590 Phantom Gaming-ITX/TB4 box includes a slim collection of accessories, though there’s enough to get you started. Below is a complete list of all included extras.
Support CD / Quick installation Guide
Wi-Fi Antenna
(2) SATA cables
(2) Screw package for M.2 sockets
After taking this little guy out of the box, we see a densely packed Mini-ITX board that comes with almost all of the features typically found on a full-size motherboard. The ITX/TB4 sports a matte-black PCB, along with a heatpipe-connected heatsink for the VRM. You can spot the Phantom Gaming theme easily, with branding located above the vent holes on the rear IO as well as on the chipset/M.2 heatsink, just above the PCIe slot. On the RGB LED front, the Z590 PG-ITX/TB4 has three LEDs on the underside of the board, behind the PCIe slot. ASRock’s Polychrome Sync application controls the lighting. Overall, I like the board’s appearance. It’s improved over the last generation and won’t have any issues fitting in with most build themes.
Typically when discussing motherboards, we split them into top and bottom halves. But since these boards are so small, we’ll instead work in a clockwise motion, starting on the left with the IO cover. Here we see the metal cover hiding all of the rear IO bits, as well as a small fan designed to dissipate heat. The top and the rear IO plate have holes in them to circulate the air from the fan.
Across the top of the motherboard is the 8-pin EPS connector (required) to power the CPU. Just to the right is a 3-pin ARGB header and three 4-pin fan headers. The CPU and Chassis fan headers support 1A/12W, while the CPU_Opt/Water pump connector doubles that to 2A/24W. ASRock states the CPU_OPT/W_Pump header auto-detects if a 3-pin or 4-pin fan is in use. I would like to see all of the fan/pump headers auto-detect what’s attached.
Moving past the VRMs to the right side of the board, there are several headers, ports and slots. Starting with the two unreinforced DRAM slots, capacity support is listed at up to 64GB, with speeds up to DDR4 4266+(OC). Surprisingly, this is lower than many ATX-size boards (typically, these smaller boards offer better RAM clocking capabilities) and lower than the other two boards in this roundup. That said, the ASRock board ran our DDR4 4000 sticks with minimal adjustments (the same VccIO Memory +0.10 as for the ASRock board we looked at previously), so for the majority of users, the on-paper limit won’t be an issue.
On the right edge, from top to bottom, is the 24-pin ATX connector for board power, front panel header and 4-pin RGB header, USB 3.2 Gen 1 header, three SATA ports (supports RAID0, 1, 5 and 10), and a front-panel USB 3.2 Gen 2×2 Type-C header. Above the 24-pin ATX are the debug LEDs that tell you where the board got hung up in the POST process. Since there isn’t any room for the 2-character LED that provides you more detailed information, this is a good value-add for troubleshooting.
On the bottom of the board is the single reinforced (ASRock Steel Slot) PCIe slot with extra anchors, a better latch, and signal stability improvements (according to the company). The slot runs the full PCIe 4.0 x16 bandwidth when using an 11th generation CPU. Located above the slot is a USB 2.0 header and the front-panel audio header.
The bottom-left corner holds the audio bits. Visible are a couple of audio capacitors (in yellow), while the Realtek ALC1220 codec hides under the IO cover. While this is a premium audio chip, it’s last generation’s flagship; I would like to have seen the newest codec used, as we see on the Asus. That said, this solution will still be sufficient for most users.
Just above the PCIe slot is a dual-purpose heatsink designed to keep the southbridge chip and an M.2 module cool. Simply unscrew the two visible screws to expose the PCIe 4.0 x4 (64 Gbps) M.2 socket. The second M.2 socket sits on the back of the board, supports both PCIe- and SATA-based modules, and does not have a heatsink. Both sockets support 80mm drives, and the M.2 sockets support RAID0 and 1. The manual doesn’t list any lane sharing, which makes sense considering there are only three SATA ports here, when the chipset provides six natively.
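As a quick sanity check on that 64 Gbps figure (our own back-of-the-envelope math, not ASRock’s): a PCIe 4.0 lane transfers at 16 GT/s, so four lanes give 64 Gbps raw, with 128b/130b line encoding trimming that to roughly 63 Gbps of usable throughput.

```python
# Back-of-the-envelope bandwidth for a PCIe 4.0 x4 M.2 socket.
lanes = 4
gt_per_s = 16                       # PCIe 4.0 transfer rate per lane
raw_gbps = lanes * gt_per_s         # headline figure quoted on spec sheets
usable_gbps = raw_gbps * 128 / 130  # after 128b/130b line encoding

print(raw_gbps)               # 64
print(round(usable_gbps, 1))  # 63.0
```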
ASRock chose an 8-phase configuration for Vcore on this little board. You won’t find any VRM doublers, as this is a direct setup. Power flows from the 8-pin EPS to a Renesas ISL69269 12-channel (X+Y+Z=12) controller, then on to the 90A ISL99390 Smart Power Stages. The 720A available for the CPU is enough for stock operation and even for overclocking our Core i9-11900K processor (with ambient cooling).
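The 720A figure is simply the per-stage rating multiplied out across the direct phases; sketched as:

```python
# Vcore VRM capacity on the Z590 PG-ITX/TB4: eight direct phases,
# each fed by a 90 A ISL99390 smart power stage (no doublers).
phases = 8
amps_per_stage = 90
total_amps = phases * amps_per_stage
print(total_amps)  # 720
```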
Typically we list all of the buttons and headers along the bottom of the board, but due to the Mini-ITX design, we covered this already during the motherboard tour above.
Taking a look at the integrated rear IO plate, we see it sports the same styling found on the Z490 version: primarily a grey-and-black background with some red highlights matching the Phantom Gaming theme. From left to right are the DisplayPort (v1.4) and HDMI (v2.0) ports for use with the integrated graphics in your CPU. Next are four USB 3.2 Gen 1 (5 Gbps) ports, a single USB 3.2 Gen 2 (10 Gbps) port and one ultra-fast Thunderbolt 4 (40 Gbps) Type-C port. In blue is the Killer E3100G LAN port, next to a small Clear CMOS button. In the middle, we see holes for venting air from the hidden fan (which is inaudible, by the way), the Wi-Fi 6E antenna mounts and the 5-plug + SPDIF audio stack.
Firmware
The BIOS theme in the Phantom Gaming-ITX/TB4 matches the Z590 PG Velocita we recently reviewed, sporting a black/red theme. As usual, we capture a majority of the BIOS screens to share with you. ASRock includes an Easy Mode for high-level monitoring and adjustments, along with an Advanced section. The BIOS is organized well, with many of the more commonly used functions accessible without drilling down multiple levels to find them. Here you adjust the Memory, Voltage, and CPU details in separate sections, but it’s all on the first page of each section. In the end, the BIOS worked well and was easy to navigate and read.
Software
On the software side, ASRock includes a few utilities that cover overclocking and monitoring (PG-Tuning), audio (Nahimic 3), software for updating drivers and downloading applications (App Shop), and of course, RGB control (Polychrome RGB). We did not run into any issues in our limited use of the applications.
Before we get to the performance for this board and its competitors, we’ll detail the other two models as well. Next up is the Asus ROG Strix Z590-I Gaming WiFi.
Cambridge’s Award-winning recipe has been refined to include app support and extra sonic clarity and detail
For
Extra ounce of dynamic expression
Great clarity for the level
Slick app support
Against
No noise-cancelling
When Cambridge Audio announced a new model in its inaugural and two-time What Hi-Fi? Award-winning Melomania line-up, we heaved a collective sigh of relief. The Melomania 1 Plus (or Melomania 1+) promise the same look and feel as their decorated older sibling, the original Melomania 1, but with additional app support, customisable EQ settings and the British audio firm’s innovative High-Performance Audio Mode.
There’s a new colourway, too – gone is the ‘stone’ grey hue we lovingly dubbed ‘NHS Grey’. Here, the upgrades are hard to spot to the naked eye, but then again, beauty is usually in the detail. The pricing hasn’t changed, with the Melomania 1 Plus launching at the now-traditional £120 ($140, AU$185).
So how good do they sound, and are they worth upgrading to?
Build and comfort
The fresh white finish of our Melomania 1 Plus charging case sample (also available in black) is a matte affair and a solid upgrade on its predecessor. It feels cool, tactile and more pebble-like, and fingerprint smudges no longer collect on the perfectly sized case.
Cambridge Audio Melomania 1 Plus tech specs
Bluetooth version: 5.0
Finishes: 2
Battery life: Up to 45 hours (low power)
Dimensions: 2.7 x 1.5cm
Weight: 5.6g (each)
The five-strong row of LEDs to indicate battery life remains, just below the snappy flip-top lid. The ‘L’ and ‘R’ on each earpiece, underneath the tiny LED light on each, are now written in electric blue lettering. You now get a USB-C fast charging port, too.
Although multiple ear tips were promised to ensure a secure fit, what Cambridge has done is double up on its standard small, medium and large offerings, so you now get two sets of each rather than one.
There are also two sets of medium and large ‘memory foam’ options, but curiously no small option. The memory foam tips are only supplied in black, too – the regular tips are white – which spoils the ice-white aesthetic somewhat.
The bullet-shaped buds are practically identical in build to the Melomania 1 – each weighs the same 4.6g, boasts IPX5 certification against rain and sweat, houses a 5.8mm graphene-enhanced driver and boasts Bluetooth 5.0 connectivity with aptX and AAC codec support.
Features
The Melomania 1 Plus boast up to nine hours of battery life on a single charge plus four extra charges from the case, which adds up to an impressive 45 hours of total playtime when in Low Power mode. In the default High-Performance mode, you’ll get seven hours from a charge or 35 hours in total courtesy of four more blasts from the case, which is still highly competitive.
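Those totals are simply the per-charge runtime times five (the initial charge plus four full top-ups from the case); the arithmetic checks out for both modes:

```python
# Melomania 1 Plus playtime: one full charge plus four case recharges.
def total_playtime(hours_per_charge, case_recharges=4):
    return hours_per_charge * (1 + case_recharges)

print(total_playtime(9))  # 45 hours in Low Power mode
print(total_playtime(7))  # 35 hours in High-Performance mode
```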
Pairing is easy using the handy quick start guide. Only one earpiece needs to be paired on your device; the second bud (labelled ‘Handset’) will simply request a connection to it – and that only needs to be done once. During our tests, the connection between both units and our device remains secure and snag-free.
Possibly the biggest upgrade with this new iteration is support for the free Melomania app, which is now considerably more stable than it used to be. With it comes the ability to customise the EQ settings yourself or pick from six presets, check the battery level of each earbud, locate misplaced earbuds on a map, and receive firmware updates.
The controls here involve pressing the circular button on each bud, and we find them intuitive and useful. Holding down the right one increases volume, while holding down the left lowers it – simple and effective. A single press of either earpiece starts or pauses playback, two presses skip forward a track (right earbud) or back a song (left earbud), and three presses of the right bud call up Siri on our iPhone – although note that the buds can also access Google Assistant.
These controls are so reliable that we rarely dig out our smartphone when testing them in transit. That should be a given, but it hasn’t always been our experience when testing competing buds at this price.
Cambridge advises wearers to position the earpieces so that the recessed circle within the circular top surface of the driver housing sits at its lowest point, allowing the MEMS mic in each bud to perform to its fullest. We do so and are able to enjoy clear voice calls.
The good news is that with low power mode deployed, you’ll get a performance that is on a par with the originals.
Sound
Switching back to High-Performance Audio and with all EQ levels unaltered, we’re treated to an impactful and expansive presentation of Kate Bush’s And Dream Of Sheep (a Tidal Master file). The keys feel three-dimensional in our left ear as Bush’s vocal soars through the frequencies centrally, backed by samples of seagulls, pared-back guitar picking, wind instruments and spoken word. When the brooding storm builds, the Melomania 1 Plus deliver it dutifully and with remarkable clarity for this level. This is a small but definite improvement on their older sibling for layering and detail.
Instruments such as the slinking bass, Wurlitzer and saxophone at the outset of Beck’s Debra are organised with precision and given an extra few yards of space within the mix, too. The low-level, call-to-action vocal before the verse is often lost in muddier bass registers of lesser headphones, but not here. Beck’s distinctive voice is emotive and held masterfully in check even as the intensity builds. Through the mids and treble, we’re aware of the step-up in terms of clarity and refinement over the original Melos.
Through heavier tracks such as Eminem’s Stan, the teeming rain sounds natural at the window as Stan’s scrawl cuts through with clarity, underpinned by an accurate and regimented bassline. There are marginal gains to be had over the originals in terms of the dynamic build too. The leading edges of notes are marginally cleaner in the updated set of in-ears, as demonstrated by the initial synth strings in Dr Dre’s Forget About Dre.
In our review of the five-star Panasonic RZ-S500W, we said that in direct comparison, the Cambridge product suffered marginally for detail. That balance is now redressed with the Melomania 1 Plus. Whether you prefer the Panasonic proposition over the Melomanias will likely come down to the former’s noise-cancelling or teardrop design, neither of which feature in the Cambridges. But for an engaging, detailed, expansive listen, the Melomania 1 Plus are very much back in the running for best at this level.
Verdict
Cambridge’s compact, fuss-free and affordable design was a hit with us the first time around in 2019. The addition of a slicker paint-job, app support for EQ customisation and the step-up in sonic detail and refinement – without the anticipated price hike – only makes us want to heap extra praise upon the new Melomania 1 Plus.
While the original Melomania 1 can now be had for a significant discount, we’d still point you towards this updated model. There’s no noise-cancelling onboard, but those who don’t need it shouldn’t hesitate to add these latest Melomanias to their shortlist.
When Intel announced its public CPU and CPU microarchitecture roadmap last August, it did not formally confirm that its upcoming Sapphire Rapids processor would use the Golden Cove microarchitecture. For some reason, Intel kept publicly quiet about this for months and only announced it this week.
“Sapphire Rapids uses Golden Cove, not Willow Cove,” said Andi Kleen, a Linux engineer at Intel.
The design of CPU cores (which depends on microarchitecture) is closely tied to the fabrication process (and vice versa), as the node defines transistor performance, power delivery and power consumption. Porting a processor core from one node to another is possible, but it is generally not a good idea, as its performance and power characteristics change significantly. For example, since Intel’s Ice Lake/Willow Cove cores were originally developed for the company’s second-generation 10 nm manufacturing technology, Intel has never ported its Ice Lake-SP CPU to the more advanced 10 nm SuperFin process.
Intel has always planned to use its 10 nm Enhanced SuperFin node for its Alder Lake processor (based on the Golden Cove microarchitecture) for client PCs, as well as for its Sapphire Rapids CPU for servers and data centers. To that end, the confirmation that Intel’s high-performance 2021 CPUs are based on Golden Cove is not a surprise at all.
The latest unofficial details about Intel’s 4th Generation Xeon Scalable ‘Sapphire Rapids’ indicate that the CPUs could have 72 to 80 cores, a significant increase compared to today’s 3rd Generation Xeon Scalable ‘Ice Lake-SP’ processors, which have up to 40 cores. The new CPUs will also support the PCIe 5.0 interface with CXL 1.1 on top, as well as DDR5 and HBM2E memory.
Ever wonder how long Intel’s 11th Generation Rocket Lake processor can survive without a CPU cooler? Well, famous chip photographer Fritzchens Fritz has killed a Core i5-11400 for the sake of science.
The Core i5-11400, which is the current budget CPU king, arrives wielding six Cypress Cove cores clocked at 2.6 GHz. The hexa-core chip features a 4.4 GHz boost clock and a 65W TDP (thermal design power). Bear in mind that 65W is the PL1 (Power Level 1) rating, which is the Core i5-11400’s power consumption at the base clock. In reality, the processor is rated with a 154W PL2 (Power Level 2) that corresponds to the power draw during boost.
Fritz mentioned that it was impossible to run the Core i5-11400 at stock because Rocket Lake isn’t designed or optimized for low power consumption. The author had to modify the processor’s operating parameters to prevent it from going into an emergency shutdown.
The author started by fixing the operating clock speed to 800 MHz. He then disabled Hyper-Threading, the iGPU and AVX altogether. Additionally, he lowered the VCCSA with a -0.200mV offset and dropped the memory speed down to DDR4-1333. Fritz then performed a couple of single- and multi-threaded tests to evaluate the Core i5-11400’s thermal behavior.
If you zoom into the thermal camera, you can see how each core inside the Core i5-11400 reacts differently to the type of workload. It’s pretty cool to see the Cypress Cove cores jump around during the single-threaded test.
The point of the experiment is to see how the Core i5-11400 operates without a heatsink. In case you’re curious, the Core i5-11400 at 800 MHz with Hyper-Threading and AVX disabled scored 106 and 116 points in the single- and multi-core tests, respectively, in Cinebench R15.
The Core i5-11400, which retails for $188.99, obviously didn’t survive Fritz’ torture. However, the dead chip will be put to good use as Fritz will likely delight us with some beautiful die shots of the Rocket Lake-S part pretty soon.
As the launch dates for Nvidia’s GeForce RTX 3070 Ti and GeForce RTX 3080 Ti edge closer, more information about their general specifications is being revealed. As reported by VideoCardz, Leadtek and Palit Microsystems have added general specifications of their upcoming GeForce RTX 3080 Ti graphics boards to the Korean National Radio Research Agency (RRA) database. As this is not an official announcement, a modicum of scepticism is required.
In the RRA listing there is an indication of 12GB of GDDR6X memory for the Titanium variant of the RTX 3080, as discovered by @harukaze5719. Last week an alleged leaked MSI presentation that covered the company’s upcoming Suprim-series graphics cards confirmed memory configurations of Nvidia’s long-rumored GeForce RTX 3070 Ti and GeForce RTX 3080 Ti graphics cards: 8GB of memory for the former and 12GB of GDDR6X memory for the latter.
We have previously reported, citing unofficial sources, that Nvidia’s GeForce RTX 3080 Ti is based on Nvidia’s GA102 GPU with 10,240 CUDA cores, has a 384-bit memory interface, and carries 12GB of GDDR6X memory. Performance of the unit should be close to that of Nvidia’s range-topping GeForce RTX 3090 in cases where the latter’s 24GB of onboard memory is not a factor. Meanwhile, for applications that require significant amounts of DRAM, the RTX 3090 will be unbeatable.
The GeForce RTX 3070 Ti is based on Nvidia’s GA104 graphics processor with 6,144 CUDA cores and a 256-bit interface that will be used for 8GB of 19 Gbps GDDR6X memory, according to unofficial information.
It is expected that Nvidia will introduce at least the GeForce RTX 3080 Ti on May 31 and start its sales in June. Still, we advise taking everything unofficial with a grain of salt, as plans tend to change.